|
Post by jimcripwell on May 22, 2009 20:43:54 GMT
steve writes " not uncertainty in the greenhouse forcings."
You can go on repeating the mantra that we know what greenhouse forcings are "accurately". Until I see actual experimental data as to what the radiative forcing for a doubling of CO2 is, so far as I am concerned, the values are meaningless.
|
|
|
Post by socold on May 23, 2009 0:29:57 GMT
What are parameterisations?

www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/

"Some physics in the real world, that is necessary for a climate model to work, is only known empirically. Or perhaps the theory only really applies at scales much smaller than the model grid size. This physics needs to be 'parameterised' i.e. a formulation is used that captures the phenomenology of the process and its sensitivity to change but without going into all of the very small scale details. These parameterisations are approximations to the phenomena that we wish to model, but which work at the scales the models actually resolve. A simple example is the radiation code - instead of using a line-by-line code which would resolve the absorption at over 10,000 individual wavelengths, a GCM generally uses a broad-band approximation (with 30 to 50 bands) which gives very close to the same results as a full calculation. Another example is the formula for the evaporation from the ocean as a function of the large-scale humidity, temperature and wind-speed. This is really a highly turbulent phenomenon, but there are good approximations that give the net evaporation as a function of the large scale ('bulk') conditions. In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible."
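A toy sketch of the broad-band idea (not a real radiation code: the spectrum below is invented, and real schemes use more careful band constructions such as correlated-k methods), comparing a point-by-point "line-by-line" transmission calculation against one using 40 band-mean coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_bands, path = 10_000, 40, 1.0   # 'path' = absorber amount, arbitrary units

# An invented smooth absorption spectrum (two broad features plus a weak,
# slightly noisy continuum) sampled at 10,000 wavenumbers -- illustration only.
wn = np.linspace(100.0, 2500.0, n_points)                      # cm^-1
k = (2.0 * np.exp(-((wn - 667.0) / 120.0) ** 2)                # "CO2-like" feature
     + 0.5 * np.exp(-((wn - 1600.0) / 300.0) ** 2)             # "H2O-like" feature
     + 0.02 + 0.02 * rng.random(n_points))

# "Line-by-line": compute transmission at every spectral point, then average
t_lbl = np.exp(-k * path).mean()

# "Broad-band": average k within each of 40 equal-width bands, then transmit
band = (np.arange(n_points) * n_bands) // n_points             # band index per point
k_band = np.bincount(band, weights=k) / np.bincount(band)
t_band = np.exp(-k_band * path).mean()

print(f"line-by-line: {t_lbl:.4f}   broad-band: {t_band:.4f}")
```

The two numbers come out close here because the within-band variation is modest; where it is not, real schemes use cleverer band constructions, which is exactly the kind of empirical detail being "parameterised".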
|
|
|
Post by socold on May 23, 2009 0:41:57 GMT
steve writes " not uncertainty in the greenhouse forcings." You can go on repeating the mantra that we know what greenhouse forcings are "accurately". Until I see actual experimental data as to what the radiative forcing for a doubling of CO2 is, so far as I am concerned, the values are meaningless. There is experimental data for the amount of IR absorbed by different amounts of co2 at different pressures. There is experimental data for IR transmission at different heights the atmosphere. Experimental data for co2 concentrations throughout the atmosphere. The ability of co2 to absorb IR can be derived from quantum theory, again thanks to experiments. We know how much co2 is in the atmosphere thanks to experiments. We know Earth's emission spectrum thanks to experiments. It's all in place to calculate the figure. We don't need to physically double co2 in the atmosphere (which is the only possible experiment I can imagine you are demanding) to calculate the forcing from doubling co2.
|
|
|
Post by icefisher on May 23, 2009 0:57:27 GMT
What are parameterisations? In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible."

You mean what they do is build the model according to the way they think the system works, then massage the coefficients enough to continue the warming like it had been in the previous 10 years. That was a neat trick in 1998, but you won't see them doing that for 2008. Do you have a clue why?
|
|
|
Post by magellan on May 23, 2009 3:16:02 GMT
What are parameterisations?

www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/

"Some physics in the real world, that is necessary for a climate model to work, is only known empirically. [...] In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible."

Why are you referencing the fraudsters at ReinventedClimate? Gavin Schmidt is not giving an accurate accounting of just how much parametrization is involved. Seriously socold, there is not one single climate model registered as IV&V. If there were, and it were based on standardized software validation practices, you'd have me hook, line and sinker as a #1 fan of climate models. Read the following and hopefully you'll begin to understand the smoke and mirrors that Gavin Schmidt so willfully pens on his blog:

climatesci.org/2008/11/28/real-climate-misunderstanding-of-climate-models/
climatesci.org/2009/01/29/real-climate-suffers-from-foggy-perception-by-henk-tennekes/
climatesci.org/2009/02/04/dissecting-a-real-climate-text-by-hendrik-tennekes/
|
|
|
Post by steve on May 23, 2009 10:20:25 GMT
Magellan, thank you for being so constructive.
|
|
|
Post by steve on May 23, 2009 10:26:31 GMT
What are parameterisations? In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible." You mean what they do is build the model according to the way they think the system works then massage the coefficients to an adequate extent to continue the warming like it had been in the previous 10 years. That was a neat trick in 1998 but you won't see them doing that for 2008. Do you have a clue why?

This is a misunderstanding. The parametrizations are not tuned to produce the observed warming; they are tuned to produce the observed average climatology. That is the reason why validating them in weather models is useful: the parametrizations are tested at a range of model resolutions, and in a range of different environments and seasons. Tuning of parametrizations is different from the suggestion I made in my post that 20th century hindcasts were unreliable because they could be the result of chosen assumptions about the amount of aerosols in the atmosphere.
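A minimal sketch of what that kind of tuning looks like, using the bulk evaporation formula mentioned in the RealClimate quote. The "observations" below are synthetic and the coefficient values invented, so this only illustrates the procedure: the functional form is fixed, and one coefficient is fitted to mean climatology, not to any temperature trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bulk formula: evaporation E = c_e * rho * U * (q_sat - q_air).
# The form is known; the transfer coefficient c_e is the tunable part.
rho = 1.2                                    # air density, kg/m^3
wind = rng.uniform(2.0, 12.0, 200)           # large-scale wind speed, m/s
dq = rng.uniform(0.002, 0.008, 200)          # humidity deficit, kg/kg

true_ce = 1.3e-3                             # pretend "real world" value
obs_evap = true_ce * rho * wind * dq * (1.0 + 0.05 * rng.standard_normal(200))

# Least-squares fit of c_e against the synthetic "observed" climatology
x = rho * wind * dq
ce_tuned = float(x @ obs_evap) / float(x @ x)
print(f"tuned c_e = {ce_tuned:.2e}  (value used to generate the data: {true_ce:.2e})")
```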
|
|
|
Post by steve on May 23, 2009 10:27:44 GMT
steve writes " not uncertainty in the greenhouse forcings." You can go on repeating the mantra that we know what greenhouse forcings are "accurately". Until I see actual experimental data as to what the radiative forcing for a doubling of CO2 is, so far as I am concerned, the values are meaningless. What was wrong with the experimental evidence I showed you?
|
|
|
Post by poitsplace on May 23, 2009 11:05:08 GMT
steve writes " not uncertainty in the greenhouse forcings." You can go on repeating the mantra that we know what greenhouse forcings are "accurately". Until I see actual experimental data as to what the radiative forcing for a doubling of CO2 is, so far as I am concerned, the values are meaningless. What was wrong with the experimental evidence I showed you? The real problem is that there is a difference between testing absorption by CO2 in a tube in a lab and the absorption in the real world. In a recent (technically still ongoing) experiment...we used computer models to work out what sort of impact such absorption would have on the distribution of atmospheric temperatures and then released enough CO2 into the atmosphere of a test planet to raise its atmospheric CO2 concentration by approximately 40%. The expected temperature distribution failed to show up. During the same experiment we also checked to see if any feedbacks were detected. Although scientists were unable to determine the sources of the observed warming (the test planet had been warming anyway)...the observed warming fell short of the warming expected from only the CO2 forcing in the hypothesis. As such it should be assumed that the feedbacks are negative overall. Apparently shining a light down a tube in a lab has little to do with real-world conditions or the computer models are wrong. Either way, it's a case of garbage in, garbage out...and the hypothesis has been shown to be largely incorrect.
|
|
|
Post by jimcripwell on May 23, 2009 11:07:57 GMT
steve writes "It's all in place to calculate the figure. We don't need to physically double co2 in the atmosphere (which is the only possible experiment I can imagine you are demanding) to calculate the forcing from doubling co2. "
We need to be careful with the use of the word "calculate". In my book it is different from "estimate". We seem to have agreed that it is impossible to do an experiment whereby we increase the amount of CO2 from current levels and MEASURE how much the earth's temperature increases. Assuming this is right, then there can be no experimental data to measure how much the earth warms up as we add CO2 from current levels. Maybe there is a way around this problem. If so, I have not seen you produce a reference to it. We are still relying on Myhre et al. 1998, which uses non-validated computer models. Where is the reference which includes only experimental data, and finishes up with a value for how much temperature increases as a function of how much CO2 there is in the air, at current levels? I am not sure what you mean by "it is all in place to calculate the figure", but if it is all in place, why hasn't someone actually done the calculations?
|
|
|
Post by jimcripwell on May 23, 2009 11:10:32 GMT
Sorry for the minor error. socold wrote what I attributed to steve, but this responds to steve's comment as well.
|
|
|
Post by steve on May 23, 2009 11:27:27 GMT
Sorry for the minor error. socold wrote what I attributed to steve, but this responds to steve's comment as well.

I provided the following reference, which is intended to validate the radiation models that Myhre compared and contrasted in the work to calculate greenhouse gas forcing. It is based on satellite observations of outgoing longwave radiation with different amounts of CO2. This doesn't link amounts of CO2 to temperature, but it does help to validate the radiation model, and it does demonstrate that an energy imbalance potentially exists. I would say that if an energy imbalance exists, then this leads to warming. By reference to basic physics *and* the validated radiation models, it is possible to calculate what uniform temperature rise would overcome the radiation imbalance, which is the first step in establishing the climate impact of increased greenhouse gas.

Brindley, H. E., and J. E. Harries (2003): "Observations of the Infrared Outgoing Spectrum of the Earth from Space: The Effects of Temporal and Spatial Sampling", Journal of Climate. Space and Atmospheric Physics Group, Imperial College, London, United Kingdom (manuscript received September 16, 2002, in final form April 10, 2003).
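A back-of-envelope version of that "first step", using only the Stefan-Boltzmann law linearised about Earth's effective emission temperature. The ~255 K and 3.7 W/m^2 figures are standard textbook values, not results from this thread, and feedbacks are deliberately ignored:

```python
# No-feedback temperature response: linearising OLR = sigma * T^4 about the
# effective emission temperature T_e gives dT = dF / (4 * sigma * T_e**3).
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_E = 255.0         # Earth's effective emission temperature, K (textbook value)

def no_feedback_dT(forcing_wm2: float) -> float:
    """Uniform warming needed to radiate away a forcing, feedbacks ignored."""
    return forcing_wm2 / (4.0 * SIGMA * T_E ** 3)

print(f"{no_feedback_dT(3.7):.2f} K")   # roughly 1 K for doubled CO2, pre-feedback
```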
|
|
|
Post by nautonnier on May 23, 2009 13:03:59 GMT
The "it's calculated so it must be true" argument. It is not yet possible to model accurately the weather in a 10 km dome over a point on the surface more than an hour into the future. People are spending a LOT of time trying, as it would save lives and lots of money. Accuracies of 1 km are difficult but almost achievable out to an hour. Forecasting what will happen in 6 months' time becomes an educated guess. There are many, many industries that will pay extremely good money to know what will happen in a year or more's time. So as your modellers are SO capable, I am amazed that they need any funding; they would be awash with eager industry customers. They are not sufficiently good, and so they are not awash with customers. It's easy to get politicians looking for a cause and media looking for a story convinced, but it gets to be different when you enter an industrial contract with real money involved and where it will cost you real money if you get it wrong.

"Forecasting what will happen in 6 months time becomes an educated guess." It can be forecast that in 6 months' time the northern hemisphere will be entering a cold period known as winter and the southern hemisphere will be entering a warm period known as summer. This can be forecast from the physics alone. So the idea that forecasting anything 6 months ahead is impossible is false. The mistake you are making is thinking weather models, which look at small-scale stuff like the paths of individual storms, are equivalent to climate models, which involve far bigger processes.

Sure, and it's quite simple to say that the Earth is likely to continue rotating in the same direction. But your climate models are saying that the cold in winter will be so much less that there will be no ice at the poles, making the Arctic the same climate as Devon perhaps. If your models had stopped at just saying winter is likely to be cooler than summer, then there would be no debate here. However, you are telling us the number of degrees of warming caused by CO2 and projecting that forward to an unstoppable 'climate catastrophe' in less than 100 years' time. The problems of ordering and inaccuracies scale up, SoCold: modelling a chaotic system at higher temporal granularities makes large errors more likely, not less. Give the board here (and this applies to glc and Steve) a reference to a climate model that has been CORRECT over a climatological period, say 100 years: one that was given the start parameters of 1909 and has then run correctly, modelling what has happened in the past century all the way to what is happening now. This should be easy; you have all the models from AR4 to choose from. There must be a climatologist out there who felt that her/his model should be validated before using it to persuade politicians to tax industries into bankruptcy and put their CEOs on trial for crimes against humanity?
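The weather-versus-climate point being argued here can be made concrete with a toy chaotic system. Lorenz-63 is not a climate model, so this proves nothing about GCM skill either way; it only shows that "specific trajectories are unpredictable" and "long-run statistics are stable" can both be true of the same equations.

```python
import numpy as np

def deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(x0, n_steps=50_000, dt=0.01):
    """Classical 4th-order Runge-Kutta integration, returning the full path."""
    out = np.empty((n_steps, 3))
    s = np.asarray(x0, dtype=float)
    for i in range(n_steps):
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

a = integrate([1.0, 1.0, 1.0])
b = integrate([1.0, 1.0, 1.0 + 1e-9])    # near-identical starting state

print("states at step 10,000:", a[10_000], b[10_000])        # completely different
print("long-run mean of z:", a[:, 2].mean(), b[:, 2].mean())  # nearly identical
```

Whether the real climate's forced statistics are as well behaved as this toy's is, of course, the actual point in dispute.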
|
|
|
Post by socold on May 23, 2009 14:29:51 GMT
What are parameterisations?

www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/

"Some physics in the real world, that is necessary for a climate model to work, is only known empirically. [...] In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible."

Why are you referencing the fraudsters at ReinventedClimate? Gavin Schmidt is not giving an accurate accounting of just how much parametrization is involved.

I don't care how much parametrization is involved, given what it is. Taking the general output of a more complex small-scale model is not a bad thing. It's similar to the use of trigonometry lookup tables before calculators. You have a table which has general results from a "smaller scale" function and you use those general results to do your work. And then skeptics come along and complain that you are using a lookup table and not the actual calculations. However, the complaint would only be valid if it made any significant difference.

The models are independently validated by the existence of independent models. "IV&V" is just a rubber stamp. It's surprising you think that would make any difference to you.

Well, I see contention over a few points that I don't consider an issue. It seems to avoid the meat of the models and pick around the edges. I also notice there's no way to comment on that blog.
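The lookup-table analogy, made concrete (the 64-entry table size is arbitrary): precompute sine on a coarse grid, interpolate between entries, and measure how much the answer actually differs from the full calculation, since the size of that difference is the only complaint that would matter.

```python
import math

N = 64                                          # coarse "band" resolution
TWO_PI = 2.0 * math.pi
table = [math.sin(TWO_PI * i / N) for i in range(N + 1)]

def sin_lookup(x: float) -> float:
    """sin(x) by linear interpolation in a 64-entry table over one period."""
    t = (x % TWO_PI) / TWO_PI * N
    i = int(t)
    frac = t - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

worst = max(abs(sin_lookup(i / 500.0) - math.sin(i / 500.0)) for i in range(5000))
print(f"worst error over [0, 10]: {worst:.5f}")  # about 1e-3 -- usually negligible
```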
|
|
|
Post by socold on May 23, 2009 14:32:13 GMT
What are parameterisations? In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible." You mean what they do is build the model according to the way they think the system works then massage the coefficients to an adequate extent to continue the warming like it had been in the previous 10 years. That was a neat trick in 1998 but you won't see them doing that for 2008. Do you have a clue why?

No, by "'tuned' to reproduce the observed processes" they mean patterns in precipitation, the magnitude of the seasonal cycle, etc. Without any forcing applied, the models show a flat temperature trend. There is no global warming built into the models.
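That behaviour is easy to demonstrate with a zero-dimensional energy-balance sketch. The heat capacity and feedback parameter below are invented illustrative values, not numbers from any GCM, so only the qualitative point carries: zero forcing gives a flat anomaly, and a constant forcing gives warming that levels off at F/lambda.

```python
import numpy as np

C = 8.4e8        # effective heat capacity, J m^-2 K^-1 (assumed: ~200 m of ocean)
LAM = 1.2        # feedback parameter, W m^-2 K^-1 (assumed, illustrative)
DT = 86_400.0 * 30.0                    # one-month time step, seconds

def run(forcing_wm2: float, n_months: int = 1_200) -> np.ndarray:
    """Integrate C * dT/dt = F - LAM * T from a zero anomaly."""
    temps = np.empty(n_months)
    t = 0.0
    for i in range(n_months):
        t += DT * (forcing_wm2 - LAM * t) / C
        temps[i] = t
    return temps

print("no forcing, anomaly after 100 yr: ", run(0.0)[-1])   # stays exactly 0
print("3.7 W/m^2 forcing, after 100 yr:  ", run(3.7)[-1])   # approaches 3.7/1.2 ~ 3.1 K
```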
|
|