|
Post by socold on Apr 6, 2010 20:41:37 GMT
Steve Easterbrook's preliminary validation of one major climate model identified 0.3 errors per 10,000 lines of code; according to Easterbrook, Space Shuttle software has around 3 times as many errors per 10,000 lines. He reported that here, but was sent away with a flea in his ear. As that was my proposed measure of reliability, I hereby announce the models to be reliable.

Although the fashion du jour on the thread seems to be moving from an interest in reliability to an interest in accuracy, which is great, because that's what I think is more important anyway. As Poitsplace said, it matters not how many coding errors are in a model if the underlying physical basis for the model is incorrect. I really think the focus on the software engineering practices behind the models is beside the point; the point should be how well the models reproduce observed climate behavior. If a model were to reproduce climate behavior exactly, I don't really care how many bugs it has or how badly written the code is.
|
|
|
Post by socold on Apr 6, 2010 20:43:07 GMT
Quoted earlier in the thread: "But given the way that models are validated (through comparison with elements of the real world), arguably the most important, and certainly the most interesting, documentation is the validation documentation, which is the results published in scientific papers. The most perfectly designed and structured model is uninteresting if it predicts an ice age next Christmas. The worst-designed piece of code, written with lots of GOTO statements and recursive loops in one giant subroutine, that manages to predict weather and climate for the next 2 months would be very interesting, though its design would probably make adding new science very hard."

If I had bothered reading the whole thread and seen this, I wouldn't have bothered repeating you in my post above...
|
|
|
Post by magellan on Apr 6, 2010 23:48:05 GMT
socold wrote: "If a model were to reproduce climate behavior exactly, I don't really care how many bugs it has or how badly written the code is."

If a model were to reproduce climate behavior exactly... Don't you realize by now that line of argument is a logical fallacy?
|
|
|
Post by poitsplace on Apr 7, 2010 3:15:15 GMT
socold wrote: "As that was my proposed measure of reliability, I hereby announce the models to be reliable."

Yes, the ability of climate models to reproduce an imaginary world based on imaginary (and highly parametrized) physics... is quite good. Meanwhile, out here in the REAL world... we can see some rather obvious problems. For instance, the temperature of the ground goes up by around 3C according to these models, BUT we've seen no change in the gradient of the troposphere. If the temperature of the atmosphere goes up at the same rate as the temperature of the surface, then the emissions go up in a pretty straightforward way... and the 3.7 watts of "forcing" has to somehow, magically, produce about a 6 WATT increase in outgoing radiation. Hmmm... bit of a poser there. Then we hit water vapor, and with the oceans showing (for that same 3C increase) an average increase of about 2C... the increase in vapor pressure REQUIRES that the convection/latent heat increase by about 11 WATTS! Now you'll excuse my incredulity, but... how does it make the slightest hint of sense that 3.7 watts of CO2 "forcing" can maintain 17 watts of additional energy transfer across the troposphere? I don't need fancy, schmancy computer models to tell me that's a load of crap.
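(Aside, for readers who want to check the arithmetic: figures like these follow from two standard rules of thumb, the linearized Stefan-Boltzmann relation dF ≈ 4σT³·ΔT for emitted radiation and the Clausius-Clapeyron scaling of roughly 7% more saturation vapor pressure per kelvin. The sketch below is a minimal reconstruction under assumed, illustrative inputs, not the poster's actual calculation: the effective emission temperature of 255 K and the mean latent heat flux of 80 W/m² are assumptions, and the emitted-flux number depends strongly on the assumed temperature, which is why it differs from the ~6 W quoted above.)

```python
# Back-of-envelope check of the flux figures discussed above.
# Assumed, illustrative inputs (not from the post): effective emission
# temperature ~255 K, global-mean latent heat flux ~80 W/m^2, ~7%/K scaling.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def extra_emission(t_emit_k, dt_k):
    """Linearized increase in emitted flux: dF = 4 * sigma * T^3 * dT."""
    return 4 * SIGMA * t_emit_k**3 * dt_k

def extra_latent_heat(base_flux_wm2, dt_ocean_k, cc_per_k=0.07):
    """Latent-heat increase if evaporation tracks saturation vapor pressure."""
    return base_flux_wm2 * cc_per_k * dt_ocean_k

print(extra_emission(255.0, 3.0))    # ~11.3 W/m^2 for 3 K of warming at 255 K
print(extra_latent_heat(80.0, 2.0))  # ~11.2 W/m^2, close to the "about 11 watts" above
```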
|
|
|
Post by steve on Apr 7, 2010 9:59:47 GMT
Good examples of the 2% of validation done after changes in atmospheric components would be the projections of warming made in the 1970s and 1980s that were followed by a run of successively warmer decades, the cooling following Pinatubo, stratospheric cooling, and increases in humidity. Though for many phenomena the climatological changes are hard to determine, owing to the poorer observational coverage of the past.

poitsplace wrote: "It is always amazing to me... even though I know what deficiency causes the problem... when people do things like this. Yes, they made a prediction for the 80s and 90s... and then at the end of the 90s they suddenly discovered the PDO and the temperature increase leveled off NOT where the CO2 forcing hypothesis said... but where an ocean-current dominated model said."

You are overstating the importance of the PDO and understating the ability of climate modellers to understand the limitations of their models. The models of the 80s and 90s were largely atmosphere models, with mostly slab oceans that dealt with atmosphere-ocean energy transfers at a basic level. Even so, they showed variability that undermines your implication that the CO2 "hypothesis" requires non-stop warming.

Now you are suggesting we have had a shifting ocean current that may have caused the warming. Yet the descendants of those atmosphere models, which include more complex oceans, have incorporated these shifting currents in their seasonal and decadal configurations, and have reproduced and forecast the warming trends, including the slower warming trends, while still leaving a role for CO2-induced warming. In short, the hypothesis that the PDO could be the cause of the warming has gained no support from either observations or models. It hasn't cooled, and the signs are that widespread warming is starting again.

The evidence of the Holocene and the glacial stages is that the climate is sensitive. Obviously the models have difficulty in reproducing a glacial cycle, both because there are limited observations of the forcings and because you need a heck of a fast model to run for tens of thousands of years (a typical GCM simulates roughly 2-10 model years per day on a supercomputer, depending on its complexity). They don't have too much difficulty in demonstrating that they would be capable of simulating the Holocene, though.

If the forcing of CO2 were magically balanced during the glacial cycles, what caused the 10C difference between the peaks and the troughs? Why didn't the magical thing that balanced the CO2 changes also magically balance the cause of the cycles?
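(To put that model-speed figure in perspective, here is a minimal sketch using the 2-10 model-years-per-day range quoted above; the run lengths are illustrative assumptions, not figures from the post.)

```python
# Wall-clock cost of long paleoclimate runs at the quoted GCM speeds.
# The 2-10 model-years/day range is from the post; the spans are illustrative.

def wallclock_days(simulated_years, model_years_per_day):
    """Days of supercomputer time needed to simulate the given span."""
    return simulated_years / model_years_per_day

for span, label in [(10_000, "Holocene-scale run"), (100_000, "full glacial cycle")]:
    for speed in (2, 10):
        days = wallclock_days(span, speed)
        print(f"{label}: {span:,} model years at {speed} yr/day "
              f"-> {days:,.0f} days (~{days / 365:.1f} years of compute)")
```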
|
|
|
Post by trbixler on Apr 7, 2010 15:09:18 GMT
"A New And Effective Climate Model" Not to say that it cannot be done just that it is hard to do. Sometimes I hear people suggest that the existing programs are reliable, my view is that they have not passed any tests of prediction. Programming large scale systems is hard. Lifetimes can be spent in fixing minor bugs. Major bugs are of philosophy and can render complete efforts worthless. " As they stand at present the models assume a generally static global energy budget with relatively little internal system variability so that measurable changes in the various input and output components can only occur from external forcing agents such as changes in the CO2 content of the air caused by human emissions or perhaps temporary after effects from volcanic eruptions, meteorite strikes or significant changes in solar power output. If such simple models are to have any practical utility it is necessary to demonstrate that some predictive skill is a demonstrable outcome of the models. Unfortunately it is apparent that there is no predictive skill whatever despite huge advances in processing power and the application of millions or even billions of man hours from reputable and experienced scientists over many decades." wattsupwiththat.com/2010/04/06/a-new-and-effective-climate-model/
|
|
|
Post by steve on Apr 7, 2010 16:11:18 GMT
They predicted warming and it warmed. They predicted cooling after Pinatubo and it cooled. They predicted a cooler stratosphere. They can predict Atlantic storm activity. They have passed many other tests of prediction. What you really mean is that you have set your bar at a level they have not reached. Possibly you do not understand that models are not just about predictions of warming; they are also useful tools for understanding the climate.
The WUWT guy is mostly bonkers. The first sentence would be better written as "As they stand at present the models assume a generally static global energy budget with relatively little internal system variability that aims to be in line with observed climatology".
Looking for causes in undiscovered "internal system variability" is all very well (and I don't doubt it exists). But you are often unwise to bet on them, particularly when you don't know whether the variability is in your favour or, in fact, against you.
Obviously there is real internal variability that can be observed but that is hard to predict (ENSO, PDO etc). But while models can't necessarily predict when ENSO happens they can exhibit ENSO and PDO behaviour to make predictions of what happens to the earth's energy balance during their different phases. That's why models can be useful as tools for understanding.
|
|
|
Post by icefisher on Apr 7, 2010 16:27:16 GMT
steve wrote: "While models can't necessarily predict when ENSO happens they can exhibit ENSO and PDO behaviour... That's why models can be useful as tools for understanding."

That's fine, but fundamentally there seems to be something wrong with the models. When actual greenhouse experiments are conducted, we find about 95% of near-surface heat being transported by convection, far more than is depicted in the various budget diagrams. How are the values in these diagrams established? Seems to me the answer to that is likely quite simple... they assumed the recent warming we saw in the 80s and 90s was due to surface IR being captured by CO2 and then emitted back at the surface, rather than to variations in natural internal processes. Of course, to make that practical they needed IR to be a big enough surface player in the first place, so they plugged in the necessary process at the surface... experimental evidence of other natural processes being responsible for surface cooling notwithstanding. In other words, they needed to adjust the budget to make it possible. Yet when the natural anomalies that created the surface warmth in the first place actually reversed recently, they became like the three apes: hear no evil, see no evil, speak no evil. Can you offer anything at all concrete to dispel that view?
|
|
|
Post by steve on Apr 8, 2010 9:51:03 GMT
That 95% is a meaningless figure if you don't explain how you calculated it. If you can explain how you calculated it then I will explain why the figure is either uninteresting or nonsense. 100% of my money flows through my bank account. Is that a useful figure for telling you anything?
Since the models are apparently so reliable that you have to resort to arguing about the accuracy of "budget diagrams": where is the "budget diagram" that includes a transfer of "near surface heat"? Probably the most famous of such budget diagrams does not directly reference the transfer of near-surface heat, for reasons that are obvious to me.
|
|
|
Post by icefisher on Apr 8, 2010 14:21:43 GMT
steve wrote: "That 95% is a meaningless figure if you don't explain how you calculated it. If you can explain how you calculated it then I will explain why the figure is either uninteresting or nonsense."

It is an experiment with two greenhouses. One, "A", has glass that blocks IR (and convection); the other, "B", has a covering that is transparent to IR. The temperature rise is measured in each. Greenhouse "A"'s heat increase is 95%+ that of "B".
|
|
|
Post by trbixler on Apr 8, 2010 15:37:15 GMT
|
|
|
Post by steve on Apr 8, 2010 17:08:24 GMT
icefisher wrote: "It is an experiment with two greenhouses. One, "A", has glass that blocks IR (and convection); the other, "B", has a covering that is transparent to IR. The temperature rise is measured in each. Greenhouse "A"'s heat increase is 95%+ that of "B"."

I don't see that such an experiment bears a relation to any "AGW" budget diagram or model.
|
|
|
Post by icefisher on Apr 8, 2010 17:19:22 GMT
steve wrote: "I don't see that such an experiment bears a relation to any "AGW" budget diagram or model."

Stumped, huh? LOL! Then what do you see bearing a relation to any budget diagram or model... say, in the form of an empirical study or experiment?
|
|
|
Post by steve on Apr 8, 2010 17:24:41 GMT
So the models did quite well for 16 years. The dip, or divergence, may be due to a temporary or one-off change in the earth system, in which case it has little impact in the long run. Or it could be due to over-sensitive models, which is Lucia's proposition. The evidence is not sufficient to be confident of the latter, which is what Lucia originally claimed before she became fixated on starting at 2000/2001. Just to be clear (as a quick look at the graph can give the wrong impression): *all* the 20-year trends are positive. The dip is a lowering of the positive trend, not a dip in temps.
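(For anyone unsure what it means for *all* the 20-year trends to stay positive while the trend "dips", here is a minimal illustration; the anomaly series is synthetic, generated purely for this example, not real temperature data.)

```python
# A series can keep rising while its fitted 20-year trend declines.
# The anomalies below are synthetic, for illustration only.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2010)
# 0.02 C/yr warming that flattens to 0.005 C/yr after 2000, plus noise.
anoms = np.where(years < 2000,
                 0.02 * (years - 1980),
                 0.02 * 20 + 0.005 * (years - 2000)) + rng.normal(0, 0.05, years.size)

WINDOW = 20
for start in range(0, years.size - WINDOW + 1, 5):
    y, a = years[start:start + WINDOW], anoms[start:start + WINDOW]
    slope = np.polyfit(y, a, 1)[0]  # least-squares linear trend, C/yr
    print(f"{y[0]}-{y[-1]}: trend = {slope:+.4f} C/yr")  # all positive, but declining
```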
|
|
|
Post by steve on Apr 8, 2010 17:28:21 GMT
icefisher wrote: "Then what do you see bearing a relation to any budget diagram or model... say, in the form of an empirical study or experiment?"

I'm stumped as to why you think an experiment involving greenhouses tells you about convection in models and budget diagrams. The Kiehl and Trenberth budget diagram brings together a number of empirical studies and experiments. www.geo.utexas.edu/courses/387H/PAPERS/kiehl.pdf
|
|