|
Post by icefisher on Apr 13, 2010 17:23:04 GMT
Parametrizations are not simple bodges. There are physical assumptions that underlie them, and those assumptions have to be justified by comparing their outputs with observations. So why are we not seeing these comparisons, Steve? The answer is no doubt that some models got closer than others because they parameterized things like volcanic variation in different ways. But since volcanic activity has not been particularly high, all that does is reflect negatively on the model results, so that comparison is not publicly unveiled. If hits were made by individual models they would be paraded around, with scientists smiling widely over scoring a bullseye. But ignorance is not a good way to win the next grant contest; it also takes away from the argument that some models were close to getting it right, and if you publicize that you go on the shiitlist.
The same can be found throughout the science community. Dr Lonnie Thompson, who with his wife (a member of the NAS) runs the Byrd Center at Ohio State, regaled us regularly with his expensive taxpayer-funded expeditions to Peru, observing the Qori Kalis glacier and predicting it would be gone by 2012 and half its size by his next visit, which was supposed to be a couple of years ago. As late as last year his website was festooned with pictures of the shrinking glacier and attributions to global warming. Today, instead of an update, all the old pictures and predictions are gone. What happened to Qori Kalis? Is it now the size of an ice cube tray? Nobody is talking. Wanna take a bet? www.timesonline.co.uk/tol/news/world/us_and_americas/article1392295.ece
P.S. And oh yeah, Thompson led expeditions back to this area not just in 2007 but in 2008 and 2009, apparently desperately trying to document a result in support of his predictions. But does the public get any information? . . . uh . . . no!
|
|
|
Post by poitsplace on Apr 13, 2010 17:28:13 GMT
You cannot EVER trust parametrization entirely. In things like crash test models or aerodynamic simulations the models have sometimes been tested MILLIONS of times and tweaked in a variety of circumstances...because although the basic theories were known, the intricate interactions of those theories often produced odd results.
Each climate model has technically only ever been tested ONCE...and against a very dodgy record. The results are NOT very reliable. This sort of "Oops, I got it wrong before but THIS TIME..." nonsense can go on for centuries without actually getting anything solved.
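The gap poitsplace describes, between an engineering model exercised millions of times and a climate model tested once against one record, can be illustrated with a toy calibration exercise. The sketch below is purely hypothetical (the model, parameters, and "observations" are invented for illustration, not taken from any GCM); it shows how two quite different parameter pairs can fit a single noisy record almost equally well, which is why a single historical record is a weak constraint:

```python
import math
import random

random.seed(0)

# Toy "climate": response depends on a direct sensitivity and a lagged feedback.
def toy_model(forcing, sensitivity, feedback):
    out = [0.0]
    for f_t in forcing[1:]:
        out.append(sensitivity * f_t + feedback * out[-1])
    return out

n = 100
forcing = [2.0 * t / (n - 1) for t in range(n)]   # idealized, slowly rising forcing
truth = toy_model(forcing, 0.5, 0.5)
obs = [x + random.gauss(0.0, 0.1) for x in truth]  # the one noisy "record" we have

def rmse(sensitivity, feedback):
    pred = toy_model(forcing, sensitivity, feedback)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)

# Two different parameter pairs; both have steady-state gain s / (1 - f) = 1.0
candidates = [(0.5, 0.5), (0.8, 0.2)]
rmses = [rmse(s, f) for s, f in candidates]
for (s, f), r in zip(candidates, rmses):
    print(f"sensitivity={s}, feedback={f}: RMSE={r:.3f}")
```

If the record constrained the parameters tightly, the two misfit values would differ sharply; here they are nearly indistinguishable, so this single record cannot tell the two model versions apart even though they would behave differently under other forcing histories.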
|
|
|
Post by icefisher on Apr 13, 2010 21:16:00 GMT
poitsplace: "You cannot EVER trust parametrization entirely. [...] Each climate model has technically only ever been tested ONCE . . . and against a very dodgy record. The results are NOT very reliable."
The only thing the models are for is to build the scary scenario. And in typical fashion of the NGOs, they are replicated to argue for the idea of consensus and replication. But as Trenberth says, it is a travesty that the climate is not tracking the models, and they have no physical explanation for why. What it shows is that the modeling isn't ready for primetime or for reliance, and the real travesty is how much money they threw into building the scary scenarios when no indication exists that there is anything to be frightened of. It's like that architect trying to make an excuse to the homeowner who paid for an ocean view that there are other benefits of a two-story house, despite having sold the job on the ocean view. Just like what Steve has been trying to sell us on again.
|
|
|
Post by steve on Apr 14, 2010 10:24:53 GMT
You do see the comparisons between the parametrizations and the observations every day if you check the weather forecast.
You and poitsplace are making the mistake of assuming that the projections of warming temperatures are solely dependent on the validity of the parametrizations. They are not. They are dependent on our knowledge of basic physics, our application of this physics that has given rise to 30 years of warming projections followed by 30 years of warming, and evidence of past climates that show the climate is sensitive to small changes.
Maybe it is not so much a mistake. I accept that the detailed impacts of warming are hard to predict. I don't accept that this is because there is something fundamentally wrong with the parametrizations such that we should assume warming is unlikely. Assuming that the former implies the latter is the mistake that is being made.
|
|
|
Post by steve on Apr 14, 2010 10:47:45 GMT
Magellan,
That statement is not an either/or statement. It does not say that trusting the developer is clever and honest means that you trust the parametrization is correct. The statement says that *not* trusting the developer will lead to lack of trust in the parametrizations.
I also think that being wrong is, at some level, a definition of being stupid. So you are over-analysing things.
The context of the post should have made this distinction clear to you. The "trust" depends more on whether the parametrizations match the observations than on the mindset of the developer as suggested by ratty.
|
|
|
Post by magellan on Apr 14, 2010 13:24:09 GMT
Steve: "That statement is not an either/or statement. [...] The 'trust' depends more on whether the parametrizations match the observations than on the mindset of the developer as suggested by ratty."
If you had evidence of GCM reliability and the "basic physics" integrated into them, you'd have already posted it. If there were a model that closely matched reality, we'd all know it by name; but then that would reveal there's nothing for warmologists to get their panties in a bind about. So where is the GCM that predicted 10+ years of flat-to-cooling temperatures? Now all you can do is try to defend parametrization (fudging) as if it were a normal part of model validation. According to Roger Pielke, there are only three components of "basic physics" incorporated into the models. The rest are an undetermined number of degrees of freedom; a bunch of bumpkins pulling levers.
|
|
|
Post by icefisher on Apr 14, 2010 14:01:18 GMT
Steve: "You do see the comparisons between the parametrizations and the observations every day if you check the weather forecast. You and poitsplace are making the mistake of assuming that the projections of warming temperatures are solely dependent on the validity of the parametrizations. They are not."
Where did you learn to read, Steve . . . in France? Are you a frog? If you go back through this thread, I have said more than once that you don't have to get the parameterization right; you have to get the modeling of the parameter right. If you get the modeling of the parameter right, then you can look at observations and explain the variance between your model prediction and the climate as being because, say, volcanic activity was unexpectedly high, and show that when we rerun the model with the actual volcanic activity it scores a hit. You are building a strawman here out of imaginary straw.
Steve: "They are dependent on our knowledge of basic physics, our application of this physics that has given rise to 30 years of warming projections followed by 30 years of warming, and evidence of past climates that show the climate is sensitive to small changes."
The extension of that argument is arguing that if, in a huge complex model, the modeler got the right answer for 2 plus 2 in one calculation, then the model must be good. LOL! It's a falsehood, Steve. The modelers clearly know some physics; they didn't go to school for 20 years to learn nothing at all. It's just that they don't know the actual physics of the atmosphere, or they would be able to correctly model the atmosphere and explain how the models correctly represent the real world. But instead we see guys like Kevin Trenberth pulling on his hair, exclaiming it's a "Travesty!" They don't know how the climate works.
Steve: "Maybe it is not so much a mistake. I accept that detailed predictions of the impact of warming are hard to predict. I don't accept that this is because there is something fundamentally wrong with the parametrizations such that we should assume that warming is unlikely. Assuming that the former means the latter is the mistake that is being made."
I don't know why you would think that. With our current state of knowledge warming is likely, but you don't need a CO2 theory to come to that conclusion. That brings us back to that unit root problem. It's been getting warmer for a long time, it's likely to continue, and history strongly suggests this warming predates significant emissions. If you dispute that, explain why the 33 years of warming from 1911 to 1944, according to HadCRUT, exceeds the most recent 33 years of warming by 0.01 degC. Both periods started at the end of a cooling period, at the coldest point recorded, followed by a steady and extended warming period. Yet emissions of CO2 increased by more than 10 times from 1911 to the present and were unable to sustain warming as much as warming was sustained a century ago. Why would we not be seeing significantly longer and warmer trends if indeed climate is controlled by CO2? What we saw instead, during the latter period, was a record warming surge that correlated more closely with the recent solar grand maximum than with CO2 increases, but it did not sustain itself . . . CO2 was too weak to carry the ball even back to the line of scrimmage.
P.S. That record surge provided the impetus for what really happened in the AGW community. Loud declarations were made that natural change had been overcome, and it was like the Oklahoma land rush for a scientist to get out front and stake a claim with a prediction to put on their C.V. Nothing but a gold rush in the science community. Hopefully the backlash doesn't reflect too badly on science, but IMHO it is going to take either a redemption that probably isn't coming or a fundamental restructuring of how science is validated. Waiting for redemption would be a mistake.
The science community itself needs to take the lead on reformation. It has at least a brief respite, but that is likely to change with the next Congress, because quite clearly even if the parties in power do not change, the positions of both parties will, and that will translate into campaign promises. The last thing you want is for reformation to take place without scientists at the front of it. Whitewashing commissions will end up doing nothing more than demonstrating to the public that the science community is incapable of policing itself. Think about that.
|
|
|
Post by steve on Apr 14, 2010 15:35:02 GMT
Where did you learn your manners, Icefisher? Are you an American? Some of the same issues apply to parameters as to parametrizations because parametrizations depend on appropriate choice of parameters. As far as I can understand, models behaved quite well when Pinatubo erupted.
As people keep informing me, we can calculate gravity to many decimal places, but can't solve the 3-body problem. Even a perfect copy of the earth will diverge from our earth. We would not know why they diverged because our observing systems are not good enough. That's what Trenberth was referring to.
By including a realistic scenario of CO2, aerosols and solar variations, models are capable of demonstrating 20th Century climate change.
You appear to have latched onto unit roots in the same way you latched onto Akasofu. I'm afraid the world is a physical system governed by physical laws, not a financial system that can be overwhelmed by sentiment.
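For readers unfamiliar with the "unit root" term the two posters are arguing over: a unit-root process, such as a driftless random walk, accumulates its shocks, so it can wander a long way from its starting point with no forcing behind it. The sketch below is purely illustrative (synthetic numbers, not the temperature record, and not a claim for either side):

```python
import random

random.seed(1)

def random_walk(steps, sigma=0.1):
    """x_t = x_{t-1} + e_t: a unit-root process with no deterministic trend."""
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0.0, sigma)
    return x

# Fraction of 100-step driftless walks that still end far from where they started,
# i.e. show an apparent "trend" produced by accumulated noise alone.
trials = 2000
fraction = sum(abs(random_walk(100)) > 0.5 for _ in range(trials)) / trials
print(f"{fraction:.1%} of driftless walks moved more than 0.5 in 100 steps")
```

Whether the instrumental record actually contains a unit root is exactly what is in dispute here; the sketch only shows why the statistical question matters, since a majority of such walks display trend-like excursions.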
|
|
|
Post by scpg02 on Apr 14, 2010 15:37:00 GMT
Steve: "Where did you learn your manners, Icefisher? Are you an American?"
Excuse me?
|
|
|
Post by icefisher on Apr 14, 2010 17:11:22 GMT
Steve: "Where did you learn your manners, Icefisher? Are you an American? Some of the same issues apply to parameters as to parametrizations because parametrizations depend on appropriate choice of parameters. As far as I can understand, models behaved quite well when Pinatubo erupted."
Good manners is to carefully read what you are responding to. If English is not your first language, perhaps you can be excused; Latin-based languages tend to vary in structure, so perhaps there is a reason you are responding with strawman arguments, because that would be the only excuse that does not involve bad manners. And just in case you consider "frog" a racial insult: I have French ancestors, so consider it the equivalent of calling you "brother".
Getting Pinatubo right demonstrates that the volcanic portion of the climate models is working to some extent, but apparently nothing beyond that. So they got 2 plus 2 right in one portion of the model. The lack of success of the model indicates the problem is elsewhere in the model. If the models that came closest to actual had underestimated cooling from volcanic activity, then correcting that would put them even closer. If that were the case we would already have heard about it, as there is no shame in underestimating volcanic activity, and it would have been used to explain at least a portion of the margin of error and to demonstrate competence. However, the stony silence on that element suggests maybe they were close because they overestimated cooling from volcanic activity instead. But declaring that is like declaring yourself twice as stupid.
Steve: "As people keep informing me, we can calculate gravity to many decimal places, but can't solve the 3-body problem. Even a perfect copy of the earth will diverge from our earth. We would not know why they diverged because our observing systems are not good enough. That's what Trenberth was referring to."
Building models with garbage as input results in garbage as output. This is completely, 100% in support of my position of taking money from the modeling work and putting it into observing work. What the models have proved is that it is too soon to be doing modeling.
Steve: "By including a realistic scenario of CO2, aerosols and solar variations, models are capable of demonstrating 20th Century climate change. You appear to have latched onto unit roots in the same way you latched onto Akasofu. I'm afraid the world is a physical system governed by physical laws, not a financial system that can be overwhelmed by sentiment."
I must have missed that paper. So you maintain you have the solution to the mind/body problem, and that the mind is essentially a separate entity from the body, not governed by physical laws? Last I checked that was still a topic of philosophical speculation. If you did establish that, you could then prove wrong the large portion of the community of economists who believe economics follows sets of laws. The issue is that the drivers are complex and difficult to measure . . . same deal with climate . . . and for that reason you should treat both the same. Philosophically, that should apply to any significant extension of a simple physical law as an explanation for a trend in a complex physical system. When you do that, you approach the equivalent of quackery in medical diagnosis, which seems to be more common than not in young sciences.
Let's examine the statement "By including a realistic scenario of CO2, aerosols and solar variations, models are capable of demonstrating 20th Century climate change." So CO2 is ruled out as a cause of the recent divergence because it was given everything in the models. Aerosols have reduced some, but the physical evidence is lacking, due to poor observations on this element in favor of featuring CO2 scenarios. Now solar variations, that's interesting, because what new have we learned here that we didn't know previously? Seems to me that I heard Leif Svalgaard found historical solar variation was in fact less than previously estimated. So you need to be more explicit on this. If solar variation was less, then less explanation exists for the warming of 1911 to 1944. That information does not suggest a better understanding of natural variations.
I have asked you repeatedly to lay this out via a layered model a la Akasofu, showing a series of variables and what they are attributable to . . . and of course, if you consider it important to link these observations to a physical mechanism (your criticism of Akasofu), you should provide references. Now, Trenberth stepping back and claiming that can't be done because observations are too imprecise merely comes around to support what I said above: it is far too premature to be spending money on modeling rather than on improving our ability to observe the world. This is something you should not have to build a model to understand in the first place. But because this became a gold rush, built on untrue speculation of knowing all the important elements of the climate, that money was scandalously wasted . . . money that could have been diverted to improving observations instead, which would have put us years ahead of where we are now.
|
|
|
Post by steve on Apr 15, 2010 8:15:41 GMT
Icefisher,
Economic modelling is not the same, because the rules change all the time: every single model ends up going out of scope as soon as a whole new generation of people gets used to the conditions brought about by the previous model. A better analogy would be trying to run an earth system model on the climate of Venus or Mars. It wouldn't be impossible, but you'd need to tweak the model a lot.
You misunderstand Trenberth. In essence, Trenberth is concerned about attributions of changes in the climate that do not have a physical explanation. Explicitly, he is critical of attributing changes to an undefined or mysterious process (e.g. arguments that warming is due to a "recovery from the little ice age . . . are inadequate as they do not provide the physical mechanism involved"). This leads him to his concern about short term variability, which must again have a physical explanation that is missing because we don't have the detailed observations. www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/EnergyDiagnostics09final2.pdf
I think you are wrong to throw the baby out with the bathwater. It's a bit like saying that since Bernie Madoff's payouts were all recycled money from new investors, we should assume that any company's dividends have been generated in the same way until we have examined all of the company's invoices and bank transactions. Short term high frequency variation is the exception that has always happened. The rule is that long term low frequency variation is driven by radiative forcing.
|
|
|
Post by steve on Apr 15, 2010 8:24:39 GMT
Magellan: "If you had evidence of GCM reliability and the 'basic physics' integrated into them, you'd have already posted it. [...] Now all you can do is try to defend parametrization (fudging) as if it is a normal part of model validation."
All you can do is pick at my latest point (made in response to a query) and claim that it is "all" I can do. This is the "Ah! but..." school of argument, where you hope that by the time your series of picky little arguments circles back around to where it started, most of your audience will have forgotten that there is a big self-consistent picture involved, in which models are demonstrably capable of modelling the earth *only if* the basic physics of the radiative properties of the various atmospheric gases is included.
|
|
|
Post by icefisher on Apr 15, 2010 8:55:11 GMT
Steve: "Economic modelling is not the same because the rules change all the time such that every single model ends up going out of scope every time a whole new generation of people get used to the conditions brought about by the previous model. [...]"
Let me see if I have this one right. An economic model is built, and as a result the government regulates, so society adjusts. Are you saying that if you regulate climate impacts, nothing will change?
Steve: "You misunderstand Trenberth. [...] This leads him to his concern about short term variability which must again have a physical explanation that is missing because we don't have the detailed observations."
LOL! Where is the evidence that the change we are seeing is short term?
Steve: "I think you are wrong to throw the baby out with the bath water. [...] Short term high frequency variation is the exception that has always happened. The rule is that long term low frequency variation is driven by radiative forcing."
The rule, huh? Well, we had long term warming from 1911 to 1944 of 0.7 degC, which is more than 1976 to 2009. Why is it that, when CO2 emissions multiplied by 10 times over that century (minus 2 years), it isn't warming faster? Seems to be some problems with your rule.
|
|
|
Post by steve on Apr 15, 2010 9:58:05 GMT
Something like that. An economic model suggests that inflation should be controlled by interest rates. The benign conditions that follow result in more stability for companies, not least from better wage restraint, but they also encourage people to take excessive risks and contribute to increasing the flood of money from China to over-eager Western borrowers, leading to bubbles that break the model.
Do I sense an introduction of the word "regulate"? To break a climate *projection* one breaks the assumptions of the projection which are the emissions scenarios. That's called mitigation.
We have always seen short term variability. We expect to see short term variability. Where is the evidence that it is not short term variability?
Because radiative forcing is not purely a function of CO2; because temperature is not a real-time linear function of radiative forcing, even ignoring short term (travesty) variability; and because temperature is also affected by short term variability, which allows one to cherry-pick lows and highs. That is also why it is pretty straightforward to demonstrate that models with realistic choices of solar and aerosol forcing can reproduce most of the 20th Century variability.
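The non-linearity steve alludes to is usually written with the standard simplified expression for CO2 forcing, which is logarithmic in concentration (roughly 5.35 ln(C/C0) W/m^2, the fit commonly used in the climate literature of this period). A minimal sketch of the consequence, that each doubling adds about the same forcing rather than proportionally more:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (W/m^2), the common 5.35*ln(C/C0) fit."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds roughly the same increment of forcing:
# 280 ppm -> 0.00, 560 ppm -> 3.71, 1120 ppm -> 7.42 W/m^2
for c in (280.0, 560.0, 1120.0):
    print(f"{c:6.0f} ppm -> {co2_forcing(c):.2f} W/m^2")
```

This is why a tenfold rise in emissions does not translate into a tenfold rise in forcing, and why, combined with ocean lag and short term variability, nobody on either side should expect temperature to track emissions linearly.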
|
|
|
Post by icefisher on Apr 15, 2010 16:16:53 GMT
Steve: "Something like that. An economic model suggests that inflation should be controlled by interest rates. [...] Do I sense an introduction of the word 'regulate'? To break a climate *projection* one breaks the assumptions of the projection which are the emissions scenarios. That's called mitigation."
And so regulating underwriting standards by lowering them, effectively messing with the law of supply and demand, does not count as mitigation of the law you hypothesized about inflation control via interest rates? You are drawing meaningless distinctions here, Steve. You are not going to prove in this thread that the mind is a separate entity from the body, not subject to physical laws, so why even begin to try? As an aside, if you were able to solve the mind/body issue and demonstrate that it followed a non-physical path, forget the Nobel Prize, you would be a GOD! But hey, maybe you really are Al Gore!
Steve: "We have always seen short term variability. We expect to see short term variability. Where is the evidence that it is not short term variability? [...] That's why it is pretty straightforward to demonstrate that models with realistic choices of solar and aerosol can demonstrate most of the 20th Century variability."
The starting points are not cherry-picked. 1911 and 1976 are both clear temperature bottomings where one can point to multi-decadal shifts in the temperature trends. Both are followed by an extended warming period. Now, granted, the more recent trend was shorter before it went into a period of flattening; but that is essentially the question: why was the warming not sustained in 1999 when it was in 1934? In other words, at this point in time there is probably a better argument for smoothing off 1998 as one of those "short term variations" than not.
If CO2 is an important climate control and CO2 emissions have increased 10-fold, then one would expect warming trends to be both more intense and longer lasting. But so far neither appears to be the case. Indeed, warming should continue, and one would expect it to be at least a little more sustained than a century ago, as one would presume that CO2 does not amount to a big fat zero. But let's say the test is really only now starting. It has taken 33 years of warming to almost equal the warming of a century ago, and the warming of a century ago was not sustained beyond 33 years. Over those 33 years it was in fact slightly less intense (and that is troubling too). Indeed, there was a "short term variation" within those 33 years that was unusually intense, but it correlates better to solar variation than to CO2. One should expect that CO2 forcing isn't zero (though saturation is a possibility that hasn't been robustly tested for), but we will have to wait perhaps another decade or two to confirm that, as the empirical test is really just starting. Up to now there is no argument, statistical, logical, or otherwise, for believing that the recent warming had a fundamentally different cause or intensity than the warming of a century ago.
You said I had early adopted Akasofu. That's true, but you have to realize that the Akasofu hypothesis about what appears to be a steady trend over the past century of around 0.5 degC/century was (and I put it in quotes) an "LIA recovery?" Note the question mark. That's how science is supposed to work. I am fully committed to the question mark. One could just as well put "CO2 forcing?" there instead, but I tend to think that would have set off no less of a firestorm in the AGW community, which seems fully committed to "CO2 forcing!!!!!!". Amazing what punctuation does, if you know how to read (or listen) . . . that is.
|
|