Post by icefisher on Jul 21, 2009 9:12:57 GMT
I believe you're missing the part of the concept of "validation" we're critical of here. If someone reprograms a car's computer so all it does is play a tune, it makes no difference whether it's full of errors or completely without errors... it's still completely useless as a car's computer. Steve has confounded auditing the application with auditing the code of the software used for the application. That's simply a case of not reading the thread. Magellan clarified his remarks with: "Glowing accolades of the predictive skills of climate models even in the face of realtime falsification makes such claimants look all the more foolish."
Post by steve on Jul 21, 2009 9:31:31 GMT
I believe you're missing the part of the concept of "validation" we're critical of here. If someone reprograms a car's computer so all it does is play a tune, it makes no difference whether it's full of errors or completely without errors... it's still completely useless as a car's computer. Steve has confounded auditing the application with auditing the code of the software used for the application. That's simply a case of not reading the thread.

That might be your back-stop position, but we never address it properly because you shroud it with all manner of unprovable allegations and flim-flam, e.g. here we have you alleging fraud and cover-up: Yet when it comes down to it, all these codes that you claim no one was prepared to divulge turn out to have fewer errors than NASA software, despite containing hundreds of thousands of lines of code. Is it really possible that a bunch of fraudsters (well, actually multiple bunches of fraudsters) could be so careful and diligent on the one hand, and reckless and devious on the other? That's my last word on this thread.
Post by slh1234 on Jul 21, 2009 13:36:34 GMT
Thanks for dropping by Steve Easterbrook, but I'm kind of embarrassed now that you found the link from here to your site and got involved.

Well, I for one found it interesting that an independent "audit", or whatever you might call it, comes up with a lower rate of errors than NASA software, and it seems to fit with my observation that you can iron out bugs by repeated testing just as well as you can iron them out by high-quality V&V procedures in the "one-shot" scenario in which you can't run in full production mode till the critical moment. It'd be nice for people to acknowledge that maybe these models *are* adequately verified and audited before moving on to the question of whether they are validated as well. One method of validation is comparing a model against a chosen set of observations in a different set of scenarios (e.g. Pinatubo, winter/summer differences, the 1998 El Niño). Another is what socold said: comparing with results from different organisations who are doing similar work. In both cases, given that a model is not and cannot be a perfect representation of the earth, a high degree of judgement is required as to what observations are useful, and peer review is one way of getting that judgement.

There are a number of kinds of "errors" in code. The more complex the code becomes, the more types of errors and more errors there can be. What is meant when someone claims that there are fewer errors in one set of code than in another? What types of audits are performed that can locate complex logic errors that only show up in specific scenarios down code paths that are seldom followed? The types of testing that you mention may produce a beta product, but I am very skeptical that it can produce a mature "technology" (to borrow the terminology of my industry). Even after rigorous testing (regression testing included), errors can show up after deployment due to situations that cannot be tested.
This happens frequently in the commercial world, to system programmers as well as application programmers. (I compare with system programmers here because of my experience with the two.) By "situations that cannot be tested" I don't mean they are lax in testing, but with millions of lines of code in a relational database system, for example, there are many millions more scenarios that can develop. At some point, someone does something that nobody could anticipate. That doesn't always produce a bug, but sometimes it does. That's part of the maturing of the technology with commercial software. Literally, there are millions of users, and just as many scenarios. Occasionally an error is discovered, but it is an error that auditing would never have detected. There are those of us who are responsible for reproducing those errors so that the exact situation causing the unexpected results can be identified, its impact evaluated to determine what else is affected and how it can be fixed; the fix then goes back through regression testing. And sometimes that fix introduces errors in other parts of the application or platform, and sometimes those are not found until someone else does something else unexpected with it.

I have to agree with you that the earth can never be accurately represented in software. That's part of my issue with modeling anything. My skepticism in this area has nothing to do with the competency of the people trying. It has a lot more to do with my experience with system/platform development. One thing that experience tells me is that verifiable usage is essential in the maturing of a technology. I don't see how that can happen with models such as climate models.
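[Editor's note] The kind of bug described above - a logic error hiding on a seldom-taken code path - can be sketched in a few lines. This is a purely hypothetical example (the function and its bug are invented, not taken from any climate model): line-by-line the code "looks fine" and the heavily exercised path is correct, but the rare path silently returns the wrong answer.

```python
def safe_mean(values, weights=None):
    """Mean of values; optionally weighted. The unweighted path is
    exercised constantly, the weighted path almost never."""
    if weights is None:
        # hot path: runs in every test, thoroughly ironed out
        return sum(values) / len(values)
    # cold path: logic error -- divides by the NUMBER of weights
    # instead of the SUM of the weights
    return sum(v * w for v, w in zip(values, weights)) / len(weights)

# Hot path gives the right answer:
print(safe_mean([1.0, 2.0, 3.0]))  # 2.0
# Cold path is wrong whenever the weights don't sum to len(weights):
# correct weighted mean here is 1.75, but this returns ~0.583
print(safe_mean([1.0, 2.0, 3.0], weights=[0.5, 0.25, 0.25]))
```

Line coverage of `safe_mean` is 100% the moment both branches run once, yet the second branch is still wrong - coverage measures execution, not correctness.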
Edit: I have to come back for this one after reading this from Steve: I know I spend a lot of time pointing out how easily someone will accept arguments that support what they want to believe, and will spend all their energy arguing against things that do not support what they want to believe, but this one surprised even me. No questions? No professional skepticism? No questioning of the fallacy that can develop from two people comparing results for accuracy? Not even a clarification of the meaning of "errors" as used in the statement? Just a wholesale swallowing of the line.
Post by magellan on Jul 21, 2009 13:48:40 GMT
Define "error". Ah, that is the problem, isn't it?

Metric error caused crash of Mars Orbiter: www.tcc.edu/faculty/webpages/PGordy/Egr120/MarsOrbt.pdf

The code looked great, no "errors". Yet what steve and Mr. Easterbrook still cannot grasp is that "error" is not defined by how well the code is executed. Note that Mr. Easterbrook did not mention anything about parametrization (tuning) or the number of degrees of freedom involved. And although he does not address the issue of whether the physics are correct, neither does steve or socold, because they cannot make such claims and be honest at the same time. According to Roger Pielke Sr., the only basic physics in the models are the pressure gradient force, advection and the acceleration due to gravity; these are the only physics in which there are no tunable coefficients. I am open to refutation of that statement.

Then there is the issue of several models being in "agreement". As Dr. William Briggs argues, this is a fallacy:

Why multiple climate model agreement is not that exciting: wmbriggs.com/blog/?p=118

So steve, Mr. Easterbrook and socold, please compile a list of the relevant physics and uncertainties included in the coding of GCMs whereby it can be shown that the models are, by strict definition, "validated". Again, define "error".

Mr. Easterbrook, Roger Pielke has an open invitation to guest host on his weblog. May I suggest you contact him and present your views on these matters, as Gavin Schmidt refuses to engage? It could be in response to, as an example:
climatesci.org/2008/11/28/real-climate-misunderstanding-of-climate-models/
climatesci.org/2009/01/20/comments-on-real-climates-post-faq-on-climate-models-part-ii/
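[Editor's note] The Mars Climate Orbiter failure mode referenced above can be sketched as follows. All function names and numbers here are invented for illustration; the real mishap involved thruster impulse reported in pound-force-seconds where the navigation software expected newton-seconds. Each routine is internally "error-free", yet the program as a whole is wrong because the two sides disagree on units.

```python
LBF_S_TO_N_S = 4.448222  # 1 pound-force-second in newton-seconds

def thruster_impulse_lbf_s(burn_time_s, thrust_lbf):
    # "Contractor" side: reports impulse in pound-force-seconds
    return burn_time_s * thrust_lbf

def delta_v(impulse_n_s, mass_kg):
    # "Navigation" side: expects newton-seconds; returns m/s
    return impulse_n_s / mass_kg

impulse = thruster_impulse_lbf_s(10.0, 100.0)      # 1000 lbf*s
wrong = delta_v(impulse, 500.0)                    # treated as if N*s
right = delta_v(impulse * LBF_S_TO_N_S, 500.0)     # converted first
print(wrong, right)  # the two disagree by a factor of ~4.45
```

Both functions pass any unit test you write for them in isolation; the defect lives entirely in the interface between them, which is why "the code has no errors" and "the system is correct" are different claims.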
Post by icefisher on Jul 21, 2009 16:20:25 GMT
I have to agree with you that the earth can never be accurately represented in software. That's part of my issue with modeling anything. My skepticism in this area has nothing to do with the competency of the people trying. It has a lot more to do with my experience with system/platform development. One thing that experience tells me is that verifiable usage is essential in the maturing of a technology. I don't see how that can happen with models such as climate models.

That's a classic issue: applying a lot of really expert programmers to the job of building a climate model. You get great code, but... all the theories that are coded remain completely untested, and the sycophants claim the writing of the perfect code by 1,000 teams of expert programmers is the validation of the theory. It's like a first-year physics student claiming he proved the theory of relativity by writing the equation on the blackboard 1,000 times.

That's certainly overly simplified, but the analogy holds even for the CDS fiasco. Everybody in the business was using some kind of variant of a model of likely cash flows based upon recent historic performance of the underlying assets. Nobody noticed that the government had stepped in and destroyed the underwriting in an effort to extend home ownership... or nobody wanted to notice. Wall Street was making commissions right and left selling the stuff (the functional equivalent of grant sucking). "Uh, excuse me, Ms Trustfunder... we made an error in our calculations, but another $1mm grant will allow our programmers to correct it." Works great getting grants... everybody ought to try it!! LOL!!
Post by steve on Jul 21, 2009 16:39:35 GMT
Edit: I have to come back for this one after reading this from Steve: I know I spend a lot of time pointing out how easily someone will accept arguments that support what they want to believe, and will spend all their energy arguing against things that do not support what they want to believe, but this one surprised even me. No questions? No professional skepticism? No questioning of the fallacy that can develop from two people comparing results for accuracy? Not even a clarification of the meaning of "errors" as used in the statement? Just a wholesale swallowing of the line.

OK, almost my last word. In what way is your comment relevant to my criticism of icefisher's arguing technique and his abject failure to prove his absurd allegations that climate model developers are fraudulent and incompetent? Just once it would be nice for someone to at least acknowledge "yes, it appears they might have done as good a job as possible of building a model", even if you then go on to argue that building a useful model is an impossible task. When that is done, perhaps a more civilised conversation might ensue about whether models might reasonably represent an earth-like climate, and whether models subjected to a forcing from greenhouse gases, aerosols or solar effects might reasonably represent an earth-like response.

If you read my posts you will find me discussing the unreasonably good results from 20th-century hindcasts (implying that something, probably inadvertent, is up), the fact that models do not fully represent solar cycle variation, and the fact that models do not agree on feedbacks or even the relative causes of different sources of feedbacks. I will even accept that the apparent overlap of the models and satellite data, when you take into account natural variability and uncertainty in the observations, is not yet good enough - though it is as much a question for the satellite data as for the models. But when you get down to it: CO2 warms the climate, so the initial expectation really ought to be that more CO2 might just warm it more.
Post by poitsplace on Jul 21, 2009 18:56:38 GMT
OK, almost my last word. In what way is your comment relevant to my criticism of icefisher's arguing technique and his abject failure to prove his absurd allegations that climate model developers are fraudulent and incompetent? Just once it would be nice for someone to at least acknowledge "yes, it appears they might have done as good a job as possible of building a model", even if you then go on to argue that building a useful model is an impossible task. When that is done, perhaps a more civilised conversation might ensue about whether models might reasonably represent an earth-like climate, and whether models subjected to a forcing from greenhouse gases, aerosols or solar effects might reasonably represent an earth-like response.

I personally have never really considered the disembodied accuracy of the climate model code to be an issue. From the other steve's account I would certainly agree that many models are well (and perhaps even "lovingly") crafted pieces of software. And I agree here: the fact that there are multiple models using different sensitivities and feedbacks to come up with loosely similar answers (including semi-accurate hindcasting) is testament to the incredible pliability of the models. They obviously have sufficient numbers of tweakable parameters that they could show just about anything. In this case, however, that 'anything' always involves warming... and it's because that's what they set out to show, and that's why they make it show warming.

But the problem I see is that the mechanism required for this to work (finally getting back to radiation balance at TOA) isn't controlled by CO2. CO2 doesn't in any way control the gradient to the coldest part of the atmosphere. If you look, it's clearly controlled by water vapor. CO2 cannot create a new gradient in which it is warmer below, because that would simply change the altitude at which water vapor condensed out. Obviously such a change would still involve a small amount of warming, but far less warming than would be suggested by the grossly oversimplified (and rather misleading) math on absorption.
Post by socold on Jul 21, 2009 20:03:34 GMT
There are a number of kinds of "errors" in code. The more complex the code becomes, the more types of errors and more errors there can be. What is meant when someone claims that there are fewer errors in one set of code than in another? What types of audits are performed that can locate complex logic errors that only show up in specific scenarios down code paths that are seldom followed?

This relates to code coverage, which is the proportion of code that is passed through during testing. Having thought more about what steveeasterbrook mentioned, I suspect even a single climate model run will provide a very high level of code coverage by the nature of how climate models work. In a climate model run, say a run of 100 years, nearly all the physics procedures will be run through countless millions of times by virtue of being called for so many grid cells over so many time steps. In fact a climate model run seems like one of the most thorough software tests I can think of, as the slightest error has the ability to significantly and detectably alter the system output.
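[Editor's note] The coverage argument above can be made concrete with a toy grid model. The structure here (three dummy "physics" routines, a cells-by-timesteps loop) is entirely invented for illustration, not taken from any real GCM: the point is simply that even a short run calls every physics routine once per cell per step, so those routines are exercised enormously often.

```python
from collections import Counter

calls = Counter()  # how many times each physics routine runs

def advection(cell):
    calls['advection'] += 1
    return cell

def radiation(cell):
    calls['radiation'] += 1
    return cell

def convection(cell):
    calls['convection'] += 1
    return cell

PHYSICS = [advection, radiation, convection]

def run_model(n_cells, n_steps):
    grid = [0.0] * n_cells
    for _ in range(n_steps):           # every time step...
        for i in range(n_cells):       # ...over every grid cell...
            for proc in PHYSICS:       # ...applies every routine
                grid[i] = proc(grid[i])
    return grid

run_model(n_cells=1000, n_steps=100)
print(calls)  # each routine invoked 100,000 times in even this tiny run
```

This is why line coverage of the physics code saturates almost immediately in a model run - though, as others in the thread note, executing a line millions of times shows it runs, not that its equation is the right one.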
Post by icefisher on Jul 21, 2009 21:37:26 GMT
OK, almost my last word. In what way is your comment relevant to my criticism of icefisher's arguing technique and his abject failure to prove his absurd allegations that climate model developers are fraudulent and incompetent?

That's probably because your criticism doesn't hold any water. There is no way you can sustain an argument that I ever made an allegation that climate model developers are fraudulent and incompetent, because I never did. Fraud exists, but it's certainly not endemic in the climate model developer community. And if you define incompetence as somebody who borrows an erroneous idea from somebody else and uses it, then we are all incompetent and none of us is perfect.
Post by Ratty on Jul 21, 2009 23:31:06 GMT
Of course NASA's missile code actually has to get to where it's going. With the climate models they can land on that planet ruled by turtles and nobody would know the difference. I'm sorry, I mistook this forum for one where intelligent conversation was possible. You might as well just stick your fingers in your ears and shout la la la. I shall stop wasting my time.

I too would get defensive when I first started writing code, but I later learned that I wasn't infallible. There may be no errors in the code, but logic errors are another matter.
Post by steve on Jul 22, 2009 6:45:01 GMT
OK, almost my last word. In what way is your comment relevant to my criticism of icefisher's arguing technique and his abject failure to prove his absurd allegations that climate model developers are fraudulent and incompetent?

That's probably because your criticism doesn't hold any water. There is no way you can sustain an argument that I ever made an allegation that climate model developers are fraudulent and incompetent, because I never did.

Icefisher wrote:
Post by icefisher on Jul 22, 2009 8:17:01 GMT
That's probably because your criticism doesn't hold any water. There is no way you can sustain an argument that I ever made an allegation that climate model developers are fraudulent and incompetent, because I never did. Icefisher wrote:

That's really weak, Steve. You had to eliminate the real context and insert yours to make your point?
Post by steve on Jul 22, 2009 9:34:29 GMT
Post by icefisher on Jul 23, 2009 2:43:50 GMT
Well, a page-9 correction is better than no correction, anyway.
Post by magellan on Jul 23, 2009 16:34:12 GMT
The long-awaited paper from Lindzen. I can hear the zombies screaming at ReinventedClimate... their masters must destroy this tobacco industry lackey.

On the determination of climate feedbacks from ERBE data: www.leif.org/EOS/2009GL039628-pip.pdf

Discussion at WUWT: wattsupwiththat.com/2009/07/23/new-paper-from-lindzen/#more-9519

"…ERBE data appear to demonstrate a climate sensitivity of about 0.5°C which is easily distinguished from sensitivities given by models." Observational data to test a hypothesis; what a novel idea.