|
Post by icefisher on Jan 15, 2010 19:29:14 GMT
Hadcrut3: 1976 -.254; 1998 +.546; exactly .8C. And 1911 -.581; 1944 +.12; .601C. Both cover an identical number of years, roughly coincide with recognized ocean oscillation switches, and consist of the most aggressive interpretation, respectively. Thus when you subtract one from the other you get a residual of .199C, which is likely aggressive as well. So if you take .1 off it, the residual for 33 years is .099C . . . let's call it .1C, an amount you yourself call "making a mountain out of a molehill". ROTFLMAO! Now you may not believe it, but that's a religious experience for you, because scientifically you have zero basis for excluding the possibility. Maybe you can do 3 kowtows in honor of its religious status.

Yes, and if you take this and that period and add some, divided by that and that period and subtract some, and let it boil for about 60 years, then you will really get something. Ok, again, how much of the warming (in degrees C) in the last century is from natural variability, and what are the cycle lengths you are assuming?

That's the real issue. I was just poking holes in Socold's attempts to encapsulate it in an improper wrapper. You can't look for the warming over the short term and do short-term comparisons between satellite and ground stations to pooh-pooh skeptic concerns about data manipulation. We have seen how CRU lowered pre-1980s temperatures, effectively distancing the 1944 peak from the 1998 peak. That, combined with the close associations in the Climategate emails reaching across the pond right into GISS, doesn't provide much faith in the underlying centennial records (none of which have satellite ground truthing).

Akasofu estimated that underlying warming at .5C per century. Hadcrut shows about .5C per century, but that is up from an earlier version used by Bryant (1997), which gave a linear fit of .4C per century. Of course Socold says that .1C is peanuts. Being a veteran modeler, I have always known that when you have multiple input models it only takes a little here, a little there, a bit more over there to get to the end of the rainbow.

Socold noted an exponential curve for this, so it was, or is, accelerating. But I pointed out that solar activity, excluding the last 11 years, also displays an exponential fit. Lowering 1940 in the temperature record of course adds to Socold's exponential acceleration if he used Hadcrut. And of course the forcing from each added ppm of CO2 diminishes too. It seems like simple enough math somebody could do to see whether CO2 forcing curves exponentially upwards or not due to accelerating emissions, but most of the IPCC lines seem to bend the other way for most emission scenarios, and I have not seen an analysis of historic emissions to see if it also fits the accelerating historic temperature curve. I assume it must, but with all the skullduggery, who knows?

Ultimately it's not going to be an easy answer unless the sun does go into an extended quiet period, given the exponential (and better) fit of sun activity to the temperature record. If you look closely you can even see a clear 3- to 4-cycle pattern in sunspot activity potentially matching the ocean oscillations. But it's a bit premature to predict what the sun is going to do over the next 30 years. This race is apt to be won by the horse that has been using a better brain (a la Revell?) for pacing rather than seizing the bit and running away with it. But all those horses will be running for as long as they can still suck air, because both ego and money are at stake.
|
|
|
Post by socold on Jan 15, 2010 20:17:59 GMT
I think you mean .5C, perhaps .8F. I completely disagree that 3/4 of that has likely been due to solar/ocean oscillations. Even if the early 20th century warming was completely natural, that doesn't logically suggest to me that the same magnitude of late 20th century warming must have the same cause.

Hadcrut3: 1976 -.254; 1998 +.546; exactly .8C. And 1911 -.581; 1944 +.12; .601C. Both cover an identical number of years, roughly coincide with recognized ocean oscillation switches, and consist of the most aggressive interpretation, respectively. Thus when you subtract one from the other you get a residual of .199C, which is likely aggressive as well. So if you take .1 off it, the residual for 33 years is .099C . . . let's call it .1C, an amount you yourself call "making a mountain out of a molehill". ROTFLMAO! Now you may not believe it, but that's a religious experience for you, because scientifically you have zero basis for excluding the possibility.

Such a simplistic cycle would have gone negative 1944-1976, negating all the warming it produced 1911 to 1944. Why then didn't temperature in 1976 return to 1911 levels? Quite clearly the global temperature record shows more is going on than such a perfectly simple cycle. The burden of proof is on you to provide evidence of this wild claim, not on me for excluding the possibility.

Let's look at one well known cycle - the PDO. It's not a perfect cycle, but over the 20th century the PDO went both up and down, and the overall trend is flat because it keeps returning to where it started. Therefore it couldn't have contributed any of the 20th century warming; it could only have masked or accelerated certain periods, if anything. It also went negative from the mid 1980s onwards, so if anything I think the PDO contribution over the past 20 years must have been cooling, not warming.

www.woodfortrees.org/plot/hadcrut3vgl/normalise/mean:60/from:1900/plot/jisao-pdo/normalise/mean:60
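For anyone who wants to reproduce the comparison in that woodfortrees link offline, here is a minimal sketch of the same two operations (normalise each series, then take a 60-month running mean) applied to two monthly series. The file names, column layout, and the exact normalisation convention are assumptions for illustration, not the actual woodfortrees data format.

```python
# Minimal sketch of a woodfortrees-style comparison: normalise two monthly
# series and smooth each with a 60-month running mean. File names, column
# layout and the normalisation convention are assumptions for illustration.
import numpy as np

def normalise(x):
    # Rescale a series to zero mean and unit standard deviation.
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def running_mean(x, window=60):
    # Centered running mean; endpoints shorter than the window are dropped.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

if __name__ == "__main__":
    # Hypothetical two-column files: month index, monthly value.
    hadcrut = np.loadtxt("hadcrut3_monthly.txt")[:, 1]
    pdo = np.loadtxt("jisao_pdo_monthly.txt")[:, 1]

    smoothed_temp = running_mean(normalise(hadcrut))
    smoothed_pdo = running_mean(normalise(pdo))

    # Compare the two smoothed curves, e.g. by their overall linear trends.
    trend_temp = np.polyfit(np.arange(smoothed_temp.size), smoothed_temp, 1)[0]
    trend_pdo = np.polyfit(np.arange(smoothed_pdo.size), smoothed_pdo, 1)[0]
    print("trends of smoothed, normalised series (per month):", trend_temp, trend_pdo)
```

The point of the flat-trend argument above is visible in the last two lines: whatever the PDO does within the century, its long-term slope stays near zero while the temperature series does not.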
|
|
|
Post by socold on Jan 15, 2010 21:00:51 GMT
Yes, and if you take this and that period and add some, divided by that and that period and subtract some, and let it boil for about 60 years, then you will really get something. Ok, again, how much of the warming (in degrees C) in the last century is from natural variability, and what are the cycle lengths you are assuming? That's the real issue. I was just poking holes in Socold's attempts to encapsulate it in an improper wrapper. You can't look for the warming over the short term and do short-term comparisons between satellite and ground stations to pooh-pooh skeptic concerns about data manipulation.

I find that I can. In short, if you believe "data manipulation" puts AGW into doubt, you must believe the lower troposphere has warmed more than the surface. Holding the first view but not the second is inconsistent with the data.

You argued that 1/8 data manipulation is a concern if 3/4 of the recent warming is natural. But that is wrong. The real concern there would be that 3/4 of the recent warming is natural. That's what would break AGW. The 1/8 "data manipulation" wouldn't make a difference either way.

That claim also defies available data. NASA GISS, NOAA and the JMA all find the 40s peak fell a similar distance below 1998.
|
|
|
Post by icefisher on Jan 15, 2010 23:28:44 GMT
Such a simplistic cycle would have gone negative 1944-1976, negating all the warming it produced 1911 to 1944. Why then didn't temperature in 1976 return to 1911 levels? Quite clearly the global temperature record shows more is going on than such a perfectly simple cycle. The burden of proof is on you to provide evidence of this wild claim, not on me for excluding the possibility.

First, I agree the burden is on anybody who has a theory, including you or anybody else. Second, you should read my reply to AJ. The explanation is either that the cycle is not simplistic, or that it is a simple cycle with complex impacts, or that it's not a cycle at all, just chaotic variation. Third, the theory I put forth is actually Akasofu's theory, and he thoroughly recognizes it isn't a tested theory, simply a simple potential explanation that covers the observed record. Fourth, I or anybody else can create any theory we wish using enough parameters to fit any observed record; thus all it is is an untested theory . . . just like AGW, which is currently in a perennial mode of adjusting its theory as nature proves it to be inaccurate, incomplete, or just plain wrong.

Let's look at one well known cycle - the PDO. It's not a perfect cycle, but over the 20th century the PDO went both up and down, and the overall trend is flat because it keeps returning to where it started. Therefore it couldn't have contributed any of the 20th century warming; it could only have masked or accelerated certain periods, if anything. It also went negative from the mid 1980s onwards, so if anything I think the PDO contribution over the past 20 years must have been cooling, not warming.

We are not in disagreement here on the general ocean oscillation effects. However, the PDO went to a positive anomaly about 1976 according to your link and did not go negative until 1999. That fits almost perfectly to the 1976 to 1998 temperature rise. You get a divergence from PDO performance from 1986 to 1989, but there was an El Nino in 1986 and a solar max in 1989.
|
|
|
Post by magellan on Jan 16, 2010 0:09:32 GMT
That's the real issue. I was just poking holes in Socold's attempts to encapsulate it in an improper wrapper. You can't look for the warming over the short term and do short-term comparisons between satellite and ground stations to pooh-pooh skeptic concerns about data manipulation.

I find that I can. In short, if you believe "data manipulation" puts AGW into doubt, you must believe the lower troposphere has warmed more than the surface. Holding the first view but not the second is inconsistent with the data. You argued that 1/8 data manipulation is a concern if 3/4 of the recent warming is natural. But that is wrong. The real concern there would be that 3/4 of the recent warming is natural. That's what would break AGW. The 1/8 "data manipulation" wouldn't make a difference either way. That claim also defies available data. NASA GISS, NOAA and the JMA all find the 40s peak fell a similar distance below 1998.

That claim also defies available data. NASA GISS, NOAA and the JMA all find the 40s peak fell a similar distance below 1998.

And where do they get their data from? Tamino? Speaking of Tamino, since he has declared himself God, as you linked to his glorious repudiation of anything short of 30 years being useful for determining breaks from trends, a few rules to remember when entering the Kingdom of Tamino [God], aka OpenMind:

1) Tamino is God.
2) Tamino is infallible.
3) Anything Tamino says is not to be questioned.
4) All things Tamino says are holy and just.
5) When entering the Kingdom of Tamino, 1-4 are to be adhered to, else you will be rejected from his Kingdom or refused entry.

Since Tamino has arbitrarily determined that only 30 years of data is sufficient to establish a statistical truth, it must be so, because Tamino is God. Given:

6) No references to literature or statistical practices refuting Tamino are permitted in the Kingdom of Tamino.
7) Only posts that glorify Tamino and make it easy for him to insult the author are permitted.
8) No person shall be given entry to the Kingdom save those that praise Tamino.
9) Data less than 30 years in length shall not be permitted discussion in the Kingdom, save Hansen et al 2005 or that which Tamino approves.
10) Any period of any length is cherry picking, save that which Tamino approves, as in the case of Santer 08 cutting off the satellite data at 1999.
11) Tamino is God.

The sad part is, some believe every word Tamino types. Can you see the folly in this? There is no shortage of examples of his arrogance and Texas Sharpshooting abilities.

Shocking Revelation: Correcting Projections After Observing Data Results in Better Agreement.
14 January, 2010 (14:03) | Statistics Written by: lucia
In a shocking, shocking, shocking revelation, Tamino has shown that the following process results in “projections” that match observations:
1. Devise a method of creating projections for the earth's surface temperature in 2007, making decisions based on what one knows in 2006-2007. Report these in a formal report.
2. Observe the earth's surface temperature between 2007 and 2010. If projections match, decree projections were remarkably good. If they don't, continue on to the next step.
3. Examine the basis for projections. Modify the basis for projections to create new projections that better match observations made after the projections were published.
4. Decree original projections whose shortcomings you corrected in light of later data were perfectly good, because they would have been good if only you'd known enough to come up with the correct basis for making projections back in 2006-2007.
Specifically, Tamino threw out a model with a high trend after 2000 from the suite of models used by the authors of the IPCC AR4 when making projections.
|
|
|
Post by icefisher on Jan 16, 2010 0:19:18 GMT
I find that I can. In short, if you believe "data manipulation" puts AGW into doubt, you must believe the lower troposphere has warmed more than the surface. Holding the first view but not the second is inconsistent with the data.

That is only true if you believe the data after 1979 was manipulated by more than the amount of divergence. The adjustments I saw only served to lower the 1940's.

You argued that 1/8 data manipulation is a concern if 3/4 of the recent warming is natural. But that is wrong. The real concern there would be that 3/4 of the recent warming is natural. That's what would break AGW. The 1/8 "data manipulation" wouldn't make a difference either way.

Fine, fine, fine. We are talking hypotheticals here, as nobody really knows one way or the other, except that you believe whatever you believe to be true whether you have evidence or not. A good pious man you are.

That claim also defies available data. NASA GISS, NOAA and the JMA all find the 40s peak fell a similar distance below 1998.

It is only GISS and CRU that have been observed making adjustments to lower the 1940's. The big difference is whether the underlying centennial warming is accelerating or not. The JMA graph lacks that lowered 40's peak. Just guessing, but perhaps the adjustments were for clarity, like the "hide the decline" objective of eliminating the late years of the tree ring data to make the point visually clear for policy makers. After all, it could be difficult to explain the logarithmic decay of CO2 temperature effects in view of accelerating emissions . . . or alternatively it was a model-induced fit to the emissions history. Certainly there is precedent for both approaches.

Update: Followed the recent WUWT link to the methodology used by GISS and found this: "The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes. They obtained quantitative estimates of the error in annual and 5-year mean temperature change by sampling at station locations a spatially complete data set of a long run of a global climate model, which was shown to have realistic spatial and temporal variability." So it appears my audit "nose" was largely on track. Recent adjustments completed with the aid of more up-to-date climate models.
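For readers who haven't seen how that station-combination step works in practice, here is a minimal sketch of the kind of distance-weighted averaging Hansen and Lebedeff (1987) describe, assuming a weight that falls linearly to zero at 1200 km. The station coordinates and anomaly values below are invented for illustration, and the real GISS analysis does considerably more (homogenisation, gridding, zonal averaging).

```python
# Minimal sketch of distance-weighted station averaging in the spirit of
# Hansen and Lebedeff (1987): stations within 1200 km of a grid point
# contribute with a weight that drops linearly to zero at that distance.
# Station coordinates and anomalies below are invented for illustration.
import math

RADIUS_KM = 1200.0
EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance between two points, in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def gridpoint_anomaly(grid_lat, grid_lon, stations):
    # stations: list of (lat, lon, anomaly). Returns the weighted anomaly,
    # or None if no station lies within RADIUS_KM of the grid point.
    total_w, total_wx = 0.0, 0.0
    for lat, lon, anom in stations:
        d = great_circle_km(grid_lat, grid_lon, lat, lon)
        if d < RADIUS_KM:
            w = 1.0 - d / RADIUS_KM      # linear taper to zero at 1200 km
            total_w += w
            total_wx += w * anom
    return total_wx / total_w if total_w > 0 else None

if __name__ == "__main__":
    stations = [(60.0, 10.0, 0.4), (62.0, 5.0, 0.6), (55.0, 20.0, 0.1)]
    print(gridpoint_anomaly(61.0, 8.0, stations))
```

The relevance to the sparse-coverage argument is that as stations thin out going back in time, each grid point rests on fewer, more distant stations, so the weights (and the effective sample) shrink.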
|
|
|
Post by bxs on Jan 16, 2010 2:01:20 GMT
|
|
|
Post by icefisher on Jan 16, 2010 3:32:09 GMT
Skeptics must believe one of two things: a) The surface record shows 0.1C too much warming over the past 30 years (the difference between them and UAH) Neither option casts skeptics in a good light. Arguing over 0.1C in the last 30 years is making a mountain out of a molehill; that is neither the make nor break of AGW.

Here is some more on this issue for you, Socold. This is basic stuff; call it apprenticeship auditing or modeling, as it applies to both . . . and I have done both professionally.

Assume for a moment that the .1C is either a deliberate fudge or an overestimation based on the methodology of analysis, and that it's real. You claim it's "making a mountain out of a molehill". Let me show you why it is not. First, you have this .1 difference applied at the modern end of a 120-year history of temperatures. Say the same error rate applied throughout the history. Since the error accumulated to .1C in 30 years, it's logical the entire error could be .4C over the 120-year period, during which CO2 emissions increased by 70 ppm. This is especially true with a methodological analysis applied throughout. Now, the low-end sensitivity number for 70 ppm equals about .55C. So the fudge factor is accounting for about 75% of the low-end expected warming from a 70 ppm increase in CO2. Wow!!!

Now looking at this from a data standpoint: as you go back toward the 1890's, the spacing of stations starts increasing rapidly, so the room for fudging or misanalysis back then is greater (and, as we have seen in some of the datasets here, the adjustments are bigger back then). Fact is, it would only take a modest decrease in station coverage going back in time to absorb the entire low-end-sensitivity expected increase in temperature. If the station coverage tripled or quadrupled, you could quickly eat up the entire expected temperature increase for the midpoint sensitivity factor. So would you still consider .1C making a mountain out of a molehill?

It's funny how little changes in models can have big influences, and that is perhaps the biggest issue for professional modelers and auditors of them all. Little deficiencies that blow up into institution destroyers are far more common than just about anybody can imagine. Seldom, though, can you establish whether such error is intentional. When predicting the future, particularly for financial outcomes, the historical record often supports a wide range of variation, so when it is done to meet regulatory guidelines there tend to be a lot of meetings about whether estimates are defensible.

It's fairly clear that the certainty some exhibit about CO2 forcing and sensitivity numbers is born entirely from theoretical models. The historical record is poor evidence one way or the other, so you get people who see their role as molding the historical record to fit their world view, knowing with impunity that nobody can ever actually find them wrong. This issue raises a lot of conflicts between auditors and clients. Auditors don't have the same incentives that clients have, but the rules of the game are that auditors put up the same risk as their client, namely their entire personal net worth. It's a tough gig!

P.S. A little addendum for GLC. He has pointed out that UAH has a slope of .13 per decade. But this is primarily over a period of recent warming that is not consistent with the entire record. GISS, for example, can be extrapolated as having a .163 slope per decade to achieve the .1C difference in 3 decades. But the GISS slope over the entire GISS data set is more like .058 per decade. Factoring that to the UAH dataset (and using a linear error rate), it's possible UAH would have found a .046C slope per decade, or about .5C per century. All that is a rough-cut analysis and I am sure it could be improved, but I tend to think it's in the ballpark, and that seems reinforced by Dr. Akasofu coming up with the same number.
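To make the arithmetic in that post easy to check, here is a minimal sketch that simply reruns the back-of-envelope numbers as stated: the 0.1C-per-30-years discrepancy scaled to 120 years, the claimed 0.55C low-end response for +70 ppm, and the slope rescaling in the P.S. The inputs are the figures quoted above, not independently verified values.

```python
# Back-of-envelope check of the numbers quoted in the post above.
# All inputs are the figures as stated in the post, not verified values.

discrepancy_per_30yr = 0.10      # C, surface minus UAH over ~30 years (as claimed)
years = 120.0
accumulated = discrepancy_per_30yr * (years / 30.0)   # 0.4 C if the rate were constant
low_end_response = 0.55          # C claimed for a 70 ppm CO2 increase
share = accumulated / low_end_response
print(f"Accumulated discrepancy over {years:.0f} yr: {accumulated:.2f} C "
      f"({share:.0%} of the claimed low-end response)")

# Slope comparison from the P.S.: rescale the UAH decadal trend by the ratio
# of the long-term GISS slope to the recent GISS slope (a linear assumption).
uah_recent_slope = 0.13          # C per decade, recent satellite era (as quoted)
giss_recent_slope = 0.163        # C per decade (as quoted)
giss_century_slope = 0.058       # C per decade over the full GISS record (as quoted)
uah_scaled = uah_recent_slope * (giss_century_slope / giss_recent_slope)
print(f"Rescaled UAH slope: {uah_scaled:.3f} C/decade "
      f"(~{uah_scaled * 10:.2f} C per century)")
```

Running it reproduces the figures in the post (about 73-75% of the low-end response, and roughly 0.046 C/decade for the rescaled UAH slope); whether the constant-error-rate and linear-rescaling assumptions are justified is exactly what is in dispute in this thread.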
|
|
|
Post by spaceman on Jan 16, 2010 4:06:37 GMT
One of the big accounting firms in the US went out of business a few years back for signing off on the bottom line. Whatever the company (client) said was what was certified. Who wants to lose a big account? Some would have us believe that this is noble and pure, and yet the recent past is replete with fraud, in so many different areas.
|
|
|
Post by nautonnier on Jan 17, 2010 14:40:30 GMT
<<SNIP>> It's funny how little changes in models can have big influences, and that is perhaps the biggest issue for professional modelers and auditors of them all. Little deficiencies that blow up into institution destroyers are far more common than just about anybody can imagine. Seldom, though, can you establish whether such error is intentional. <<SNIP>>

Small errors, like ordering errors, are easy to introduce in complex algorithms, and in iterative models even relatively insignificant errors can be amplified. Then more processing is done on that output to support second-level erroneous conclusions - as you say, this is extremely common. In my experience of these issues, whether there is a search for an error in data and its processing depends on how well the model output matches the modeler's preconceptions of the result. If the model output appears to prove what the modeler (or their customer) wants, there will be very little effort put into looking for errors. This is why the intentional or unthinking destruction/loss of original base data and concealment of its processing is so unforgivable.
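As a toy illustration of the amplification point (not any particular climate model), here is a minimal sketch showing how a small per-step bias in an iterative update compounds over many iterations. The update rule, feedback factor, and bias size are arbitrary choices made only to show the effect.

```python
# Toy illustration (not any real climate model): a small per-step bias in an
# iterative update compounds, so the biased and unbiased runs drift apart.

def iterate(x0, steps, bias=0.0, feedback=1.01):
    # Simple update with a weak positive feedback; `bias` is a tiny constant
    # error added at every step.
    x = x0
    for _ in range(steps):
        x = feedback * x + bias
    return x

clean = iterate(1.0, steps=500)
biased = iterate(1.0, steps=500, bias=0.001)
print(f"clean run:  {clean:.3f}")
print(f"biased run: {biased:.3f}")
print(f"difference after 500 steps: {biased - clean:.3f}")
```

A per-step error of 0.001 ends up producing a difference of around 14 between the two runs after 500 steps, which is the general point about iterative amplification, nothing more.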
|
|
|
Post by steve on Jan 17, 2010 20:13:56 GMT
Two separate points:

1. While the climate models do not show the same pattern of variability as the earth (probably most of them in AR4 didn't show much ENSO), they do show variability. This variability has been evened out in the most famous IPCC plots. At the more detailed level, Lucia and Gavin Schmidt seem to be at loggerheads arguing whether current temperatures are outside the model envelope when you take into account the actual variability of the models and the expected variability of the climate.

2. Lucia has taken a dogmatic approach that she *must* start her comparisons from 2001, because this was when the models were started. But she made this choice in about 2007, so it can be argued that she preselected a period that would allow her story to run and run.

First of all AR4 isn't relevant to an analysis of failure/success. It is dated 2007 and I am certain they took the opportunity to patch their models.

The projections of AR4 are about the same as TAR, but I believe Lucia usually cites the plot taken from AR4. If the models had been "patched", they would have shown a good fit for 2001-2005, wouldn't they? What we have here, though, is you speculating about something that you don't know anything about, because models are spun up for a number of years before the scenarios are applied. The IPCC lays out the emissions scenarios. The emissions scenarios don't contain any volcanoes. None of the cooling episodes that models exhibit are due to volcanoes; they are due to the variability of the model.

No, I didn't say that at all (I don't think). I don't know what you think I said, but basically my understanding is that scientists think they know how warm it will get (to within a factor of 2!), but are not sure how quickly we'll get there, or what the timing of the various fits and starts will be. It's maybe a bit like dragging a heavy weight with an elastic rope - the weight will stick for a bit, then jump forward.

Hear, hear. But I think there is less of a hegemony about climate modelling. Many models are freely available. The problem with models is finding the resources to run them. Though a large university could manage it.
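For what the "model envelope" argument amounts to in practice, here is a minimal sketch of one common way to frame it: fit a linear trend to observations since a chosen start year and ask whether it falls inside the spread of trends from an ensemble of model runs over the same period. The arrays below are placeholders, and a real comparison also has to worry about autocorrelation, baselining, and the choice of start year (e.g. 2001).

```python
# Minimal sketch of an observed-trend-vs-model-envelope comparison.
# The data arrays are placeholders; a real analysis must also handle
# autocorrelation, baselining, and the choice of start year.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2001, 2010)

# Placeholder "observations" and an ensemble of placeholder model runs.
obs = 0.01 * (years - 2001) + rng.normal(0, 0.1, years.size)
ensemble = [0.02 * (years - 2001) + rng.normal(0, 0.1, years.size) for _ in range(20)]

def decadal_trend(series):
    # Ordinary least-squares slope, converted to degrees per decade.
    return np.polyfit(years, series, 1)[0] * 10

obs_trend = decadal_trend(obs)
model_trends = np.array([decadal_trend(run) for run in ensemble])
lo, hi = np.percentile(model_trends, [2.5, 97.5])
print(f"observed trend: {obs_trend:.3f} C/decade")
print(f"model 95% envelope: [{lo:.3f}, {hi:.3f}] C/decade")
print("inside envelope" if lo <= obs_trend <= hi else "outside envelope")
```

Most of the Lucia/Schmidt disagreement is about the choices hidden in a sketch like this: the start year, how wide the envelope should be once model variability is counted, and what statistical model to use for the residuals.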
|
|
|
Post by spaceman on Jan 17, 2010 22:02:48 GMT
Steve said "but basically my understanding is that scientists think they know how warm it will get (to within a factor of 2!), but are not sure how quickly we'll get there, or what the timing of the various fits and starts will be. It's maybe a bit like dragging a heavy weight with elastic rope - the weight will stick for a bit then jump forward."
Steve, that is exactly the point: the hockey stick goes straight up. The AGW crowd has said that the model is so certain that there is no error, or so little error as to not matter. The actual weather (climate) has to match their predictions. If it doesn't, there is something wrong with the model or the data... or both. BTW, when you guys put up temp graphs to prove your point, they end in 2002. It's 2010, what happened in the last 6-8 years?
|
|
|
Post by icefisher on Jan 18, 2010 1:12:05 GMT
First of all AR4 isn't relevant to an analysis of failure/success. It is dated 2007 and I am certain they took the opportunity to patch their models.

The projections of AR4 are about the same as TAR, but I believe Lucia usually cites the plot taken from AR4. If the models had been "patched", they would have shown a good fit for 2001-2005, wouldn't they? What we have here, though, is you speculating about something that you don't know anything about, because models are spun up for a number of years before the scenarios are applied. The IPCC lays out the emissions scenarios. The emissions scenarios don't contain any volcanoes. None of the cooling episodes that models exhibit are due to volcanoes; they are due to the variability of the model.

You might consider revising the above. Here is a cut and paste from the SPM TAR, Figure SPM-2: "Simulating the Earth's temperature variations (°C) and comparing the results to the measured changes can provide insight to the underlying causes of the major changes. A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations represented by the band in (a) were done with only natural forcings: solar variation and volcanic activity."
|
|
|
Post by steve on Jan 18, 2010 10:39:39 GMT
The projections of AR4 are about the same as TAR, but I believe Lucia usually cites the plot taken from AR4. If the models had been "patched", they would have shown a good fit for 2001-2005, wouldn't they? What we have here, though, is you speculating about something that you don't know anything about, because models are spun up for a number of years before the scenarios are applied. The IPCC lays out the emissions scenarios. The emissions scenarios don't contain any volcanoes. None of the cooling episodes that models exhibit are due to volcanoes; they are due to the variability of the model.

You might consider revising the above. Here is a cut and paste from the SPM TAR, Figure SPM-2: "Simulating the Earth's temperature variations (°C) and comparing the results to the measured changes can provide insight to the underlying causes of the major changes. A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations represented by the band in (a) were done with only natural forcings: solar variation and volcanic activity."

We're talking about different things. Simulations of the 20th century include solar and volcanic forcings. The IPCC projections into the 21st century do not include variations due to solar and volcanic activity, as the aim was to project the anthropogenic impact. So the causes of any of the cooling episodes that most of the models demonstrate throughout the whole of the 21st century are not volcanoes.
|
|
|
Post by Purinoli on Jan 18, 2010 12:32:03 GMT
You might consider revising the above. Here is a cut and paste from the SPM TAR, Figure SPM-2: "Simulating the Earth's temperature variations (°C) and comparing the results to the measured changes can provide insight to the underlying causes of the major changes. A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations represented by the band in (a) were done with only natural forcings: solar variation and volcanic activity."

We're talking about different things. Simulations of the 20th century include solar and volcanic forcings. The IPCC projections into the 21st century do not include variations due to solar and volcanic activity, as the aim was to project the anthropogenic impact. So the causes of any of the cooling episodes that most of the models demonstrate throughout the whole of the 21st century are not volcanoes.

Maybe I misunderstood this, but to my mind we probably have a good record of volcanic activity during the last 200 years, and maybe also some calculations of its impact on climate (mostly cooling due to sulfur aerosols and dust covering the sky). In the case of CO2 we have a lot of confusion, from misunderstanding of Stefan's law, to the half-life of CO2 in the atmosphere, the missing CO2 budget, and much more. Then we come to the unusual inactivity of the Sun, and to outer space with its bombardment of cosmic rays. Each of these parameters has more or less the same uncertainty as an influence factor on climate. So why then has the IPCC focused only on CO2? This is a real mystery for me.

Someone here also mentioned that 1/8 of cooked data (Climategate I) should not mean that all IPCC work is garbage. As we are now slowly coming to Climategate II (USA), I can say that 1/8 is already more than the tip of the iceberg that sank the Titanic. Climategate I and II are just the tip of the iceberg. Once a thief, always a thief. Once a liar, always a liar. So forget the 1/8 and let's find out how large the x in that x/8 really is.
|
|