|
Post by icefisher on Dec 3, 2009 21:55:48 GMT
"Christopher C. Horner, a senior fellow at the Competitive Enterprise Institute, said NASA has refused for two years to provide information under the Freedom of Information Act that would show how the agency has shaped its climate data and explain why the agency has repeatedly had to correct its data dating as far back as the 1930s. ... "NASA and CRU data are considered the backbone of much of the science that suggests the Earth is warming as a result of man-made greenhouse gas emissions. NASA argues that its data suggest this decade has been the warmest on record. "On the other hand, data from the University of Alabama-Huntsville suggest temperatures have been relatively flat for most of this decade." Washpost is clearly a biased mouthpiece, albeit not as extreme as the telegraph or the register. CEI is the red flag, describing has PR attack written all over it. The best distortion though is this part: "NASA argues that its data suggest this decade has been the warmest on record. "On the other hand, data from the University of Alabama-Huntsville suggest temperatures have been relatively flat for most of this decade."" UAH data also shows this decade has been the warmest on record, so their "on the otherhand" is false. But yeah a nice wordplay trick was done there. Since the context is "global warming"; a decade of flat temperatures is an "on the other hand" to an accelerating forcing that was allegedly solely responsible for making the last decade the warmest. No wordplay trick there whatsoever. The missing heat is a legitimate concern when AGW was supposed to be overpowering natural variation. Further, if you can't predict nor explain a decade long flat period; the logical extension of that is you also cannot explain the decade long warming that preceded it. Hansen and Company are like a bunch of old geezers with divining sticks that told a lot of people where to dig and hit water about as often as you would if you dug holes randomly.
|
|
|
Post by dwerth on Dec 3, 2009 22:35:09 GMT
Graeme-
It is true that you can design an experiment that could minimize the error margin. I concur that the historical measurements were probably accurate but not precise, as you said; eyeballing a temperature gauge when it is below freezing in white-out conditions might possibly lead the measurement taker to, um, hurry things along a bit, shall we say?
However, it is not actually the historical measurements that I am worried about. It is the high precision that is currently claimed for modern measurements of surface temperatures. This seems very, very problematic to me, given the problems that we are finding with the siting and surroundings of a very high percentage of surface temperature stations.
*edit: name
|
|
|
Post by Graeme on Dec 3, 2009 22:53:10 GMT
dwerth, I think we're in general agreement. However, if we're trying to measure trends, rather than an accurate 'global temperature', then I'm less concerned than you on the issue of siting, etc. While a given site may be biased, that would be acceptable if the bias is shown to be consistent (not a given), because the trend would be the same regardless of the bias.

I've been thinking about the problem and I'd like to see several assessments made, rather than one global mean temperature. What I'd like to see is:
1. A major city temperature anomaly chart
2. A rural temperature anomaly chart
3. An airport temperature anomaly chart (since so many sites are now at airports, for practical reasons)
4. A sea surface temperature anomaly chart
5. Satellite-measured temperature anomaly chart(s)

All of these would use UNADJUSTED data. Between these, we should be able to identify which trends are applicable worldwide and which are localised (eg. major cities/airports showing one trend, and rural sites showing another). More importantly, I think it would then be possible to at least partially quantify UHI effects, which could then be used to produce adjusted versions of the above for historical data.

The reason I stated I would prefer to see unadjusted data used is that I believe current measurements are more accurate than historical ones. If any adjustments need to be made to bring different data sets into line (eg. due to UHI or a site location move), then those adjustments should be made to the historical records, not the current ones (as is done at the moment). My logic is that we know the current data better than the historical data, and hence it should be left unmodified. That will give a more accurate 'current' picture than modifying the current data to make it compatible with historical data. Just my humble opinion.
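(As a rough illustration of how these separate anomaly series could be built from unadjusted station records, here is a minimal sketch. The pandas DataFrame layout and its columns - station_id, category, year, month, temp_c - are hypothetical placeholders, not taken from any actual archive.)

```python
# Minimal sketch of the "several assessments" idea, assuming unadjusted monthly
# station data in a pandas DataFrame with hypothetical columns:
#   station_id, category ('city', 'rural', 'airport', ...), year, month, temp_c
import pandas as pd

def category_anomalies(df, base_start=1961, base_end=1990):
    """One annual anomaly series per station category, from unadjusted data."""
    base = df[(df['year'] >= base_start) & (df['year'] <= base_end)]
    # Per-station, per-calendar-month baseline means (removes the seasonal cycle)
    clim = (base.groupby(['station_id', 'month'])['temp_c']
                .mean().rename('clim').reset_index())
    df = df.merge(clim, on=['station_id', 'month'], how='left')
    df['anom'] = df['temp_c'] - df['clim']
    # Average station anomalies within each category, then by year
    return df.groupby(['category', 'year'])['anom'].mean().unstack('category')

# e.g. category_anomalies(stations).plot() would draw one curve per category,
# letting city/airport trends be compared directly against the rural series.
```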
|
|
jtom
Level 3 Rank
Posts: 248
|
Post by jtom on Dec 3, 2009 23:27:20 GMT
I've seen a great deal of info on other sites, like WUWT, that shows the operating conditions at many, many stations have not remained static, which makes looking for trends in the raw data meaningless. Trees are removed, concrete poured, a.c. units installed, parking lots built, new runways constructed.... Also, some stations seem to have been physically relocated within the area but kept the same name, with an 'adjustment' made to equate the before and after stations.
Even at stations where the environment does not change significantly, you run into problems when equipment is replaced.
For satellite data, satellite drift demands data adjustments, rendering trend studies moot.
I think it is possible to get enough data to get an idea as to what the global average is, but I don't think you'll ever have a process with such accuracy as to determine fractional temperature changes from one month to the next or even from year to year.
|
|
|
Post by dwerth on Dec 3, 2009 23:29:39 GMT
Graeme-
I definitely agree with you that we need more than just an attempt to amalgamate all the data sources that we have into one average global temperature. I like your suggestions for what data sets to use. If we were to do this, I think we would then get down to arguing over which data set should be given higher weight, but that is all good scientific and mathematical debate, which is healthy.
In addition, how expensive is it to set aside enough money every year to purchase land for siting more ground-based observational stations? We should have more stations as time progresses, not fewer. However, the opposite seems to be the case.
|
|
|
Post by Graeme on Dec 3, 2009 23:44:26 GMT
Graeme - I definitely agree with you that we need more than just an attempt to amalgamate all the data sources that we have into one average global temperature. I like your suggestions for what data sets to use. If we were to do this, I think we would then get down to arguing over which data set should be given higher weight, but that is all good scientific and mathematical debate, which is healthy.

That's what I thought, too. With 'raw' data being presented first, any manipulation of that data to allow for particular biases would then be up front and known... and rigorously debated.

In addition, how expensive is it to set aside enough money every year to purchase land for siting more ground-based observational stations? We should have more stations as time progresses, not fewer. However, the opposite seems to be the case.

For rural sites, it shouldn't be too expensive. All of our countries have farms in place. 'Renting' half an acre (or more) of otherwise low-productivity land from farmers would be a much-needed source of extra income for those farmers, and would provide a stable environment in which to take measurements. Using a bit of modern technology, the sites could include web cams and mobile transmitters so they could be monitored remotely, reducing labor costs. Someone would only need to go out there if there was a problem, or if maintenance of nearby plant growth was required (and the latter could probably be subcontracted out to the farmer, too). Urban land, on the other hand, is probably cost-prohibitive. It would cost too much to ensure an adequate space was maintained around the sites.
|
|
|
Post by dwerth on Dec 3, 2009 23:55:33 GMT
Graeme -
It might not be too prohibitive to site sensors in cities - if we were to use a fairly sizeable array of simple sensors around the city. Remember, we are allowing as a given that the temperatures in the city will be subject to the UHI effect. Therefore, let us plan on it, and rather than using one sensor as the data point, let us use a couple hundred (a thousand if they are cheap enough) sensors widely scattered throughout a city to make the whole city one big thermometer. This would cut down dramatically on the variability of a single sensor, as was discussed earlier.
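(To put a rough number on that variability reduction: averaging N independent sensors shrinks the scatter of the city-wide mean by a factor of about the square root of N. A minimal simulation sketch is below; the 0.5 C per-sensor noise and 15 C city temperature are purely hypothetical.)

```python
# Toy check of the "many cheap sensors" idea: the spread of the city-wide mean
# falls roughly as sigma / sqrt(N). The noise level here is an assumption.
import numpy as np

rng = np.random.default_rng(0)
true_city_temp = 15.0   # hypothetical city-mean temperature (deg C)
sensor_sigma = 0.5      # assumed per-sensor noise, standard deviation (deg C)

for n_sensors in (1, 100, 1000):
    # Simulate 10,000 days; each day the city value is the mean of n_sensors readings
    readings = true_city_temp + rng.normal(0.0, sensor_sigma, size=(10000, n_sensors))
    daily_means = readings.mean(axis=1)
    print(n_sensors, round(daily_means.std(), 3),
          "expected ~", round(sensor_sigma / n_sensors ** 0.5, 3))
```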
|
|
|
Post by Graeme on Dec 4, 2009 0:04:43 GMT
dwerth, that's a good idea! Essentially averaging the UHI effect over the whole city.
On a related topic, I've been looking at the historical proxy reconstructions of global or northern hemisphere temperatures (paleoclimatology). One thing that struck me was that they produced their figures for a global (or northern hemisphere) temperature from only a small number of sites. Has there been any attempt to produce a modern mean temperature from the instrument records from those sites, to see how it compares to the more comprehensive figures that cover as much of the globe as possible?
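(The comparison asked about here is easy to sketch in outline. The anomaly table layout and the proxy-site id list below are hypothetical, not taken from any actual reconstruction.)

```python
# Sketch of a "proxy-sites-only" instrumental mean vs. the full-network mean.
# 'anoms' is assumed to be an annual anomaly table: rows = years, columns = station ids.
import pandas as pd

def sparse_vs_full_mean(anoms: pd.DataFrame, proxy_site_ids: list) -> pd.DataFrame:
    return pd.DataFrame({
        'proxy_sites_only': anoms[proxy_site_ids].mean(axis=1),  # handful of sites
        'full_network': anoms.mean(axis=1),                      # every station
    })

# How closely the two columns track each other indicates how much a sparse
# proxy-site network can really say about the hemispheric or global mean.
```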
|
|
|
Post by sigurdur on Dec 4, 2009 2:32:17 GMT
One question I would like CRU to answer is related to the discussion between myself and Steve on the Akasofu paper. It seems a paper by Bryant, E. (1997), Climate Process and Change, Cambridge University Press, uses a 1996 version of the HadCRUT data. That data did not have a depressed temperature bump around 1940; instead, the smoothed bump of 1940 was right in line with the half bumps of 1880 and 1998. So apparently some adjustments to the data were made subsequent to 1996. Further, Phil Jones says the raw data was destroyed back around 1980, long before he was put in charge of CRU. So the question is: if you don't have the raw data, and the raw data was processed into a grid using some judgements back in 1980 that you can no longer review, how can you reliably make more adjustments to the data set without risking replicating adjustments previously made, either explicitly or inadvertently? This science tale seems somewhat like that parlor game where the host recites a written story privately to one party member, who then repeats the story from memory privately to another party member, and so on. After about four tellings of the story, and about ten minutes in all, the results are knee-slappingly hilarious in that the story has become entirely unrecognizable from the original.

Unsurprisingly, the answer seems reasonably straightforward.
1. Only some of the original data is lost.
2. The loss refers to ground stations - there are other sources of data, and in particular, the 1940s bump is from the sea surface temperatures. It is not a ground station issue, as the image below shows.
3. There are other sources of data coming on-line all the time (eg. digitisation of old log books etc.).

Steve... see this: ok... see the post Magellan posted... the temps of the 40's. They varied a hell of a lot more than 0.2C. Yes, this is US... but it was much more than US. Now I will have to spend hours... dang it!
|
|
|
Post by nautonnier on Dec 4, 2009 10:53:53 GMT
True. Can we really trust the Met Office anymore?

Use UAH then. And while you're at it, explain why we have the warmest November on record during the deepest solar minimum in a century.

Strange: there is someone talking about the 'hump' of the Medieval Warm Period apparently being massaged away - Steve shows HadCRUT data, which someone else points out cannot be trusted - and glc proposes using UAH to look for the MWP?
|
|
|
Post by smoore on Dec 4, 2009 15:42:38 GMT
Graeme, Dwerth:
Thanks for the discussion. I believe that, for the MET graph, they are trying to show more certainty than is reasonable for temperatures of 1850. I agree, Graeme, a trend is a trend, given the site bias stays static. I'm afraid, though, that any given station that is poorly sited probably has an artificial increase in its trend due to an increase in UHI. I'm not sure we'll find too many cities that have been static for the past 160 years.
|
|
|
Post by steve on Dec 4, 2009 16:04:05 GMT
Graeme, Dwerth: Thanks for the discussion. I believe that, for the MET graph, they are trying to show more certainty than is reasonable for temperatures of 1850. I agree, Graeme, a trend is a trend, given the site bias stays static. I'm afraid, though, that any given station that is poorly sited probably has an artificial increase in its trend due to an increase in UHI. I'm not sure we'll find too many cities that have been static for the past 160 years.

UHI is weather-dependent at many sites, so it is possible to look at the trends under different weather conditions - eg. the trend for windy days vs the trend for non-windy days. Initial results from a few years ago indicate that even though some sites show a different trend, the effect on the average trend is negligible.

hadleyserver.metoffice.com/urban/Parker_JClimate2006.pdf

This finding is a few years old, and the analysis could easily be redone by anyone who wants to do it.
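(The windy-vs-calm comparison described above can be sketched in a few lines. This is not Parker's actual method; the column names and the 4 m/s wind cutoff are assumptions for illustration only.)

```python
# Sketch of a windy-vs-calm trend comparison for one station's record.
# Assumes a DataFrame with hypothetical columns: year, temp_anom (deg C), wind_speed (m/s).
import numpy as np
import pandas as pd

def trend_per_decade(sub: pd.DataFrame) -> float:
    """Least-squares slope of the annual-mean anomaly, in deg C per decade."""
    annual = sub.groupby('year')['temp_anom'].mean()
    return 10.0 * np.polyfit(annual.index, annual.values, 1)[0]

def windy_vs_calm(df: pd.DataFrame, wind_cutoff: float = 4.0):
    # If UHI is inflating the record, calm days should warm faster than windy days;
    # similar slopes in both subsets suggest the UHI contribution to the trend is small.
    return (trend_per_decade(df[df['wind_speed'] >= wind_cutoff]),
            trend_per_decade(df[df['wind_speed'] < wind_cutoff]))
```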
|
|
|
Post by magellan on Dec 4, 2009 17:49:04 GMT
Graeme, Dwerth: Thanks for the discussion. I believe that, for the MET graph, they are trying to show more certainty than is reasonable for temperatures of 1850. I agree, Graeme, a trend is a trend, given the site bias stays static. I'm afraid, though, that any given station that is poorly sited probably has an artificial increase in its trend due to an increase in UHI. I'm not sure we'll find too many cities that have been static for the past 160 years.

UHI is weather-dependent at many sites, so it is possible to look at the trends under different weather conditions - eg. the trend for windy days vs the trend for non-windy days. Initial results from a few years ago indicate that even though some sites show a different trend, the effect on the average trend is negligible. hadleyserver.metoffice.com/urban/Parker_JClimate2006.pdf This finding is a few years old, and the analysis could easily be redone by anyone who wants to do it.

The Parker paper is hogwash. We've been through this too many times.
|
|
|
Post by nautonnier on Dec 4, 2009 19:41:52 GMT
dwerth, that's a good idea! Essentially averaging the UHI effect over the whole city. On a related topic, I've been looking at the historical proxy reconstructions of global or northern hemisphere temperatures (paleoclimatology). One thing that struck me was that they produced their figures for a global (or northern hemisphere) temperature from only a small number of sites. Has there been any attempt to produce a modern mean temperature from the instrument records from those sites, to see how it compares to the more comprehensive figures that cover as much of the globe as possible?

"Has there been any attempt to produce a modern mean temperature from the instrument records from those sites?"

Whyever do that when you have the results that you want? Note that Briffa's paper used a single tree on the Yamal peninsula to support the hockey stick. Check his paper and see whether that is mentioned, or the statistical significance of a single tree, or the accuracy of tree ring widths as a proxy for temperature (and _only_ temperature). You are mistaking climatology for science.
|
|
|
Post by socold on Dec 4, 2009 19:52:52 GMT
"Christopher C. Horner, a senior fellow at the Competitive Enterprise Institute, said NASA has refused for two years to provide information under the Freedom of Information Act that would show how the agency has shaped its climate data and explain why the agency has repeatedly had to correct its data dating as far back as the 1930s. ... "NASA and CRU data are considered the backbone of much of the science that suggests the Earth is warming as a result of man-made greenhouse gas emissions. NASA argues that its data suggest this decade has been the warmest on record. "On the other hand, data from the University of Alabama-Huntsville suggest temperatures have been relatively flat for most of this decade." Washpost is clearly a biased mouthpiece, albeit not as extreme as the telegraph or the register. CEI is the red flag, describing has PR attack written all over it. The best distortion though is this part: "NASA argues that its data suggest this decade has been the warmest on record. "On the other hand, data from the University of Alabama-Huntsville suggest temperatures have been relatively flat for most of this decade."" UAH data also shows this decade has been the warmest on record, so their "on the otherhand" is false. But yeah a nice wordplay trick was done there. Since the context is "global warming"; a decade of flat temperatures is an "on the other hand" to an accelerating forcing that was allegedly solely responsible for making the last decade the warmest. They made a false claim. They claimed that UAH didn't show this decade was the warmest on record. It does. Also note even they didn't claim a decade of flat temperatures, they said "for most of this decade" It is indeed a problem Which is why the predictions span multiple decades.
|
|