|
Post by radiant on Aug 20, 2009 9:12:22 GMT
1. You are wanting to measure 0.6 of a degree change in 100 years.
No I'm not - and when I asked for an analysis you haven't quite provided what I was looking for. Try this: let's say we have a station which meets all your requirements with respect to measurement accuracy and local station environment, and has done so for the last 30 years. Also let's assume we have sampled the temperature every minute rather than twice a day. OK - we now want to know how much temperatures in September have changed in the 20 years between 1986 and 2006. The mean temperatures are as follows: Sept 1986: 11.3 deg; Sept 2006: 16.8 deg. Conclusion: temperatures in September have risen 5.5 deg (i.e. 2.75 deg per decade) in the past 20 years. Discuss.
Where are you going with this? It is theoretical. The data of one station is more or less irrelevant without the data for all stations. It does not exist. I am not interested in models. To know if the earth's temperature has changed significantly in the last 30 years we need to know what the earth's temperature was before the last 30 years. This is a conversation about climate change. Has the climate changed in the last 100 years? To answer that we need to look at the data and the science that produced it. We don't know how hot the earth was 30 years ago. All we have is some guides.
|
|
|
Post by glc on Aug 20, 2009 11:38:48 GMT
Where are you going with this? It is theoretical. The data of one station is more or less irrelevant without the data for all stations. It does not exist.
Exactly. It might be the Armagh station or another station. Are you now saying that one station doesn't matter and you need lots of stations (as well as lots more station measurements)? You have guaranteed accurate measurements - what's the problem?
[hint: The first thing you ought to notice is that the variability in the data dwarfs any measurement error.]
I am not interested in models.
Nor me - unless they happen to be female and around 25 years of age, that is.
To know if the earths temperature has changed significantly in the last 30 years we need to know what the earths temperature was before the last 30 years.
Not necessarily - we just need to know how much the earth's temperature has changed. To use the lake analogy (again) if I want to know how much the water level in a lake has risen, I don't need to know the exact depth of the lake.
This is a conversation about the climate change.
Indeed.
Has the climate changed in the last 100 years?
I think so.
To answer that we need to look at the data and the science that produced it.
Ok.
We dont know how hot the earth was 30 years ago. All we have is some guides.
We have sampled more than enough points over the earth during the last 30 years to give a statistically robust estimate of the increase in temperature.
|
|
|
Post by radiant on Aug 20, 2009 12:49:59 GMT
We are coming at this from different directions. You are saying it is warmer, and the rest is just noise or details. I am saying: is it warmer? There is a lot of noise and detail to consider. My approach is more practically based. I live in the world and I am asking myself: has the earth got warmer? How would I know that? What science would I have to do? Your approach is very theoretical.
1. You have a theory the world has got warmer due to man.
2. You have a theory that 30 years is sufficient time for all climate cycles to play out, to enable you to process the data and get a result.
3. You have a theory that the points raised by myself and others about accuracy don't matter.
4. You have a theory the world would not have warmed without man.
5. You have a theory that a one-room, one-thermometer example can give a crude but correct illustration of why accuracy is secondary to trend, and that this makes a powerful example to support your theory.
6. You have a theory that if you make up temperatures you can create a terrestrial mean temperature, by which you can then make meaningful announcements of anomalies in the record.
7. You have a theory you have no theory.
|
|
|
Post by icefisher on Aug 20, 2009 13:29:32 GMT
Actually reading your homework is recommended. Akasofu uses the current surface temperature record with a few isolated long-term records, including the CET, to make his point. I would number it quite a bit beyond a few - I would say 3 to 5 being a few. But the real question is: can you string together a dataset that comprises half the diversity Akasofu has put together that refutes his work? If not, you are just a science denier.
However there is NOT a significant trend in the CET record between 1800 and 1900. A Least Squares fit gives a total increase of ~0.03 deg over the century. On the other hand, there is an increase of ~0.7 deg between 1900 and 2000. There's a lot more like this.
CET may have been flatter. But for you UK types who love to examine your own belly buttons for a science project: how about a few non-European examples to build your case? Logically, knowing the role of the Atlantic as the ice route out of the Arctic, one could say England and especially Ireland sit on the downwind side of the icebox. One would expect their records to be heavily skewed towards a late recovery, as that region gets the lion's share of the cold stuff and the Atlantic is the smaller ocean for absorbing heat.
|
|
|
Post by nautonnier on Aug 20, 2009 16:53:00 GMT
1. You are wanting to measure 0.6 of a degree change in 100 years.
No I'm not - and when I asked for an analysis you haven't quite provided what I was looking for. Try this: let's say we have a station which meets all your requirements with respect to measurement accuracy and local station environment, and has done so for the last 30 years. Also let's assume we have sampled the temperature every minute rather than twice a day. OK - we now want to know how much temperatures in September have changed in the 20 years between 1986 and 2006. The mean temperatures are as follows: Sept 1986: 11.3 deg; Sept 2006: 16.8 deg. Conclusion: temperatures in September have risen 5.5 deg (i.e. 2.75 deg per decade) in the past 20 years. Discuss.
Let's look at what has actually happened. Temperatures have been taken at varying times of day for centuries - some based on clocks (not chronometers), others on dawn/dusk or good guesses. The thermometers themselves have not been calibrated: they may have a mark put on by eye for 0C at freezing and for 100C at boiling water (at some random atmospheric pressure), with interpolated markings between and beyond these points which may or may not be accurate to a degree, let alone a tenth of a degree. Reading the thermometer is a boringly routine chore, not a 'life and death' issue, and is normally done by an 'observer' who is not the brightest person on the planet, may or may not have myopia, and can be relied on to perhaps 1 degree accuracy, usually rounded to a degree. The net result is that the errors can be more than 1 degree, skewed toward the degree below, and will differ observer to observer and station to station. As more precise technology has been purchased and distributed over the last century, these variances of the observed temperature at a required time around the actual temperature at the actual required time have slowly reduced.
Indeed, despite the interpolation between 'known' temperatures, there may be a new capability for precision. Just moving to a precision of a tenth of a degree could 'show warming' against an earlier tendency to round to the degree below. Have you ever gone out to a Stevenson instrument shelter at night in a gale and sleet to read the thermometers? Or peered into one when you have come out of bright sunlight? Accurate to a 100th of a degree? Remember, no one really thought there was a point in precision of more than a degree on these routine observations - whoever would have thought that people many years into the future would be trying to prove cases by arguing about a few tenths of a degree C change per _century_ based on these readings? This is called hypothetical mathematics meets the real world. Discuss.
|
|
|
Post by magellan on Aug 20, 2009 17:32:30 GMT
Glc said: The ocean areas show warming in both satellite and surface records. This cannot be due to urban heat. The satellites show warming in the troposphere which again cannot be due to urban heat. Yet Pielke cites a paper (his own) which suggests a conservative estimate for UH warming is ~0.21 deg per decade.
In other words, Pielke is claiming that although the oceans and the troposphere have warmed for whatever reason, the land surface has not warmed and the only reason it shows warming is due to the siting of station thermometers. It's totally and utterly implausible.
I called foul on his claim that Pielke stated UH warm bias is ~.21 deg per decade, with no reference. A bit more checking reveals glc hasn't the foggiest notion of what that ~.21 deg per decade represents. It is the Tmin night-time bias, which is generally a clear telltale for site contamination due to urbanization/micro-site/station changes et al. Note the ignorance (no ad hom intended) of John Finn's statement and Pielke's response below. Unfortunately, Pielke assumed John Finn's response was more than rhetorical. Nowhere did Pielke state there is a ~.21 deg per decade warming bias in the global records.
wattsupwiththat.com/2009/08/13/pielke-sr-on-warm-bias-in-the-surface-temperature-trend/
John Finn quotes Pielke: From our papers (Pielke and Matsui 2005 and Lin et al. 2007), a conservative estimate of the warm bias resulting from measuring the temperature near the ground is around 0.21 C per decade (with the nightime T(min) contributing a large part of this bias).
John Finn's response: I'm clearly misunderstanding something here. If there is a warm bias of ~0.21 deg per decade over land then virtually all the temperature increase over land can be explained by the 'bias'. However the UAH satellite-measured trend over land is ~0.16 deg per decade since 1979 and ~0.22 deg per decade since 1990. What is the reason for the LT warming?
He's correct; he clearly doesn't understand, and is intent on mixing apples with oranges. Since glc says he could shred Pielke's latest paper on warm bias in the surface records, then why doesn't he do it? Pielke will give him a guest post to shred all he wants, the only prerequisite of course being that it is coherent and based on specific refutations with data and references.
Oops, I guess that disqualifies glc out of the gate. The truth is Pielke would eat glc's lunch. Maybe this is why John Finn didn't respond to Pielke at WUWT - he'd be reduced to mud in one paragraph. A hint for glc: shredding Pielke's paper will require more than asking questions, hand-waving and applying your own untested hypotheses.
Glc and socold are so stuck on anomalies, they can't seem to grasp that a trend greater than zero is in fact a rise in temperature regardless of the initial conditions, aka the zero baseline. That it is an "anomaly" does not protect the results from being contaminated by spurious values. The baseline chosen is arbitrary; it has no bearing on the trend itself other than which side of the baseline the values reside. If the surface trend is higher than the atmosphere trend, that means there is error somewhere in the measurements; it's that simple. Warmologists can't legitimately demonstrate that the error resides in the satellite data, so they haphazardly defend the surface station values. The bottom line is that per the CO2 AGW hypothesis, the atmosphere should be warming at a faster rate than the surface, despite recent efforts by AGW promoters to downplay this. I've asked numerous times for references for this "new and improved" hypothesis that it doesn't matter if the surface is warming faster than the atmosphere. I'm still waiting.
The example of Los Angeles, California having a 5+ degC rise in 100 years means temperatures are increasing with time, which means... it is a trend by definition. glc attempted to show (without references) that UHI et al does not increase with time. He seems to think some of us don't understand what an anomaly is, but in reality he apparently doesn't understand that a rise in temperature over time is a trend, and if it is greater than that of its surrounding areas, then one or the other is erroneous. You can't have it both ways, glc. Irrefutable hypotheses are great, aren't they?
No matter what the data says, Warmologists just ignore it or make up rules as the game proceeds. glc said the IPCC and WMO do not recognize NCDC data. That is complete nonsense; references to follow, as his 24-hour grace period has passed. He then questions the graph of South Africa highlighting the huge discrepancy between satellite and surface as being "smoothed data", or says to compare it to surrounding areas - a completely ridiculous comment. No glc, it is the compiled data for a gridded region. Here, you can rip Christy to shreds too ;D
wattsupwiththat.com/2009/07/18/out-of-africa-a-new-paper-by-christy-on-surface-temperature-issues/#more-9410
The U.S. is somewhat adjusted for UHI (other issues are totally ignored); it wasn't any warmer than the 1930's until Hansen (via his mentor Tom Karl) "adjusted" it, and it does not correlate with the ROW. The U.S. still has the most concentrated network of surface stations, yet it is said to be irrelevant because the U.S. comprises only 2% of the world's land mass. Now we have glc cherry-picking single stations that somehow do represent the entire planet. glc then makes the unsubstantiated, impossible claim: We have sampled more than enough points over the earth during the last 30 years to give a statistically robust estimate of the increase in temperature.
---------------------------------------------------------------------------------
glc's lake analogy: I'll use the 'lake' analogy again. If you want to know the change in water level in a lake over the past 10 years, say, you don't need to have an accurate measurement of the lake depth. You just need estimates with respect to some reference point. Providing you have enough readings your errors are not likely to be significant.
The above is a good example of how sound principles of metrology apply.
Questions need to be asked, the first being: what is the purpose of the experiment - to measure the change in the volume of water in the lake, or simply to take a reading at a reference point with no value-added result? Then:
1) What were the initial conditions of the first reading?
2) Is the reference point stable? Has it been moved or shifted in height? How is that determined?
3) Has the lake had changes to it, its environment or shoreline during the period - dredging, dams, etc.? What effects would those have on the true level?
4) What was the reference temperature at the initial time of measurement?
5) Were there recent rain storms that could give an anomalous reading of rise?
Then of course other important items:
a) What method was used to take the first measurement? Will the identical method be used in the second one?
b) Was the instrument calibrated? To what standard? What is the uncertainty?
c) What is the expected accuracy of the test? Was a GR&R performed?
It goes on. Those are a few issues I'd consider before taking any readings. No doubt there are more, but it is a good example of why your analogy is not as cut and dried as it appears.
--------------------------------------------------------------------------
glc said: What is the 1990-2007 UAH trend? What is the 1979-1997 UAH trend? See how easy it is?
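One narrow point in the exchange above can be settled mechanically: the baseline chosen for an anomaly series shifts the values up or down but cannot change the least-squares trend. A minimal sketch, using an invented temperature series (the numbers below are illustrative only, not any real station record):

```python
def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1979, 2009))
# Invented series: a 0.016 deg/yr drift plus a fixed 5-year wiggle.
temps = [14.0 + 0.016 * (y - 1979) + 0.3 * ((y % 5) - 2) for y in years]

# Convert to anomalies against an arbitrary baseline (here, the
# mean of the first 20 years, standing in for a '1979-1998 normal').
baseline = sum(temps[:20]) / 20
anomalies = [t - baseline for t in temps]

s_abs = lsq_slope(years, temps)       # slope of absolute temperatures
s_anom = lsq_slope(years, anomalies)  # slope of anomalies

# Subtracting a constant cannot change the slope.
print(f"slope (absolute): {s_abs:.4f}  slope (anomaly): {s_anom:.4f}")
```

This is only about the arithmetic of baselines; it says nothing about whether any particular series is contaminated, which is the substantive dispute above.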
|
|
|
Post by glc on Aug 20, 2009 19:26:57 GMT
I called foul on his claim that Pielke stated UH warm bias is ~.21 deg per decade, with no reference. A bit more checking reveals glc hasn’t the foggiest notion of what that ~.21 deg per decade represents. It is the Tmin night time bias,
No it isn't the Tmin night time bias. Read your own post.
From our papers (Pielke and Matsui 2005 and Lin et al. 2007), a conservative estimate of the warm bias resulting from measuring the temperature near the ground is around 0.21 C per decade (with the nightime T(min) contributing a large part of this bias) .
The nighttime Tmin contributes a large part of the bias - it isn't THE bias. But whether it's due to nighttime Tmin or not is irrelevant. If this "warm bias" is removed then there is little or no land-based warming. However, the ocean surface temperatures show warming and UAH shows a warming trend of ~0.16 deg per decade over land. In case you think this is another example of my statistical jiggery-pokery, this is from the UAH record.
UAH trend (deg per decade): Globe 0.13, Land 0.16, Ocean 0.10
Got that, Magellan, 0.16 deg per decade - exactly as I posted. The UAH warming trend over land since 1990 is +0.22 deg per decade.
But it doesn't really matter because, in his WUWT post, Roger writes
In other words, consideration of the bias in temperature would reduce the IPCC trend to about 0.14 degrees C per decade
Bearing in mind, GISS and Hadley (and NCDC and RSS) show a global surface trend of 0.16 to 0.17 deg per decade, it's hardly worth bothering about.
|
|
|
Post by glc on Aug 20, 2009 20:09:18 GMT
Glc and socold are so stuck on anomalies, they can’t seem to grasp that a trend greater than zero is in fact a rise in temperature regardless of the initial conditions, aka the Zero baseline.
Magellan
I understand mathematics and statistics. I know how anomalies work. I also know how Least Squares linear regression works.
Magellan, Nautonnier, Radiant
Re: the Sept 1986 / Sept 2006 'problem'
There is nothing wrong with the data, the readings, the instrumentation or anything else. It's simply a matter of natural variability. The data is real. September 2006 just happened to be a warm month; September 1986 was a cool month.
This is to illustrate the point that, in looking at climate trends, all the accuracy in the world matters not one jot if you don't have sufficient samples. The actual trend using all 21 data points is less than half the 2 data point trend. The 1986-2006 annual trend using all 12 months is less than half the September trend. If we use 30 years rather than 20 years both the September and annual trends are reduced still further and are much closer in value. The greater the number of samples the less influence of the 'rogue' or 'outlier' values.
That's why there is a reasonable level of confidence in the ~0.7 deg 20th century warming figure. That'll have to do for now.
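glc's point about sample counts can be sketched numerically. The September means below are invented for illustration (a gentle drift plus weather noise, with a cool 1986 and a warm 2006 as in the example); they are not the actual station data. The endpoint-to-endpoint "trend" comes out several times larger than the least-squares trend over all 21 years:

```python
# Sketch: a 2-point 'trend' vs a least-squares trend over all 21 years.

def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1986, 2007))                      # 21 Septembers
# Invented 'weather noise' around a small +0.03 deg/yr drift,
# with a cold first year and a warm last year.
noise = [-1.9, 0.4, -0.6, 1.1, -0.2, 0.8, -1.2, 0.3, 0.9, -0.7,
         0.1, -0.4, 1.3, -0.9, 0.6, -0.3, 0.7, -1.1, 0.2, -0.5, 2.1]
temps = [12.0 + 0.03 * (y - 1986) + e for y, e in zip(years, noise)]

two_point = (temps[-1] - temps[0]) / (years[-1] - years[0])
full = lsq_slope(years, temps)

print(f"2-point trend:       {two_point:.3f} deg/yr")
print(f"least-squares trend: {full:.3f} deg/yr")
```

With these numbers the two-point figure is inflated several-fold by the cold start year and warm end year; the regression over all 21 points is far closer to the underlying drift. Perfect instrumental accuracy at the two endpoints would not have helped at all.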
|
|
|
Post by zer0th on Aug 20, 2009 21:20:08 GMT
|
|
|
Post by nautonnier on Aug 20, 2009 22:15:41 GMT
"I understand mathematics and statistics. I know how anomalies work. I also know how Least Squares linear regression works."
And have never carried out a meteorological observation in your life. It would be a salutary exercise for you - you should try it. As with all metrics, the definition and precision of the metric, the methodologies, standards and standard procedures can considerably alter the reported metric - and all of these have changed over time.
|
|
|
Post by glc on Aug 20, 2009 22:45:23 GMT
Z says
Pielke Snr....
Quote: Until further analysis is completed using temperature trend data at two or more levels near the surface, the best estimate that we have is that this warm bias explains about 30% of the IPCC estimate of global warming [based on a global average surface temperature trend].
OK, let's look at the entire statement, because Magellan thinks I don't understand it. So you (or Magellan) tell me what you think it means. Here it is:
“[F]rom our papers (Pielke and Matsui 2005 and Lin et al. 2007), a conservative estimate of the warm bias resulting from measuring the temperature near the ground is around 0.21 C per decade (with the nightime T(min) contributing a large part of this bias) . Since land covers about 29% of the Earth’s surface (see), the warm bias due to this influence explains about 30% of the IPCC estimate of global warming. In other words, consideration of the bias in temperature would reduce the IPCC trend to about 0.14 degrees C per decade, still a warming, but not as large as indicated by the IPCC"
|
|
|
Post by glc on Aug 20, 2009 23:01:41 GMT
As with all metrics, definition and precision of the metric, methodologies, standards and standard procedures can considerably alter the reported metric - and all of these have changed over time.
Indeed - but the key is to understand the changes. Natural variability is the first issue, as we saw in the 'September' example. It is the variability that has given us the misleading statistic - not the accuracy or precision of the measurements.
|
|
|
Post by icefisher on Aug 21, 2009 14:03:54 GMT
Both are issues. As pointed out by Akasofu, you have to first separate out the natural variability issues, and he makes a strong science case that the IPCC failed in that. So what's left? Well, it's a lot smaller, clearly, and what exactly it is isn't exactly clear. More precision in measurement might have answered that question, but the system was not initially designed with all these issues in mind. Continuity of measurements was not an initial objective; what was an objective was seasonal forecasting, and thus when changes were being made it was with an eye to improving next spring's or winter's forecast. Actually, the ones who did a better job of that over the past two hundred years were the solar-based Farmer's Almanacs - those nasty astrology books from across the pond.
|
|
|
Post by glc on Aug 21, 2009 16:14:18 GMT
So whats left? Well its a lot smaller clearly and what exactly it is isn't exactly clear. More precision in measurement might have answered that question ....
No it wouldn't. That's the whole point. It's only by building a huge network of measurements that a confident estimate of global warming can be made.
Let's say I want to find the average weight of males in the US. I can start by picking 5 American males at random. Even if I have the most super-sensitive, highly accurate, highly precise equipment, the chances of me hitting on the true average weight from a sample of 5 are quite small. One 20-stone guy in the sample and I've got no chance. With 20 I might be close; with 100, closer still. But the accuracy of the instrumentation is secondary. I could have an old set of scales with an accuracy of +/- 2 lb. Providing there is an even distribution in the error, i.e. the probability of being 2 lb over the true weight is the same as the probability of being 2 lb under the true weight, then there's a reasonable chance that most of the measurement error will cancel. In a sample of 100, you'd expect about 50 over their true weight and about 50 under their true weight. Take the average and you've lost most of the error.
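The weighing argument can be simulated directly. A sketch with an invented population (the 180 lb mean and 30 lb spread are made up for illustration): even with crude +/-2 lb scales, the error of the sample mean is governed by the sample size, not the instrument precision.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Invented population of 'true' weights (mean and spread are assumptions).
TRUE_MEAN = 180.0
population = [random.gauss(TRUE_MEAN, 30.0) for _ in range(100_000)]

def survey(n):
    """Average of n random weighings on scales with a random +/-2 lb error."""
    picks = random.sample(population, n)
    readings = [w + random.uniform(-2.0, 2.0) for w in picks]
    return sum(readings) / n

for n in (5, 100, 10_000):
    est = survey(n)
    print(f"n = {n:>6}: estimate {est:7.2f}, error {est - TRUE_MEAN:+6.2f}")
```

With 5 weighings the estimate can be off by many pounds regardless of the scales; with 10,000 it lands within a fraction of a pound, because the +/-2 lb instrument errors (and the person-to-person variability) average out.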
If there's a bias that's more of a problem - unless you are looking for the change in weight. In which case you've just got an offset. The guy who really weighs 160 lb will weigh 162 lb on scales with a +2 lb bias. If he puts on 7lb then the scales will measure 169 lb but the weight gain is still 7 lb.
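The offset point is plain arithmetic - a constant bias drops out of any difference. A one-line check using the numbers from the paragraph above:

```python
BIAS = 2.0                             # scales consistently read 2 lb heavy

true_before, true_after = 160.0, 167.0         # he puts on 7 lb
measured_before = true_before + BIAS           # scales read 162
measured_after = true_after + BIAS             # scales read 169

true_gain = true_after - true_before
measured_gain = measured_after - measured_before

# The +2 lb bias appears in both readings and cancels in the difference.
print(measured_before, measured_after, measured_gain)  # 162.0 169.0 7.0
```

The caveat, of course, is that this only works if the bias stays constant between the two measurements - a bias that drifts over time contaminates the change as well.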
OK - it's still just possible that this sample of 100 from, say, California has some measurement bias or a few rogue values which could distort the calculated mean, but if I then note that separate samples using independent measuring instruments from Texas, Arizona, etc. are all telling the same story, then confidence in the calculation grows.
It's the same with temperature.
Both are issues. As pointed out by Akasofu.....
Incredibly, you have accepted Akasofu's assertion that temperatures began climbing in 1800 based on a tiny sample of data. One dataset (CET) is, as mentioned in an earlier post, quite definitely flat throughout the entire 19th century. I suspect we'd find the same thing if we analysed his other sources of evidence. The De Bilt "hockey stick" (Fig 5d) also looks flat up until the early 20th century until it shoots up just like Mann's.
|
|
|
Post by icefisher on Aug 22, 2009 5:16:48 GMT
So whats left? Well its a lot smaller clearly and what exactly it is isn't exactly clear. More precision in measurement might have answered that question ....
No it wouldn't. That's the whole point. It's only by building a huge network of measurements that a confident estimate of global warming can be made.
Obviously you have a weak grasp of sampling. Either that, or you enjoy the role of Devil's Advocate. No, you don't need a huge network. Expanding an unrepresentative sample does nothing whatsoever; more garbage in does not produce anything more than more garbage out. Clear, consistent and accurate measurements at least tell you whether you actually have long-term warming at a particular site.
Incredibly, you have accepted Akasofu's assertion that temperatures began climbing in 1800 based on a tiny sample of data. One dataset (CET) is, as mentioned in an earlier post, quite definitely flat throughout the entire 19th century.
There is an indication that there is a multi-hundred-year sink in our weather system. CO2 tends to follow temperature by hundreds of years. I would bet on the ocean being the mechanism, since the ocean likely has both short-term and long-term cycles from, say, ENSO, the PDO, and deep-ocean/inter-ocean exchanges. It is perfectly compatible with Akasofu's paper for there to be 100-year (or even longer) variations in where climate change is being manifested at the surface as well as at the bottom of the ocean. You are trying to stuff words into Akasofu's report that simply are not there. Akasofu does not claim 1800 as a hard and fast start of the LIA recovery. He notes that broadly it is manifested in the record between 1825 and 1880, depending upon region. The De Bilt record reinforces that, except that it shows an initial bump in the early 18th century, followed by an early 19th century regression, then a steady warming after about 1820.
It may look vaguely like a hockey stick, but it is about 200 years different from Michael Mann's, and clearly shows a significant MWP and a LIA. AGW worshipers love to scream about disappearing glaciers, but if you look at Akasofu's compilation, it looks like a steady retreat has been underway throughout the 19th century. Albedo changes have been gradual, not suddenly accelerated in the last 40 years. CET warming being delayed for 60 years or more seems perfectly reasonable, since the Atlantic is the gateway from the Arctic. If there is a global pattern to such a delay... where is it?
|
|