|
Post by kiwistonewall on Dec 21, 2008 4:17:33 GMT
walterdnes
Oh it is, it is!! There IS no "scientific consensus" on anything.
Science is simply knowledge - and there is NEVER consensus on anything. We live in a world of competing ideas - you may not like it, but that is the way it is.
As soon as you try to support one theory or idea over another, you don't have freedom. CONSENSUS leads to stagnation.
Of course, this doesn't make the fringe groups "correct". In any case, even "Truth" is a difficult concept, and there is no epistemological consensus on that from Pontius Pilate through to modern Philosophy.
Half the fun of being part of mankind is the flux of ideas we live in.
Enjoy, and give intellectual room to others. By all means, shut them out of your own head. That is our freedom.
|
|
|
Post by jimg on Dec 21, 2008 6:19:56 GMT
Apparently, the temperature relationship is to the number of stations reporting and not CO2!
So to get things cooling, we just need to add more stations!
|
|
|
Post by jimg on Dec 21, 2008 6:28:15 GMT
There should be a positive bias in ground surface temperatures over time, but not in the ground surface temperature records, which are adjusted to eliminate the UHI bias. Given that you're the AGW aficionado, how do you know that the UHI offset is adequate? Are you adding too much? Too little? How do you differentiate between stations? What about those stations that are mounted on rooftops or near AC units? Do you subtract the offset on weekends too? How do you know whether the building is occupied or not, or if the AC is on or not? Sounds to me like it would be impossible to collect data and come up with a global anomaly within a fraction of a degree with absolute certainty. On another subject, perhaps you may know this answer: what is the margin of error in the Hadcrut, GISS etc. anomaly plots? I have not been able to find it thus far.
|
|
|
Post by socold on Dec 21, 2008 23:03:22 GMT
So how are they eliminating the urban heat island bias when they are adjusting the city temperatures upward? They aren't just eliminating urban heat island bias. They are also eliminating a number of other effects, including some that cause cooling biases. And you still haven't addressed the problem I keep raising - if GISS warming over the past 30 years is significantly greater than it should be, then that means the satellite records have significantly more warming over the past 30 years than the surface records.
|
|
|
Post by socold on Dec 21, 2008 23:54:09 GMT
Given that you're the AGW aficionado, how do you know that the UHI offset is adequate? Are you adding too much? Too little? How do you differentiate between stations? What about those stations that are mounted on rooftops or near AC units? Do you subtract the offset on weekends too? How do you know whether the building is occupied or not, or if the AC is on or not? Sounds to me like it would be impossible to collect data and come up with a global anomaly within a fraction of a degree with absolute certainty. On another subject, perhaps you may know this answer: what is the margin of error in the Hadcrut, GISS etc. anomaly plots? I have not been able to find it thus far.

The margin of error for Hadcrut is given on these graphs: www.metoffice.gov.uk/research/hadleycentre/obsdata/HadCRUT3.html
GISTEMP also has error bars on graphs on their site. There are now 3 factors: UHI, microsite biases and station closures. Seeing as the adjustment is for all of those, let's look at the total error from all 3. The GISS surface record shows about 0.45C warming over the past 30 years. So how much of that is wrong? How much of that is simply due to the combined error of UHI, microsite bias and station closures? Anyone want to give any figures? How much of the 0.45C warming in GISS are people suggesting is wrong? 0.1C? 0.2C? 0.4C?
|
|
|
Post by jimg on Dec 22, 2008 1:03:45 GMT
Thanks for pointing to those graphs.
But now my next question.
If the data is recorded at 1F intervals (assuming the data was recorded at that accuracy in the early 1900s, and not on 2F scales), how do you get an accuracy greater than 1F (.56C)? Let alone an error bar of +/- .15C?
|
|
|
Post by socold on Dec 22, 2008 2:44:16 GMT
If one thermometer shows 9F, the actual temperature could be anywhere between 8.5F and 9.5F.
If another thermometer shows 10F, then the actual temperature could be anywhere between 9.5F and 10.5F.
So the actual temperature average could be anywhere between 9F and 10F.
The mid-point becomes more likely as you add in more thermometers (i.e. the overestimating thermometers should largely cancel out the underestimating ones with a large enough sample). With enough thermometers you could figure the temperature average is between 9.3F and 9.4F with 95% confidence.
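This cancellation argument can be checked with a quick simulation (an illustrative sketch, not from this thread; it assumes each thermometer's error is independent and unbiased, with readings rounded to the nearest 1F — the figures below are made up for the example):

```python
import random

def simulate(n_thermometers, true_temp=9.35, trials=2000):
    """Average n thermometers that each read the true temperature plus
    independent noise, then round to the nearest 1F; report the
    trial-to-trial spread of the averaged reading."""
    means = []
    for _ in range(trials):
        readings = [round(true_temp + random.gauss(0, 0.5))
                    for _ in range(n_thermometers)]
        means.append(sum(readings) / n_thermometers)
    center = sum(means) / trials
    spread = (sum((m - center) ** 2 for m in means) / trials) ** 0.5
    return center, spread

for n in (1, 10, 100, 1000):
    center, spread = simulate(n)
    print(f"{n:5d} thermometers: average close to {center:.2f}F, spread {spread:.3f}F")
```

The spread of the average shrinks roughly as 1/sqrt(n), which is the standard-error effect being described: one thermometer is only good to about half a degree, but a thousand independent ones pin the average down far more tightly.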
|
|
|
Post by Acolyte on Dec 22, 2008 3:02:59 GMT
If one thermometer shows 9F, the actual temperature could be anywhere between 8.5F and 9.5F. If another thermometer shows 10F, then the actual temperature could be anywhere between 9.5F and 10.5F. So the actual temperature average could be anywhere between 9F and 10F. The mid-point becomes more likely as you add in more thermometers (i.e. the overestimating thermometers should largely cancel out the underestimating ones with a large enough sample). With enough thermometers you could figure the temperature average is between 9.3F and 9.4F with 95% confidence.

Would this only be true if all the thermometers were in exactly the same conditions? i.e. same location, same wind effects, same shade or sunlight, etc. If one is in a valley where the wind blows & the other is in a cul-de-sac where the air is still, the whole averaging thing becomes invalid.
|
|
|
Post by ron on Dec 22, 2008 3:31:13 GMT
I'm not sure about that.... as we're not trying to measure just one type of location. We're averaging for the sake of finding the general overall average of all thermometers in all types of environments.
Of course that doesn't take into account the pollution biases of improper locations, which is why GISS data is so unusable and must be adjusted by some magical formula to try to remove teh biases. I'm wondering (along with everyone else)... how do they come up with that formula? My strong guess is that they are now trying to adjust towards other measurements, which makes their data totally worthless.
Just my 2 pennies.
|
|
|
Post by socold on Dec 22, 2008 3:42:45 GMT
If one thermometer shows 9F, the actual temperature could be anywhere between 8.5F and 9.5F. If another thermometer shows 10F, then the actual temperature could be anywhere between 9.5F and 10.5F. So the actual temperature average could be anywhere between 9F and 10F. The mid-point becomes more likely as you add in more thermometers (i.e. the overestimating thermometers should largely cancel out the underestimating ones with a large enough sample). With enough thermometers you could figure the temperature average is between 9.3F and 9.4F with 95% confidence.

Would this only be true if all the thermometers were in exactly the same conditions? i.e. same location, same wind effects, same shade or sunlight, etc. If one is in a valley where the wind blows & the other is in a cul-de-sac where the air is still, the whole averaging thing becomes invalid.

If the valley is cooler because of the wind then so be it. We would be measuring the average temperature of two different locations. If the valley stays windy and the cul-de-sac stays windless then the average temperature trend should be flat over time.
|
|
|
Post by Acolyte on Dec 22, 2008 6:04:48 GMT
But the whole point of using multiple data sources to try to improve accuracy is that they need to be measuring the same thing under the same conditions. You also have the physical imperfection problem: if they are in different conditions, the very fact of 1° reliability or error means the 10 might REALLY be 10.5 while the one down the valley might REALLY be 8.5.
That's what error estimates are about: each individual item might be at the top of the range, might be at the bottom of the range, or could be anywhere in between. To assume both thermometers are out in the same direction is unwarranted & misleading.
If you put (say) 10 thermometers in the same location then yes, you can take the average & it's a good bet (but by no means certain) that you will get a more accurate measure.
To try to average to improve accuracy can ONLY work when they are measuring the same things. Averaging different instruments in different environments adds up to meaningless noise. You might as well wet your finger & stick it in the air in both locations - it's probably going to be more accurate.
|
|
|
Post by ron on Dec 22, 2008 6:32:43 GMT
Assuming there's no bias and there is just random inaccuracy, I would think that one could fairly safely assume that there are sufficient numbers of similar sites in the plethora of sites.
Even without such an assumption, sufficient numbers of varied sites will average out error. If you don't need precision in the airspeed over London Bridge, then you do not need to be assured that the London Bridge anemometer is precise. What you need to know is that there are wind speed detectors over thousands of bridges, and the average airspeed over all of those bridges will be extremely precise, again, given the presumption that the devices are randomly imprecise.
Since the goal of these weather stations (in this usage) is to gather data for an average global temperature and not the temperature of a single site, it is really OK if they are randomly imprecise. It is NOT OK if they are biased internally or systemically placed in locations that are substantially biased by uncontrollable factors, and doubly ungood that they are all or nearly all biased in similar circumstances (which we believe to be biased towards warming).
If there is a bias towards under or over reporting of the wind speed, then no amount of anemometers in any set of identical placements will suffice to remove the error.
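The distinction ron draws (random imprecision averages out, a shared bias does not) can be sketched numerically. An illustrative sketch only; the sensor count, bias value, and noise level are made up for the example:

```python
import random

def average_reading(n_sensors, true_value=10.0, noise_sd=1.0, shared_bias=0.0):
    """Average n sensors, each with its own independent random error,
    plus a bias common to every sensor (e.g. a siting problem shared by all)."""
    readings = [true_value + shared_bias + random.gauss(0, noise_sd)
                for _ in range(n_sensors)]
    return sum(readings) / n_sensors

random.seed(1)
print(average_reading(100_000))                   # random error cancels: close to 10.0
print(average_reading(100_000, shared_bias=0.7))  # shared bias survives: close to 10.7
```

No matter how many sensors are averaged, the 0.7 offset passes straight through to the result, which is the point about systematically warm-biased siting: averaging cannot remove an error that every instrument shares.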
|
|
|
Post by jimg on Dec 22, 2008 7:38:56 GMT
There it is.
As far as I have found, the analog mercury thermometers were graduated and the measurement was recorded in 1F increments.
So for a 50F day, the temp could be somewhere between 49.5F and 50.5F, or 50F +/- .5F.
Now we take all these numbers and come up with an average.
That average will be x +/- .5F (since that is the accuracy of the measuring device.)
You will never get greater than +/- .5 degrees of accuracy no matter how many decimal places you take the average to.
Now take this to the anomaly side. Subtract the historical average from (call it) today's average and you have the anomaly. If the data was only accurate to +/- .5F then the anomaly is x +/- .5F.
You can't calculate greater accuracy into a measurement that wasn't there before. Even if your calculator goes to 10 decimal places.
And like others have suggested, .5F accuracy seems very generous. Of course, this is before the UHI offsets are added in.
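Whether 1F recording granularity really caps the accuracy of the average at +/- .5F is testable numerically. A sketch (illustrative; station count and temperature range are invented, and it assumes station temperatures are spread out so the rounding errors are effectively random — the scenario socold describes; a bias shared by all stations, as argued above, would not cancel this way):

```python
import random

random.seed(42)
# True temperatures at 10,000 stations, spread over a 20F range.
true_temps = [random.uniform(40.0, 60.0) for _ in range(10_000)]
# What gets written down: the same temperatures at 1F granularity.
recorded = [round(t) for t in true_temps]

true_mean = sum(true_temps) / len(true_temps)
recorded_mean = sum(recorded) / len(recorded)
print(f"true mean     : {true_mean:.3f}F")
print(f"recorded mean : {recorded_mean:.3f}F")
print(f"error         : {abs(true_mean - recorded_mean):.3f}F")  # far below 0.5F
```

Each individual record is still only good to +/- .5F, but the rounding errors point in different directions and largely cancel in the average. If every station instead rounded (or drifted) in the same direction, the error would stay near .5F no matter how many stations were averaged, which is why the bias question matters more than the granularity question.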
|
|
|
Post by Acolyte on Dec 22, 2008 9:28:17 GMT
Assuming there's no bias and there is just random inaccuracy, I would think that one could fairly safely assume that there are sufficient numbers of similar sites in the plethora of sites. Even without such an assumption, sufficient numbers of varied sites will average out error. If you don't need precision in the airspeed over London Bridge, then you do not need to be assured that the London Bridge anemometer is precise. What you need to know is that there are wind speed detectors over thousands of bridges, and the average airspeed over all of those bridges will be extremely precise, again, given the presumption that the devices are randomly imprecise. Since the goal of these weather stations (in this usage) is to gather data for an average global temperature and not the temperature of a single site, it is really OK if they are randomly imprecise. It is NOT OK if they are biased internally or systemically placed in locations that are substantially biased by uncontrollable factors, and doubly ungood that they are all or nearly all biased in similar circumstances (which we believe to be biased towards warming). If there is a bias towards under or over reporting of the wind speed, then no amount of anemometers in any set of identical placements will suffice to remove the error.

The problem comes when you DO need the wind speed on London Bridge to be precise. The temp quotes are in 1/100ths of degrees. If the measuring devices are accurate to +/- 1°, then we are talking errors of up to 100 times the quoted accuracy. So on London Bridge, given the same parameters, the wind speed could be desired to be accurate in 1mph increments, but if that's 100 times the accuracy, the wind speed might actually be 150mph, or even 50mph in the other direction. These people are talking accuracy across decades of 100ths of 1° based on something that might be capable of being within a degree across an hour.
|
|
|
Post by Acolyte on Dec 22, 2008 9:34:10 GMT
And still, unless you know the actual inaccuracy of individual instruments, (in which case you don't need to do the averaging exercise at all) then comparing different instruments in different conditions will not assist you to determine accuracy at all & in fact may lead you directly astray.
It isn't hard at all to see that a thermometer in still air may be out by +.5°, but one where the cold air comes off the mountain will not only be chilled by the air but may in fact be so much less efficient that the error is more than -.5°, and that slight difference, given we are talking such tiny differences, may be the difference between declaring warming & declaring cooling for an area.
It's junk science at best & at worst a direct path to biased manipulation of the data, particularly when thousands of sites were closed & most of them were non-urban in location.
|
|