|
Post by trbixler on Dec 22, 2008 15:39:00 GMT
Ron, 'assuming there's no bias' is not good practice in experimental science. Assumptions are kind of like guesses; proof is the goal. Hansen makes up numbers to match his assumptions, then publishes the 'results' as proof. He can 'nail it' every time. ;D
|
|
|
Post by nautonnier on Dec 22, 2008 17:42:45 GMT
The question is: how much correction is required, how is this known, and is it just SWAG? ***
If a thermometer once in the countryside is now on the edge of a UHI, how much massaging up or down of the figures is needed to get the 'correct' temperature - presumably the one that would be the case if the urban sprawl had not encroached? If the wind is from the countryside, then perhaps none; if it's from the steel smelter, then perhaps a lot; if it's from the subdivision with 1-acre lots, then not so much; and if it's from the apartments with the large car park, perhaps more. Of course, if it has been a really sunny day then perhaps more correction is needed than if it has been cloudy at the same air temperature - then perhaps a tad less...
What are 'a lot', 'not so much', 'perhaps more' and 'a tad less' in degrees F? (To 1/100th of a degree, of course.)
If the thermometers are out in the countryside and the farmers irrigate the fields, then there should be corrections for that too. But perhaps not for rain...
There is also the logical position that UHIs are in fact part of AGW, so why correct them out at all?
Not only that, but there is the practice of extrapolating temperatures over hundreds of miles - equivalent to taking a massaged temperature for Land's End in Cornwall and generating the Edinburgh temperature from it, or generating the Framingham, MA temperature from the massaged one at Cape May, NJ. If a meteorologist tried this he would be laughed at, but for a climatologist it is 'science'.
I think that the entire logical edifice of USHCN is faulty.
There are other data sets of meteorological information that have been kept just as long and have NOT been massaged to fit assumptions; these may be far better to use.
All airports have been reporting temperatures every hour for many decades. It would be interesting to compare the data from these unbiased and unmassaged sources with those that have been 'corrected'. Similarly, for at least the last few decades, aircraft in flight have been recording Outside Air Temperatures and transmitting them to the weather services. These should also be used as a comparator with satellite temperatures (which I seem to remember were themselves 'corrected' for drift - how did they know how much to correct those?). There are data sets of air temperatures for large portions of the globe, including the poles, from 25,000 ft to 43,000 ft that would make excellent comparators. It would be very strange if they have not been correlated - but then we live in strange times, where scientists feel that challenging their results with other data is 'denying' and counter to their consensus.
*** (SWAG = Scientific Wild Ass Guess)
|
|
|
Post by ron on Dec 22, 2008 18:20:47 GMT
1) Assuming no bias was a point for this discussion on averaging, not scientific method. Adjustments or corrections are for bias, not random imprecision.
2) Yes, exactly my point. If we needed to know the exact wind speed over London Bridge we would need very accurate instruments. But we don't. For these purposes we are looking for an average wind speed over all bridges, or over the whole globe.
3) Yes, you can have an average that is finer than the error margin (i.e. the graduation marks) of the thermometer. Temperatures are analog; they do not move in graduated steps. The larger the sampling (number of sites), the more accurate the average temperature, accounting for both random error and imprecision.
I'm sure there is a statistician among us who would just love to explain how interviewing 1,018 people who have 50% imprecision (they can answer only yes or no) can give a statistical result of 63% +/- 4%. I have not yet met a man who can vote 63% yes, although down through the years my wife has shown the ability to vote yes while clearly meaning no at least 1/3 of the time.
Bias (lying, in the pollster example) is a different issue.
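For the record, here is a minimal sketch of how that poll arithmetic works (all numbers hypothetical; the 63% 'true' support is an assumption for the demo, not real polling data):

```python
# A toy re-creation of the poll: 1,018 coarse yes/no answers still pin
# down the underlying proportion to within a few percent.
import math
import random

random.seed(42)

TRUE_SUPPORT = 0.63   # assumed "true" share of yes-voters (hypothetical)
N = 1018              # sample size from the example above

# Each respondent can only answer 0 or 1 -- the "50% imprecision"...
sample = [1 if random.random() < TRUE_SUPPORT else 0 for _ in range(N)]

# ...but the sample proportion is fine-grained, and its 95% margin of
# error shrinks as 1/sqrt(N).
p_hat = sum(sample) / N
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)

print(f"estimate: {p_hat:.1%} +/- {margin:.1%}")  # roughly 63% +/- 3%
```

With these numbers the margin comes out near +/- 3%, which is why real polls of about 1,000 people quote +/- 3 to 4%.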
|
|
|
Post by Acolyte on Dec 22, 2008 19:42:44 GMT
The problem is, however, that this is weather we're talking about. It is complex, & averaging temperatures across wide regions means losing the very factors that cause change. Can you think of a reason why we would want to know the average wind speed across every bridge? If I want to know how our bridges hold up, or whether they're subjected to stress, or anything else I can think of, what matters is not the average but the actual. Weather is the same. And climate comes from weather. (Aside - old saying: climate is what we expect; weather is what we get.)
Now as I've stated elsewhere, & to which nobody gave a reply, the only way I can think of to get a Global Average Temp is to point a satellite at Earth & record all radiation out & compare that to radiation in. That isn't happening & instead we are trying to smooth the weather patterns to approximate an answer. Then the agw crowd use those figures to tell us what's coming. Near as I can work out, that simply can't work, as the temperature differences are a main factor in causing air & moisture movement. Averaging them out of the equations means you have no way to predict the next moments, let alone a century off, when the climate in decades to come is the result of weather changes now, each change having a domino effect that helps define what the next 'weather' position will be.
Likewise, as mentioned above, with adjustments. Without understanding the actual processes, how can we adjust anything & expect it to be meaningful? Where is the measurement of each sensor to give some baseline for how good or bad it is? In electronics we do our best to make identical parts, but in any sample the parts can be at either end of the quality range. Why haven't they spent some of the billions involved in propaganda to hire some people to go to the sensors & calibrate them? Is it because sitting at a computer is a better way to ensure you get the data to prove your theory is correct than using the real data that might just cause a change in the theory? I think the whole approach is poorly thought out, & to be altering people's lives based on such faulty logic, poorly modelled scenarios & manipulated data is criminal.
Spend some of the billions to put a satellite up to watch the Earth & measure the radiation leaving it at all wavelengths - make sure of course the sensors are actually calibrated *grins* - & then I might begin to believe what they're saying. Given what's been happening down here, though, I would also want the data to come in pure & be known by all before being allowed into the hands of people with predefined opinions on what the results should be. The way things are being done is like trying to analyse a chaotic system by picking only the middle values. It can't be done - only by knowing all the values do you have any chance to model such a system. Picking the middle values, then changing them to ensure a pre-declared model is right, doesn't even approximate good science, although it is probably effective political tactics.
|
|
|
Post by nautonnier on Dec 22, 2008 20:19:57 GMT
<<<SNIP>>> Now as I've stated elsewhere, & to which nobody gave a reply, the only way I can think of to get a Global Average Temp is to point a satellite at Earth & record all radiation out & compare that to radiation in. That isn't happening & instead we are trying to smooth the weather patterns to approximate an answer. Then the agw crowd use those figures to tell us what's coming. <<<SNIP>>>
<<<SNIP>>> The way things are being done is like trying to analyse a chaotic system by picking only the middle values. It can't be done - only by knowing all the values do you have any chance to model such a system. <<<SNIP>>>
I think we have been saying similar things in different ways. The Earth needs to be treated as a 'black box', or a hohlraum if you want to use Stefan-Boltzmann; then, as you say, you can look at the Earth and measure the radiation in and out.
The current arguments that it's cooler because of a La Nina show a total lack of understanding that it is the entire system we are measuring, not only air temperatures in places affected by La Nina. They are breaking the rules by trying to measure _inside_ the hohlraum rather than just outside it with a satellite. This raises the question of whether it is correct in any way to use Stefan-Boltzmann on parts of a body inside the hohlraum, as they are doing, and then attempt to extrapolate from the parts that they have measured to the entire system. This shows a misunderstanding of the entire Stefan-Boltzmann 'black body' concept.
And in any case, as you say, snapshots of selected values of one variable in a chaotic system cannot be used to forecast the future state of that system. But they have built an entire industry on radiation from the surface of the Earth being 'trapped' by CO2 and the rise in air temperature, and there is a distinct unwillingness to go back and re-examine their base assumptions. (Although Steve is good at that, and I owe him a reply too.)
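To put a single number on the 'black box' view, here is a sketch using standard textbook constants (nothing measured in this thread): balancing the radiation in against Stefan-Boltzmann emission out gives one effective temperature for the whole system, with no reference to air temperatures inside it.

```python
# Radiation balance for the Earth treated as a black box: absorbed
# sunlight in, Stefan-Boltzmann emission out. Textbook values, not data.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0     # solar constant at Earth, W m^-2
ALBEDO = 0.30      # fraction of sunlight reflected straight back

# Incoming flux is intercepted on a disc but emitted from a sphere,
# hence the factor of 4.
absorbed = SOLAR * (1 - ALBEDO) / 4.0

# The effective temperature is whatever balances that absorbed flux.
t_eff = (absorbed / SIGMA) ** 0.25

print(f"absorbed flux: {absorbed:.0f} W/m^2")   # ~238 W/m^2
print(f"effective temperature: {t_eff:.0f} K")  # ~255 K
```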
|
|
|
Post by socold on Dec 22, 2008 20:37:18 GMT
There is a satellite sitting in a warehouse somewhere that would give us balance measurements for incoming and outgoing energy to and from the Earth; it was never launched. I believe there is a similar mission planned within the next few years though.
edit: unfortunately not, the one I was thinking of is only measuring TSI and aerosols: glory.giss.nasa.gov/
Then again, this is useful. This is just one of a number of satellites to be launched in the near future that will measure aerosols. In a few years we should have far more precise data for aerosol forcing.
|
|
|
Post by Acolyte on Dec 22, 2008 22:05:56 GMT
socold, I agree it would be good to get more data but, for mine, unless the basic situation changes it just means such data is going to be used to prop up a given PoV, & we will simply continue to use pseudo-science to scare the bejesus out of Joe Public so he can be manipulated according to agendas. I'd like to see the non-political of both sides get together to agree on what constitutes good science, so we can actually work out what's happening with the planet. (Silly idealistic me...)
|
|
|
Post by kiwistonewall on Dec 22, 2008 23:28:50 GMT
With open publishing of the raw data, and details and explanations for all adjustments made to the data!
|
|
|
Post by nautonnier on Dec 22, 2008 23:36:53 GMT
Too much festive cheer there, I think... Everyone is keen to sit down and have an apolitical discussion on what constitutes good science - until someone mentions research funding; then it's back to the antagonistic approach again. If someone can find a way around that, they will be worth more than one Nobel prize. If you have government funding, then you are expected to produce politically acceptable results. If you have industry funding, everyone assumes that you are producing industry-acceptable results. It takes more effort, resilience and persistence than most researchers can muster to produce results that are unacceptable to their funding source.
|
|
|
Post by jimg on Dec 23, 2008 0:00:15 GMT
Hi Ron. I'm not sure if I misunderstood your post or not, so here goes.
By taking an average of temperature data that was recorded at 1 F intervals over a period of decades, you can come up with a number like 51.32 F (hypothetical number).
This is what we now call the "global mean". Now we set that global mean as our baseline and compare individual years by subtracting the mean. This gives us the "anomaly" from the mean. By subtracting this mean, we can now get a +.15 or -.02 anomaly.
However, the margin of error is still +/- .5 F, based on the accuracy of the measuring instruments. No matter how many stations are averaged into the mean, the margin of error is still +/- .5 F (.28 C).
So, comparing that to our historical hockey stick, anomalies of .1 C are meaningless. And graphs that vary from -.1 to .3 C are equally useless for policy determination, since we could be cooling or holding steady while we are combating warming. On the flip side, we could be warming a lot more than we realize. However, that does not appear to be the case given current conditions.
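For concreteness, a minimal sketch of the baseline-and-anomaly arithmetic I mean (the yearly means are invented; only the 51.32 F figure echoes my hypothetical above):

```python
# Made-up yearly mean temperatures, deg F, standing in for decades of data.
yearly_means_f = [51.20, 51.35, 51.47, 51.30, 51.28]

# The "global mean" baseline is just the long-run average...
baseline = sum(yearly_means_f) / len(yearly_means_f)

# ...and each year's anomaly is its departure from that baseline.
anomalies = [round(t - baseline, 2) for t in yearly_means_f]

print(f"baseline:  {baseline:.2f} F")   # 51.32 F
print(f"anomalies: {anomalies}")        # [-0.12, 0.03, 0.15, -0.02, -0.04]
# Each anomaly is quoted to 0.01 F even though the underlying readings
# were only recorded to the nearest 1 F -- the concern raised above.
```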
|
|
|
Post by dopeydog on Dec 23, 2008 1:03:47 GMT
|
|
|
Post by Acolyte on Dec 23, 2008 1:10:01 GMT
I think it's worse than that, because first they are averaging the temps from across a whole heap of gauges, then they are averaging again across time. I remain unconvinced this actually gives a meaningful 'mean' temp to compare against to give the anomaly figures in the first place.
|
|
|
Post by ron on Dec 23, 2008 2:16:23 GMT
I'm not sure that's accurate. Look at it this way: temperatures do not adhere to 1-degree increments; they are evenly distributed through the temperature range. So given a set of actual temperatures of
1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 (sum = 14.5, average = 1.45)
an accurate thermometer with increments of 1 degree would record
1 1 1 1 1 2 2 2 2 2 (sum = 15, average = 1.5)
Multiply this by a sufficiently large number (that number would be up for debate) of thermometers spread out over well-chosen (again, up for debate) varied terrain and locations, and spread out over time, and you will have an EXTREMELY accurate average global temperature, to the thousandth of a degree or better.
Of course, the number and chosen locations of the thermometers, along with non-random biases in the locations (not following standards), would seriously alter the readings and averages. But finding a global temperature by measuring in one- or two-degree increments over time with large samples is fine. Hey, more precise is better, but not as substantially as you seem to think. It can make a huge difference in a single location, but averaging samples can be a very powerful tool. This is just my not-so-humble opinion, and I think I've said the same thing 3 different ways at this point, so I think I'm done!
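A quick simulation of that argument (my own toy numbers; it assumes true temperatures fall evenly between the graduation marks, which is the unbiased case I'm describing):

```python
import random

random.seed(1)

TRUE_MEAN = 14.5      # hypothetical true average temperature
N_READINGS = 100_000  # many stations over many days

# True temperatures vary continuously, but each thermometer can only
# record to the nearest whole degree.
total = 0.0
for _ in range(N_READINGS):
    true_temp = TRUE_MEAN + random.uniform(-5.0, 5.0)
    total += round(true_temp)

# The rounding errors cancel on average, so the mean of the coarse
# readings lands within a few hundredths of the true 14.5.
print(f"average of rounded readings: {total / N_READINGS:.3f}")
```

If station siting introduced a systematic bias instead, no amount of averaging would remove it - which is the other half of this thread's argument.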
|
|
|
Post by kiwistonewall on Dec 23, 2008 4:31:04 GMT
If they calculated the average of the anomalies, instead of the anomaly of the averages, the figure would be far more meaningful.
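One way to see that difference is a toy case (hypothetical stations and numbers): two stations with different local means, one of which drops out of the record.

```python
# Long-term mean temperature at each hypothetical station.
station_baselines = {"A": 10.0, "B": 20.0}

# This year, only station A reported.
year_readings = {"A": 10.5}

# Average of anomalies: compare each station with its own history.
avg_of_anoms = sum(
    t - station_baselines[s] for s, t in year_readings.items()
) / len(year_readings)

# Anomaly of averages: average the raw temps, then subtract the combined
# baseline -- the missing station drags the result off.
combined_baseline = sum(station_baselines.values()) / len(station_baselines)
anom_of_avg = sum(year_readings.values()) / len(year_readings) - combined_baseline

print(f"average of anomalies: {avg_of_anoms:+.1f}")  # +0.5, meaningful
print(f"anomaly of averages:  {anom_of_avg:+.1f}")   # -4.5, a coverage artifact
```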
But (as I've said before) differences of two nearly equal figures create very large relative errors - that is a fundamental principle of numerical methods.
So the uncertainty in these anomalies is much larger than the anomaly itself.
No matter how large the sample.
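A small worked example of that cancellation (my numbers; it also assumes the two uncertainties are independent, which is itself debatable):

```python
# Two nearly equal figures, each carrying the same absolute uncertainty.
baseline = 51.32   # hypothetical baseline mean, deg F
reading = 51.47    # hypothetical yearly mean, deg F
u = 0.5            # assumed absolute uncertainty in each figure

# The difference is tiny...
anomaly = reading - baseline  # 0.15 deg F

# ...but independent uncertainties combine in quadrature, so the
# absolute uncertainty barely shrinks.
u_diff = (u**2 + u**2) ** 0.5  # ~0.71 deg F

print(f"anomaly: {anomaly:.2f} +/- {u_diff:.2f} deg F")
print(f"relative uncertainty: {u_diff / abs(anomaly):.0%}")
# Each input was known to about 1%; the difference is uncertain by
# several hundred percent.
```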
|
|
|
Post by dopeydog on Dec 23, 2008 15:38:45 GMT
We have skeptics in the wire! businessandmedia.org/articles/2008/20081218205953.aspx
Actually there are three that have voiced issues with AGW. I think Reynolds Wolf said something skeptical in the morning a few days ago. If CNN is starting to have doubts, the deep rumblings must be getting louder.
Correction: that was Rob Marciano, not Wolf, so we are still at two.
|
|