|
Puzzle
Mar 13, 2010 16:39:09 GMT
Post by icefisher on Mar 13, 2010 16:39:09 GMT
I have to admit it's darned puzzling how the globe can be so warm when so much of the northern hemisphere is so cold.
It seems the only unusual phenomena are wind current patterns potentially affecting the distribution of heat around the planet. So I started thinking and wanted to understand a bit more about the fourth-power emission function of radiant bodies in relation to the distribution of heat. So I built a simple mental (relationship) model that is probably like a block of swiss cheese, and hoped some of the more physics-oriented folks here would plug some of the holes for a simpler and better understanding.
I took a hypothetical body and assigned it a heat value of 8. From that I distributed the heat evenly across two halves of the body and assigned each half a value of 4.
Then I took the fourth power of 4 and came up with 256 as a relative value of its IR emission, giving a total emission value of 512 for the entire body.
Then I redistributed the heat, giving one half of the body a value of 6 and the other half a value of 2. Taking the fourth powers gives two numbers, 1296 and 16. Combined, that gives a relative emission value of 1312, more than double the emission value of the evenly distributed case.
So the question is: does heat distribution really have such a huge effect on the rate of planetary cooling, or am I leaving something out of this simplistic relationship model?
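The arithmetic in the model above can be checked directly. A minimal sketch (the "heat values" 4/4 and 6/2 are the arbitrary relative numbers from the post, treated as temperatures on a relative scale; the Stefan-Boltzmann constant cancels because only relative emission matters here):

```python
# Relative T^4 emission for a body split into two halves.
# The 4/4 and 6/2 splits are the arbitrary values from the post above.

def relative_emission(halves):
    """Sum of T^4 over the halves (Stefan-Boltzmann, relative units)."""
    return sum(t ** 4 for t in halves)

even = relative_emission([4, 4])      # evenly distributed
skewed = relative_emission([6, 2])    # same total, unevenly distributed

print(even)            # 512
print(skewed)          # 1312
print(skewed / even)   # ~2.56: the uneven split radiates over twice as much
```

Note the caveat raised later in the thread: this treats the numbers as temperatures, and redistributing *heat* only redistributes *temperature* this way if the body's heat capacity is uniform.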
|
|
|
Puzzle
Mar 13, 2010 17:27:27 GMT
Post by sigurdur on Mar 13, 2010 17:27:27 GMT
Actually, your thought process is right on. There is a reason the GCMs don't work even if one spends millions on computers. The Met Office is figuring it out... maybe?
The idea that your model has any validity when you don't incorporate H2O vapour well is rubbish. It continues to amaze me how people believe this, when validation shows that the models don't work because of the holes in them.
This is why the climate scientists despise the physics in all of this. The physics shows it doesn't add up at this time.
|
|
|
Puzzle
Mar 13, 2010 18:10:14 GMT
Post by nautonnier on Mar 13, 2010 18:10:14 GMT
There is a continual confusion of temperature with heat-content. I believe it is this that leads to many of the problems.
I may use this thread to answer glc's inability to understand the problems with averaging 'temperatures' across a multi-constituent atmosphere with differing heat capacity and content.
|
|
|
Puzzle
Mar 13, 2010 20:18:45 GMT
Post by hairball on Mar 13, 2010 20:18:45 GMT
I wonder if there might be a fairly simple explanation for the confusion. Whatever way the oceanic and atmospheric cycles are working right now seems to be concentrating the cold over landmasses. There are (supposedly) reasonably good records over land for a century, while ocean temperature readings are very patchy. When better temperature readings over the sea were obtained, did they just assume them to be normal, when in fact they were below normal because natural variation was keeping the heat over the landmasses during the 1980s and 90s? Gah, I'm sure there's a way to write that which actually makes sense. In any case, ocean heat content is the way to measure the planet's temperature.
|
|
|
Puzzle
Mar 14, 2010 0:46:07 GMT
Post by glc on Mar 14, 2010 0:46:07 GMT
I may use this thread to answer glc's inability to understand the problems with averaging 'temperatures' across a multi-constituent atmosphere with differing heat capacity and content.
I look forward to your contribution.
|
|
|
Puzzle
Mar 14, 2010 0:55:37 GMT
Post by glc on Mar 14, 2010 0:55:37 GMT
In any case ocean heat content is the way to measure the planet's temperature
That's the spirit. It used to be the satellite record but it's getting to the point where we can't really trust it. It now looks as though we've got a warming trend since 1998 so it might be time to ditch Roy Spencer and John Christy and their UAH dataset. OHC is the only measure we can really rely on - for the time being, at least. Obviously we'll have to look around for something else when that starts to head in the wrong direction.
|
|
|
Puzzle
Mar 14, 2010 1:27:29 GMT
Post by icefisher on Mar 14, 2010 1:27:29 GMT
There is a continual confusion of temperature with heat-content. I believe it is this that leads to many of the problems. I may use this thread to answer glc's inability to understand the problems with averaging 'temperatures' across a multi-constituent atmosphere with differing heat capacity and content.

I look forward to it. I can see off the bat the complication added by the real world having different landmass/ocean ratios. My simplistic model was trying to avoid that complication by assuming a sphere of uniform density and material, splitting the heat between the two halves, and keeping all the other variables equal. Since emissions are, as I understand it, driven by temperature, I was hoping to create a model initially unaffected by the difference between temperature and heat content and by the real-world differences of our planet. I was thinking that any total cooling-rate differential created by a simple redistribution of heat, or, more to the real-world point, the blocking of heat distribution on a surface that receives different heat input by location, opens a bunch of fascinating issues. So I hope you go slowly so all us folks with limited physics background can keep up.
|
|
|
Puzzle
Mar 14, 2010 14:11:45 GMT
Post by nautonnier on Mar 14, 2010 14:11:45 GMT
There is a tendency to see the atmosphere in an extremely simplistic way: a horizontal line for the surface, then another horizontal line for the tropopause, and others for the stratosphere, mesosphere, etc., with a homogeneous mix of gases within each layer. Indeed the IPCC _specifies_ such an atmosphere and magically changes its composition while maintaining its well-mixed, slab-like state. This then leads to measurements being taken at the surface and lower troposphere _as if_ the atmosphere were such a slab. It extends to geometric areas and volumes being specified and then measured for temperature. These temperatures are then 'averaged' to provide the 'average global temperature' - but what does this really mean? Does it mean anything?

Unfortunately, the world is not quite that simple. At the poles, as part of the transport of heat from the tropics, there is almost always a large high-pressure region. High pressure is associated with _descending_ _cold_ _dry_ air. The Hadley cells sit either side of the Inter-Tropical Convergence Zone, which lies along the 'thermal equator'; the thermal equator moves between the tropics of Cancer and Capricorn. The air in the Hadley cells is very humid and rises convectively, taking heat and humidity upward. Between the Hadley cells and the polar cells are the Ferrel cells; these carry the streams of weather systems in the Rossby waves that run along the edge of the polar cell, delineated at the tropopause by the jetstreams. None of these convective cells are geometric shapes: there are 'jet-streaks' and discontinuities, and there are waves in the tropopause that can 'break' rather like waves on a beach. But in general it is cold and dry at the poles and hot and humid at the equator, with variable conditions in the Ferrel cells between the two extremes. The tropopause is the level at which the convective weather stops - note that again - the tropopause doesn't stop the convective weather, it is where the weather stops (like a flood line is where the flood got to). The tropopause at the poles is ~30,000 ft or lower; in the tropics it is 60,000 ft or higher.

Now let us add in a little more: dry air has a much lower thermal capacity than humid air. So at the poles in the 'lower troposphere' we have a layer of cold dry air; in the tropics we have a layer twice as deep of hot humid air. The heat content and heat capacity of a volume of air in the tropics are much higher than those of a similar-size volume of air at the poles. Most of the heat content of tropical air is the sensible and latent heat of its water vapor - this energy is the driver for the severe convection in the tropics. If a particular amount of heat is added to dry polar air it will rise rapidly in TEMPERATURE, as its heat capacity is low. If I add the same amount of heat to the same volume of tropical humid air it will NOT rise by the same amount, as its heat capacity is higher; water will change state or warm, but the specific heat of water is VERY much higher than that of nitrogen. So the question: is 'average atmospheric temperature' a suitable metric for a swirling, heterogeneous atmosphere?

If we look at this year's weather, the polar highs and the Arctic polar vortex extended a LONG way south of normal. The dry air received a relatively normal amount of heat from the sun/surface and rose in temperature more than the normal, more humid Ferrel-cell air would have done. It was still cold - but not _as_ cold. So perhaps the northern areas such as Canada were 'warmer' (see posts from Jim Cripwell) and drier than normal. At the same time the Ferrel-cell weather was COLDER than normal, by enough to put the zero-degree isotherm close to or onto the ground. So the precipitation in the mid-latitudes fell as snow, and then Rossby waves brought the polar air south and it warmed (relatively). The Hadley cells in the tropics became compressed into a smaller area.

It looks as if the _heat content_ of the atmosphere has dropped. But now we have an 'average' in which polar air covering a much larger area warmed, although it was still very cold, while the rest of the atmosphere stays the same. Thus the HEAT content of the atmosphere can stay constant or drop, yet an expansion of the polar vortex south will show a rise in TEMPERATURE, because the air is dry. There is a huge skew in atmospheric heat content toward the tropics; expand or shrink the Hadley cells or the polar vortex and the skew is increased. TEMPERATURE is NOT a good metric for atmospheric HEAT CONTENT.

I think that this, or something close to it, explains the paradox of cold, severe weather in the mid-latitudes of the Northern Hemisphere, with slightly warmer conditions in some northern areas, and a claimed 'warmest ever' period. It also shows the problem of using 'temperature' as a metric when you are measuring a poorly mixed atmosphere. Not only that, but it seems foolish to base temperature on geographic areas and average those, when doing so disregards the effect of changing humidity, especially as humidity itself is expected to alter significantly with changes in heat content. It certainly makes calculation of average temperature to decimal places of a degree completely spurious accuracy. Heat content is a far better metric and in reality is what we should be concerned about. And as the heat content of the atmosphere is transient - it is just heat _already_ escaping to space - the driver for everything is Ocean Heat Content and whatever causes/moderates the changes in it.

= = = = =

On the radiative problem with Stefan-Boltzmann (which may be connected to the discussion above)... My understanding of Stefan-Boltzmann was that the radiation should be considered as coming from a 'hohlraum', and breaking things into constituent parts and 'looking inside' the hohlraum was not to be done.
If the amount of energy in the system is maintained and the constituent parts are homogeneous, then I don't think the 'paradox' you claim can exist. I always find the little throwaway lines in these laws are the critical ones... So from www.universetoday.com/guide-to-space/physics/boltzmann-constant/ we see: "The zero-th law of thermodynamics is, in essence, what allows us to define temperature; if you could 'look inside' an isolated system (in equilibrium), the proportion of constituents making up the system with energy E is a function of E, and the Boltzmann constant (k or kB). Specifically, the probability is proportional to:

e^(-E/kT)

where T is the temperature. In SI units, k is 1.38 x 10^-23 J/K (that's joules per kelvin). How Boltzmann's constant links the macroscopic and microscopic worlds may perhaps be easiest seen like this: k is the gas constant R (remember the ideal gas law, pV = nRT) divided by Avogadro's number."

The mathematicians will have flashed over the writing and looked at the equations. However, in the preamble, in parentheses, is 'in equilibrium' - this means the isolated system is in thermal equilibrium - and it is this that leads to the temperature. OK - so is the atmosphere in equilibrium? Does the atmosphere have A temperature? I am not sure that this answers your paradox. (All written with jetlag)
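The dry-air vs humid-air point above can be put in rough numbers. A minimal sketch, using standard textbook values for the specific heats and the latent heat of vaporization; the humidities chosen for "polar" and "tropical" air (0.1 g/kg and 18 g/kg) are illustrative assumptions, not measurements:

```python
# Same heat added to 1 kg of dry polar air vs 1 kg of humid tropical
# air: compare the temperature rise and the latent heat each carries.
# All parcel humidities are illustrative assumptions.

CP_DRY = 1005.0      # J/(kg K), specific heat of dry air
CP_VAPOR = 1860.0    # J/(kg K), specific heat of water vapor
L_VAP = 2.5e6        # J/kg, latent heat of vaporization of water

def cp_moist(q):
    """Specific heat of moist air for specific humidity q (kg/kg)."""
    return (1 - q) * CP_DRY + q * CP_VAPOR

def latent_heat(q):
    """Latent heat carried by the vapor in 1 kg of moist air (J)."""
    return q * L_VAP

q_polar = 0.0001     # ~0.1 g/kg, very dry polar air (assumed)
q_tropic = 0.018     # ~18 g/kg, humid tropical air (assumed)

heat_added = 1000.0  # J added to each 1 kg parcel

dT_polar = heat_added / cp_moist(q_polar)
dT_tropic = heat_added / cp_moist(q_tropic)
print(f"polar  dT = {dT_polar:.3f} K")
print(f"tropic dT = {dT_tropic:.3f} K")

# The specific-heat difference alone is modest (~1.5%); the big skew
# is in the latent heat stored in the tropical parcel:
print(f"latent heat, polar : {latent_heat(q_polar):,.0f} J/kg")
print(f"latent heat, tropic: {latent_heat(q_tropic):,.0f} J/kg")
```

The numbers support the post's broader claim in a slightly sharpened form: per kilogram, the sensible-heat response differs only a little, but the latent heat in humid tropical air dwarfs that in dry polar air by two orders of magnitude, which is where the temperature-vs-heat-content skew lives.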
|
|
|
Puzzle
Mar 14, 2010 15:02:05 GMT
Post by northsphinx on Mar 14, 2010 15:02:05 GMT
Figure 1 in this paper www.eumetsat.int/groups/cps/documents/document/pdf_conf_p42_s5_key.pdf shows rather well the different temperature gradients with pressure/altitude. In short: inversion during the Arctic winter causes the atmosphere to be warmer than the ground, with no lapse-rate temperature decrease in the first 50-100 hPa. That may be one piece in the large puzzle. If the satellite measurements and calculations do not handle inversion correctly, high latitudes in winter will show unusually warm at 900-950 hPa. The reason could be as simple as a larger area of "Arctic" conditions.
|
|
|
Puzzle
Mar 14, 2010 18:17:35 GMT
Post by nautonnier on Mar 14, 2010 18:17:35 GMT
Figure 1 in this paper www.eumetsat.int/groups/cps/documents/document/pdf_conf_p42_s5_key.pdf shows rather well the different temperature gradients with pressure/altitude. In short: inversion during the Arctic winter causes the atmosphere to be warmer than the ground, with no lapse-rate temperature decrease in the first 50-100 hPa. That may be one piece in the large puzzle. If the satellite measurements and calculations do not handle inversion correctly, high latitudes in winter will show unusually warm at 900-950 hPa. The reason could be as simple as a larger area of "Arctic" conditions.

The point is more fundamental than that - I think temperature is not a metric that should be used at all. Moving to 'average temperature' compounds the problems.
|
|
|
Puzzle
Mar 14, 2010 20:13:13 GMT
Post by northsphinx on Mar 14, 2010 20:13:13 GMT
Agree, nautonnier: average temperature is not a good measure of climate change or heat content. But it is the one used until we have something better. And what could be better? That depends on what we would like to measure/prove/forecast.

If the idea is to monitor climate change then we need to measure "climate". That is not an easy task, but not so difficult as first thought; I think it would be rather easy with statistics from the agriculture sector. That is not as sexy as satellites but still VERY valid.

Measure heat content? For which purpose? I don't see the direct link to climate there either. If the heat content increases, is that a result of increased incoming or reduced outgoing energy? We can measure that in the atmosphere, but we still don't know the reason for it. If we use the wrong hypothesis to explain our measurements, our conclusions will be wrong.

We should not use as simple a proxy as temperature for the climate; let us measure the climate directly. And the easiest proxy for climate measurement is to calculate degree days for different climate zones. Back to degree days, which I think is the best proxy for climate. www.ipm.ucdavis.edu/WEATHER/ddconcepts.html tinyurl.com/yl2tdzv
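For readers unfamiliar with the metric, a minimal sketch of the simplest degree-day calculation (the daily averaging method described at the UC Davis link above; the 10 C base threshold and the week of readings are illustrative choices, not data):

```python
# Growing degree days (GDD) by the simple averaging method:
#   GDD = max(0, (Tmin + Tmax)/2 - Tbase), summed over days.
# Tbase = 10 C is an illustrative threshold, not a universal constant.

def degree_days(daily_min_max, t_base=10.0):
    total = 0.0
    for t_min, t_max in daily_min_max:
        total += max(0.0, (t_min + t_max) / 2.0 - t_base)
    return total

# One week of hypothetical (Tmin, Tmax) readings in C:
week = [(8, 18), (9, 21), (5, 13), (11, 23), (10, 20), (7, 15), (12, 24)]
print(degree_days(week))  # 29.0
```

Days whose mean falls below the base contribute nothing, which is exactly why the metric tracks biologically relevant warmth rather than raw average temperature.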
|
|
|
Puzzle
Mar 14, 2010 20:18:54 GMT
Post by stranger on Mar 14, 2010 20:18:54 GMT
If it is a puzzle, it is one that was solved in the 18th century. Perhaps too simply: land has much less capacity to store or hold heat than the oceans. In addition, earth is a pretty fair insulator. So the land masses give up surface heat very quickly, and replacement energy from below is very slow to arrive.
Oceans, on the other hand, initially have much more available energy, and when that is radiated the lost energy is replaced much more quickly than on land. The rate depends on mixing rates, but it is far faster than dirt.
In either event, equilibrium eventually occurs - but much later and at a higher temperature in the case of the oceans.
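The land/ocean contrast can be made concrete with rough effective heat capacities per unit area. A sketch under stated assumptions: the layer depths (about 1 m of soil actually exchanging heat on these timescales, vs a ~50 m wind-mixed ocean layer) and the soil properties are illustrative textbook-style values, not measurements:

```python
# Rough effective heat capacity per square meter for a land surface
# layer vs an ocean mixed layer.  Depths and soil values are
# illustrative assumptions.

RHO_WATER = 1000.0    # kg/m^3
CP_WATER = 4186.0     # J/(kg K)
RHO_SOIL = 1600.0     # kg/m^3 (typical dry soil, assumed)
CP_SOIL = 800.0       # J/(kg K) (typical dry soil, assumed)

def areal_heat_capacity(depth_m, rho, cp):
    """J per kelvin per m^2 for a slab of the given depth."""
    return depth_m * rho * cp

land = areal_heat_capacity(1.0, RHO_SOIL, CP_SOIL)      # thin conducting layer
ocean = areal_heat_capacity(50.0, RHO_WATER, CP_WATER)  # wind-mixed layer

print(f"land : {land:.2e} J/(K m^2)")
print(f"ocean: {ocean:.2e} J/(K m^2)")
print(f"ratio: {ocean / land:.0f}x")  # on the order of 100x
```

Under these assumptions the ocean column stores on the order of a hundred times more heat per degree than the land surface, which is why land cools fast and the ocean reaches equilibrium "much later and at a higher temperature", as the post puts it.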
Stranger
|
|
|
Puzzle
Mar 15, 2010 5:33:01 GMT
Post by hairball on Mar 15, 2010 5:33:01 GMT
In any case ocean heat content is the way to measure the planet's temperature. That's the spirit. It used to be the satellite record but it's getting to the point where we can't really trust it. It now looks as though we've got a warming trend since 1998 so it might be time to ditch Roy Spencer and John Christy and their UAH dataset. OHC is the only measure we can really rely on - for the time being, at least. Obviously we'll have to look around for something else when that starts to head in the wrong direction.

Hahah, I musta studied the IPCC modus operandi a little too closely: "It's warming, wait, extinctions, no wait, wait - Arctic melt, um, decreased precipitation, erm, the climate's changing, more precipitation, erm, desertification or deluges of snow. No, seriously, ocean acidification. Or something." Meanwhile, 1998 remains the warmest year on record.
|
|
|
Puzzle
Mar 15, 2010 9:27:35 GMT
Post by glc on Mar 15, 2010 9:27:35 GMT
There is a tendency to see the atmosphere in an extremely simplistic way... [quoting nautonnier's post above in full] ... TEMPERATURE is NOT a good metric for atmospheric HEAT CONTENT ... the driver for everything is Ocean Heat Content and whatever causes/moderates the changes in it. (All written with jetlag)

All you're really saying here is that air temperatures can be highly variable. We know that. However, air (and sea surface) temperatures do ultimately reflect the amount of heat in the system - if a little "noisily". This is why we don't rely on a small sample of measurements. The measurements cover all conditions and hence the average is robust.
Short-term climate events can't corrupt the overall trend. The recent very warm readings in the troposphere have barely changed the long-term trend. They might, though, affect the shorter-term trend (e.g. 5 to 10 years) a little more. This is the main reason why trends of less than around 20 years in length can be misleading: short-term events, e.g. ENSO, have a greater influence.

Your argument would hold up if we took all of today's measurements with the intention of calculating a global temperature. Firstly, we would have no idea whether today was 'typical' or 'normal'. Secondly, and perhaps more importantly, no one claims to be calculating a global temperature. We are not actually trying to determine a global temperature; we are trying to determine the temperature increase (or decrease) over a period of time.

To use an analogy: it's like trying to find whether the water level in a lake has risen (or fallen). We don't need to know the depth of the lake; we just need to know the start level. If we have a large enough sample of measurements for each month/year then we can determine the change in the level of the lake to a reasonable degree of accuracy. There are sound statistical reasons for believing that the trends for all the datasets are, given the confidence intervals specified, reliable and robust.

A second analogy (or illustration): if I want to find the average height of US adult males, what do I do? I can select a random sample, take the average, and then provide a confidence interval. How large should the sample be? Two? I can calculate an average, but how reliable is it? If my random sample just happens to have picked a couple of basketball players then it's not much use. Ten? A bit better, but the average is still likely to be heavily influenced by a couple of Danny DeVitos or a couple of Arnies in the sample. A hundred? We could be starting to home in on the true mean, but we'd probably have a relatively wide CI. A thousand? This should provide a decent estimate.

We can't rule out some bias in the data. However, the variability of the data can be calculated, and this should lead to a relatively 'tight' CI. To summarise: all your concerns are recognised, but by using appropriate sampling, both in size and spatial coverage, the influence of short-term events is minimised.
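The height-sampling illustration above can be run as a small simulation. A minimal sketch, with assumed population parameters (mean 175 cm, standard deviation 7 cm, chosen purely for illustration):

```python
# How the approximate 95% confidence interval on a sample mean narrows
# with sample size.  Population parameters are assumed for illustration.

import random
import statistics

random.seed(42)

MU, SIGMA = 175.0, 7.0  # assumed "true" population values, in cm

half_widths = {}
for n in (2, 10, 100, 1000):
    sample = [random.gauss(MU, SIGMA) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5   # standard error of the mean
    half_widths[n] = 1.96 * se                 # approximate 95% CI half-width
    print(f"n={n:5d}  mean={mean:6.1f}  CI = +/- {half_widths[n]:.2f} cm")
```

The half-width shrinks roughly as 1/sqrt(n): quadrupling the sample size halves the interval, which is the statistical content of the "2, 10, 100, 1000" progression in the post.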
|
|
|
Puzzle
Mar 15, 2010 11:29:51 GMT
Post by steve on Mar 15, 2010 11:29:51 GMT
I have to admit its darned puzzling as to how the globe can be so warm when so much of the northern hemisphere is so cold... [quoting icefisher's opening post] ...So the question is does heat distribution have such a huge effect on the rate of planetary cooling based upon how the heat is distributed or am I leaving something out of this simplistic relationship model?

I think your heat distribution has been too extreme. We're talking about local temperature anomalies measured in tens of degrees at most. A ten-degree rise from 273 K to 283 K gives about a 15% rise in radiation, but taking the Central England Temperature as an example, 95% of the day-to-day variation from the norm is within a 10 C range.
www.metoffice.gov.uk/climatechange/science/monitoring/CR_data/Monthly/HadCET_act_graph.gif

I guess you also need to factor in that surface temperatures are not necessarily correlated with rates of energy loss, because the presence of water vapour and clouds affects both surface temperatures and outgoing radiation; i.e. humid nights in the UK may be warmer than dry, clear nights because of the clouds and water vapour.

But I applaud your approach. I don't see why it should not be possible to analyse the likely change in energy balance in different weather and climate scenarios to see the potential impact of, say, a spell of negative or positive PDO. Obviously you cannot know the whole state of the atmosphere even with all the surface and satellite observations we have, but the errors in your outputs should point you to new phenomena and new explanations. On the other hand, things in your approach that turn out to work well may point you to areas of physics that you have correctly simulated.
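The 15% figure is easy to verify, and the same calculation shows how small the redistribution effect becomes at realistic temperatures. The 273/283 K pair is from the post above; the 283/263 K vs 273/273 K split is an illustrative extension of icefisher's two-halves model:

```python
# Stefan-Boltzmann T^4 scaling at realistic temperatures.

def rel_emission(t_kelvin, t_ref=273.0):
    """Emission relative to a reference temperature."""
    return (t_kelvin / t_ref) ** 4

# A 10 K rise from 273 K:
print(f"{rel_emission(283.0) - 1:.1%}")  # 15.5%, i.e. "about a 15% rise"

# Redistribution at realistic scale: two halves at 273/273 K vs 283/263 K.
even = 273.0 ** 4 + 273.0 ** 4
skewed = 283.0 ** 4 + 263.0 ** 4
print(f"{skewed / even - 1:.1%}")  # 0.8% - tiny, unlike the toy model's 2.5x
```

This is the quantitative version of "your heat distribution has been too extreme": with a +/-10 K split around 273 K instead of a 6-vs-2 split, the extra emission from uneven distribution is under one percent.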
|
|