|
Post by socold on Apr 29, 2010 14:59:30 GMT
It's not an argument from ignorance, because I have presented a reasonable basis for expecting the low-sensitivity result to have been shown in GCMs by now if it were possible to show it in GCMs.

"No you did not provide a reasonable basis."

Yes I did. Several times. Here it is again in shortened form: if it were possible to encode such an outcome, I argue it would have been done, because the motive is there to do it. What's unreasonable about that argument?
|
|
|
Post by icefisher on Apr 29, 2010 15:13:35 GMT
"Yes I did. Several times. Here it is again in shortened form: if it were possible to encode such an outcome, I argue it would have been done, because the motive is there to do it. What's unreasonable about that argument?"

You are assuming that if I wrote a model that demonstrated lower climate sensitivity, you would accept it. OK. I got lucky this week: somebody found a model useful for demonstrating how current models overly hype sensitivity. Here is the model:
www.drroyspencer.com/2010/04/simple-climate-model-release-version-1-0/
|
|
|
Post by socold on Apr 29, 2010 16:26:09 GMT
"You are assuming that if I wrote a model that demonstrated lower climate sensitivity you would accept it. OK."
I am assuming that if you, or anyone else, could demonstrate low climate sensitivity with a GCM, someone would have done it by now.
"Here is the model:"
That's not a GCM and sensitivity in that model is an input rather than an output anyway.
|
|
|
Post by magellan on Apr 29, 2010 16:40:41 GMT
"I am assuming that if you, or anyone else, could demonstrate low climate sensitivity with a GCM, someone would have done it by now. ... That's not a GCM and sensitivity in that model is an input rather than an output anyway."

Dear socold,

Provide one climate model that correctly models cloud dynamics and water vapor (the two are intertwined). Spencer relies only on observational data to support his hypothesis. Do that and you likely win the argument. Until then, nothing you say can win points with those using common sense. AGW surely is a religion: Warmology.
|
|
|
Post by nautonnier on Apr 29, 2010 17:08:27 GMT
I still do not understand that if the models all use immutable laws of physics that they
(1) get different results and
(2) fail to forecast (or whatever euphemism is currently in vogue for forecast) correctly the actions of the climate or weather.
Surely as it is so simple that all you do is get a suitable computer language, add laws of physics and stir - we would have got forecasting down pat by now.
Instead, what we see is grudging acceptance that perhaps warming hasn't climbed out of statistical insignificance yet, followed by a shout of eureka from an AGW proponent saying that a single model has been found that shows a levelling of somewhat similar proportions, but not in the same time period... so AGW is proven!
But what about all the models that did NOT show that result? Didn't they have the same immutable physical laws?
This appears to be the scattergun approach: we will have models that forecast _everything_ that can happen, and whatever happens, there is a model for that!! This contradicts the purist claim of using only 'immutable laws of physics', does it not?
Something else must have been added to the mix to get different results: assumptions, and parameterization based on assumptions. These things tend to lead to confirmation bias rather than validation against the real world.
|
|
|
Post by icefisher on Apr 29, 2010 17:34:17 GMT
"That's not a GCM and sensitivity in that model is an input rather than an output anyway."

Sensitivity is an input in every GCM. That's the nature of models: all they do is operate on designer/chef-provided logic. The amount of "variable" input a model allows makes for a more flexible model; a model where all input is "fixed" has no utility as a model. The only real differences are that "variable" input models (defined as having more variables) are slightly more difficult to code than "fixed" input models (defined as giving fewer options to the operator), and variable-input models tend to be more confusing and difficult for the operator to use.

What Spencer is demonstrating here is that sensitivity has been estimated too high because of natural variability and its effect on clouds, which are not modeled in GCMs (assumed static, a fixed input) but have been observed to vary in the climate system. Spencer points out that these random forcings are not a new discovery, just that modelers have ignored their effect on estimated climate sensitivity values. This is really fundamental stuff for sciences delving into complex systems: as we go ahead we learn about what drives things, and we miss major elements at the same time.

Since this has passed peer review, it essentially means that "fixed" input GCMs are, with regard to climate sensitivity numbers, going to need to be recoded. What that means for the output remains to be seen as we learn more about variability and the complexities of the climate system. Since this is an Excel file and in my modeling area, I think I will play around with it.
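The claim that random cloud forcing contaminates sensitivity estimates can be caricatured in a few lines. This is a toy sketch, not Spencer's actual spreadsheet model: the feedback parameter, heat capacity, time step, and noise statistics are all invented for illustration.

```python
import random

random.seed(42)

lam_true = 3.0   # "true" feedback parameter, W/m^2 per K (invented)
C = 10.0         # effective heat capacity (arbitrary units)
dt = 0.1
T, N = 0.0, 0.0
Ts, Rs = [], []
for _ in range(5000):
    # red-noise internal ("cloud") radiative forcing
    N = 0.9 * N + random.gauss(0.0, 1.0)
    R = lam_true * T - N          # net outgoing radiation anomaly
    Ts.append(T)
    Rs.append(R)
    T += dt * (-R) / C            # temperature responds to the imbalance

# ordinary least-squares slope of R against T: the usual feedback estimate
mT, mR = sum(Ts) / len(Ts), sum(Rs) / len(Rs)
lam_est = (sum((t - mT) * (r - mR) for t, r in zip(Ts, Rs))
           / sum((t - mT) ** 2 for t in Ts))
print(f"true feedback {lam_true}, regression estimate {lam_est:.2f}")
```

Because the internal forcing both warms the surface and appears in the radiation term, the regression slope comes out below the true feedback, so the sensitivity diagnosed from it (forcing divided by feedback) comes out too high. That is the direction of bias being argued about here, shown under invented numbers.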
|
|
|
Post by socold on Apr 29, 2010 17:42:14 GMT
"Provide one climate model that correctly models cloud dynamics and water vapor (the two are intertwined). Spencer only relies on observational data to support his hypothesis."

There are no climate models that perfectly model cloud dynamics. Cloud dynamics in models are constrained to observed behavior; the models are empirically based. If there is a variable 'x' which cannot be derived from physical laws, then the value of 'x' can be set from observational data instead. So yes, models are based on observational data. If observations show that 'x' is between 5 and 7, you can't justifiably set 'x' to something outside that range in the model. You instead explore what effect changing 'x' has on the output of the model, and indeed that might help tell you what value 'x' likely has in the actual climate.

As the analogy goes, what we have is that x=5, x=7, and everything in between all show high climate sensitivity. x=2 might show low climate sensitivity, but setting a model to x=2 would violate empirical evidence. Hence no one has demonstrated low climate sensitivity by setting 'x' to 2.

I am arguing not that models are perfect, but that models have some wiggle room. If the variables in the models could be justifiably wiggled to support low climate sensitivity, it would have been done by now and demonstrated to the world. Hence I conclude that low climate sensitivity cannot be shown with what we currently understand mechanism-wise and number-wise about the climate. Low climate sensitivity, if true, relies on a currently not-understood mechanism. Spencer isn't providing a mechanism for low climate sensitivity. If he were, he could show it in a GCM or point out what needed to be changed.
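The "x between 5 and 7" argument can be sketched numerically. Everything here is invented for illustration: the mapping from x to a net feedback is hypothetical, and only the 3.7 W/m^2 figure for doubled-CO2 forcing is a standard value.

```python
F_2XCO2 = 3.7  # standard estimate of the forcing from doubling CO2, W/m^2

def sensitivity(x):
    """Toy mapping from a tunable parameter x to equilibrium sensitivity.

    The feedback formula is invented; it just encodes 'every x in the
    observed range gives high sensitivity'.
    """
    feedback = 4.5 - 0.55 * x  # W/m^2 per K, hypothetical
    return F_2XCO2 / feedback

# sweep x over its observationally allowed range [5, 7]
for x in (5.0, 6.0, 7.0):
    print(f"x = {x}: sensitivity = {sensitivity(x):.1f} K")

# x = 2 would give low sensitivity, but observations rule that value out
print(f"x = 2 (outside observed range): {sensitivity(2.0):.1f} K")
```

The sweep over the allowed range never produces a low value; only the empirically excluded setting does, which is the shape of the argument being made above.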
|
|
|
Post by socold on Apr 29, 2010 17:51:06 GMT
"Sensitivity is an input in every GCM."

It's an output. A common experiment is to double CO2. The input in that case is the doubled CO2 level. That has knock-on effects on all subcomponents of the model. The output is the change in temperature caused by the doubling of CO2 (which can be determined by comparison with a control run where CO2 is not doubled). That change in temperature determines the climate sensitivity, which is therefore an output too.

Something passing peer review doesn't mean it's true; it marks the beginning of a debate on the matter by professional experts, which may see it validated or falsified. For example, Lindzen and Choi got peer reviewed, and we heard a lot about that from posters on this forum, but it was subsequently falsified after experts (including Spencer) looked at it.
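The control-run-versus-doubled-CO2 design can be shown with a zero-dimensional energy-balance toy. In a real GCM the feedback emerges from interacting parameterisations rather than being the single dial used here; the feedback value and heat capacity below are invented, and only the 3.7 W/m^2 forcing figure is standard.

```python
LAMBDA = 1.2   # net feedback, W/m^2 per K (invented for illustration)
C = 8.0        # effective heat capacity (arbitrary units)
F_2XCO2 = 3.7  # standard forcing from doubling CO2, W/m^2

def run(forcing, steps=20000, dt=0.01):
    """Integrate C dT/dt = forcing - LAMBDA * T to near-equilibrium."""
    T = 0.0
    for _ in range(steps):
        T += dt * (forcing - LAMBDA * T) / C
    return T

control = run(0.0)                 # control run: no extra forcing
doubled = run(F_2XCO2)             # experiment: doubled-CO2 forcing
print(f"diagnosed sensitivity: {doubled - control:.2f} K")  # ~ F/lambda = 3.08 K
```

The experimental input is the forcing; the sensitivity is read off the temperature difference between the two runs, i.e. it is an output of the experiment design even in this trivial model.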
|
|
|
Post by socold on Apr 29, 2010 17:56:13 GMT
"I still do not understand that if the models all use immutable laws of physics that they (1) get different results and (2) fail to forecast (or whatever euphemism is currently in vogue for forecast) correctly the actions of the climate or weather."

See my second-to-last post. There is wiggle room in many of the variables in models. This is why different models end up with different results. Parameterizations are not based on assumptions; they are based on observational data. As RealClimate explains:

"We are still a long way from being able to simulate the climate with a true first principles calculation. While many basic aspects of physics can be included (conservation of mass, energy etc.), many need to be approximated for reasons of efficiency or resolutions (i.e. the equations of motion need estimates of sub-gridscale turbulent effects, radiative transfer codes approximate the line-by-line calculations using band averaging), and still others are only known empirically (the formula for how fast clouds turn to rain for instance). With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist. Adjusting these values is described as tuning and falls into two categories. First, there is the tuning in a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.

Secondly, there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not very constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is tuned often to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment."

www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/langswitch_lang/bg/
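Tuning an emergent property of the kind RealClimate describes (e.g. picking the threshold relative humidity that best matches observed cloud cover) amounts to a one-dimensional fit. The model response and all numbers below are made up purely to show the shape of the procedure.

```python
def cloud_fraction(rh_threshold):
    """Hypothetical model response: a lower threshold lets clouds form
    more easily, so cloud fraction rises. Purely invented."""
    return max(0.0, min(1.0, 1.6 - 1.5 * rh_threshold))

OBSERVED = 0.62  # pretend globally observed cloud fraction

# try a handful of candidate thresholds, keep the best match
candidates = [0.55, 0.60, 0.65, 0.70, 0.75]
best = min(candidates, key=lambda rh: abs(cloud_fraction(rh) - OBSERVED))
print(f"tuned threshold RH = {best}, "
      f"model cloud fraction = {cloud_fraction(best):.3f}")
```

Once a value like this is chosen against the mean climate, it is held fixed for perturbation experiments, which is the point the quoted FAQ makes.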
|
|
|
Post by icefisher on Apr 29, 2010 18:20:43 GMT
"Something passing peer review doesn't mean it's true, it marks the beginning of a debate on the matter by professional experts which may see it validated or falsified."

Oh, OK Socold! It's really about "current understanding of physics". Care to describe how that is arrived at?
|
|
|
Post by socold on Apr 29, 2010 18:33:56 GMT
Through decades of hard work by climate scientists, of course.
|
|
|
Post by icefisher on Apr 29, 2010 20:19:01 GMT
"Through decades of hard work by climate scientists, of course."

If all it took was decades of hard-working scientists, wouldn't we have solved all science questions by now? I think you need to be more specific.
|
|
|
Post by dogsbody on Apr 30, 2010 6:14:49 GMT
Good on you Poptech, I was thinking of that graph when I read Socold's comments. I don't think Socold looks into it very deeply, or he would realise that Roy Spencer has had to battle to get papers published recently. The hockey team has been pretty active in influencing journals and editors for most of the last decade. The link to the paper is here:
www.worldclimatereport.com/index.php/2008/02/11/a-2000-year-global-temperature-record/
|
|
|
Post by dogsbody on Apr 30, 2010 6:27:30 GMT
Decades of hard work by climate scientists, Poptech? Climatology is a relatively new discipline compared with atmospheric physics, meteorology and atmospheric chemistry. Those three disciplines comprise the larger part of climatology, and they are where you find most of your sceptical scientists.
You don't have to look too hard to see that.
|
|
|
Post by steve on Apr 30, 2010 7:42:28 GMT
"If all it took was decades of hard working scientists wouldn't we have solved all science questions by now? I think you need to be more specific."

To accurately understand an individual cloud you need to know in detail what goes on inside it: how water condenses and evaporates, how rain drops grow, how droplets freeze, etc. In an individual cloud these processes are very sensitive to temperature, humidity and turbulence. Atmospheric components can also have an impact on the formation of cloud droplets. All these processes are difficult to study directly because they tend to happen inside clouds! You can't really do the simulations in a lab.

Assuming you understand the formation of many types of individual clouds, you still need very good observations of the weather to predict the formation of those clouds. On the other hand, if you know what types of weather produce what types of clouds, a perfect physical formulation of every cloud may not be necessary, and you resort to parametrizations. If one model can forecast the weather (including cloud types and amounts) for, say, the UK, Moscow, North Dakota, Perth, Auckland, Beijing, Rio and Narvik all year round to a certain degree of accuracy, and if, when the model is just left running, its climate for each of these places is roughly correct, then one could argue that it may be equally successful if you were to rerun it after cooking the climate up another degree or two. People talk about "negative feedbacks from clouds", but there is no particular reason why such negative feedbacks have to wait for the whole earth to warm before showing up.

You can also run your model lots of times, varying the parameters that affect the details of your cloud modelling, to see which of the parameters have more or less impact on whether the model behaves. If you find that a lot of parameter settings produce reasonable behaviour and low sensitivity to warming, then you have an argument for low sensitivity that can be explored with more detailed investigations.

I am aware that detailed observation campaigns (beyond the normal routine observations) are carried out, involving extra radiosonde and aircraft observations, to explore these aspects. Obviously, more and better satellite observations are slowly having an impact too. A lot of this research is driven by meteorology rather than climate. I am also aware that models are run with wide ranges of parameters to quantify the uncertainty that exists in their parametrizations.
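The "run the model lots of times varying the parameters" idea is, in miniature, a perturbed-parameter ensemble. The toy response surface and both parameter names below are invented; the point is only the shape of the procedure: sweep the plausible ranges, then screen the runs.

```python
import itertools

def toy_model(rain_rate, entrainment):
    """Invented response surface mapping two hypothetical cloud
    parameters to a 'sensitivity' number. Not real physics."""
    return 2.0 + 1.5 * rain_rate - 2.0 * entrainment

plausible = []
for rain, ent in itertools.product([0.8, 1.0, 1.2], [0.4, 0.5, 0.6]):
    s = toy_model(rain, ent)
    if 1.5 <= s <= 2.5:  # crude screen for 'reasonable behaviour'
        plausible.append((rain, ent, s))

print(f"{len(plausible)} of 9 parameter settings pass the screen")
```

Real perturbed-physics experiments do this with dozens of parameters and thousands of runs; the surviving subset of parameter space is what constrains the plausible range of sensitivity.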
|
|