
Note followups. Please remove rec.woodworking from the distribution.

Executive summary: I'm skeptical that the "hockey stick" plot
has any predictive value. But if it does, any prediction drawn
from it will be totally dominated by the most recent data;
temperatures a hundred years ago or more are all but irrelevant.

Mark & Juanita wrote:
On 5 Jul 2005 15:51:36 -0700, wrote:

Larry Jaques wrote:
On 4 Jul 2005 12:01:09 -0700, the opaque clearly wrote:

Larry Jaques wrote:
"How can we make our point with so little data to go on? Aha, make the
increments so small the data (with which we want to scare folks) is
off the charts!" Oh, and "Let's estimate data about 10x longer than
we have ANY data for.)

SPLORF! I realize that is not your only criticism, but it is hilarious
that you would base ANY criticism on the tic spacing on the temperature
axis. If they spaced the tics 10 degrees apart the plot would look the
same; it would just be harder to convert the picture to numbers.

Graph range has been used to hide data more than once, bubba.


Sure, had the author chosen a range from, say, -100 C to +100 C, the
chart would be inscrutable. As it is, the range appears to be
chosen as any sensible person would choose it, to fit the data on the
page within comfortable margins.

BTW, why'd you change the subject from tic-spacing to range? Perhaps
you DO realize the tic spacing is arbitrary, just like the choice
of origin?


When the @#$% was the subject ever tic spacing?


When Larry Jaques wrote:
"How can we make our point with so little data
to go on? Aha, make the increments so small
the data (with which we want to scare folks) is
off the charts!"

I thought he was referring to the tic spacing as 'increments'.
If not, perhaps he or you could identify at least one (1) such
'increment', for instance by showing me its endpoints.

The issue is the
represented data and the range of the data that is based upon very gross
observables being used to predict global average temperature fluctuations
based upon ice core samples, tree ring size, and contemporary cultural
documentation going back the past millennia. Those gross measurements
(again, which could be influenced by more than just temperature) were then
used to compute numbers with very small predicted increments. The
precision presented is not the precision that one would expect from such
gross measures. Had you explored the web site at which you found the
chart, you would have found that this was a conclusion from a paper by Mann
in 1998 that used the data that was summarized in that chart to predict
future global warming.


No, I would not have found that, because that website was not written
by Mann. If I want to know what Bush said in his State of the Union
message, I go to www.whitehouse.gov, not moveon.org. If I want to know
what Mann says about the plot, I'll consult HIS writing.


The paper by Mann is one of the keystones of the
global warming adherents' case (not just a dog and pony show chart). The chart
is simply a summary of Mann's "research" and conclusions.


A chart that is simply a summary of someone's research and conclusions
is, by definition, a dog and pony show style chart. Furthermore,
if any chart is a keystone in the argument for Global Warming
it is this:

http://www.oar.noaa.gov/organization...ers/cmdl_2.gif
from
http://www.oar.noaa.gov/organization...ders/cmdl.html


There are
numerous objections to Mann's methods and his refusal to turn over *all*
of his data or algorithms http://www.climateaudit.org/index.php?p=234
despite being funded by the NSF.


Nothing there appears to have
been posted by Mann.

Data destruction is a serious problem that pervades scientific
society today. Obviously there is good reason to keep data
proprietary to the researcher for a reasonable period of time.
For the HST, that is ten years. But scientists (civil servants)
working in geophysics for NASA and NOAA typically keep their
data proprietary forever and may (and often do) deliberately
destroy it after their papers are published.

Of course there is no honest, rational reason to destroy data
once the researcher is through with his own analysis and publication.
No benefit accrues to the individual researcher, to science, or to
humanity from that destruction. The downside is obvious: the
opportunity to learn more from the data is lost. The upside is
completely nonexistent. Yet that appalling practice persists.


As for his algorithms, the algorithms ARE the science; if he didn't
publish his algorithms, he didn't publish anything of value.

This, er, discussion reminds me of something written by Tolkien
in his foreword to _The Lord of the Rings_: "Some who have read the
book, or at any rate have reviewed it, have found it to be..."

Tolkien understood that some people would not let a minor detail like
not having read something interfere with their criticism or support
of it. I can't find Mann's own description of the plot online, so *I*
do not know what it is meant to portray. I'll take a couple of
educated guesses below.

Further, problems with his methodology
are documented in http://www.numberwatch.co.uk/2003%20October.htm#bathtub
as well as other areas on the site. He deliberately omitted data that
corresponded to a medieval warm period, thus making his predictions for
the future look like the largest jump in history.


The anonymous author(s) of that webpage have not released their data
either, have they? Keeping that in mind, let's take a look at the
"hockey stick" graph and compare it to the "bathtub" graph.

The data may be divided into three ranges based on the error bar size.
The first range, on the left, has the largest error bars, roughly
plus/minus 0.5 degrees, and extends from c. AD 1000 to c. AD 1625.
The second range extends from c. AD 1625 to c. AD 1920 and looks to
have error bars of about 0.3 degrees. The third region, beginning
c. AD 1920 and extending to the present time, looks to have error bars
of maybe 0.1 degree. Actually, there is a fourth region, appearing to
the right of AD 2000, that does not appear to have any error bars at
all. That defies explanation, since it post-dates the publication of
the paper, and were it a prediction, one would expect uncertainties in
the predicted temperatures to be plotted along with the predicted
temperatures themselves.

It also seems reasonable to presume that
the most recent data are by far the most numerous
and those on the extreme left, the most sparse.

I think that one of your criticisms is that the error bars toward the
left of the chart are too small. Now, maybe the plot just shows data
after some degree of processing. For example, each point may represent
a ten-year arithmetic mean of all the temperature data within that
decade, each point could be a running boxcar average through the data,
and so on. I really don't know, but probably, due to the data density,
each point represents more than a single observation; i.e., it is a
'meta' plot. If so, the error bars may simply be plus/minus two
standard errors of those means (two sigma). If so, the size of the
error bars will scale in inverse proportion to the square root of the
sample size.

That last is not opinion or deception; it is simple statistical fact.
Whether or not the error bars are the right size (and keep in mind,
we don't even know if they ARE two-sigma) is a matter that can ONLY
be definitively settled by arithmetic, though people who have
experience with similar data sets may be able to take an educated
guess based on analogy alone.
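
To make that scaling concrete, here is a minimal sketch (Python, with
made-up synthetic temperatures, not anything taken from Mann's data
set) of how a two-sigma error bar on a decadal mean shrinks as the
number of observations in the decade grows:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.5  # assumed scatter of individual temperature estimates, deg C

    # The standard error of the mean falls off as 1/sqrt(n).
    for n in (4, 16, 64, 256):
        sample = rng.normal(0.0, sigma, n)         # one synthetic "decade"
        sem = sample.std(ddof=1) / np.sqrt(n)      # estimated standard error
        print(f"n = {n:3d}   two-sigma error bar ~ +/- {2 * sem:.3f} C")

Quadrupling the sample size halves the bar, which is why a densely
sampled modern segment can honestly carry much smaller error bars than
a sparse medieval one.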

Of course, if my GUESS about what is being plotted is wrong, that may
also be totally irrelevant.

But suppose this is a plot of his data set, with the error bars
established by the numerical precision within the data themselves.
A least-squares fit will be dominated by the data that are most
numerous and those with the lowest uncertainties. ANY model fitted to
those data will be dominated by the data in the third region, to the
extent that data in the second region will have only a minor effect
and those in the first region may have a negligible effect.

So, since the data that are numerous and precise are the data from the
third range, which show a rapid rise in temperature, ANY model that is
fitted to those data will be dominated by the characteristics of that
data range. Given the steep upward slope of the data in that third
region, it is hard to imagine how underestimating the errors in the
earlier data would actually reduce the estimated future temperature
rise extrapolated from that model.

What if the data out in that earlier flat region were biased? What if
they really should be lower or higher? Again, that would have little
effect on a model fitted to the entire data range, for precisely the
same reasons.

So, what if the 'bathtub' plot data are more accurate? They still will
surely lack the precision and density of the modern data, and so still
will have little effect on any model parameters fitted to the data set.

To the untrained eye, the 'bathtub' plot looks VERY different from the
'hockey stick' plot, but both would produce a similar result when
fitting a model that includes the third data range.
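
A minimal numerical sketch of that weighting argument, using an
invented toy series and a deliberately simple model (a flat baseline
plus a ramp after 1920), not Mann's data or his method:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented toy series: sparse, uncertain early data; dense, precise modern data.
    t_early = np.linspace(1000, 1900, 30)
    y_early = rng.normal(0.0, 0.5, t_early.size)
    s_early = np.full(t_early.size, 0.5)               # +/-0.5 deg error bars

    t_modern = np.linspace(1920, 2000, 400)
    y_modern = 0.01 * (t_modern - 1920) + rng.normal(0.0, 0.1, t_modern.size)
    s_modern = np.full(t_modern.size, 0.1)             # +/-0.1 deg error bars

    # Statistical weight (1/sigma^2) carried by each segment of the series.
    w_early, w_modern = (s_early ** -2).sum(), (s_modern ** -2).sum()
    print(f"fraction of total weight in the modern segment: "
          f"{w_modern / (w_early + w_modern):.3f}")

    def fitted_modern_rise(bias_early):
        """Weighted least-squares fit of a constant baseline plus a
        post-1920 ramp; returns the fitted total rise over 1920-2000."""
        t = np.concatenate([t_early, t_modern])
        y = np.concatenate([y_early + bias_early, y_modern])
        s = np.concatenate([s_early, s_modern])
        ramp = np.clip((t - 1920.0) / 80.0, 0.0, None)  # 0 before 1920, 1 at 2000
        A = np.column_stack([np.ones_like(t), ramp]) / s[:, None]
        baseline, rise = np.linalg.lstsq(A, y / s, rcond=None)[0]
        return rise

    print(f"fitted rise, early data as given        : {fitted_modern_rise(0.0):.3f} deg")
    print(f"fitted rise, early data shifted +0.3 deg: {fitted_modern_rise(0.3):.3f} deg")

The modern segment carries well over 99 percent of the chi-square
weight, so even a 0.3 degree bias in the flat early portion moves the
fitted modern rise by only a percent or so.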

Finally, there is a useful statistical parameter, the reduced chi-square
of the fit to the model, that indicates whether or not the errors in
one's data have been estimated properly. Simply stated, if they have,
the value will be near unity. If they have not, the value will be
below unity if the uncertainties were overestimated and above unity
if they were underestimated.
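
The arithmetic is short. A sketch with an invented straight-line data
set and deliberately mis-quoted uncertainties, just to show which way
the number moves:

    import numpy as np

    def reduced_chi_square(y, y_model, sigma, n_params):
        """chi^2 per degree of freedom: near 1 if the quoted sigmas are
        about right, below 1 if overestimated, above 1 if underestimated."""
        dof = y.size - n_params
        return (((y - y_model) / sigma) ** 2).sum() / dof

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 10.0, 50)
    y = 1.5 * x + 0.3 + rng.normal(0.0, 0.2, x.size)   # true scatter is 0.2

    y_fit = np.polyval(np.polyfit(x, y, 1), x)         # straight-line fit, 2 params

    for quoted in (0.1, 0.2, 0.4):   # under-, correctly, and over-estimated errors
        print(f"quoted sigma {quoted:.1f} -> reduced chi^2 = "
              f"{reduced_chi_square(y, y_fit, quoted, 2):.2f}")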

Gregor Mendel, long after his death, was first
accused of culling his data, then exhonerated
on the basis of statistics drawn from his data.
(Note, this was only possible because his data
were saved, not destroyed) The key mistake made
by his critic was over-estimating the degrees
of freedom.


Again, even if this
chart was only for consumption by politicians and policy makers, it was a
deliberately distorted conclusion


How is the _chart_ a conclusion? If you refer
to the points to the right of AD 2000 as a
conclusion, how do you show they are not
properly extrapolated from the model?


that could only be intended to engender a
specific response regarding global warming. In order to get his infamous
2.5C temperature rise prediction, he used trend of the numbers to pad the
data fit rather than padding with the mean of the data. (again documented
on the numberwatch page).


Documented how? I can't even find the _word_ "pad" on that
page.

What the hell do
"used trend of the numbers to pad the data"

and

"padding with the mean of the data."

mean? What the hell is "padding"?

What the page actually tells us (it doesn't document THAT either,
it just tells us) is that another climatologist, Hans von Storch
et al. (HVS), using their own model, obtained results that differed
from Mann's, but those differences were less pronounced if noise
were added to the HVS model.

As the numberwatch author notes, as more noise is added, the
long-term variability recovered from the data is reduced. One is
inclined to say "Doh!" Adding noise to the inputs ALWAYS damps the
variability that a reconstruction can recover from them.

Mann's data set may exhibit less noise because he has more data,
or maybe he also added noise into his analysis to bring his reduced
chi-squares to unity. He ought to say if he did, and for all I know,
maybe he did.



Here
they go the opposite direction to support falsehoods and hysteria.


The graph in question looks to me to have been prepared for some
sort of dog and pony show. If it was created by a climatologist
in the first place, I'll bet it was created to show to reporters
and politicians (and I'll also bet that they didn't understand it anyway).


... and if it was so created, it was created in order to drive a specific
conclusion and input to direct public policy. That is not a trivial, wave
your hands and dismiss-it kind of action. The politicians who used it
certainly understood the conclusions that Mann was trying to assert. The
fact that he omitted the medieval warm period further indicates that this
was not a harmless use of the data from an innocent scientist.


As noted above, data from medieval times are not going to affect a fit
to a model unless, contrary to reason, they are weighted equally with
the modern data that are far more numerous and undoubtedly more
accurate.




It has been over a decade since I last attended a colloquium given
by a climatologist. At that time predictions were being made based
on climate models--not by looking at a graph and imagining it extended
beyond the right margin.


Where do you think that climatologists get the bases for their climate
models? Where do you think they get data that they can use to fine-tune
those models and validate them?


Why do you ask those questions? You indicated you already
know the answers. All I am saying is that the question of
whether or not the existing data base is large and precise
enough to justify a prediction is a mathematical question.
A criticism of the prediction without math is just blowing
smoke, no better than a prediction made without any mathematical
modeling.




For example, this fellow (sorry, I do not remember his name) explained
that one of the objections to a Kyoto-type agreement (this was
before Kyoto) came about because some models predicted that average
annual rainfall in Siberia would decrease over about the next fifty
years but then increase over the following 100. So the Soviets
(this was back when there were still Soviets) were concerned about
not stabilizing global change at a time when Siberia was near the
driest part of the expected changes.


So, since it's been over a decade, were their models correct?


Irrelevant. The point is that a simple linear regression does
not have inflection points.

....


Note also that Siberia getting drier for fifty years and then
getting wetter for a hundred years after is a nonlinear change.
The prediction was not being made by simply extending a plot.


No, it was made by running a computer model. Do you know what goes into
computer models and simulations? Do you have any idea how much data and
effort is required to get a computer model to make predictions that are
reliable?


Yes. I 'turn the crank' every day and
twice on Sundays on data sets
that include tens of thousands
of observations for medium precision
orbit determination and similar
work. We emphatically do not determine
where a satellite will be tomorrow
by simply extrapolating from where
it was today.

I do; as I mentioned before, I've been involved in the area of
development, and integration & test for a considerable time. I know how
difficult it is to get a model to generate accurate predictions even when I
have control of a significant proportion of the test environment. To
believe that climatologists have the ability to generate models that
predict the future performance of such a complex system as the Earth's
climate yet cannot predict even short term with any significant degree of
accuracy is a stretch of epic proportions to say the least.


Nature presents numerous examples where short-term variability
obscures long-term trends. Take geodetic measurements, for example.
The long-term movement over thousands of years can be readily
determined from geological data, but that long-term movement is
punctuated by short-term seismic events that, over the time frame of
an hour, are orders of magnitude larger, making the short-term
prediction completely wrong.

Solar astronomers can better predict the average sunspot number over
the next year than they can the number for a day three days from now.

A weatherman can better predict
annual rainfall for next year than
he can how much it will rain next week.

My physical condition a hundred years
from now is much easier to predict
than my physical condition ten years from now.

There are many areas in nature in which, because of short-term
variability, short-term prediction is far more difficult than
long-term prediction.
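
A toy illustration of that point, with invented numbers that have
nothing to do with any real climate record: a slow drift buried in
large short-term scatter is hopeless to predict day to day, yet its
long-term slope can be pinned down well.

    import numpy as np

    rng = np.random.default_rng(3)

    days = np.arange(50 * 365)                 # fifty years of daily values
    drift_per_day = 0.02 / 365.0               # slow trend: 0.02 units per year
    series = drift_per_day * days + rng.normal(0.0, 1.0, days.size)

    # Short-term prediction: guess tomorrow from today.  The error is noise-sized.
    print(f"rms error predicting tomorrow from today: {np.diff(series).std():.2f}")

    # Long-term trend: fit a line through the whole record.  The slope comes back
    # to within a few percent even though the daily scatter dwarfs the daily drift.
    fitted = np.polyfit(days, series, 1)[0]
    print(f"true slope {drift_per_day:.2e}/day, fitted {fitted:.2e}/day")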

Let's go back to the cornerstone of global warming, the atmospheric
carbon dioxide data. The temperature of a body is constant when the
rate at which it loses energy is the same as the rate at which it
receives energy. The three largest sources of energy for the Earth,
by far, are radioactive decay, dissipation of tidal energy, and
insolation. We have no significant influence on the first two. There
are but two significant ways the Earth loses energy, tidal dissipation
and radiative cooling. Again, we have no influence on the former.

We have no influence on the natural variation in the solar 'constant'.
But direct sampling of the atmosphere makes it clear beyond all doubt
that we can influence the Earth's albedo. We can, and do, change the
balance in the radiative transfer of energy between the Earth and the
rest of the Universe. There is no question that the short-term effect,
meaning over a century or so, of the introduction of more greenhouse
gases into the atmosphere will be a temperature rise, absent other
confounding factors. That is predicted not by any climate model but by
the law of the conservation of energy.
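
The bookkeeping that the conservation-of-energy argument rests on fits
in a few lines. This is the textbook zero-dimensional radiative
balance, not anyone's climate model, and the greenhouse term is a
single crude effective-emissivity fudge factor chosen only for
illustration:

    # Absorbed sunlight = emitted thermal radiation:
    #     S0 * (1 - albedo) / 4 = eps * sigma_SB * T^4
    SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0           # solar constant, W m^-2
    ALBEDO = 0.30         # fraction of sunlight reflected back to space

    def equilibrium_temperature(effective_emissivity):
        absorbed = S0 * (1.0 - ALBEDO) / 4.0
        return (absorbed / (effective_emissivity * SIGMA_SB)) ** 0.25

    # eps = 1: airless body.  eps < 1: the atmosphere lets sunlight in but
    # partly blocks the outgoing infrared, so the surface must run warmer
    # to balance the books.
    print(f"no greenhouse effect   : {equilibrium_temperature(1.00):.0f} K")
    print(f"crude greenhouse (0.61): {equilibrium_temperature(0.61):.0f} K")

Lowering the effective emissivity, which is roughly what adding
infrared-absorbing gases does, raises the temperature at which the
books balance. That is the conservation-of-energy point, nothing more.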

There may be confounding factors that will counteract that temperature
rise. But unless it can be demonstrated that there are such factors
and that they are countering the effect of greenhouse gases, it is not
a question of whether we can observe the change, only of how soon we
will be able to.

It won't shock me if we cannot see a trend yet. That non-observation
will not disprove the law of the conservation of energy. What remains
crucial is determining the magnitude of _other_ influences on global
temperature and how the Earth responds to all of them.

....

People who think that climatologists who generate such charts are not
attempting to influence policy and opinion are
1) Not very honest
2) Not very bright
3) Have misled themselves into believing that said climatologists are
simply objective scientists publishing reduced graphs that are being used
for purposes that they did not envision.


What would their motive be?


That Mann does not fall under the title of naive scientist can be found
in http://www.washtimes.com/commentary/20030825-090130-5881r.htm


Your commentary is at least as impressive as
any commentary read in the Washington Times.
Hell, you probably at least know some math and
science; I'm less than confident of the same
for the editorial staff of the Washington Times.



I've never worked on a climate model but have no doubt that
climatologists rely on tried and true statistical methods
to fit data to their models and to make predictions from
those models, just like any other scientist.


Very well, and where are these climatologists getting *their* data to
validate their models? Generating models is easy, generating models that
produce accurate results is not.


You've indicated a variety of sources yourself, so why ask?


If they overestimate the uncertainties in their data, or
overestimate the degrees of freedom in their models, their
reduced chi-squares will be too small, just like they were
when Gregor Mendel's data were fitted to his theory. (Not
by Mendel himself; he didn't do chi-squares.) While the chi-square
analysis of Mendel's data overestimated the degrees of freedom, his
data fit modern genetic theory quite well.

If someone has a scientifically valid theory, they will have
the math to support it. The same is true for a scientifically
valid criticism of a theory.


Statistics does *not* make the math for a model. Statistics can be used
to validate the precision, or distribution of outcomes of a model run in a
Monte-Carlo sense, comparing the dispersion of the Monte-Carlo runs to the
dispersion of real data, but that assumes one has sufficient real data with
which to perform such a comparison and that the diversity of the variables
being modified in the model are sufficiently represented in the data set to
which the model is being compared.


If one's real data are insufficient in quality or quantity, this will
result in large uncertainties in the predictions.
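
A sketch of that point in exactly the Monte-Carlo sense you describe,
with all numbers invented: fit the same straight line to a sparse,
noisy record and to a dense, precise one, and compare the spread of
the extrapolations.

    import numpy as np

    rng = np.random.default_rng(4)

    def extrapolation_spread(n_points, sigma, n_trials=2000):
        """Monte-Carlo spread of a straight-line extrapolation to x = 20,
        fitted to synthetic data on x in [0, 10] with the given noise."""
        x = np.linspace(0.0, 10.0, n_points)
        predictions = []
        for _ in range(n_trials):
            y = 0.5 * x + rng.normal(0.0, sigma, x.size)   # invented truth + noise
            slope, intercept = np.polyfit(x, y, 1)
            predictions.append(slope * 20.0 + intercept)
        return np.std(predictions)

    print(f"sparse, noisy data   (10 pts, sigma 1.0): +/- {extrapolation_spread(10, 1.0):.2f}")
    print(f"dense, precise data (200 pts, sigma 0.2): +/- {extrapolation_spread(200, 0.2):.2f}")

Poor data do not forbid making a prediction; they just make the honest
error bars on that prediction embarrassingly wide.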

If all one is relying upon to predict
future events is past data being statistically processed, one has done
nothing beyond glorified curve fitting and extrapolation beyond the data
set.


I quite agree. Of course, I have no idea if that is what Mann
did, or not.


The real math behind models and simulations should be the
first-principles physics and chemistry that are properly applied to the
problem being modeled. Therein lies the rub: there are so many variables
and degrees of freedom (in a true modeling definition of that phrase) that
validating the first-principles models to the degree that one could trust a
model to predict future climate changes is, at this time, insufficient.


Agreed.



If instead, their criticism is that the tic spacing on a graph
is too close, well, that conclusion is left as an exercise for
the reader.


Your statement above indicates that either you don't get it, or are being
deliberately obtuse regarding the referenced paper and the infamous "hockey
stick" chart. Think of it this way, the chart shown is the equivalent to
the final output from one of your revered climatologist's models that
predicts global average temperature will increase by 2.5C per decade


I have no revered climatologists. You earlier referred to the
chart as a plot of DATA; now you call it 'equivalent to the final
output...' I don't see how data input to a model can be considered
equivalent to the output from a model.

As I said before, I haven't read the paper. It appears that neither
have you.

--

FF