Posted to rec.crafts.metalworking
The economy -- are we replacing or repairing?

On Thu, 12 Jan 2012 10:48:10 -0500, "Jim Wilkins"
wrote:

"Ed Huntress" wrote in message .
...
Right. Now, relating it to the discussion, those are problems in
probability that are the same kind of stuff that econometrics tries to
model. Again, dealing with human behavior as a key part of the
variables, the levels of certainty are much lower in economics. But
statistical evaluations are a facet of science that's common among
social sciences and the probabilistic side of the physical sciences.

To go back to the original point I disagreed with, the idea that
something isn't science if it can't produce testable models is simply
wrong, historically and in contemporary practice. And the reason that
the physical sciences are more able, more often, to produce such
models is not that they are "superior" sciences. It's because the
things they study always behave the same, even if the sameness is some
statistical value. When the singular or collective behavior of human
beings is the subject, or part of the subject, that's rarely true.
But those studies are still scientific in every essential meaning of
the word, which is an elaboration on the ancient idea that science is
a systematic investigation to increase the store of knowledge.
Ed Huntress


IIRC in the Ringworld series Larry Niven comments that engineers and lawyers
solve similarly difficult problems, except that an engineer's problems don't
actively evade solution. Then he mostly disproves that statement.

As I think you see now, the science I'm familiar with encounters the same
issues as the social sciences when policy makers seek and employ its
advice as the rationale for expensive and nationally significant decisions,
both socially and technically oriented.
http://mitre.org/
I was a lab manager there, not involved in decisions but not ignorant of
them either, and also as an amateur historian I've investigated the
background of air power and military electronics systems development.
Currently I'm researching and debating the tradeoffs of WW2 armored vs
unarmored aircraft carriers in rec.aviation.military.
http://www.navweaps.com/index_tech/tech-042.htm
Like social issues, these required firm, timely decisions derived from
incomplete and possibly wrong information, plus good intuition. Much of the
"logic" of opposing arguments masks their financial, professional, or
electoral self-interest.

There is pressure to offer unjustifiably definite advice to avoid
accusations of Analysis Paralysis (see Jimmy Carter). With some exceptions,
the physical sciences are more aware of, and more willing to admit, the
limited accuracy of predictions based on statistical methodology. Some good clues we
pick up are numerical values with too many significant digits, or rounded
percentages that add up to exactly 100%.

Political pollsters here report the margin of error as one over the square
root of the sample size without relating it to confidence level, and are
annoyed and suspicious when they are proven wrong.


Ah, having done that research myself in different contexts --
primarily broadcast license-renewal studies and marketing research --
let me toss in an oar here. The researchers, at least the good ones,
know what they're doing. The problem comes in the reporting, primarily
in how the *press* reports the findings. Knowing that the press is
going to munge a report, behavioral-science researchers (which
includes pollsters and marketing researchers) report their findings at
a 95% confidence level unless otherwise stated. There are other
examples of misreporting, primarily in basing statistics on the number
of respondents rather than the sample size, but everyone who's
knowledgeable in the field is on the lookout for non-respondent bias
when they look at a poll or marketing report. That's why, when we used
to discuss polls here more than we do now, I'd always link to the
methodology, if it was available, when I cited a poll.

This error is much more common in marketing, where the researchers in
the small agencies that do a lot of that work often have no real
background in statistics. No one else is going to understand the
difference, anyway.
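As an aside on the two things mentioned above -- the 1/sqrt(n) shorthand and
the 95% convention -- they're really the same formula. The margin of error
for a sampled proportion is z*sqrt(p(1-p)/n); plug in z = 1.96 for 95%
confidence and the worst case p = 0.5 and you get 0.98/sqrt(n), which is
where the rule of thumb comes from. A quick sketch to put numbers on it
(Python, purely illustrative; the function name is mine):

from math import sqrt

# Margin of error for a sampled proportion under simple random sampling.
# z = 1.96 corresponds to 95% confidence, z = 2.58 to 99%.
def margin_of_error(n, p=0.5, z=1.96):
    return z * sqrt(p * (1 - p) / n)

n = 1000                            # a typical national poll
print(margin_of_error(n))           # ~0.031 -- the familiar "plus or minus 3 points"
print(1 / sqrt(n))                  # ~0.032 -- the 1/sqrt(n) rule of thumb, nearly the same
print(margin_of_error(n, z=2.58))   # ~0.041 -- the same poll at 99% confidence

Quoting 1/sqrt(n) without saying "95%" doesn't avoid choosing a confidence
level; it just hides the choice.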

In medicine and pharma research, where I've spent several years
editing journal articles, there is some flabbiness because they rely
on p-values rather than confidence levels. The standard
p-value they use for significance is 0.05. Unless they state
otherwise, that's the cutoff level they use for rejection of the null
hypothesis and determination of "statistical significance." Sometimes
they'll determine significance at 0.01 if they're really proud of
their data, and they'll be sure you know about it if they do. In terms
of what it means, it makes no discernible difference, but it sounds
better.

It's arbitrary, but given the enormous range of cohort sizes in pharma
and medical research (from perhaps a dozen to 100,000), it's the only
thing that makes any sense -- even though it doesn't make a lot of
sense. <g>
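To put rough numbers on why one fixed cutoff behaves so differently across
that range of cohort sizes: for a two-arm comparison of proportions around a
50% baseline, the smallest difference that just reaches p = 0.05 shrinks like
1/sqrt(n). A back-of-the-envelope sketch (Python, illustrative only -- a
plain two-proportion z-test, ignoring power and multiple comparisons):

from math import sqrt

# Smallest difference in proportions that just reaches p < 0.05 (two-sided)
# in a two-proportion z-test, assuming rates near 50% in both arms.
def min_significant_diff(n_per_arm, p=0.5, z=1.96):
    se = sqrt(2 * p * (1 - p) / n_per_arm)   # pooled standard error
    return z * se

for n in (12, 100, 1000, 100000):
    print(n, round(min_significant_diff(n), 4))
# 12     -> ~0.40   (only a huge effect clears the bar; the normal approximation is shaky here)
# 100    -> ~0.14
# 1000   -> ~0.044
# 100000 -> ~0.0044 (clinically trivial differences become "statistically significant")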

My experience is that many of the current generation of report writers
who write for general public consumption don't know a lot about the
statistics they report. They report the output values from SAS, SPSS,
Stata, or sometimes R that are conventional in that particular field.
There's an entire layer of expert statisticians and researchers who
work up the line from them. (See below.)

Experimental verification
exposes such sloppiness by physical scientists and serves to keep us honest
and careful.


Of course, and, in the life sciences, it's as big a part of the
science as it is in the physical sciences. One drug I was writing
about (rimonabant, sold in Europe as Acomplia) had over
$100,000,000 invested in experimental research.

In the social sciences, psychology has more success at it than, say,
sociology, but there are models and testing going on in all of them.
It's just a much more limited part of those sciences, which tend to be
more about observing phenomena and measuring them than testing
hypotheses.


Social scientists can be very certain, smug, and arrogant about conclusions
that we can plainly see their data does not adequately support. A common
failing is that while the logic may be internally consistent, its underlying
assumptions are at least controversial. Chomsky is a fine example. And that
is the basis for considering them less than real scientists, not the
extensive complexity and uncertainty of their subject, though those are
trivial compared to human biochemistry. It's the results that matter, not
the tools used to obtain them.


Well, then, it boils down to matters of perception. Having spent
roughly 30 of the past 35 years interviewing engineers and others
whose work is based on physical science, my perception is that the
physical science side is the more smug and arrogant one. For example,
Clarke's statements. Those are common.


I accidentally took a statistics class meant for social scientists once. The
mathematical rigor fell drastically short of what I expected and needed, but
it was very interesting for the insight into sampling algorithms and the
many intentional and inadvertent ways to bias the results, such as calling
homes during the daytime when only unemployed people will answer.


That must have been a long time ago, and you didn't get very far with
it. Even when I studied it (statistics and behavioral research
methodologies in three different university departments, including
both math and social science), those biases were well known, and the
methodologies either avoid them or develop correction factors based on
additional research.
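For what it's worth, one garden-variety correction for the daytime-calling
kind of bias is post-stratification weighting: re-weight respondents so the
sample's demographic mix matches known population shares. A toy sketch
(Python; every number below is invented just to show the mechanics):

# Post-stratification weighting in miniature. Shares and responses are made up.
population_share = {"employed": 0.60, "not_employed": 0.40}   # known from census-type data
sample_share     = {"employed": 0.30, "not_employed": 0.70}   # daytime calls over-sample one group

weights = {g: population_share[g] / sample_share[g] for g in population_share}
# employed -> 2.0, not_employed -> ~0.57

# Each respondent: (group, answer), where 1 = yes and 0 = no.
responses = [("employed", 1), ("employed", 0),
             ("not_employed", 1), ("not_employed", 1), ("not_employed", 0)]

weighted_yes = sum(weights[g] * y for g, y in responses)
total_weight = sum(weights[g] for g, _ in responses)
print(weighted_yes / total_weight)   # ~0.55, versus 0.60 unweighted

Of course it only helps if the weighting variables actually capture what
drove the non-response, which is where the additional research comes in.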

The
pundits here repeatedly express their amazement at how so many of us can
remain "undecided" right up until we enter the voting booth.


Don't blame the limitations of political polling on all of statistical
behavioral research.

My son, with a degree in economics and currently a grad student in
applied math, who works as an econometrics researcher for a major
think tank, could run both of us under the table on statistics and
methodologies. And the entire policy institute in which he works is
full of comparable young people. They are very, very good at what they
do. And that's where the policy research work is coming from.

How the politicians, the press, and particularly the general public
abuse and misuse the results is our problem.

--
Ed Huntress