Thread: JET bandsaws
Posted to rec.woodworking
From: dpb

Frank Boettcher wrote:
On Thu, 09 Aug 2007 16:56:58 -0500, dpb wrote:

Frank Boettcher wrote:
On Thu, 09 Aug 2007 11:51:40 -0500, dpb wrote:

Frank Boettcher wrote:
On Thu, 09 Aug 2007 08:38:12 -0500, dpb wrote:

And, given that few, if any, of us here or in the general
readership audience of FWW are going to be able to go and observe the
actual operating line quality control data from any of the
manufacturers, it would certainly be far more than we know now (or are
likely to know in the future).
We were not discussing the general readership audience

We were? That's news to me!

Is there a joke there?
Your post that preceded this part of the thread:


See the followup where I corrected my quoting context...perhaps that
helps, I don't know???

....

I guess you meant "they" to be the general readership audience and not
the testers?


I don't think we fundamentally disagree much at all, but we seem to be
having a communication problem (hopefully not a deliberate one)...

I'll try again...

The "they" in the above paragraph did indeed refer to testing but was
intended generically to include an individual test/tester and/or a
sponsoring publisher such as FWW. I don't care specifically who does
the test and as I replied in the followup, even if it were
vendor-supplied data that would be fine for supplying the population
data currently missing.

That said, though, the point of any testing and reporting isn't for the
benefit of the tester but for the readership of the resulting test
report, for whom it is at least one basis, if not the primary one, for
selecting, or at least winnowing the field down to, one particular
machine to purchase. So yes, the overall target of my comments was
providing more meaningful data for the general readership, and that is
why I said I, at least, was directing my comments from the readership
audience's point of view.

....

And, of course, most reviews contain very little serious evaluation of
what these measurements _really_ mean in a quantitative sense, i.e., how
the machine will actually perform on a comparative basis.


Your contention is that most do not already know how a particular
feature measurement translates into real world comparative
performance? Do you?


For a lot of these reported measurements, no, I don't (and I certainly
don't believe the general audience for which such reviews might be of
real value does either). Whether the reviewer has some such knowledge
turns out to be largely immaterial, because I've never seen that
knowledge or information presented in any review that I can recall.
Some of them are probably not even measurements that are part of the
manufacturer's QA/QC checks, either, whether because they're derived
measurements controlled by other factors or because they could be
considered immaterial.

Really basic measurements, such as runout on a tablesaw arbor flange,
are pretty clear. Far less obvious is something like the offset, in
mils, of a bandsaw guide bar that isn't perfectly straight and therefore
might require a tweak of fixed-block blade guides when switching from
thin stock to a heavy cut; how much is too much? Sure, it makes sense
that "less is better", but it certainly isn't directly clear that even
the worst of the reported values is enough to cause a real problem in
the shop.

The other difficulty in the reports that I was attempting to address is
this: if the sample measurement for machine A is 1 mil worse than the
same measurement for machine B, does that imply that if another unit of
A and another of B were purchased and measured, the same differential
would be present, or even that A would still be worse than B for that
pairing of test machines? Certainly, given the way test reviews are
written and presented, there is no basis for judging anything else, but
you have done enough QA/QC testing to know that isn't necessarily so.
In fact, the population means of the two machines could be the same, or
A could even be better than B, contrary to what the single-sample result
indicates. If so, the poor reader who concludes that B is the better
buy, in conjunction with the author's "Best Buy" label, just might have
made the wrong decision if swayed by the reported numbers. So I'm
simply saying it is, imo, an incomplete service not to provide context
such as that in reviews, while recognizing that doing so would raise the
scope of a review beyond what is practical for general-circulation
magazines. Hence the "game".

It doesn't imply I think anybody is rigging anything, or is incompetent
or underhanded in any way. They're simply operating under a set of
conditions that isn't well suited to answering some questions in a
rigorous manner. As you have pointed out, vendors have such data, and
some of it would be of real value, and a lot more of it of interest (if
of little actual practical value), to at least the more astute and
interested among the general readership. You also noted at least one
manufacturer made such information available on request, but you didn't
contradict my conjecture that such data would not have been allowed to
be published, which is certainly understandable for competitive reasons
if no other. I suspect not all vendors were so open with potential
reviewers about such data even then, particularly if they were aware the
same reviewer was visiting other vendors. In the present competitive
environment, I can only imagine such pressures weigh even more heavily
upon them to keep such data as closely held proprietary information.

Hopefully, that's a step forward?

--