Robert Bonomi

In article , Duane Bozarth wrote:
> Robert Bonomi wrote:
>> In article , Duane Bozarth wrote:
>>> Robert Bonomi wrote:

...
>>>> The downside to the SawStop is the _cost_ of an activation. Measured both
>>>> in time and money, it is non-trivial. Circa $80, as I recall, for the
>>>> 'replacement' cartridge, *plus* whatever damage is done to the blade.

>>> Which would undoubtedly be considerably exceeded by medical costs and
>>> likely by missed work time, regardless of whether the woodworking is
>>> professional or hobby....


>> That assumes that the triggering _did_ prevent an accident. <grin>


> In that scenario, yes, obviously that was intended.

>> Yes, in the case of an _actual_ accident prevention, the expense is
>> "cheap at {bigmultiple} the price".
>>
>> In the case of a 'false alarm', it is a totally _unnecessary_ expense.
>>
>> The trick is differentiating the two cases -- maximizing the former,
>> and minimizing the latter.
>>
>> The manufacturer concentrates almost exclusively on the first situation,
>> and (apparently) totally ignores the latter one.

...

> You have shown no evidence to support that claim other than your
> hypothesis.


The facts are self-evident. There is *NO* published information available
to consult. This does *NOT* necessarily mean that there _is_ an 'objectionably
high' rate of false triggering. It *DOES* mean the _potential_ customers
"don't know" what the risk is.

"Don't know", and _can't_find_out_.

The more 'unknowns' there are about an object, the "riskier" the purchase
of that object is.

> I have just as strong evidence (my belief and experience in
> product engineering/development) that Type II error would certainly have
> been considered by the manufacturer.


Whether or not the _manufacturer_ 'considered' it is irrelevant to the point
under discussion.

*NO* data is available to the prospective _purchaser_, to evaluate the
likelihood of such an occurrence -- which *will* cost the purchaser money.

There is a tacit admission by the manufacturer that the system _will_
false-trigger under some circumstances. They provide a means for
disabling the 'stop' capability.

But _what_ those circumstances are, and how frequently they are likely to
occur -- who knows? The company isn't telling.

Of course, after purchasing, customers can find out -- the hard way. *BANG*
and another $80-200 out the window.
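
To put rough numbers on it -- and note that every probability below is an
invented placeholder, because the actual rates are exactly what the company
doesn't publish; the only figure taken from this discussion is the $80-200
per false trigger -- here is a quick back-of-envelope sketch in Python:

    # Back-of-envelope sketch.  The probabilities and the injury figure are
    # invented placeholders; only the $80-$200 per false trigger comes from
    # the discussion above.
    FALSE_TRIGGER_COST = 200.0     # cartridge plus a ruined blade, upper end
    INJURY_COST = 20_000.0         # placeholder: medical bills + missed work

    def yearly_costs(p_false, p_injury, uses_per_year=500):
        """Expected yearly false-trigger cost vs. expected injury cost avoided."""
        return (p_false * uses_per_year * FALSE_TRIGGER_COST,
                p_injury * uses_per_year * INJURY_COST)

    # Rare false triggers: the brake really is 'cheap at the price'.
    print(yearly_costs(p_false=0.0001, p_injury=0.00005))   # (10.0, 500.0)
    # Frequent false triggers: a real, recurring expense for the buyer.
    print(yearly_costs(p_false=0.01, p_injury=0.00005))     # (1000.0, 500.0)

Without the false-trigger rate, a prospective buyer has no way of knowing
which of those two worlds he is buying into.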


...snip stuff on purported difficulties in testing....

> While it is true that not every conceivable action can be explicitly
> tested, it is certainly possible to analyze and test against quite broad
> classes of likely operational and mal-operational conditions.


And it is -guaranteed- that the 'sufficiently determined' customers will
come up with "hundreds, if not thousands" of situations that were not
tested for.

I have _personal_ experience *being* that 'sufficiently determined', uh, "party"
that breaks systems *without*deliberate*effort* --

Many years ago, I made an _inadvertent_ mistake in producing *one* control
card in a job deck to be fed to an IBM mainframe. As a result, that machine
was *totally* out of commission for more than a week. Because of that
incident, IBM did an emergency _hardware_ modification to every similar
installed system _world-wide_. (I grabbed a card that was already partly
punched, without realizing it -- and what resulted was _not_ what I had
intended. Unfortunately that which resulted _was_ comprehensible to the
machine.)

It 'broke' the system because the directive was *SO*STUPID*, and so
nonsensical, that nobody in their right mind would ever do it, and thus the
system was not protected against that particular form of idiocy. It had
simply never occurred to the designers that this particular kind of thing
might happen.

The consequences of that little error were *staggering*. Among other
things, _payroll_ was late. Sending payroll deductions to the Gov't was
delayed. Not just for that company, but for 28 _other_ agencies that they
acted as 'service bureau' for.

In later years, I had a couple of clients who retained me specifically as
a 'tester' for their software products. They would send me a product, and
I would try what 'seemed reasonable' to me, in using it. They figured if
it survived 24 hours in my hands, it was safe to ship to customers. <grin>
The _really_ funny part is that I did _not_ set out to deliberately try
and break the software, either. It was 'reasonable, but unconventional'
use that broke things every time. I got things like software that wouldn't
even _install_ on my MS test-bed platform -- it couldn't cope with _local_
hard-drive X: as the install destination, for one example.
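
I never saw that installer's source, so what follows is a purely hypothetical
reconstruction in Python, but the failure smelled like the classic baked-in
assumption about which drive letters could possibly exist:

    # Purely hypothetical reconstruction -- I have no idea what the real
    # installer did.  It only illustrates how a 'reasonable but
    # unconventional' configuration breaks a baked-in assumption.
    EXPECTED_DRIVES = {"C:", "D:"}     # the letters someone thought to test

    def pick_install_drive(requested):
        drive = requested.strip().upper()
        if drive not in EXPECTED_DRIVES:
            # A local X: drive is perfectly legal; this check rejects it anyway.
            raise ValueError("%s is not a valid installation drive" % drive)
        return drive

    try:
        pick_install_drive("X:")       # the configuration nobody tested
    except ValueError as err:
        print("install fails:", err)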

> If exhaustive testing of every possibility were required to make any
> product, no products of any complexity would exist, so claims that such
> testing is required before release of this particular product are simply
> specious.


Now go back and _read_ what I wrote. <grin>

I *never* claimed that any such 'exhaustive testing' is necessary.
In fact, I meant to suggest that 'exhaustive testing' is =not= practical --
that there is *no* real substitute for a few million hours of 'hands on'
in the care of 'sufficiently determined' fools.
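
To see why 'exhaustive' is a fantasy, just multiply out a handful of
independent operating conditions. The counts below are invented for
illustration -- they come from no real test plan -- but the arithmetic is
the point:

    # Toy illustration: the condition counts are invented, but six modest
    # factors already multiply out to six figures' worth of distinct
    # situations a test lab would have to cover.
    conditions = {
        "stock material":   8,    # dry lumber, green wood, MDF, wet PT, ...
        "stock moisture":   5,
        "blade type":       6,
        "feed technique":  10,
        "shop humidity":    4,
        "operator habits": 12,
    }

    total = 1
    for factor, choices in conditions.items():
        total *= choices

    print("distinct combinations:", format(total, ","))   # 115,200

Add a few more factors and the count outruns any test lab -- which is why
field hours in the hands of the aforementioned fools find what the lab never
will.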

Disclosure of _what_kinds_ of realistically-encountered situations could cause
false triggering -- so that potential customers could evaluate the likelihood
of experiencing =that= kind of event -- is something that seems to be missing
from the manufacturer's materials.

Well, not *quite* entirely. It is well documented that you can't use it
for slicing up hot dogs. <grin>