Posted to rec.woodworking
From: Robert Bonomi
Subject: Do you use any computer based tool for doing project layout?

In article ,
Bill wrote:

"Bill" wrote in message
...

wrote in message
...

Since bitwise negation can be performed by a single transistor, I would
expect that a value in a register could be negated VERY fast. I think
just a few clock cycles.


Above you use 2's complement representations in your example. Now you
switch tracks to the 1's complement representation of negative numbers
(the only format where negation = inversion).
Yes, bitwise *INVERSION* can be done by a single transistor (indeed it
takes zero clock cycles to invert a signal), but this is a negation
only if you're doing 1's complement arithmetic. You still have to do...
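
To make that concrete, a minimal C sketch (the 8-bit width and the
variable names are chosen just for illustration; C integers on today's
hardware are 2's complement, so inversion alone visibly misses by one):

  #include <stdio.h>
  #include <stdint.h>

  int main(void) {
      int8_t x = 5;                   /* 0000 0101                     */
      int8_t inv = (int8_t)~x;        /* 1111 1010 -> -6 here, not -5  */
      int8_t neg = (int8_t)(~x + 1);  /* 1111 1011 -> -5               */
      printf("~x   = %d\n", inv);     /* inversion alone: off by one   */
      printf("~x+1 = %d\n", neg);     /* invert-and-increment = negate */
      printf("-x   = %d\n", (int8_t)-x);
      return 0;
  }

The '+ 1' is exactly the step 1's complement gets to skip -- and the
carry propagation it implies is why 2's complement negation costs more
than a single gate delay.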



That must be one of the reasons they switched to 2s complement, no?


I hate to answer my own question, but the main reason was the duplicate
zeros in 1s complement, I think.
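
For illustration, a tiny C helper (the name oc_value and the 8-bit width
are made up for this sketch; real C arithmetic is 2's complement, so the
1's complement decoding is simulated) shows the two zeros directly:

  #include <stdio.h>
  #include <stdint.h>

  /* Decode an 8-bit pattern as a 1's complement value (simulated). */
  static int oc_value(uint8_t bits) {
      return (bits & 0x80) ? -(int)(uint8_t)~bits : (int)bits;
  }

  int main(void) {
      printf("0x00 -> %d\n", oc_value(0x00));  /* +0                  */
      printf("0xFF -> %d\n", oc_value(0xFF));  /* -0: the second zero */
      printf("0xFE -> %d\n", oc_value(0xFE));  /* -1, for comparison  */
      return 0;
  }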


The 'ambiguous' bit-pattern for 'zero' *was* _the_ compelling reason that
IEEE 'standardized' on 2's complement. The 'test for zero/non-zero' operation
had to check for _two_ bit patterns (all zeroes, all ones), which either
took twice as long as a single check, *or* used up a _lot_ more 'silicon
real-estate'. Even _worse_, a test for "equality" could not simply check
for a bit-for-bit correspondence between the two values; it also had to
return 'equal' IF one value was all zeroes and the other was all ones. This
was _really_ "bad news" for limited-capability processors -- you had to
invert one operand, ADD (performing the subtraction), and _then_ do the
zero/non-zero check described above. Suddenly the test for 'equal' is 3 gate
times *SLOWER* than a 'subtract'. This _really_ hurts performance.
"Inequality" compares are also adversely affected, although not to the
same degree.
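
A sketch of what that 'equal' test has to look like under 1's complement
(the name oc_equal and the 8-bit width are, again, made up for the
example; 2's complement needs only the first half of the test):

  #include <stdio.h>
  #include <stdint.h>

  /* 1's complement equality: bit-identical, OR the +0 / -0 special case. */
  static int oc_equal(uint8_t a, uint8_t b) {
      int a_zero = (a == 0x00) || (a == 0xFF);
      int b_zero = (b == 0x00) || (b == 0xFF);
      return (a == b) || (a_zero && b_zero);
  }

  int main(void) {
      printf("%d\n", oc_equal(0x00, 0xFF));  /* 1: +0 == -0       */
      printf("%d\n", oc_equal(0x05, 0x05));  /* 1: ordinary match */
      printf("%d\n", oc_equal(0x05, 0xFA));  /* 0: 5 != -5        */
      return 0;
  }

That extra OR-of-two-patterns, done in hardware, is exactly the 'twice as
long, or twice the silicon' trade-off described above.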


For *big*, 'maximum performance' machines, the cost of the additional
hardware for dealing with unique +0/-0 was small enough (relative to the
_total_ cost of the machine) that it was easy to justify for the performance
benefits. When the IEEE stuck its oar in, 'budget' computing was a fact
of life -- mini-computers and micro-processors. It was _important_ to the
*user* of computing that the results on 'cheap' hardware match *exactly*
those obtained from using the 'high priced spread'. And that _code_ developed
on one machine run *unchanged* on another machine, and produce exactly
the same results.

At the vehement urging of the makers of 'budget' computing systems, as well
as the users thereof, 2's complement arithmetic was selected for the IEEE
standard, *despite* the obvious problem of a _non-symmetric_ representation
scheme. Number 'comparisons' were much more common in existing code than
'negations', thus it 'made sense' to use a representation scheme that favored
the 'more common' operations. In addition, the 'minor problem' of the 'most
negative number' not having a positive counterpart was not perceived to be
a 'killer' issue. "Real-world" data showed that only in *VERY*RARE*
situations did numeric values in computations get 'close' to the 'limit of
representation' in hardware.
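
The asymmetry is easy to see on any 2's complement machine. A minimal C
sketch (8-bit width chosen for the example; note the wrap-around on the
cast is what typical implementations do, not something the C standard
promises):

  #include <stdio.h>
  #include <stdint.h>

  int main(void) {
      int8_t most_neg = INT8_MIN;           /* -128; +128 has no 8-bit
                                               representation           */
      int8_t negated  = (int8_t)-most_neg;  /* implementation-defined   */
      printf("-(-128) = %d\n", negated);    /* typically -128 again     */
      return 0;
  }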

I *understand* the decision, although, still to this day, I disagree with it.
<wry grin>