Posted to rec.woodworking
Robert Bonomi
Do you use any computer based tool for doing project layout?

In article , wrote:
On Apr 13, 1:33 am, Puckdropper puckdropper(at)yahoo(dot)com wrote:
" wrote:

That's not unusual at all. Subtraction *is* adding the negative
(complement).


OTOH, the IBM 1620 was known as the CADET (Can't Add, Didn't Even
Try). It had no add (or subtract) circuitry at all; rather, it used an
index into a lookup table in memory to add. Want a different
operator? Overwrite the "ADD" lookup table, sometimes on purpose,
even.


In one of my CS classes, it was pointed out that ADD circuits are usually
smaller and easier to build than SUBtract circuits, so they're used more often.
That's what was so weird about the subtractor being used to emulate
addition.


Not true. The (add and subtract) operations use the same logic.


Really? I've -never- seen an IC chip that did subtraction directly. 'Adder'
chips, however, are common as dirt.

You can -accomplish- subtraction using an 'adder' and a bunch of inverters
on the second input (and ignore the overflow).
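Here's a minimal sketch in C of that trick, assuming a plain two's-complement
machine (the 1's-complement case is what the rest of this post is about): the
inverters give ~b, a forced carry-in supplies the +1, and the carry out of the
top bit is the 'overflow' you throw away. The function name is just for
illustration.

#include <stdio.h>
#include <stdint.h>

/* Subtract by re-using an adder: a - b == a + ~b + 1 (two's complement).
 * The "+ 1" is the carry-in forced high; the carry out of bit 7 is ignored
 * because the result is truncated back to 8 bits. */
static uint8_t sub_via_adder(uint8_t a, uint8_t b)
{
    return (uint8_t)(a + (uint8_t)~b + 1u);
}

int main(void)
{
    printf("%u\n", (unsigned)sub_via_adder(5, 3));   /* prints 2 */
    printf("%u\n", (unsigned)sub_via_adder(3, 3));   /* prints 0 */
    return 0;
}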

True 'subtract' logic _is_ more complicated -- because the states in the
operation table do not collapse as neatly.

Addition:     operand1 OR  operand2 == 0   ->  zero result, zero carry
              operand1 XOR operand2 == 1   ->  one result,  zero carry
              operand1 AND operand2 == 1   ->  zero result, one carry

Subtraction:  operand1 == operand2             ->  zero result, zero borrow
              operand1 == 1 AND operand2 == 0  ->  one result,  zero borrow
              operand1 == 0 AND operand2 == 1  ->  one result,  one borrow
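To put the same tables in code form -- a toy sketch, not any real chip -- a
one-bit half adder is sum = a XOR b, carry = a AND b, while a one-bit half
subtractor needs an extra inversion on one input to form the borrow. That
extra inversion is where the tables stop collapsing.

#include <stdio.h>

/* One-bit half adder:      sum  = a ^ b,  carry  = a & b
 * One-bit half subtractor: diff = a ^ b,  borrow = (~a) & b
 * (a and b are single bits, 0 or 1) */
static void half_add(int a, int b, int *sum, int *carry)
{
    *sum   = a ^ b;
    *carry = a & b;
}

static void half_sub(int a, int b, int *diff, int *borrow)
{
    *diff   = a ^ b;
    *borrow = (a ^ 1) & b;    /* NOT a, restricted to one bit */
}

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int s, c, d, w;
            half_add(a, b, &s, &c);
            half_sub(a, b, &d, &w);
            printf("a=%d b=%d | sum=%d carry=%d | diff=%d borrow=%d\n",
                   a, b, s, c, d, w);
        }
    return 0;
}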

To expound on the 'difference' between addition and subtraction, consider
hardware that uses "ONES COMPLEMENT" arithmetic, where the 'negative' of
a number is represented by simply inverting all the bits of the positive
value; e.g., the negative of "00000010" is "11111101".

Note well that in _THIS_ number representation scheme there are *TWO* bit-
patterns that evaluate to -zero-: "00000000" is 'positive zero', and
"11111111" is 'negative zero'.
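A quick illustration in C, just treating an 8-bit value as a 1's-complement
bit pattern (the function name is made up for the example):

#include <stdio.h>
#include <stdint.h>

/* 1's-complement negation of an 8-bit pattern: just invert every bit. */
static uint8_t oc_negate(uint8_t x)
{
    return (uint8_t)~x;
}

int main(void)
{
    printf("-(00000010) = %02X\n", (unsigned)oc_negate(0x02)); /* FD = 11111101 */
    printf("-(00000000) = %02X\n", (unsigned)oc_negate(0x00)); /* FF = "negative zero" */
    return 0;
}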

It is *HIGHLY*DESIRABLE* that numeric computations which give a "zero"
result have the bit-pattern of 'positive zero'. If you 'subtract'
'00000011' from '00000011' by 'complement and add', you get

   '00000011'
 + '11111100'
 ===========
   '11111111'   which is 'negative zero'

If you do it by 'actual' subtraction:

   '00000011'
 - '00000011'
 ===========
   '00000000'   which is 'positive zero', the desired result


To get the 'desired result' of 'positive zero', using _adder_ circuitry,
one has to have an additional stage that examines -every- result for the
'negative zero' bit-pattern, and inverts all the bits.
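Here's a sketch of that in C, simulating an 8-bit 1's-complement adder with
end-around carry plus the extra fix-up stage; the function names and the
8-bit width are just assumptions for the example.

#include <stdio.h>
#include <stdint.h>

/* 8-bit 1's-complement add: any carry out of bit 7 is wrapped back in
 * ("end-around carry"). */
static uint8_t oc_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    if (sum > 0xFF)              /* carry out of the top bit */
        sum = (sum & 0xFF) + 1;  /* end-around carry */
    return (uint8_t)sum;
}

/* Subtract by complement-and-add: a - b == a + ~b in 1's complement. */
static uint8_t oc_sub_via_add(uint8_t a, uint8_t b)
{
    return oc_add(a, (uint8_t)~b);
}

/* The additional stage described above: map "negative zero" to "positive zero". */
static uint8_t normalize_zero(uint8_t x)
{
    return (x == 0xFF) ? 0x00 : x;
}

int main(void)
{
    uint8_t r = oc_sub_via_add(0x03, 0x03);
    printf("3 - 3 via complement-and-add: %02X\n", (unsigned)r);                 /* FF */
    printf("after the fix-up stage:       %02X\n", (unsigned)normalize_zero(r)); /* 00 */
    return 0;
}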


Doing 'addition by complement and subtract' was *NOT* unique to the
CDC machines. *Every* machine that used "1's complement" arithmetic
internally did things the same way.

There are advantages to "1's complement" over "2's complement", notably
that _all_ numbers have both a positive and a negative representation. (In 2's
complement math, it is *NOT*POSSIBLE* to represent the complement of the
largest possible negative number: you _can_ have '-2**n' but
only '+((2**n)-1)'.) The disadvantage is that there are -two- values for
'zero'. But that's just 'nothing'. *grin*

On the other side of the fence, there _are_ advantages to "2's complement",
notably that all numbers have a single _unique_ representation. The
disadvantages are that there =is= a negative value that you cannot
represent as a positive number. And 2's complement math _IS_ just a
little bit slower -- by one gate time -- than 1's complement. As
processor speeds became faster, that 'one gate time' difference became
less significant, and the world settled on _not_ dealing with "+/- zero".
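For concreteness, the 8-bit version of that trade-off, sketched in C (the
1's-complement side is simulated bit patterns, since actual C integers are
2's complement on essentially any machine you'll meet today):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 2's complement, 8 bits: -128 .. +127, one zero.
     * The complement of the most negative value doesn't fit:
     * negating the bit pattern 0x80 (-128) gives 0x80 right back. */
    uint8_t most_negative = 0x80;
    uint8_t negated = (uint8_t)(~most_negative + 1);   /* invert and add 1 */
    printf("2's complement: negate(0x80) = 0x%02X\n", (unsigned)negated);

    /* 1's complement, 8 bits (simulated): -127 .. +127, with two zeros.
     * Every value's complement is representable, but 0x00 and 0xFF
     * both mean zero. */
    uint8_t pos_zero = 0x00;
    uint8_t neg_zero = (uint8_t)~pos_zero;              /* plain bit inversion */
    printf("1's complement zeros: 0x%02X and 0x%02X\n",
           (unsigned)pos_zero, (unsigned)neg_zero);
    return 0;
}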