Posted to uk.comp.os.linux,uk.d-i-y
Bruce Stephens
Subject: Choosing a new PC

The Natural Philosopher writes:

[...]

> 64 bit code is a bit larger, because it carries 64 bit entities... but
> that's usually more than made up for by the faster speed of
> computation.


That's my guess. It's worth mentioning that (I suspect) 64-bit code
spends most of its time handling 32-bit data; the benefits come from the
saner instruction set and the extra registers.

In C terms, int is still 32-bit (and on Windows so is long); it's only
long (long long on Windows) and pointers that are 64-bit. So it's
certainly not the case that a 64-bit program will take twice as much
memory as a 32-bit equivalent: an array of 1000 ints is the same size in
both.
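
To make that concrete, here's a minimal C sketch (my own illustration,
not from the thread) printing the sizes involved. On 64-bit Linux (the
LP64 model) it reports 4/8/8/8 for int/long/long long/pointer; on
64-bit Windows (LLP64) long stays at 4 bytes:

#include <stdio.h>

/* Report the sizes of the basic C types under the current data model.
 * Typical results, in bytes:
 *   64-bit Linux (LP64):    int=4  long=8  long long=8  void*=8
 *   64-bit Windows (LLP64): int=4  long=4  long long=8  void*=8
 *   32-bit (ILP32):         int=4  long=4  long long=8  void*=4
 */
int main(void)
{
    int a[1000];  /* 4000 bytes whether built 32-bit or 64-bit */

    printf("int:        %zu\n", sizeof(int));
    printf("long:       %zu\n", sizeof(long));
    printf("long long:  %zu\n", sizeof(long long));
    printf("void *:     %zu\n", sizeof(void *));
    printf("int[1000]:  %zu\n", sizeof a);
    return 0;
}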

> Linux-wise the only real downside is dealing with legacy 32-bit code,
> which doesn't always work, especially if it's a third-party kernel
> module :-(


Yes, but how often does that happen? It's worth stressing that 64-bit
Linux kernels can run 32-bit binaries (unless the feature has been
deliberately disabled), provided the necessary 32-bit libraries are
available, and distros make that easy to arrange.
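
As a quick demonstration (my own sketch; assumes gcc with 32-bit
multilib support installed, e.g. the gcc-multilib package on
Debian-derived distros, and a hypothetical filename width.c), the same
trivial C program can be built both ways, and both binaries run side by
side on a 64-bit kernel:

#include <stdio.h>

/* sizeof(void *) tells a 32-bit build apart from a 64-bit one, even
 * when both run on the same 64-bit kernel:
 *   gcc -m64 width.c -o width64   -> prints "this binary is 64-bit"
 *   gcc -m32 width.c -o width32   -> prints "this binary is 32-bit"
 */
int main(void)
{
    printf("this binary is %zu-bit\n", sizeof(void *) * 8);
    return 0;
}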

My guess is that nowadays most people will be better off running 64-bit
for the most part (with the occasional 32-bit app). I think the only
32-bit GNU/Linux application I have now is Google Earth, now that
64-bit Flash is available.