Thread: Cat5e or what?
Posted to uk.d-i-y
Johnny B Good

On Mon, 01 Feb 2016 08:25:03 +0000, T i m wrote:

On Mon, 01 Feb 2016 00:49:56 GMT, Johnny B Good wrote:

On Wed, 27 Jan 2016 15:10:41 +0000, T i m wrote:

snip

I'll have to test mine but, yours being a 'real' server (focused on
i/o and not economy like mine), it's likely to be much better than
generic PC hardware running as a server.


That won't necessarily be true. For several years, I tried just about
every trick I could to get the data transfer rates between my NAS4Free
box and my win2k desktop machine (connected via 2 or 3 metres worth of
CAT5 in total using an 8 port Netgear GBit switch) above 60MB/s (circa
500Mbps). Both machines were using 2010 vintage MoBos with built in GBit
lan ports and dual core CPUs.

The CrystalDiskMark results were interesting in that sustained large
sequential transfer rates hovered around the 75MB/s mark for any of the
four disks in the NAS box (mapped to local drive letters), almost
regardless of any real world, stopwatch timed improvements I was able
to make.

The biggest improvement arose out of replacing the single core Sempron
in the NAS box with a dual core Athlon 64 chip.
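
Incidentally, a stopwatch test of that sort is easy enough to script.
As a rough sketch only (Python, with made-up paths for a NAS share
mapped to a drive letter), something like this reports the sustained
MB/s of one big sequential copy:

import time

SRC = r"Z:\bigfile.bin"       # hypothetical mapped NAS drive/file
DST = r"C:\temp\bigfile.bin"  # local destination
CHUNK = 8 * 1024 * 1024       # 8 MiB reads, big enough to stream

start = time.time()
copied = 0
with open(SRC, "rb") as fin, open(DST, "wb") as fout:
    while True:
        block = fin.read(CHUNK)
        if not block:
            break
        fout.write(block)
        copied += len(block)
secs = time.time() - start
print("%.0f MB in %.1f secs = %.1f MB/s"
      % (copied / 1e6, secs, copied / 1e6 / secs))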


snip more interesting stuff for brevity

That would reinforce what I was thinking regarding the poor i/o of a
'std' (onboard) NIC compared with a server orientated NIC focused on
efficient / low CPU involvement?


That ought not to be an issue with anything made during the past
decade. NICs have been forced to utilise DMA ever since the advent of
8MHz clocked 80286 processors - that Novell server box was built on
such a system board and was able to max out the 10Mbps cheapernet link
with ease. So much so that the older 286 machines donated to my
children, with pre-IDE HDDs fitted, could load up the Doom game faster
from the server than they could from their own local HDDs (300 to 450
KB/s HDD transfer rates way back then, circa 1997).

All NICs since those days of ISA slot cards use DMA. However, transfer
protocol techniques have become a lot more sophisticated with the
advent of Fast and Gbit ethernet adapters, which may well add to the
CPU overhead in the cheaper brands compared to the more sophisticated
silicon on the Intel adapters (somewhat similar to a host
controllerless modem being way better than the ****e 'winmodem', but
not quite as good as a standalone external modem).
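
If you're on a Linux based box and want to see what your particular
NIC actually offloads in hardware, ethtool will tell you. A rough
Python wrapper might look like this ("eth0" is just an example
interface name, and ethtool needs to be installed):

import subprocess

IFACE = "eth0"   # example interface name - substitute your own
out = subprocess.run(["ethtool", "-k", IFACE],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    # pick out the commonly discussed offload features
    if any(key in line for key in ("checksumming",
                                   "segmentation-offload",
                                   "scatter-gather")):
        print(line.strip())

If most of those report as off (or "off [fixed]"), the adapter or its
driver is leaving more of the work to the CPU than a decent server NIC
would.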


Transferring data is a very I/O based task and therefore shouldn't
require much in the way of CPU. So, as long as the hardware involved is
self sufficient (can use DMA etc) then it should offload much of the
CPU load onto the Ethernet card itself (and presumably that's why they
sell such cards for 'servers')?


When network speeds were limited to 10 and 100Mbps, the then current
'wisdom' was that you didn't need much by way of CPU 'grunt' in a
(file) server box (now referred to as a NAS box), since the long
established use of DMA for both HDDs and NICs offloaded the I/O
'donkeywork' from the CPU (which even then was several thousand times
more powerful than the humble 80286 that such local network server
technology had started out with).

Indeed, when I was upgrading my 'server' (now to be known as a NAS box)
back in 2010, I rather thought the 2.2GHz clocked Sempron CPU was
'serious overkill' for the task in hand (I hadn't counted on how poorly
Gbit ethernet adapters scale in regard to CPU requirements), hence my
underclocking and undervolting of the core to trade performance for
reduced power consumption.


Network *and* any hard disk controllers may help.


These days, any modern 'entry level' MoBo + dual/quad core CPU + 4 to 8
GB (or more) of RAM with an unheatsinked built in graphics chip, even
when using software RAID, should have ample reserve with regard to I/O
throughput (at least as far as a SoHo home server box in a Gbit LAN
system is concerned). Let's face it, even a 6 year old micro-ATX board
with SATA2 ports and its own built in Gbit LAN port seems to be quite
capable of this trick, at least when it's blessed with a dual core
Athlon 64 - and that test with a decently specced win7 client machine
suggests that even a single core CPU may have sufficed (I didn't bother
testing this at the time).
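
If you want to check whether the network link itself, rather than the
disks, is the bottleneck, a bare-bones sender/receiver is enough to see
how close a pair of boxes can get to Gbit wire speed. This is only a
rough sketch (Python, arbitrary port and transfer size), nothing like
as thorough as a proper tool such as iperf:

# run "python3 nettest.py server" on one box, then
# "python3 nettest.py <server-ip>" on the other
import socket, sys, time

PORT, TOTAL, CHUNK = 5201, 1_000_000_000, 1 << 20  # ~1 GB, 1 MiB lumps

if sys.argv[1] == "server":
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    start = time.time()
    got = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    secs = time.time() - start
    print("%.0f MB in %.1f secs = %.0f Mbps"
          % (got / 1e6, secs, got * 8 / secs / 1e6))
else:
    cli = socket.socket()
    cli.connect((sys.argv[1], PORT))
    buf = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        cli.sendall(buf)
        sent += CHUNK
    cli.close()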


http://www.intel.com/content/www/us/...ducts/gigabit-server-adapters/overview.html


That's an interesting site, but I think the real problem with
alternatives which have equal on-chip support for checksum offloading,
VLAN support and so on is the rather spotty driver support in the *nix
based NOSes used in a lot of commercial NAS boxes (as well as in "The
Usual Suspects" for home brewed NAS/server boxes - Debian Linux (and
derivatives), NAS4Free/FreeNAS and so on). With an Intel adapter,
you're more or less guaranteed good driver support; with other makes,
less so (none, or broken, driver support ime).
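
As a quick check on which driver a box has actually bound to each
interface, the sysfs layout on a typical Linux distribution can be read
directly - something along these lines (on the FreeBSD based NOSes the
interface name itself usually tells you the driver, em0, re0 and so
on):

import os

SYSFS = "/sys/class/net"
for iface in sorted(os.listdir(SYSFS)):
    link = os.path.join(SYSFS, iface, "device", "driver")
    if os.path.islink(link):   # virtual interfaces have no driver link
        print(iface, "->", os.path.basename(os.readlink(link)))
    else:
        print(iface, "-> (no hardware driver)")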

There doesn't seem to be the same issue with HDD interface driver
support, at least not for the more mature chipsets. The latest
'bleeding edge' stuff will always be problematical with the open source
based OSes; that's usually resolved in time, but sometimes never,
especially if it turns out to be a short lived 'fad' of a technology.
That can be tough if you had the misfortune to buy into what seemed to
be the last word in MoBo hardware, only to discover it was merely a
short lived intermediate step towards an even better, longer lived
technology (RIMM anyone?).

NAS (Network Attached Storage) boxes are even more orientated towards
offering 'services' than their title would suggest - more so even than
the earlier breed of 'servers'. If your main concern is 'file serving',
you'd be well advised to make sure all unnecessary 'services' are
disabled (DLNA/UPnP, iTunes/DAAP, Dynamic DNS, Webserver, SNMP, Unison,
FTP, TFTP, NFS and so on, to quote just half of what's built into a
NAS4Free box).
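
A quick way to confirm from a client which of those services are still
answering is a trivial port probe. The sketch below (Python, TCP only,
with a made-up NAS address and just a handful of the usual default
ports for the services named above) is the sort of thing I mean:

import socket

NAS = "192.168.1.50"        # placeholder address - use your NAS's IP
PORTS = {21: "FTP", 80: "Webserver", 111: "NFS portmapper",
         445: "SMB/CIFS", 548: "AFP", 2049: "NFS",
         3689: "iTunes/DAAP", 8200: "DLNA (MiniDLNA default)"}

for port, name in sorted(PORTS.items()):
    s = socket.socket()
    s.settimeout(0.5)
    listening = (s.connect_ex((NAS, port)) == 0)
    s.close()
    print("%5d  %-24s %s"
          % (port, name, "open" if listening else "closed"))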

Of course, if you plan on using the NAS to act as a media server, you'll
need DLNA/UPnP and its ilk but don't make the mistake of enabling any
transcoding features unless your NAS box has a higher spec than your
desktop workstation - just make damn sure your chosen media streaming
client is capable of handling your chosen media file types without
needing such a 'crutch' (or else avoid the more obscure, less well
supported media formats in the first place).

For anyone who uses torrent sources, enabling the BitTorrent client on
the NAS is a no-brainer (assuming the NAS is left to run 24/7). The
torrent client demands very little in the way of system resources, even
on an underpowered NAS toy. Indeed, anyone heavily into tying up their
desktop PC overnight accumulating torrents can justify the use of a toy
NAS, with a built in torrent client service (pretty well all of them
have one), on this one feature alone.

Harking back to the topic of cabling choice, I'd have to say that CAT5
or CAT5e is the correct answer in the OP's case. Gbit ethernet is
likely to remain viable for the next decade, by which time fibre optic
kit should have become cheap enough to consider using the CAT5 cables
as 'draw strings' to pull optical fibre through, especially if the CAT5
was installed with this in mind in the first place.

--
Johnny B Good