UK diy (uk.d-i-y) For the discussion of all topics related to DIY (do-it-yourself) in the UK. All levels of experience and proficiency are welcome to join in to ask questions or offer solutions.

#41 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On Sun, 14 Feb 2016 19:29:33 +0000, Andrew Gabriel wrote:

In article ,
Mike Tomlinson writes:
In article , Andrew Gabriel wrote:

I've worked with many hundreds of customers using ZFS, and none have
ever lost any data due to it.


A colleague, an experienced system admin, lost all the data on a large
Linux ZFS array when it refused to re-mount the zpool following a
sudden power loss. He was hampered by the lack of decent tools to
diagnose the problem.

I'm unconvinced that zfs is yet sufficiently mature and proven on
Linux.


ZFS on Linux has lagged some way behind the Illumos based distros and
FreeBSD, but it is catching up. I wouldn't choose Linux to host a
dedicated ZFS fileserver yet, although lots of people are running ZFS
loads on Linux without problems. It's miles ahead of btrfs which seems
to have made little progress for years, much to the anguish of the Linux
distros who are struggling with not having an Enterprise class
filesystem.

ZFS on Linux will get there - more people are working on it than on any
of the other platforms, but if it's the main purpose of the system, for
now choose an OS where ZFS is the native filesystem,
i.e. Illumos based (OmniOS, OpenIndiana, NexentaStor, etc), or FreeBSD
(including FreeNAS).

Just taking this reference to FreeNAS (and NAS4Free) as a jumping-in
point to mention that FreeBSD based NAS solutions offer far and away
superior SMB performance over Linux IME (at least twice as fast on the
same hardware).

TBH, it was the rather dismal SMB performance of a Debian based 'file
server' alternative to the NW3.11 server (which I'd finally had enough
of due to Novell's crafty built-in 1MB/s read performance limit, even
after upgrading from 10Mbps ethernet to 100Mbps Fast Ethernet
networking) that first put me off: I was only seeing 6.8MB/s transfers
between a win2k client and the Debian box, versus the 10MB/s win2k to
win2k or winXP transfer speeds over the same 100Mbps network.

Later on, after upgrading to Gbit ethernet, when I had to boot from a
Knoppix Live USB pen drive whilst awaiting a fix to the 1TB wraparound
bug in the FreeNAS Ext2 driver that I'd discovered after upgrading a
couple of the 1TB drives to 2TB Samsung SpinPoints, I found myself
seeing just 25MB/s transfer rates versus the 50MB/s and higher rates
normally seen with FreeNAS at that time.

I gave up waiting for a driver fix after a few months and reformatted
the SpinPoints using UFS before repeating my disk drive upgrade attempt
under FreeNAS once more. It was a relief to say 'goodbye and good
riddance' to the temporary Linux solution's dog slow SMB transfer
performance.

I have to say, Linux performs quite dismally whether as a server or as a
client hanging off a Gbit connected NAS4Free box. Over the same network,
that box was offering a half decently specced windows 7 desktop machine
peak transfer rates of 127MB/s before the ram caches filled up, dropping
back to 80/85 MB/s depending on the disk's sustained data transfer rate
(SDTR) at both ends of the link. In this case the Linux distro was Linux
Mint KDE 64 ver 17.1, on a machine that had been upgraded just last
April with a new MoBo, a 4 core 3.8GHz AMD cpu and a modest 8GB helping
of ram, which finally did for the win2k installation I'd been running up
to that point.

The best transfer speed I've seen *reported* by that Mint box was little
more than 70MB/s, more often 60 to 64MB/s and sometimes little better
than 40 odd MB/s. With the previous, now 6 year old hardware (3GHz dual
core Phenom with 3GB of ram and a SATA II MoBo), I typically saw 60 to
64 MiB/s write speeds (to the NAS box) and a curiously slower 50MiB/s
read speed (seemingly a limitation of the win2k Gbit ethernet drivers,
if my win7 experience was any guide).

So, here I sit with a much upgraded desktop machine, experiencing almost
no improvement in network file transfer speeds (GB sized media files)
over the previous win2k incarnation. I harbour a deep suspicion that if
I'd had the confidence to install a BSD based distro instead of taking
the easy way out with Linux Mint, I'd be seeing the sort of transfer
speeds I'd experienced when I had that customer's win7 desktop in the
workshop to test the NAS4Free box's real performance potential just over
a year ago (and I hadn't even bothered to select the SMB2 protocol
option, designed to improve performance with Vista/win7 and later
versions of MS windows!).

I have to say that DIYing your own NAS box using NAS4Free or FreeNAS has
a lot going for it: reliability, more mature ZFS support (for those who
can afford the extra ram to make it work and can handle the extra
complexity of building and maintaining ZFS volumes with enough
confidence to avoid disastrous mistakes - a problem with all forms of
RAID) and, perhaps more importantly, superior SMB performance with
windows machines on the LAN. It can be a very cost effective way for a
home user to attach a high performance NAS box to their home lan.
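
(For anyone tempted down this route, the routine care and feeding of a
pool isn't onerous. A minimal sketch, assuming a FreeBSD/FreeNAS style
box and a pool named 'tank' - pool and dataset names here are
hypothetical:

  # scrub periodically to catch latent read errors while the
  # redundancy can still repair them
  zpool scrub tank
  # report health; -x prints only pools with problems
  zpool status -x
  # snapshot a dataset before any risky reshuffling
  zfs snapshot tank/media@pre-upgrade

All three are stock ZFS commands; the regular scrub is the one that
saves a redundant pool from quietly rotting sectors.)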

--
Johnny B Good
#42 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 14/02/16 22:04, Johnny B Good wrote:
snip

I think you will find that with NFS rather than SMB you will see the
transfer speeds you want.

Samba is, I presume, what BSD still uses, so it's probably that the
default config on BSD is simply better tuned.
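
(If it is down to tuning, the usual suspects live in smb.conf. A hedged
sketch using stock Samba options - whether either platform's defaults
actually set these is the thing to check:

  [global]
  # negotiate SMB2 with Vista/win7 and later clients
  server max protocol = SMB2
  # let the kernel hand file data straight to the socket
  use sendfile = yes
  # larger socket buffers can help on Gbit links
  socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072

Diffing the FreeBSD port's default smb.conf against Debian's would show
where the tuning really lies.)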

If you suspect it's the underlying Ethernet driver, an FTP transfer is
probably the most optimised client server thing to try to test raw
network performance, but again, I can't see that there would be a huge
amount of difference between FreeBSD and Linux.
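
(Or take disks and file sharing protocols out of the equation entirely
with iperf3, which just streams TCP between two hosts - the hostname
here is hypothetical:

  # on the NAS box
  iperf3 -s
  # on the client: a 10 second test against it
  iperf3 -c nasbox -t 10

Around 940Mbit/s in both directions means the ethernet driver is fine
and any shortfall is higher up the stack.)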

==============================


Years ago, in the days of MSDOS and Apricots and PC clones, I did a job
for a party who was selling some primitive Basic databasey sort of code.
He would sell it on either an Apricot, or a clone at half the price.

'Look', he said in full salesman mode, 'it runs 4 times faster on the
Apricot'.

I looked at the clone. It had the same processor and clock speed as the
Apricot. 'Hmm', I said, 'that shouldn't be', and I looked in config.sys;
sure enough the Apricot came with more FILES and BUFFERS configured. I
quickly made them the same, re-ran the test and showed the clone was in
fact 5% faster. 'There. Now you don't need to sell Apricots anymore.'

Bad mistake. That was the last I ever heard from him...



--
Future generations will wonder in bemused amazement that the early
twenty-first century's developed world went into hysterical panic over a
globally average temperature increase of a few tenths of a degree, and,
on the basis of gross exaggerations of highly uncertain computer
projections combined into implausible chains of inference, proceeded to
contemplate a rollback of the industrial age.

Richard Lindzen
#43 - Posted to uk.d-i-y - DIY NAS question for the *nix experts


I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.

Performance is not that critical, but I would like fault tolerance. Not
too fussed about uptime. So it needs to be a RAID setup of some sort
that can survive any individual drive failure (e.g. RAID 5 or 6), but it
can be shutdown for maintenance etc without any worries - so I don't
need to worry about hot swap or redundant components.

A small low power mobo in an old PC case could be a starting point, or
for that matter, even a RasPi 2 B or similar level single board
computer, but that will soon run out of sata ports (or not have any to
start with). One option that springs to mind would be a powered USB hub,
and a bunch of drive caddies, which would be a cheap way of adding lots
of drives if required.

That then raises the question of software to drive it... How workable
would the various MD style RAID admin tools and file systems be at
coping with drives mounted on mixed hardware interfaces - say a mix of
SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
hardware?
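
(For what it's worth, mdadm itself doesn't care what bus a block device
hangs off. A sketch with hypothetical device names for a mixed SATA/USB
set of five drives:

  # RAID6 across five drives, whatever their interfaces
  mdadm --create /dev/md0 --level=6 --raid-devices=5 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
  mkfs.ext4 /dev/md0

The usual caveat is that USB attached drives can drop off the bus under
load, which md will treat as a drive failure - the software copes, but
the hardware may let you down.)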


John,

ZFS is your friend! Here's what I'd do in your situation:

Supermicro Atom (Rangeley) board - with plenty of ECC RAM and lots of SATA ports
FreeNAS on a USB stick
All the drives in a massive ZFS pool with redundancy.

Supermicro boards aren't cheap but they will serve you well. Don't be
tempted to skimp on ECC RAM though.
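
(In FreeNAS you'd drive this from the GUI, but the underlying operation
is a one-liner. A sketch, assuming six of those 500GB drives appearing
as ada0-ada5 and a pool name of 'tank', all hypothetical:

  # double parity pool: any two drives can fail without data loss
  zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

RAIDZ2 costs two drives' worth of capacity, a fair trade when the brief
is surviving individual drive failures.)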

Dan



#44 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 15/02/2016 03:41, The Natural Philosopher wrote:

I think you will find that with NFS rather than SMB you will see the
transfer speeds you want

Samba is, I presume, what BSD still uses, so it's probably that the
default config on BSD is simply better tuned.

If you suspect it's the underlying Ethernet driver, an FTP transfer is
probably the most optimised client server thing to try to test raw
network performance, but again, I can't see that there would be a huge
amount of difference between FreeBSD and Linux.


I run some Synology DS215j boxes and they max out the disks on
SMB/FTP/NFS. They run Linux on an ARM based CPU, so any x86 based box
should be able to do the same.


#45 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On Sun, 14 Feb 2016 22:04:46 +0000, Johnny B Good wrote:

snip

I have to say, Linux performs quite dismally whether as a server or as a
client hanging off a Gbit connected NAS4Free box. Over the same network,
that box was offering a half decently specced windows 7 desktop machine
peak transfer rates of 127MB/s before the ram caches filled up, dropping
back to 80/85 MB/s depending on the disk's sustained data transfer rate
(SDTR) at both ends of the link. In this case the Linux distro was Linux
Mint KDE 64 ver 17.1, on a machine that had been upgraded just last
April with a new MoBo, a 4 core 3.8GHz AMD cpu and a modest 8GB helping
of ram, which finally did for the win2k installation I'd been running up
to that point.

The best transfer speed I've seen *reported* by that Mint box was little
more than 70MB/s, more often 60 to 64MB/s and sometimes little better
than 40 odd MB/s. With the previous, now 6 year old hardware (3GHz dual
core Phenom with 3GB of ram and a SATA II MoBo), I typically saw 60 to
64 MiB/s write speeds (to the NAS box) and a curiously slower 50MiB/s
read speed (seemingly a limitation of the win2k Gbit ethernet drivers,
if my win7 experience was any guide).

snip

I find all this a little confusing - I touched on performance some
months back.

Transferring Windows to Windows seems to drive a Gigabit network almost
flat out, which seems to be faster than you are managing with any kind
of NAS from your figures above.

My Windows 7 Premium 64 bit is acting as the file server (that is,
sharing drives to other systems). Rough spec: Intel Core i5 2500K 3.3
GHz, MoBo ASRock Z68 Extreme4 Gen 3.

Target system is Win 8.1, Core 2 Quad Q6700 @ 2.66 GHz, MoBo ASUSTeK P5K
SE. Had an Intel Gigabit network adapter added a while back as the on
board chip was acting a bit flaky.

So, what kind of performance do you get copying from your W7 system to
another Windows PC W7 or later (if you have one)?

With absolutely no comment about types of filestore, security,
resilience, multiple OS client support etc., is it possible that Windows
to Windows just outperforms the average {non-Windows NAS} to Windows?

Cheers

Dave R



--
Windows 8.1 on PCSpecialist box


#46 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On Mon, 15 Feb 2016 12:20:03 +0000, David wrote:

On Sun, 14 Feb 2016 22:04:46 +0000, Johnny B Good wrote:

snip

I find all this a little confusing - I touched on performance some
months back.

Transferring Windows to Windows seems to drive a Gigabit network almost
flat out, which seems to be faster than you are managing with any kind
of NAS from your figures above.


Well, 1000 Megabits per second works out at 125 MegaBytes per second
(1000/8), so 127MB/s is to all intents a saturated link. What the win7
client showed me was that my 5 year old NAS4Free box (now 6 years old)
was quite capable of saturating the Gbit ethernet link after all, and
that win2k on a slightly better specced desktop PC was the culprit in
the 50/60 MB/s speed limited performance I'd been seeing.

Back in the days of Fast Ethernet, windows to windows speeds did max out
a Fast Ethernet (100Mbps) link (10 to 11 MB/s transfers), but the first
Debian based replacement for the NW3.12 server slowed this down to a
mere 6.8MB/s.

When I first upgraded to Gbit ethernet, I was using PCI NetGear adapters
in my pre-PCIe PCs and struggling to get better than 40MB/s transfers.
Upgrading to the next generation of MoBos with built in Gbit ethernet
LAN ports about 6 years ago pushed this up to 60MB/s max, using a single
core Sempron in the NAS to begin with, until I tried a dual core Athlon
64 which improved it to 64MB/s. I was already using a 3.1GHz dual core
Phenom in the win2k box, so I figured I'd hit the limit at both ends of
the wire.

It wasn't until I tested transfer speeds using a customer's win7 desktop
PC that I realised that my speed limit was, despite all my efforts at
tuning the network performance, a limit imposed by some shortcoming in
win2k.


My Windows 7 Premium 64 bit is acting as the file server (that is,
sharing drives to other systems). Rough spec: Intel Core i5 2500K 3.3
GHz, MoBo ASRock Z68 Extreme4 Gen 3.

Target system is Win 8.1, Core 2 Quad Q6700 @ 2.66 GHz, MoBo ASUSTeK P5K
SE. Had an Intel Gigabit network adapter added a while back as the on
board chip was acting a bit flaky.

So, what kind of performance do you get copying from your W7 system to
another Windows PC W7 or later (if you have one)?


I don't have any Gbit endowed win7 desktop PCs of my own to run such a
test. The closest I've got is a Dell Dimension E521 with Vista 32 bit
installed. I'd have to drop a Gbit ethernet adapter into its PCIe slot to
run this test since the built in LAN port is only Fast Ethernet despite
its otherwise high spec at the time of its design.


With absolutely no comment about types of filestore, security,
resilience, multiple OS client support etc., is it possible that Windows
to Windows just outperforms the average {non-Windows NAS} to Windows?


Considering my experience of SMB performance with Linux, I'd say that's
a strong possibility but that wouldn't be the case with BSD based NAS
boxes (eg NAS4Free and its cousin FreeNAS which retained the original
project name) built on entry level PC hardware from even as far back as 5
years ago, at least not in my experience. :-)

If you're planning on adding a *dedicated* NAS box to your lan, I
wouldn't recommend a windows desktop PC for such a task. A BSD based NAS
box would be a much better option since its SMB performance is more than
a match for any windows clients and it too supports all the other unixy
file transfer protocols mentioned by TNP which a windows based NAS would
be lacking.

In practice, all those extra file transfer protocols may not be of any
interest to you personally, but you may find the BitTorrent client
feature, common not only to NAS4Free and FreeNAS but also to virtually
every NAS under the sun, a useful feature to have on a box running 24/7
since, if you're into using Torrents, it neatly offloads this task from
your Desktop PCs.

Likewise DLNA and other media streaming services (but don't make the
mistake of enabling any CPU intensive transcoding on what should remain a
fileserver - it's best to avoid exotic media file formats that your media
streaming players can't understand and stick with formats that they *can*
process directly without any such assistance).

--
Johnny B Good
#47 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

Mike Tomlinson wrote:

About three years ago I had to choose a file system for a 24-drive array
(48TB total, mounted as a single volume). RAID5 was done in hardware


RAID5 with that many discs? Too high a chance of multiple failures for
my liking ... RAID6.


#48 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 16/02/16 07:48, Andy Burns wrote:
Mike Tomlinson wrote:

About three years ago I had to choose a file system for a 24-drive array
(48TB total, mounted as a single volume). RAID5 was done in hardware


RAID5 with that many discs? Too high a chance of multiple failures for
my liking ... RAID6.



It's not unlikely that the RAID sets would have been done with smaller
batches of discs and then amalgamated together into one large
presentation. That's how my EqualLogic does it (although I have RAID10
for IOPS capacity).

But IIRC EqualLogic call the options RAID50 and RAID60 (RAID5 plus
RAID0...) respectively. Other arrays may not be so obvious.

OTOH I have had Chaparral arrays with quite moderate numbers of disks
with a single pure RAID5 across the lot, but that was years ago.
#49 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

In article , Andy Burns wrote:

RAID5 with that many discs? Too high a chance of multiple failures for
my liking ... RAID6.


I've never been particularly convinced by RAID6. In an array with
several dozen drives, maybe.

In the ones I set up, two hot spares were assigned in the RAID chassis.
When a drive fails, the chassis hardware (custom, not running **** like
Windows) automatically rebuilds the array onto a hot spare and sends me
an email to go replace the dead drive. The second hot spare is there in
case another drive fails before I get around to replacing the first dead
one. Distributed parity is used so that failure of a parity disk
doesn't take out the array.
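
(His chassis does this in custom hardware, but the same arrangement in
software RAID terms would look something like this, with hypothetical
device names:

  # RAID5 over six drives with two hot spares standing by
  mdadm --create /dev/md0 --level=5 --raid-devices=6 \
      --spare-devices=2 /dev/sd[b-i]
  # monitor mode supplies the 'go replace the dead drive' emails
  mdadm --monitor --scan --mail=admin@example.com --daemonise

md's RAID5 likewise uses distributed parity, so there is no single
parity disk whose failure takes out the array.)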

I've used several arrays of this type in this configuration for many
years without a single problem, storing scientific data which was
heavily data mined and used by scientists all over the world, so they
got a hammering.

How you configure RAID depends on your risk tolerance. RAID is not a
substitute for backup, of course; daily rsync backups to an identical
array were made as well, the amount of data being impractical for tape.
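
(The nightly job needn't be anything cleverer than, say - paths and
hostname hypothetical:

  # mirror the array to its twin, deleting files removed at the source
  rsync -a --delete /array/ backuphost:/array/

run from cron; -a preserves permissions, ownership and timestamps,
which is what makes the copy restorable rather than merely present.)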

I am also careful to specify proper enterprise-grade drives, not cheap
desktop or "NAS grade" ****, to install them in racks, with UPS power
protection, in a dedicated custom-built (designed by me) server room
with redundant air conditioners. Yes, it costs, but you get what you
pay for, and it wasn't my money I was spending.

http://static.googleusercontent.com/...com/en//archiv
e/disk_failures.pdf

http://www.theregister.co.uk/2014/02...to_evaluate_di
sk_reliability/


--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#50 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 16/02/16 08:33, Mike Tomlinson wrote:
In article , Andy Burns wrote:

RAID5 with that many discs? Too high a chance of multiple failures for
my liking ... RAID6.


I've never been particularly convinced by RAID6. In an array with
several dozen drives, maybe.


48 drives

Dell actually recommend RAID6(0) over RAID5(0) because of the rebuild
time - it's a long period to be at risk of a second drive failure, which
is itself made slightly more likely by the stress the rebuild operation
puts on the remaining drives.

Or RAID10 of course
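
(Back of an envelope on why: rebuilding one failed 2TB drive in a 24
disc RAID5 means reading roughly 46TB from the 23 survivors. Consumer
drives are typically specced at one unrecoverable read error per 10^14
bits, i.e. about one per 12.5TB read, so the odds of tripping over an
unreadable sector mid-rebuild are uncomfortably high. The second parity
of RAID6 is what lets the rebuild carry on regardless.)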

In the ones I set up, two hot spares were assigned in the RAID chassis.
When a drive fails, the chassis hardware (custom, not running **** like
Windows) automatically rebuilds the array onto a hot spare and sends me
an email to go replace the dead drive. The second hot spare is there in
case another drive fails before I get around to replacing the first dead
one. Distributed parity is used so that failure of a parity disk
doesn't take out the array.


Dedicated parity would have been RAID4, and I don't recall ever seeing
anyone use that, IME...

I've used several arrays of this type in this configuration for many
years without a single problem, storing scientific data which was
heavily data mined and used by scientists all over the world, so they
got a hammering.

How you configure RAID depends on your risk tolerance. RAID is not a
substitute for backup, of course; daily rsync backups to an identical
array were made as well, the amount of data being impractical for tape.

I am also careful to specify proper enterprise-grade drives, not cheap
desktop or "NAS grade" ****, to install them in racks, with UPS power
protection, in a dedicated custom-built (designed by me) server room
with redundant air conditioners. Yes, it costs, but you get what you
pay for, and it wasn't my money I was spending.

http://static.googleusercontent.com/...com/en//archiv
e/disk_failures.pdf

http://www.theregister.co.uk/2014/02...to_evaluate_di
sk_reliability/


That is one of the key factors which helps - enterprise gear



#51 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 16/02/2016 08:33, Mike Tomlinson wrote:

I am also careful to specify proper enterprise-grade drives, not cheap
desktop or "NAS grade" ****, to install them in racks, with UPS power
protection, in a dedicated custom-built (designed by me) server room
with redundant air conditioners. Yes, it costs, but you get what you
pay for, and it wasn't my money I was spending.

http://static.googleusercontent.com/...com/en//archiv
e/disk_failures.pdf

http://www.theregister.co.uk/2014/02...to_evaluate_di
sk_reliability/


One of the things that those discussions seem to gloss over with regard
to enterprise class drives is the way in which the drives handle read
errors. In an enterprise environment you actually want a drive to deal
with errors more quickly, even if it means a lower chance of recovering
the data, since all the time it's retrying, it's blocking throughput.
With consumer grade drives there is no guarantee there are other copies
of the data, so you are prepared to trade off performance to have it
try "harder" to recover difficult sectors.

For Backblaze's particular application, however, this is not really an
issue.


--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#52 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

In article , John Rumm wrote:

One of the things that those discussions seem to gloss over with regard
to enterprise class drives is the way in which the drives handle read
errors. In an enterprise environment you actually want a drive to deal
with errors more quickly, even if it means a lower chance of recovering
the data


Careful now. That's true for RAID arrays, yes, since you don't want a
drive trying to recover from errors for so long that the RAID controller
assumes it has died and takes it offline.

, since all the time it's retrying, it's blocking throughput. With
consumer grade drives there is no guarantee there are other copies of
the data, so you are prepared to trade off performance to have it try
"harder" to recover difficult sectors.


Indeed.

For Backblaze's particular application, however, this is not really an
issue.


They've just published their latest reliability survey. Hitachi/HGST comes
out top again (no surprise there, I always bought HGST Ultrastars which have
been fantastic) with a 1.5% failure rate and Seagate bottom (no surprise
again) with a 28% failure rate. WD also do very poorly, especially the 2TB
Greens.

http://arstechnica.com/information-t...st-hard-disks-
still-super-reliable-seagates-have-greatly-improved/

or http://tinyurl.com/gnjakdg

q
"The HGST drives are some of the oldest in Backblaze's collection, with
the 2TB units being almost five years old on average. Over the last two
and a half years, only 1.55 percent of them have failed"
/q

The PC I'm writing this on now has a 2TB HGST drive which runs 24/7. I
can't remember when I bought it, so checked SMART. It has power-on
hours of 2008 days, 8 hrs which is 5.5 years. Still heavily used and going
strong, though creaking at the seams with just 9GB free.

That's not bad at all IMO.

  9 Power-On Hours (POH)           2008d 8h    94   94
197 Current Pending Sector Count   0          100  100
198 Uncorrectable Sector Count     0          100  100
199 UltraDMA CRC Error Count       0          200  200
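
(For anyone wanting the same readout, smartmontools prints the full
attribute table - the device name here is hypothetical:

  # dump all SMART attributes, raw values included
  smartctl -A /dev/sda

Attribute 9 is the power-on time; 197/198 creeping above zero is the
usual early warning of a drive growing bad sectors.)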

PS. Just checked with a defragmenting tool: 17% fragmentation, and the
most fragmented file has 13,620 fragments. Time for a defrag, perhaps.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#53 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 17/02/2016 10:49, Mike Tomlinson wrote:
In article , John Rumm wrote:

One of the things that those discussions seem to gloss over with regard
to enterprise class drives is the way in which the drives handle read
errors. In an enterprise environment you actually want a drive to deal
with errors more quickly, even if it means a lower chance of recovering
the data


Careful now. That's true for RAID arrays, yes, since you don't want a
drive trying to recover from errors for so long that the RAID controller
assumes it has died and takes it offline.

, since all the time it's retrying, it's blocking throughput. With
consumer grade drives there is no guarantee there are other copies of
the data, so you are prepared to trade off performance to have it try
"harder" to recover difficult sectors.


Indeed.

For Backblaze's particular application, however, this is not really an
issue.


They've just published their latest reliability survey. Hitachi/HGST comes
out top again (no surprise there, I always bought HGST Ultrastars which have
been fantastic) with a 1.5% failure rate and Seagate bottom (no surprise
again) with a 28% failure rate. WD also do very poorly, especially the 2TB
Greens.


As various commentators have pointed out, their crop of Seagate drives
included a large proportion of a particular 1.5TB drive that Seagate
themselves admitted had a problem - so it's not surprising they were
seeing such a high rate of failures. (Also, it's still the drive they
buy most of, as it wins on the cost benefit trade off.)

The PC I'm writing this on now has a 2TB HGST drive which runs 24/7. I
can't remember when I bought it, so checked SMART. It has power-on
hours of 2008 days, 8 hrs which is 5.5 years. Still heavily used and going
strong, though creaking at the seams with just 9GB free.

That's not bad at all IMO.


Indeed...

I find that machines which are left on 24/7 usually last better than
ones that do shorter hours but stop and start more often.


  9 Power-On Hours (POH)           2008d 8h    94   94
197 Current Pending Sector Count   0          100  100
198 Uncorrectable Sector Count     0          100  100
199 UltraDMA CRC Error Count       0          200  200

PS. Just checked with a defragmenting tool: 17% fragmentation, and the
most fragmented file has 13,620 fragments. Time for a defrag, perhaps.



or an SSD ;-)



--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#54 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

In article , John Rumm wrote:

or an SSD ;-)


Two fitted, one just for the OS and apps. The spinning rust is for
data'n'stuff

Thanks for the comments.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#55 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

In article ,
John Rumm writes:
snip


One of the things that those discussions seem to gloss over with regard
to enterprise class drives is the way in which the drives handle read
errors. In an enterprise environment you actually want a drive to deal
with errors more quickly, even if it means a lower chance of recovering
the data, since all the time it's retrying, it's blocking throughput.
With consumer grade drives there is no guarantee there are other copies
of the data, so you are prepared to trade off performance to have it
try "harder" to recover difficult sectors.


That's commonly referred to as TLER - time limited error recovery,
i.e. don't try recovering from a read error for more than a certain
number of seconds before giving up, so the RAID controller can know
to get the data from other disk(s). This is implemented in all
Enterprise disks, most nearline SAS disks, and in consumer grade NAS
disks.

As you say, you specifically don't want this behaviour on your
desktop with a single drive - you want it to try much harder to
read your data because there isn't another copy (usually;-)

In the early days, manufacturers allowed this behaviour to be
configured, but they mostly don't anymore, so they can sell TLER
drives for more money (although there are other feature differences
between single and NAS/SAN drives).
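
(Where a drive does still honour it, the knob is SCT Error Recovery
Control, reachable through smartmontools - the device name here is
hypothetical:

  # query the current read/write recovery time limits
  smartctl -l scterc /dev/sda
  # set both limits to 7 seconds (values are tenths of a second)
  smartctl -l scterc,70,70 /dev/sda

Drives sold for single-disk desktop duty increasingly just report SCT
ERC as unsupported, which is the market segmentation described above.)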

For Backblaze's particular application, however, this is not really an
issue.


--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]


#56 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 17/02/2016 17:12, Mike Tomlinson wrote:
In article , John Rumm wrote:

or an SSD ;-)


Two fitted, one just for the OS and apps. The spinning rust is for
data'n'stuff


Yup, I do the same with all my machines generally...

(just bought a 2TB WD "Black" drive to move my games partition onto -
see if it can cut down loading times a bit ;-)

With SWMBO's and one of the sprogs' machines I used SSHDs (aka hybrid
drives) - kind of a "nearly best of both worlds" solution that saves
having to get them to make use of multiple partitions.


--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#57 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 17/02/2016 19:36, Andrew Gabriel wrote:
In article ,
John Rumm writes:
snip

One of the things that those discussions seem to gloss over with regard
to enterprise class drives is the way in which the drives handle read
errors. In an enterprise environment you actually want a drive to deal
with errors more quickly, even if it means a lower chance of recovering
the data, since all the time it's retrying, it's blocking throughput.
With consumer grade drives there is no guarantee there are other copies
of the data, so you are prepared to trade off performance to have it
try "harder" to recover difficult sectors.


That's commonly referred to as TLER - time limited error recovery,
i.e. don't try recovering from a read error for more than a certain
number of seconds before giving up, so the RAID controller can know
to get the data from other disk(s). This is implemented in all
Enterprise disks, most nearline SAS disks, and in consumer grade NAS
disks.

As you say, you specifically don't want this behaviour on your
desktop with a single drive - you want it to try much harder to
read your data because there isn't another copy (usually;-)

In the early days, manufacturers allowed this behaviour to be
configured, but they mostly don't anymore, so they can sell TLER
drives for more money (although there are other feature differences
between single and NAS/SAN drives).


I note in recent times there have sprouted even more classifications of
(consumer) drive... WD for example had the normal consumer Blue range,
the performance Black, the energy efficient (I am going to rapidly wear
out my load cycle count by parking every few seconds) Green drives, and
the Red NAS ones. Now I see they have a purple Video/DVR recording
drive. Not sure what its USP is, but I would guess it's something like
it won't allow thermal recalibration to glitch data transfer for long
enough to drop frames of video...



--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#58 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 18/02/2016 04:32, John Rumm wrote:


I note in recent times there have sprouted even more classifications of
(consumer) drive... WD for example had the normal consumer Blue range,
the performance Black, the energy efficient (I am going to rapidly wear
out my load cycle count by parking every few seconds) Green drives, and
the Red NAS ones. Now I see they have a purple Video/DVR recording
drive. Not sure what its USP is, but I would guess it's something like
it won't allow thermal recalibration to glitch data transfer for long
enough to drop frames of video...


I have green drives in my Synology NAS; they don't park very often.

3300 hours powered on, 2190 load cycles, 1990 power cycles.

These are retail ones from Currys (they were the cheapest) with no
changes done by me.


I doubt if using Black drives will make much difference to loading
games; most of the performance gains appear to be in the write cache.
Maybe you need to look at 10k rpm drives?
#59 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 18/02/2016 10:10, dennis@home wrote:
On 18/02/2016 04:32, John Rumm wrote:


I note in recent times there have sprouted even more classifications of
(consumer) drive... WD for example had the normal consumer Blue range,
the performance Black, the energy efficient (I am going to rapidly wear
out my load cycle count by parking every few seconds) Green drives, and
the Red NAS ones. Now I see they have a purple Video/DVR recording
drive. Not sure what its USP is, but I would guess it's something like
it won't allow thermal recalibration to glitch data transfer for long
enough to drop frames of video...


I have green drives in my Synology NAS; they don't park very often.


I think they have fixed it in more recent firmware. Early versions of
the drive would unload after 12 seconds of inactivity.

3300 hours powered on, 2190 load cycles, 1990 power cycles.


I used to have a couple in my NAS... after about 18 months of use, they
had got to more than 600K load cycles on less than 10 power cycles!

These are retail ones from Currys (they were the cheapest) with no
changes done by me.


I doubt if using Black drives will make much difference to loading
games; most of the performance gains appear to be in the write cache.


The main reason for getting another drive was that my games partition
was running low on space (not surprising with some games wanting 60GB+
these days!), so I thought I would try the supposedly faster drive
while at it. I will report back when I get round to trying it.

Maybe you need to look at 10k rpm drives?


They seem less common than they used to be...




--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#60 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

In article , Andrew Gabriel wrote:

ZFS on Linux will get there - more people are working on it than
on any of the other platforms, but if it's the main purpose of the
system, for now choose an OS where ZFS is the native filesystem,
i.e. Illumos based (OmniOS, Openindiana, Nexentastor, etc), or FreeBSD
(including FreeNAS).


Just seen this today - the next release of Ubuntu LTS will include the
ZFS module in the kernel (previously, you had to build and install it
as a kernel module yourself). That's a big vote of confidence in ZFS.

http://arstechnica.com/gadgets/2016/...will-be-built-
into-ubuntu-16-04-lts-by-default/

http://tinyurl.com/zytzl5p
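
(On a 16.04 box the sanity check should be as simple as - no pool
assumed:

  # load the shipped module and confirm it registered
  sudo modprobe zfs
  lsmod | grep zfs

From there, zpool create works just as it does on the BSDs and
Illumos.)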

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")


#61 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

On 21/02/16 13:49, Huge wrote:
On 2016-02-20, Mike Tomlinson wrote:
In article , Andrew Gabriel wrote:

ZFS on Linux will get there - more people are working on it than
on any of the other platforms, but if it's the main purpose of the
system, for now choose an OS where ZFS is the native filesystem,
i.e. Illumos based (OmniOS, Openindiana, Nexentastor, etc), or FreeBSD
(including FreeNAS).


Just seen this today - the next release of Ubuntu LTS will include the
ZFS module in the kernel (previously, you had to build and install it
as a kernel module yourself). That's a big vote of confidence in ZFS.

http://arstechnica.com/gadgets/2016/...will-be-built-
into-ubuntu-16-04-lts-by-default/

http://tinyurl.com/zytzl5p


That's good news. Wonder how long it will take to get into Mint?



http://blog.linuxmint.com/?p=2975

June or thereabouts

--
He who ****s in the road, will meet flies on his return.

"Mr Natural"
#62 - Posted to uk.d-i-y - DIY NAS question for the *nix experts

John Rumm wrote:

Mike Tomlinson wrote:

John Rumm wrote:

or an SSD ;-)


Two fitted, one just for the OS and apps. The spinning rust is for
data'n'stuff


Yup, I do the same with all my machines generally...

(just bought a 2TB WD "Black" drive to move my games partition onto -


A couple of weeks ago I was testing that a P2V'ed database server
worked OK with the customer's app/PCs ...

cust: "wow! So the database runs faster on your laptop than on our server?"

me: "yes, good isn't it, want me to fit an SSD for you?"

To be fair, it's a less-than-2GB MySQL database and a fairly ropey
Access front-end. I've never looked at the code, but going from 2
minutes to 2 seconds launch time was an easy hit ...
