Posted to uk.d-i-y
Subject: d-i-y NAS. Hard drive makes?

On 18 Aug 2019 12:40:35 GMT, Bob Eager wrote:

On Sun, 18 Aug 2019 12:42:33 +0100, T i m wrote:

I use HP microservers a lot.


A couple of friends have them, Bob (and I think I remember they could be
good VFM when bought with some form of cashback?), but I was concerned
about how specific the hardware was compared with building my own from
'std' PC parts.


The only things that have ever failed are a couple of PSUs, and that wasn't
their fault. I fitted optical drives to the machines, and the power
adaptors I'd used failed catastrophically and killed the PSU.


The worst example of that I witnessed was someone using a very cheap
Molex-to-SATA power adapter that took out a brand new and very large
(expensive) capacity HDD. The 5 and 12 V lines were crossed over. ;-(


New PSUs
aren't cheap from HP, but they are just ITX PSUs, of which I had a few.


Handy.


The main NAS is one of those.


What OS is running on it?


FreeBSD, with geom/gmirror.


Ok, thanks.

If a disk fails (happened once in 9 years)


And that's the thing. Whilst I know daughter would be pi$$ed off if she
couldn't access her data because the only disk had failed, I'm not sure
what the odds are of the hardware itself failing (removing access to
both mirrored drives) versus just a single drive failing?


As I said, spare machine (they were cheap enough). And I always go for
dual (RAID-1) drives - and they even give a small edge in performance
over the single drive.


Understood.

then it's RAID-1 so it's easy to replace the faulty disk (cold
swap, but very fast) and rebuild the array.


I'm not sure it's *always* easy though, is it Bob? I have read many tales
of the rebuild process screwing up and taking all your data with it, hence
you are *still* reliant on a backup?


I have honestly never heard of that happening with geom/gmirror. This is
software RAID in the OS.


That's good then.
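
FWIW, if you want the box to tell you a mirror has dropped a disk rather
than finding out later, a small cron job will do it. This is just a sketch
of the sort of thing I mean (not anything Bob described) - it assumes
gmirror(8) is on the PATH and parses the usual 'mirror/<name> STATUS
component' table, which can vary a bit between FreeBSD versions:

#!/usr/bin/env python3
"""Rough sketch only: warn when a gmirror array is not COMPLETE.
Assumes FreeBSD with gmirror(8) in the PATH and the usual
'mirror/<name>  STATUS  component' output; the exact columns can
differ slightly between versions, so treat it as illustrative."""
import subprocess
import sys

def degraded_mirrors():
    out = subprocess.run(["gmirror", "status"],
                         capture_output=True, text=True, check=True).stdout
    bad = {}
    for line in out.splitlines():
        fields = line.split()
        # Rows naming a mirror start with "mirror/"; the second field
        # is its state (COMPLETE, DEGRADED, REBUILDING, ...).
        if len(fields) >= 2 and fields[0].startswith("mirror/"):
            if fields[1] != "COMPLETE":
                bad[fields[0]] = fields[1]
    return bad

if __name__ == "__main__":
    bad = degraded_mirrors()
    for name, state in bad.items():
        print(f"WARNING: {name} is {state}", file=sys.stderr)
    sys.exit(1 if bad else 0)

From memory the swap itself is then roughly a `gmirror forget` of the dead
component followed by `gmirror insert` with the new disk, after which it
resilvers in the background.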

Hardware RAID I consider problematical (not
least if you can't get exactly the same controller if that fails).


I have heard of a bug in a commercial RAID box that 'recovered' the
replacement (blank) drive over the rest of the array (i.e. the wrong way
round). I have also used a few HW RAID mobos with mirrored drives (Intel
RAID controllers if I remember correctly) and they were always throwing up
error messages and having issues. It was suggested that software RAID was
even less predictable / reliable?



BTW, they are all Western Digital Red now. The one that failed wasn't -
it was a Seagate.


OOI, would the drives that are typically failing now have been specific
'NAS / Server' drives or just ordinary 'drives' (FWIW etc)?


The first failure was a bog standard Seagate. It came with one of the
early microservers so I just used it.


Ok.

The second failure was a WD Red; obviously something was wrong, given the
quick failure. Some of my early drives are still in there, and they are
WD Blue, but most are now Red ones.


It will be interesting to hear how they fare over time.

The Red ones have a three-year warranty (I think the Blue are one year),
but they don't go 'deaf' for more than a few seconds if they have trouble
reading a sector - much better when the disk is in an array.


Sure. Do I remember correctly that some drives / controllers could
actually ensure all spindles in an array were kept in sync?
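
(On the 'going deaf' point: that timeout is the SCT Error Recovery Control
setting, and where the drive supports it you can read it back with smartctl
from smartmontools. A rough sketch only, with the device name purely as an
example - plenty of desktop drives simply report it as unsupported:)

#!/usr/bin/env python3
"""Sketch: read the SCT Error Recovery Control ('TLER') timeouts with
smartctl. Assumes smartmontools is installed; /dev/ada0 is only an
example device name and the drive may not support the command at all."""
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/ada0"

# Prints the current read/write recovery limits in tenths of a second,
# e.g. "Read: 70 (7.0 seconds)", or says SCT ERC is unsupported.
result = subprocess.run(["smartctl", "-l", "scterc", device],
                        capture_output=True, text=True)
print(result.stdout, end="")
sys.exit(result.returncode)

(Setting it, where supported, is the same command with values appended,
e.g. smartctl -l scterc,70,70 <device> for seven seconds each way.)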

I just checked, and one pair of the Reds has done about 55,000 hours
(pretty well continuously). Another is at 47,000 and then it ranges down
through 25,000 - to 3,500 for the newest pair (I upgraded capacity).


Interesting, thanks.
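
Those hour counts come out of SMART attribute 9 (Power_On_Hours); if anyone
wants to pull the same number off their own drives, something along these
lines works - again only a sketch, assuming smartmontools is installed and
the drive reports the standard ATA attribute table (some firmwares tack
extra text onto the raw value):

#!/usr/bin/env python3
"""Sketch: report Power_On_Hours (SMART attribute 9) for each drive given
on the command line. Assumes smartmontools is installed and the standard
ATA attribute table; device names are examples only."""
import re
import subprocess
import sys

def power_on_hours(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Power_On_Hours" in line:
            # The raw value is the last column; some firmwares append
            # extra text (e.g. "55000h+32m"), so take the leading digits.
            match = re.match(r"\d+", line.split()[-1])
            if match:
                return int(match.group())
    return None

if __name__ == "__main__":
    for dev in sys.argv[1:] or ["/dev/ada0"]:
        hours = power_on_hours(dev)
        print(f"{dev}: {hours if hours is not None else 'unknown'} hours")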

I have a total of seven machines running with FreeBSD geom/gmirror.


Cool!

Also running one Windows machine with a mirrored pair - using the
standard Windows mirroring.


Ok. OOI, how many have ever failed in use and how well did they handle
the failure (did they just 'carry on as hoped / expected')?

How do you back that lot up, or are some of them backups of the others?

Cheers, T i m