UK diy (uk.d-i-y) For the discussion of all topics related to diy (do-it-yourself) in the UK. All levels of experience and proficiency are welcome to join in to ask questions or offer solutions.

DIY NAS question for the *nix experts

#1

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.

Performance is not that critical, but I would like fault tolerance. Not
too fussed about uptime. So it needs to be a RAID setup of some sort
that can survive any individual drive failure (e.g. RAID 5 or 6), but it
can be shut down for maintenance etc. without any worries - so I don't
need to worry about hot swap or redundant components.

A small low power mobo in an old PC case could be a starting point, or
for that matter, even a Raspberry Pi 2 B or similar level single board
computer, but that will soon run out of SATA ports (or not have any to
start with). One option that springs to mind would be a powered USB hub,
and a bunch of drive caddies which would be a cheap way of adding lots
of drives if required.

That then raises the question of software to drive it... How workable
would the various MD style RAID admin tools and file systems be at
coping with drives mounted on mixed hardware interfaces - say a mix of
SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
hardware?

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#2

En el artículo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.


My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick them in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#3

On 06/02/16 14:58, John Rumm wrote:
I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.


Here's maybe a thought,

When digital TV first launched, there were queries about how much disk space
it would take to simultaneously record the output of all the TV muxes.

How many days continuous could you catch with your HDD stash? I know
it's the age of onDemand, but not all content is available that way.

--
Adrian C
#4


"Mike Tomlinson" wrote in message
...
En el artículo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.


My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick them in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.


Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


#5

In article . com,
"bm" writes:
Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


Steer well clear of the 3112/3114 SATA controllers!
They were very early SATA controllers that pretend to be IDE (PATA) to
the OS, so they can be used by OS's which didn't know about SATA drives.
(Note they are not PCIe either - they predate that too.)
They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]


#6


"Andrew Gabriel" wrote in message
...
In article . com,
"bm" writes:
Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


Steer well clear of the 3112/3114 SATA controllers!
They were very early SATA controllers that pretend to be IDE (PATA) to
the OS, so they can be used by OS's which didn't know about SATA drives.
(Note they are not PCIe either - they predate that too.)
They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).


Terrific, cheers for that.
Hmmmmm.


#7

On Sat, 06 Feb 2016 14:58:34 +0000, John Rumm
wrote:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.

Performance is not that critical, but I would like fault tolerance. Not
too fussed about uptime. So it needs to be a RAID setup of some sort
that can survive any individual drive failure (e.g. RAID 5 or 6), but it
can be shut down for maintenance etc. without any worries - so I don't
need to worry about hot swap or redundant components.

A small low power mobo in an old PC case could be a starting point, or
for that matter, even a Raspberry Pi 2 B or similar level single board
computer, but that will soon run out of SATA ports (or not have any to
start with). One option that springs to mind would be a powered USB hub,
and a bunch of drive caddies which would be a cheap way of adding lots
of drives if required.

That then raises the question of software to drive it... How workable
would the various MD style RAID admin tools and file systems be at
coping with drives mounted on mixed hardware interfaces - say a mix of
SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
hardware?



Yes and no (but probably more no as a solution for you). ;-)

My current server is a Dual Core Atom running 3 x 500G 2.5" SATA
drives on Windows Home Server V1. Two SATA are on board and there are
another 3 on a PCI card (1 used).

They used a technique that sounds like it would suit your needs (as
it still suits mine) in that you simply add drives to a pool (it's
easy to do because it's Windows <g>) and they can be any size or
interface. So, an old Mobo with 4 SATA and 2 PATA ports could use 6
drives straight away. It was called Drive Extender and the good thing
was the drives were just running straight NTFS and so each could be
read independently if required (unlike a single drive from a RAID 5
array)

As soon as the system detects a new drive it asks you how you want to
use it, either by adding it to the / a pool or as a backup drive. If
you have an existing pool of say 1.5TB and then you add another 500G
drive your pool then becomes 2TB.

Data redundancy is provided by folder mirroring where a mirrored
folder will be mirrored across two separate drives.

If a drive starts to play up (or you want to replace it with a bigger
one), you just remove it from the pool (all the data will
automatically be migrated to the remaining drives), you take it out,
fit the replacement and join it back into the pool.

Mine has been running every day for (checks) 1812 days:

https://dl.dropboxusercontent.com/u/5772409/WHS_V1.jpg

However, it is woken up by the first (Windows / Mac) client that turns
on and goes to sleep after the last client has shut down, assuming
there is no ongoing network or CPU activity above a certain threshold
or an unfinished torrent etc.

It also backs up all the Windows clients every day to the point where
if a client hard drive fails catastrophically, I can replace the drive
and re-image the machine over the LAN in less than an hour and a few
clicks (and have done so a couple of times so far).

Or you can browse the list of backups available for each machine and
get a single file from say six months previous, depending on your
backup settings.

The backup system knows when it's backing up identical files from
different machines so only stores them once.

MS dropped WHS V1 (based on WS 2003) for WHS V2011 (based on WS 2008)
but you can still find copies on the likes of eBay.

I initially tried making a Linux server but gave up long before I even
considered the automatic (image) backup or all of the other features
that were so easy for me to install on WHS. I just wanted a NAS / file
server, not a new geeky hobby and was happy to pay the 45 or so quid
for the privilege.

Cheers, T i m

p.s. You can still get drive pooling on other Windows solutions (I
don't know which, I'm not a Windows fanatic) and you can also get
independent solutions like Drive Bender:

http://www.division-m.com/drivebender/
http://www.division-m.com/videos/drivebender-demo1/
#8

On 06/02/2016 15:46, Mike Tomlinson wrote:
En el artículo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.


My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).


I can get basic USB2 caddies for about £3.50, so say £25 for 3 gig. For
the drives you shove in the main box, one does not even need to use the
box that comes with them, just the SATA header.



--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#9

On 06/02/2016 15:55, Adrian Caspersz wrote:
On 06/02/16 14:58, John Rumm wrote:
I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.


Here's maybe a thought,

When digital TV first launched, there were queries about how much disk space
it would take to simultaneously record the output of all the TV muxes.

How many days continuous could you catch with your HDD stash? I know
it's the age of onDemand, but not all content is available that way.


I have about 15 gigs worth of new 500G 2.5" drives, and perhaps 3 or 4
gigs worth of 3.5" ones. That won't buy you that much time if recording
all the muxes - especially the HD ones.



--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#10

On 06/02/16 14:58, John Rumm wrote:
I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.

Performance is not that critical, but I would like fault tolerance. Not
too fussed about uptime. So it needs to be a RAID setup of some sort
that can survive any individual drive failure (e.g. RAID 5 or 6), but it
can be shut down for maintenance etc. without any worries - so I don't
need to worry about hot swap or redundant components.

A small low power mobo in an old PC case could be a starting point, or
for that matter, even a Raspberry Pi 2 B or similar level single board
computer, but that will soon run out of SATA ports (or not have any to
start with). One option that springs to mind would be a powered USB hub,
and a bunch of drive caddies which would be a cheap way of adding lots
of drives if required.

That then raises the question of software to drive it... How workable
would the various MD style RAID admin tools and file systems be at
coping with drives mounted on mixed hardware interfaces - say a mix of
SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
hardware?


Linux MD (RAID) does not care what the interfaces are - it works at the
block device level, so as long as your devices appear, it will work fine.

But of course you are going to have to balance the sizes for RAID 1, 5 or 6,
which might not be totally convenient. MD RAID can work with partitions
(it's just another block device) so that is one get-out.

However, ZFS might be a more interesting choice.

For something like this, I would (and do) use Debian. ZFS is pretty
solid on Debian (but is not "native": you need to include an extra
repository http://zfsonlinux.org/debian.html ) and MD + LVM is
absolutely rock solid.
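
By way of illustration, a minimal mdadm sketch for the mixed-interface case,
assuming the drives appear as /dev/sdb to /dev/sde, two on SATA and two behind
USB-SATA bridges (the device names are assumptions):

   # create a 4-drive RAID 5 array - mdadm only cares that these are block
   # devices, not whether they sit on SATA or behind a USB bridge
   sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
       /dev/sdb /dev/sdc /dev/sdd /dev/sde

   # record the array so it assembles at boot, then put a filesystem on it
   sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
   sudo mkfs.ext4 -L nas /dev/md0

   # watch the initial sync
   cat /proc/mdstat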


#11

On 06/02/2016 19:50, Theo wrote:
Adrian Caspersz wrote:
When digital TV first launched, there were queries about how much disk space
it would take to simultaneously record the output of all the TV muxes.


Doable, at a price:
http://www.promise.tv/products/seven.html
That's apparently single-figures TB but the tricky thing is you need at
least 6 tuners for all the muxes.


The new SkyQ box has 12 tuners.
Well it really has a chip that can do all the muxes.

It only has 2TB of disk though.
#12

On 06/02/16 17:00, bm wrote:
"Mike Tomlinson" wrote in message
...
En el artículo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.


My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick them in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.


Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


Linux will do some sort of RAID without need for a raid hardware solution.

But I question the value of RAID versus say a simple mirror the
important software on a nightly basis...
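
For the nightly-mirror approach, a minimal sketch using rsync plus cron (the
paths are assumptions); note that --delete makes it a true mirror, so
deletions propagate too - drop it if you want removed files kept:

   # one-way nightly mirror of the important data onto a second disk
   rsync -a --delete /srv/data/ /mnt/backupdisk/data/

   # run it at 02:30 every night (crontab -e):
   # 30 2 * * * rsync -a --delete /srv/data/ /mnt/backupdisk/data/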

I have to say I am running an ATOM server with a GB of RAM and it's more
than fast enough for the trivial load that file and web serving places on it.
Cases can be picked up for nothing. My PC supplier has tons of old
cases, and would probably sling me a load of drive bay kits for peanuts.


And an obsolete 10 year old XP style machine that has been junked for
being too old and slow is all you need. Tons of people have this stuff
lying around. Offer them a pint and its yours...


--
Canada is all right really, though not for the whole weekend.

"Saki"
#13

On 06/02/2016 23:23, The Natural Philosopher wrote:

And an obsolete 10 year old XP style machine that has been junked for
being too old and slow is all you need. Tons of people have this stuff
lying around. Offer them a pint and its yours...



I am trying to get rid of it, not get more.... I will take the pint
though ;-)

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#14



"The Natural Philosopher" wrote in message
...
On 06/02/16 17:00, bm wrote:
"Mike Tomlinson" wrote in message
...
En el artículo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.

My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick them in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.


Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


Linux will do some sort of RAID without need for a raid hardware solution.


So will Win.

But I question the value of RAID versus say a simple mirror the important
software on a nightly basis...


You do get rather more total storage capacity with
RAID tho its certainly not as simple if a drive dies.

I have to say I am running an ATOM server with a GB of RAM and it's more
than fast enough for the trivial load that file and web serving places on it.
Cases can be picked up for nothing. My PC supplier has tons of old cases,


Not many of them take all that many drives tho.

and would probably sling me a load of drive bay kits for peanuts.


They only cost peanuts from aliexpress anyway.

And an obsolete 10 year old XP style machine that has been junked for
being too old and slow is all you need. Tons of people have this stuff
lying around. Offer them a pint and its yours...


Not ideal power use wise tho.

#15

En el artículo , Andrew Gabriel
escribió:

They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).


Balls. I've used many SiI3112-based cards over the years with nary a
problem. They were a great alternative to the more expensive Adaptecs,
so much so that some motherboards included the Silicon Image BIOS in the
system BIOS, and performed better.

Silent data corruption is almost always due to improper termination.

SCSI termination is a black art. You need sacrificial goats, black
candles, a pentagram in the right shade of chalk, and to be standing on
the correct foot and facing in precisely the right direction while
chanting your incantation to the SCSI gods.

Apart from that, it's **** easy.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")


#16

In article ,
Mike Tomlinson writes:
En el artículo , Andrew Gabriel
escribió:

They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).


Balls. I've used many SiI3112-based cards over the years with nary a
problem. They were a great alternative to the more expensive Adaptecs,
so much so that some motherboards included the Silicon Image BIOS in the
system BIOS, and performed better.


The Silicon Image BIOS is just to support managing and booting from RAID.
If you are not using RAID (or more specifically, not booting from RAIDed
disks), you don't need the SI BIOS, and the controller will still work
as it looks like an IDE controller. For the add-in cards, they produce
two BIOS's, one which supports RAID, and one which doesn't, and you can
flash either into the cards.

Silent data corruption is almost always due to improper termination.


Improper termination causes transport errors. Transport errors are not
silent - both ends get to know they happened and can take corrective
action.

Silent data corruption is when a block write completes without errors,
and a later read of the same block completes without errors, but the
data returned is not the last data written to that block. These can
only be detected by something higher up the stack (such as ZFS which
checksums every block on the disk, or in the absence of ZFS, by an
application which can detect data corruption).
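
As a rough illustration of the application-level check mentioned above, in
the absence of a checksumming filesystem (paths are assumptions):

   # record checksums once...
   find /srv/backup -type f -print0 | xargs -0 sha256sum > /root/backup.sha256

   # ...and verify later: a file whose contents changed without any I/O
   # error ever being reported shows up as FAILED
   sha256sum --quiet -c /root/backup.sha256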

SCSI termination is a black art. You need sacrificial goats, black
candles, a pentagram in the right shade of chalk, and to be standing on
the correct foot and facing in precisely the right direction while
chanting your incantation to the SCSI gods.


SCSI termination wasn't a black art, and these cards are not SCSI anyway.
SATA phy is point-to-point, so there's no termination configuration to
do - it's built into the phy chip at each end.

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#17

On Sun, 7 Feb 2016 12:16:21 -0000 (UTC),
(Andrew Gabriel) wrote:

In article ,
Mike Tomlinson writes:
En el artículo , Andrew Gabriel
escribió:

They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).


Balls. I've used many SiI3112-based cards over the years with nary a
problem. They were a great alternative to the more expensive Adaptecs,
so much so that some motherboards included the Silicon Image BIOS in the
system BIOS, and performed better.


The Silicon Image BIOS is just to support managing and booting from RAID.
If you are not using RAID (or more specifically, not booting from RAIDed
disks), you don't need the SI BIOS, and the controller will still work
as it looks like an IDE controller.


Yup. Similar with Adaptec SCSI HBAs.

For the add-in cards, they produce
two BIOS's, one which supports RAID, and one which doesn't, and you can
flash either into the cards.

Silent data corruption is almost always due to improper termination.


Improper termination causes transport errors. Transport errors are not
silent - both ends get to know they happened and can take corrective
action.

Silent data corruption is when a block write completes without errors,
and a later read of the same block completes without errors, but the
data returned is not the last data written to that block. These can
only be detected by something higher up the stack (such as ZFS which
checksums every block on the disk, or in the absence of ZFS, by an
application which can detect data corruption).

SCSI termination is a black art. You need sacrificial goats, black
candles, a pentagram in the right shade of chalk, and to be standing on
the correct foot and facing in precisely the right direction while
chanting your incantation to the SCSI gods.


SCSI termination wasn't a black art,


+1

and these cards are not SCSI anyway.
SATA phy is point-to-point, so there's no termination configuration to
do - it's built into the phy chip at each end.


I love it when I read something from someone who knows what they are
talking about and corrects someone who thinks they do. ;-)

Cheers, T i m
#18

In article ,
The Natural Philosopher writes:
Linux will do some sort of RAID without need for a raid hardware solution.


This chipset isn't hardware RAID - its BIOS has minimal support to allow
the PC BIOS to boot from a RAID set. Once the OS takes over from the
BIOS i/o functions, the OS needs a driver to perform the RAID function.
The RAID management data is all proprietary as far as I know - at least,
they didn't publish it ~10 years ago when we were testing this chipset,
so if you use the BIOS RAID feature to boot from a RAID set, you have to
use their OS drivers.

But I question the value of RAID versus say a simple mirror the
important software on a nightly basis...


Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#19

In article ,
Tim Watts writes:
However, ZFS might be a more interesting choice.

For something like this, I would (and do) use Debian. ZFS is pretty
solid on Debian (but is not "native: you need to include an extra
repository http://zfsonlinux.org/debian.html ) and MD + LVM is
absolutely rock solid.


Just to point out (and I'm sure Tim knows this anyway), you don't
normally use MD/LVM with ZFS - it has its own volume management
integrated in. You can mix disk sizes in a zpool, but you ideally
use the same size in each top level vdev (stripe), as within a top
level vdev, it will treat all disks as though they are the size
of the smallest one.
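
For instance, a minimal sketch (pool and device names are assumptions): three
of the 500GB drives in a single-parity raidz vdev give roughly 1TB usable, and
a larger replacement disk in that vdev is only used as if it were 500GB, as
described above.

   # single-parity raidz pool from three equal-sized drives
   sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

   # check layout and health; scrub now and again to exercise the checksums
   sudo zpool status tank
   sudo zpool scrub tank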

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#20

On 07/02/16 12:36, Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
Linux will do some sort of RAID without need for a raid hardware solution.


This chipset isn't hardware RAID - its BIOS has minimal support to allow
the PC BIOS to boot from a RAID set. Once the OS takes over from the
BIOS i/o functions, the OS needs a driver to perform the RAID function.
The RAID management data is all proprietary as far as I know - at least,
they didn't publish it ~10 years ago when we were testing this chipset,
so if you use the BIOS RAID feature to boot from a RAID set, you have to
use their OS drivers.

But I question the value of RAID versus say a simple mirror the
important software on a nightly basis...


Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


that is fair enough, but then you have to replace it with a new one of
the same basic type.


--
Bureaucracy defends the status quo long past the time the quo has lost
its status.

Laurence Peter


#21

On Sat, 06 Feb 2016 14:58:34 +0000, John Rumm wrote:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.

Performance is not that critical, but I would like fault tolerance. Not
too fussed about uptime. So it needs to be a RAID setup of some sort
that can survive any individual drive failure (e.g. RAID 5 or 6), but it
can be shut down for maintenance etc. without any worries - so I don't
need to worry about hot swap or redundant components.

A small low power mobo in an old PC case could be a starting point, or
for that matter, even a Raspberry Pi 2 B or similar level single board
computer, but that will soon run out of SATA ports (or not have any to
start with). One option that springs to mind would be a powered USB hub,
and a bunch of drive caddies which would be a cheap way of adding lots
of drives if required.

That then raises the question of software to drive it... How workable
would the various MD style RAID admin tools and file systems be at
coping with drives mounted on mixed hardware interfaces - say a mix of
SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
hardware?


Watching with interest as I am slowly collecting 2.5" SATA HDDs as I
upgrade to SSDs.

I use
http://www.amazon.co.uk/gp/product/B0037SHEAQ?
psc=1&redirect=true&ref_=oh_aui_detailpage_o08_s00
to fit two 2.5" drives into a 3.5" hole in a chassis.

A recommendation for a SATA3 add on card for PCIe would be good, as I am
running out of SATA3 ports on my main chassis.

For an old box, I assume SATA 2 would be sufficient because the bus
wouldn't provide SATA3 transfer speeds anyway (I have some recollection of
being told this on uk.comp.homebuilt some time back).

Cheers


Dave R

--
Windows 8.1 on PCSpecialist box
#22

On 07/02/16 13:46, David wrote:
For an old box, I assume SATA 2 would be sufficient because the bus
wouldn't provide SATA3 transfer speeds anyway (I have some recollection of
being told this on uk.comp.homebuilt some time back).


It's doubtful whether overall SATA3 makes a lot of difference to spinning
rust, although it for sure does to SSD!

You will probably be limited by the network anyway for NAS unless you
have gigabit

I can easily saturate a 100Mbps link.

MM. Looking at the specs SATA2 goes up to 2.4Gbps - so could saturate a
gigabit link too.

So:

- Not much CPU grunt needed.
- SATA 2 fine
- As many SATA ports as possible
- Best ethernet you can
- Enough RAM to provide sane caching. probably 512MB at least.
- OS to glue it all together - headless servers? Debian really.
- Rsync, software RAID or hardware RAID depending on application and
prejudice.
- STRONGLY RECOMMEND compiling up latest 'minidlna' if you have a smart
TV with DLNA support, to make a place to dump ALL your videos (see the
config sketch below).

RIPPING DVDs to MP4s and putting them on the server makes for a vastly
easier experience than trying to find the right DVD.
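
A minimal sketch of the minidlna side, assuming Debian's packaged minidlna is
recent enough and that /srv/media/video is where the rips live (both are
assumptions):

   # install, point it at the video dump, and restart
   sudo apt-get install minidlna
   echo 'media_dir=V,/srv/media/video' | sudo tee -a /etc/minidlna.conf
   echo 'friendly_name=NAS' | sudo tee -a /etc/minidlna.conf
   sudo service minidlna restart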


--
"It is an established fact to 97% confidence limits that left wing
conspirators see right wing conspiracies everywhere"
#23

En el artículo , Andrew Gabriel
escribió:

[an admirably restrained reply]

My apologies Andrew, you're quite right. Getting confused in my
advancing years. Ignore me.

I'd got your Silicon Image 3112 controller (of which I have installed
and used many - still got a couple in my bits box) confused with the
Symbios Logic 53c810 narrow SCSI controller, which was a popular, cheap,
quick and reliable alternative to Adaptec cards back in the day.

Of course, you're quite right that there are no termination issues with
SATA because it's point-to-point. But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.

I haven't seen any references to SiI3112 causing silent corruption, and
google isn't helping. Do you have a cite, please? Ta.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#24

En el artículo , David
escribió:

A recommendation for a SATA3 add on card for PCIe would be good, as I am
running out of SATA3 ports on my main chassis.


SATA3 is only needed for SSDs. You're wasting your time connecting hard
drives to SATA3 ports; SATA2 has plenty of bandwidth.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#25

On Sun, 07 Feb 2016 13:21:41 +0000, The Natural Philosopher wrote:

On 07/02/16 12:36, Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
Linux will do some sort of RAID without need for a raid hardware
solution.


This chipset isn't hardware RAID - its BIOS has minimal support to
allow the PC BIOS to boot from a RAID set. Once the OS takes over from
the BIOS i/o functions, the OS needs a driver to perform the RAID
function. The RAID management data is all proprietary as far as I know
- at least,
they didn't publish it ~10 years ago when we were testing this chipset,
so if you use the BIOS RAID feature to boot from a RAID set, you have
to use their OS drivers.

But I question the value of RAID versus say a simple mirror the
important software on a nightly basis...


Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


that is fair enough, but then you have to replace it with a new one of
the same basic type.


I mirror partitions rather than disks - so any disk will do as long as it
is sufficiently big.
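
A minimal sketch of that arrangement (device and partition names are
assumptions):

   # mirror two partitions of the same size - the underlying disks can differ,
   # the second just needs a partition at least as big as the first
   sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1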


#26

On 08/02/16 08:49, Huge wrote:
On 2016-02-07, Mike Tomlinson wrote:

[14 lines snipped]

But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.


I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


we used to plug third party disks into SUNs all the time.

As long as the terminator was in place, not a problem


--
Outside of a dog, a book is a man's best friend. Inside of a dog it's
too dark to read.

Groucho Marx


#27

En el artículo , Huge
escribió:

I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


The slower stuff, up to ultra-wide SCSI, mixed-and-matched without
problems if you used decent cables and active terminators, and checked
carefully for on-board termination on devices before fitting them.
Remember these funny in-line resistor packs?

http://www.warp12racing.com/images/a.../ataritt18.jpg

I ran a Digital/Compaq Alpha server running Tru64 UNIX with 7 disks in
an external "pedestal" and four DLT tape drives attached to another
card. That was the main server for the department for many years.

Things got a bit hinky with U160, not helped by the Adaptec U160
adapters being a pile of ****e. It was disappointing after the 2940UW
card, which Just Worked with about everything. I still have a couple in
the bits box, can't bear to chuck them out.

I gave up on daisy-chaining with U320 and reverted to point-to-point
(one card, one cable, one device.) It just wasn't reliable enough. The
adapters were cheap enough for it to not matter.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#28

On 08/02/16 08:49, Huge wrote:
On 2016-02-07, Mike Tomlinson wrote:

[14 lines snipped]

But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.


I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


Jumpers and terminators - not many got the science of what had to be
done when different/mixed SCSI widths, cables, number of cables,
position of the controller and auto termination were taken into consideration.

I was, er, perfect....

The glory days of the 1542 and 2940UW ....

--
Adrian C
#29

On 08/02/2016 10:09, Adrian Caspersz wrote:
On 08/02/16 08:49, Huge wrote:
On 2016-02-07, Mike Tomlinson wrote:

[14 lines snipped]

But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.


I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


Jumpers and terminators - not many got the science of what had to be
done when different/mixed SCSI widths, cables, number of cables,
position of the controller and auto termination were taken into consideration.

I was, er, perfect....


You missed active/passive terminators.


The glory days of the 1542 and 2940UW ....


#30

On 08/02/16 14:44, dennis@home wrote:
On 08/02/2016 10:09, Adrian Caspersz wrote:
On 08/02/16 08:49, Huge wrote:
On 2016-02-07, Mike Tomlinson wrote:

[14 lines snipped]

But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.

I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


Jumpers and terminators - not many got the science of what had to be
done when different/mixed SCSI widths, cables, number of cables,
position of the controller and auto termination were taken into
consideration.

I was, er, perfect....


You missed active/passive terminators.


Yes and sacrificing goats. Cripes, barely remembered that.

http://www.staff.uni-mainz.de/neuffer/scsi/fun.html


--
Adrian C


#31

On 08/02/2016 08:49, Huge wrote:
On 2016-02-07, Mike Tomlinson wrote:

[14 lines snipped]

But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.


I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


Same here, used it for years, and if the termination was "correct" it
was rare to get a problem. Part of the difficulty was "not quite
correct" termination... say where you had a host adaptor with hard wired
termination at one end of the bus, and then both internal and external
connections. Then you ended up with passive termination not quite at the
end of the bus as seen by an external device. (also some cables were
less good than others)


--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#32

On 08/02/2016 09:44, Mike Tomlinson wrote:
En el artículo , Huge
escribió:

I never had any trouble with it, and I supported SCSI based Sun stuff
for many years. Perhaps because it was all single vendor?


The slower stuff, up to ultra-wide SCSI, mixed-and-matched without
problems if you used decent cables and active terminators, and checked
carefully for on-board termination on devices before fitting them.
Remember these funny in-line resistor packs?

http://www.warp12racing.com/images/a.../ataritt18.jpg

I ran a Digital/Compaq Alpha server running Tru64 UNIX with 7 disks in
an external "pedestal" and four DLT tape drives attached to another
card. That was the main server for the department for many years.

Things got a bit hinky with U160, not helped by the Adaptec U160
adapters being a pile of ****e. It was disappointing after the 2940UW
card, which Just Worked with about everything. I still have a couple in
the bits box, can't bear to chuck them out.


I still have one in my "graphics / games" machine... only talks to my
scanner(s) these days.



--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#33

On Tue, 09 Feb 2016 10:33:36 +0000, John Rumm
wrote:
snip

Same here, used it for years, and if the termination was "correct" it
was rare to get a problem. Part of the difficulty was "not quite
correct" termination... say where you had a host adaptor with hard wired
termination at one end of the bus, and then both internal and external
connections. Then you ended up with passive termination not quite at the
end of the bus as seen by an external device. (also some cables were
less good than others)



If I had any internal devices I would just take the internal
termination off the card and use an external terminator. Then if you
did also have external devices you replaced the terminator with the
cable and put the terminator back on the remote end. ;-)

Slightly more complex was a wide bus and devices with a narrow device
on the end. ;-)

But, follow the rules and as you say it generally 'just worked'.

Cheers, T i m
#34

On 07/02/2016 15:10, Mike Tomlinson wrote:
Of course, you're quite right that there are no termination issues with
SATA because it's point-to-point. But my comments about SCSI
termination being a black art stand. I've worked with SCSI for many
years from the early SASI devices, right up to U320 SCSI.


I'm another one who never saw silent SCSI problems from termination. I
started with SASI, and got out of the design side about SCSI-2 - and
never saw anything like that. It either worked, or not. After all it
does have parity. You'd have to try pretty hard to get data errors but
no parity errors.

Andy
#35

In article ,
The Natural Philosopher writes:
On 07/02/16 12:36, Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
Linux will do some sort of RAID without need for a raid hardware solution.


This chipset isn't hardware RAID - its BIOS has minimal support to allow
the PC BIOS to boot from a RAID set. Once the OS takes over from the
BIOS i/o functions, the OS needs a driver to perform the RAID function.
The RAID management data is all proprietary as far as I know - at least,
they didn't publish it ~10 years ago when we were testing this chipset,
so if you use the BIOS RAID feature to boot from a RAID set, you have to
use their OS drivers.

But I question the value of RAID versus say a simple mirror the
important software on a nightly basis...


Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


that is fair enough, but then you have to replace it with a new one of
the same basic type.


With ZFS, the new disk needs to have a sector size <= the other disk,
and number of sectors >= the other disk. Other than that, it doesn't care.
If the new disk is bigger, the excess isn't used unless/until you replace
the other disk with a bigger one too.
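
A minimal sketch of such a replacement (pool and device names are
assumptions):

   # swap one side of the mirror for any disk that satisfies the above
   sudo zpool replace tank /dev/sdc /dev/sdf
   sudo zpool status tank    # shows resilver progress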

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]


#36

On 13/02/16 00:08, Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
On 07/02/16 12:36, Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
Linux will do some sort of RAID without need for a raid hardware solution.

This chipset isn't hardware RAID - its BIOS has minimal support to allow
the PC BIOS to boot from a RAID set. Once the OS takes over from the
BIOS i/o functions, the OS needs a driver to perform the RAID function.
The RAID management data is all proprietary as far as I know - at least,
they didn't publish it ~10 years ago when we were testing this chipset,
so if you use the BIOS RAID feature to boot from a RAID set, you have to
use their OS drivers.

But I question the value of RAID versus say a simple mirror the
important software on a nightly basis...

Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


that is fair enough, but then you have to replace it with a new one of
the same basic type.


With ZFS, the new disk needs to have a sector size <= the other disk,
and number of sectors >= the other disk. Other than that, it doesn't care.
If the new disk is bigger, the excess isn't used unless/until you replace
the other disk with a bigger one too.

Isn't ZFS the file system that if you are unlucky, borks itself beyond
all repair possibility, even on RAID?


--
Ideas are more powerful than guns. We would not let our enemies have
guns, why should we let them have ideas?

Josef Stalin
#37

In article ,
The Natural Philosopher writes:
On 13/02/16 00:08, Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
On 07/02/16 12:36, Andrew Gabriel wrote:

Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


that is fair enough, but then you have to replace it with a new one of
the same basic type.


With ZFS, the new disk needs to have a sector size <= the other disk,
and number of sectors >= the other disk. Other than that, it doesn't care.
If the new disk is bigger, the excess isn't used unless/until you replace
the other disk with a bigger one too.

Isn't ZFS the file system that if you are unlucky, borks itself beyond
all repair possibility, even on RAID?


It's pretty difficult to do.
In the early days, it was liable to IDE disks lying about when they
had really committed blocks to disk which hit some home users, but a
feature was added to enable ZFS to backstep through the transaction
commits to find the last one where the disk had really committed all
the i/o, when it has lied about performing later commits and then lost
the data at poweroff. I haven't heard of SATA drives lying about write
commits, and everyone who uses ZFS at large scale will be using SAS
disks which have much better quality firmware anyway.
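
That backstep is what zpool import's recovery options are for - a sketch,
assuming a pool called 'tank' that refuses a normal import:

   # dry run: report whether rewinding to the last good transaction group
   # would work and roughly how much recent data would be discarded
   sudo zpool import -Fn tank

   # do it for real, accepting the loss of the last few seconds of writes
   sudo zpool import -F tank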

I've worked with many hundreds of customers using ZFS, and none have
ever lost any data due to it. It's clearly not impossible if you are
stupid enough and some people are, but in general it's much less likely
than on other filesystems. Above all, you do know when data is corrupt,
which is not the case with most filesystems. Many customers have layered
ZFS on top of their expensive SAN storage arrays so they can tell when
data gets corrupted, which was something that previously usually went
unnoticed until it was too late to restore an uncorrupted copy.

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#38

En el artículo , Andrew Gabriel
escribió:

I've worked with many hundreds of customers using ZFS, and none have
ever lost any data due to it.


A colleague, an experienced system admin, lost all the data on a large
Linux ZFS array when it refused to re-mount the zpool following a sudden
power loss. He was hampered by the lack of decent tools to diagnose the
problem.

I'm unconvinced that zfs is yet sufficiently mature and proven on Linux.
About three years ago I had to choose a file system for a 24-drive array
(48TB total, mounted as a single volume). RAID5 was done in hardware on
the RAID chassis and the external presentation was as one large block
device. I considered ufs, ext3, ext4, xfs, zfs, reiserfs, and btrfs.

ufs, ext3 and ext4 were out of the question as they become horribly
inefficient at sizes over 8TB due to the large cluster size. We were
storing a lot of small files, which would have meant a lot of wasted
space.

zfs and btrfs I rejected (at the time) as too new and unproven, and
lacking in tools to repair filesystems with problems. zfs, though,
would have been highly attractive for its data-healing capabilities.
Today, I would seriously consider using zfs as the user base is far
larger and thus, hopefully, most of the wrinkles will have been ironed
out.

zfs requires a lot of memory to work efficiently.

reiserfs I rejected as unproven and concerns about ongoing development.

That left xfs: stable, developed and refined over 20 years, a reliable
source (SGI), good diagnostic tools, fast, highly efficient with small
files, and well-supported under Linux, so I went with that, and so far,
so good.
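
As a rough sketch of the filesystem end of that setup (the device and mount
point are assumptions):

   # make and mount a large XFS filesystem; inode64 stops inodes being
   # confined to the first 1TB, which matters on multi-TB volumes
   sudo mkfs.xfs -L bigstore /dev/sdb1
   sudo mount -o inode64 /dev/sdb1 /srv/store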

This talk at LinuxTag a couple of years ago goes over the pros and cons:

http://www.linuxtag.org/2013/fileadm...lides/Heinz_Ma
uelshagen_-_Which_filesystem_should_I_use_.e204.pdf

or http://tinyurl.com/k3kngh3

Many customers have layered
ZFS on top of their expensive SAN storage arrays so they can tell when
data gets corrupted, which was something that previously usually went
unnoticed until it was too late to restore an uncorrupted copy.


Sickipedia could have done with that:

(from www.sickipedia.org)

quote
[...] earlier today we experienced a cascade failure on one of the
volumes on our SAN, a drive failed when the array was already rebuilding
to a spare - this took that particular volume offline - it has
effectively failed at this point with a 2 disk failure within the same
24hr period.

The end result of todays issue, as the data on the volume was corrupted
unrecoverably when we forced it back online, is that the data for your
two servers has been lost. Unfortunately the service you have, also
never included backups"
/quote

Not the brightest idea, forcing a failed array back into a volume...

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#39

On 13/02/16 15:42, Mike Tomlinson wrote:
En el artículo , Andrew Gabriel
escribió:

I've worked with many hundreds of customers using ZFS, and none have
ever lost any data due to it.


A colleague, an experienced system admin, lost all the data on a large
Linux ZFS array when it refused to re-mount the zpool following a sudden
power loss. He was hampered by the lack of decent tools to diagnose the
problem.

I'm unconvinced that zfs is yet sufficiently mature and proven on Linux.
About three years ago I had to choose a file system for a 24-drive array
(48TB total, mounted as a single volume). RAID5 was done in hardware on
the RAID chassis and the external presentation was as one large block
device. I considered ufs, ext3, ext4, xfs, zfs, reiserfs, and btrfs.

ufs, ext3 and ext4 were out of the question as they become horribly
inefficient at sizes over 8TB due to the large cluster size. We were
storing a lot of small files, which would have meant a lot of wasted
space.

zfs and btrfs I rejected (at the time) as too new and unproven, and
lacking in tools to repair filesystems with problems. zfs, though,
would have been highly attractive for its data-healing capabilities.
Today, I would seriously consider using zfs as the user base is far
larger and thus, hopefully, most of the wrinkles will have been ironed
out.

zfs requires a lot of memory to work efficiently.

reiserfs I rejected as unproven and concerns about ongoing development.


Good call - apart from Hans Reiser being a wife murderer (convicted),
reiserfs was an unstable piece of junk that ate a load of / filesystems
on our desktop PCs (no real data lost, but much swearing and reinstalling).

That left xfs: stable, developed and refined over 20 years, a reliable
source (SGI), good diagnostic tools, fast, highly efficient with small
files, and well-supported under Linux, so I went with that, and so far,
so good.


You forgot JFS - but that's locked up on me in the past (well, jammed
the filesystem into emergency RP mode due to an assert failure).

XFS has been extremely well behaved and I use it for big (up to 1TB)
filestores at work. ext4 for database backends.

This talk at LinuxTag a couple of years ago goes over the pros and cons:

http://www.linuxtag.org/2013/fileadm...lides/Heinz_Ma
uelshagen_-_Which_filesystem_should_I_use_.e204.pdf

or http://tinyurl.com/k3kngh3

Many customers have layered
ZFS on top of their expensive SAN storage arrays so they can tell when
data gets corrupted, which was something that previously usually went
unnoticed until it was too late to restore an uncorrupted copy.


Sickipedia could have done with that:

(from www.sickipedia.org)

quote
[...] earlier today we experienced a cascade failure on one of the
volumes on our SAN, a drive failed when the array was already rebuilding
to a spare - this took that particular volume offline - it has
effectively failed at this point with a 2 disk failure within the same
24hr period.

The end result of todays issue, as the data on the volume was corrupted
unrecoverably when we forced it back online, is that the data for your
two servers has been lost. Unfortunately the service you have, also
never included backups"
/quote

Not the brightest idea, forcing a failed array back into a volume...


#40

In article ,
Mike Tomlinson writes:
En el artículo , Andrew Gabriel
escribió:

I've worked with many hundreds of customers using ZFS, and none have
ever lost any data due to it.


A colleague, an experienced system admin, lost all the data on a large
Linux ZFS array when it refused to re-mount the zpool following a sudden
power loss. He was hampered by the lack of decent tools to diagnose the
problem.

I'm unconvinced that zfs is yet sufficiently mature and proven on Linux.


ZFS on Linux has lagged someway behind the Illumos based distros
and FreeBSD, but it is catching up. I wouldn't choose Linux to
host a dedicated ZFS fileserver yet, although lots of people
are running ZFS loads on Linux without problems. It's miles
ahead of btrfs which seems to have made little progress for years,
much to the anguish of the Linux distros who are struggling with
not having an Enterprise class filesystem.

ZFS on Linux will get there - more people are working on it than
on any of the other platforms, but if it's the main purpose of the
system, for now choose an OS where ZFS is the native filesystem,
i.e. Illumos based (OmniOS, Openindiana, Nexentastor, etc), or FreeBSD
(including FreeNAS).

About three years ago I had to choose a file system for a 24-drive array
(48TB total, mounted as a single volume). RAID5 was done in hardware on
the RAID chassis and the external presentation was as one large block
device. I considered ufs, ext3, ext4, xfs, zfs, reiserfs, and btrfs.

ufs, ext3 and ext4 were out of the question as they become horribly
inefficient at sizes over 8TB due to the large cluster size. We were
storing a lot of small files, which would have meant a lot of wasted
space.

zfs and btrfs I rejected (at the time) as too new and unproven, and
lacking in tools to repair filesystems with problems. zfs, though,
would have been highly attractive for its data-healing capabilities.
Today, I would seriously consider using zfs as the user base is far
larger and thus, hopefully, most of the wrinkles will have been ironed
out.

zfs requires a lot of memory to work efficiently.

reiserfs I rejected as unproven and concerns about ongoing development.

That left xfs: stable, developed and refined over 20 years, a reliable
source (SGI), good diagnostic tools, fast, highly efficient with small
files, and well-supported under Linux, so I went with that, and so far,
so good.

This talk at LinuxTag a couple of years ago goes over the pros and cons:

http://www.linuxtag.org/2013/fileadm...lides/Heinz_Ma
uelshagen_-_Which_filesystem_should_I_use_.e204.pdf

or http://tinyurl.com/k3kngh3

Many customers have layered
ZFS on top of their expensive SAN storage arrays so they can tell when
data gets corrupted, which was something that previously usually went
unnoticed until it was too late to restore an uncorrupted copy.


Sickipedia could have done with that:

(from www.sickipedia.org)

quote
[...] earlier today we experienced a cascade failure on one of the
volumes on our SAN, a drive failed when the array was already rebuilding
to a spare - this took that particular volume offline - it has
effectively failed at this point with a 2 disk failure within the same
24hr period.

The end result of todays issue, as the data on the volume was corrupted
unrecoverably when we forced it back online, is that the data for your
two servers has been lost. Unfortunately the service you have, also
never included backups"
/quote

Not the brightest idea, forcing a failed array back into a volume...


--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]