UK diy (uk.d-i-y) For the discussion of all topics related to diy (do-it-yourself) in the UK. All levels of experience and proficiency are welcome to join in to ask questions or offer solutions.

Thread: DIY NAS question for the *nix experts
#1 - John Rumm

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones. It would be nice to find a way of making use
of them *cheaply*. It would be nice to build a NAS platform for use as a
backup repository, and for perhaps archiving stuff like films.

Performance is not that critical, but I would like fault tolerance. Not
too fussed about uptime. So it needs to be a RAID setup of some sort
that can survive any individual drive failure (e.g. RAID 5 or 6), but it
can be shut down for maintenance etc. without any worries - so I don't
need to worry about hot swap or redundant components.

A small low-power mobo in an old PC case could be a starting point, or
for that matter, even a Raspberry Pi 2 B or similar-level single board
computer, but that will soon run out of SATA ports (or not have any to
start with). One option that springs to mind would be a powered USB hub
and a bunch of drive caddies, which would be a cheap way of adding lots
of drives if required.

That then raises the question of software to drive it... How workable
would the various MD style RAID admin tools and file systems be at
coping with drives mounted on mixed hardware interfaces - say a mix of
SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
hardware?

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk                      |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk                     |
\=================================================================/
#2 - Mike Tomlinson

In article, John Rumm wrote:

> I seem to be sinking under a pile of spare hard drives at the moment -
> typically 2.5" 500GB ones.


My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick them in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#3 - bm


"Mike Tomlinson" wrote in message
...
En el artículo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.


My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick then in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.


Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0rZ
maybe I could use FreeNAS.


#4 - Andrew Gabriel

In article, "bm" writes:

> Tis true, a 4 bay case is more expensive than buying a larger drive.
> I'm wondering whether to use an old PC, bung in my drives, get something
> like this -
> http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0rZ
> maybe I could use FreeNAS.


Steer well clear of the 3112/3114 SATA controllers!
They were very early SATA controllers that pretend to be IDE (PATA) to
the OS, so they can be used by OS's which didn't know about SATA drives.
(Note they are not PCIe either - they predate that too.)
They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#5 - bm


"Andrew Gabriel" wrote in message
...
In article . com,
"bm" writes:
Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


Steer well clear of the 3112/3114 SATA controllers!
They were very early SATA controllers that pretend to be IDE (PATA) to
the OS, so they can be used by OS's which didn't know about SATA drives.
(Note they are not PCIe either - they predate that too.)
They are well known for silent data corruption (great for testing ZFS
self-healing, but useless for any other filesystems).


Terrific, cheers for that.
Hmmmmm.




#6 - Mike Tomlinson

In article, Andrew Gabriel wrote:

> They are well known for silent data corruption (great for testing ZFS
> self-healing, but useless for any other filesystems).


Balls. I've used many SiI3112-based cards over the years with nary a
problem. They were a great alternative to the more expensive Adaptecs,
so much so that some motherboards included the Silicon Image BIOS in the
system BIOS, and performed better.

Silent data corruption is almost always due to improper termination.

SCSI termination is a black art. You need sacrificial goats, black
candles, a pentagram in the right shade of chalk, and to be standing on
the correct foot and facing in precisely the right direction while
chanting your incantation to the SCSI gods.

Apart from that, it's **** easy.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#7 - Andrew Gabriel

In article, Mike Tomlinson writes:

> Balls. I've used many SiI3112-based cards over the years with nary a
> problem. They were a great alternative to the more expensive Adaptecs,
> so much so that some motherboards included the Silicon Image BIOS in the
> system BIOS, and performed better.


The Silicon Image BIOS is just to support managing and booting from RAID.
If you are not using RAID (or more specifically, not booting from RAIDed
disks), you don't need the SI BIOS, and the controller will still work
as it looks like an IDE controller. For the add-in cards, they produce
two BIOS's, one which supports RAID, and one which doesn't, and you can
flash either into the cards.

> Silent data corruption is almost always due to improper termination.


Improper termination causes transport errors. Transport errors are not
silent - both ends get to know they happened and can take corrective
action.

Silent data corruption is when a block write completes without errors,
and a later read of the same block completes without errors, but the
data returned is not the last data written to that block. These can
only be detected by something higher up the stack (such as ZFS which
checksums every block on the disk, or in the absence of ZFS, by an
application which can detect data corruption).
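To make that concrete (a minimal sketch, assuming a ZFS pool with the
hypothetical name "tank"): a scrub forces every block to be re-read and
verified against its checksum, which is how this sort of corruption
surfaces.

    # Re-read and verify every block in the pool against its checksum.
    zpool scrub tank

    # The CKSUM column counts blocks that read back "successfully" but
    # failed their checksum - i.e. silent corruption.
    zpool status -v tank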

> SCSI termination is a black art. You need sacrificial goats, black
> candles, a pentagram in the right shade of chalk, and to be standing on
> the correct foot and facing in precisely the right direction while
> chanting your incantation to the SCSI gods.


SCSI termination wasn't a black art, and these cards are not SCSI anyway.
SATA phy is point-to-point, so there's no termination configuration to
do - it's built into the phy chip at each end.

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#8 - The Natural Philosopher

On 06/02/16 17:00, bm wrote:
[snip]

> Tis true, a 4 bay case is more expensive than buying a larger drive.
> I'm wondering whether to use an old PC, bung in my drives, get something
> like this -
> http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0rZ
> maybe I could use FreeNAS.


Linux will do some sort of RAID without need for a RAID hardware
solution.

But I question the value of RAID versus, say, simply mirroring the
important software on a nightly basis...

I have to say I am running an ATOM server with a GB of RAM and it's more
than fast enough for the trivial load that file and web serving places
on it. Cases can be picked up for nothing. My PC supplier has tons of
old cases, and would probably sling me a load of drive bay kits for
peanuts.

And an obsolete 10 year old XP style machine that has been junked for
being too old and slow is all you need. Tons of people have this stuff
lying around. Offer them a pint and it's yours...


--
Canada is all right really, though not for the whole weekend.

"Saki"
#9 - John Rumm

On 06/02/2016 23:23, The Natural Philosopher wrote:

> And an obsolete 10 year old XP style machine that has been junked for
> being too old and slow is all you need. Tons of people have this stuff
> lying around. Offer them a pint and it's yours...



I am trying to get rid of it, not get more.... I will take the pint
though ;-)

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk                      |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk                     |
\=================================================================/
#10



"The Natural Philosopher" wrote in message
...
On 06/02/16 17:00, bm wrote:
"Mike Tomlinson" wrote in message
...
En el artÃ*culo , John
Rumm escribió:

I seem to be sinking under a pile of spare hard drives at the moment -
typically 2.5" 500GB ones.

My immediate thought is that, by the time you've bought the hardware
needed to achieve this (caddies, etc. especially, as you mention them)
it would be cheaper to buy a few big drives. Buy just three and you can
do RAID5 (=fault tolerance).

Look for a cheap NAS on fleabay, or stick then in a cheap case. Use a
separate small drive or SSD for the OS, then you can upgrade/maintain
that without affecting the data on the RAIDed disks. I take them out of
my Microserver when doing OS upgrades, etc. so no "accident" can befall
them.


Tis true, a 4 bay case is more expensive than buying a larger drive.
I'm wondering whether to use an old PC, bung in my drives, get something
like this -
http://www.ebay.co.uk/itm/4-Port-SAT...AOSwSdZWd0r Z
maybe I could use FreeNAS.


Linux will do some sort of RAID without need for a raid hardware solution.


So will Win.

> But I question the value of RAID versus, say, simply mirroring the
> important software on a nightly basis...


You do get rather more total storage capacity with
RAID tho it's certainly not as simple if a drive dies.

> I have to say I am running an ATOM server with a GB of RAM and it's more
> than fast enough for the trivial load that file and web serving places
> on it. Cases can be picked up for nothing. My PC supplier has tons of
> old cases,


Not many of them take all that many drives tho.

> and would probably sling me a load of drive bay kits for peanuts.


They only cost peanuts from aliexpress anyway.

> And an obsolete 10 year old XP style machine that has been junked for
> being too old and slow is all you need. Tons of people have this stuff
> lying around. Offer them a pint and it's yours...


Not ideal power use wise tho.



#11 - Andrew Gabriel

In article, The Natural Philosopher writes:

> Linux will do some sort of RAID without need for a RAID hardware
> solution.

This chipset isn't hardware RAID - its BIOS has minimal support to allow
the PC BIOS to boot from a RAID set. Once the OS takes over from the
BIOS i/o functions, the OS needs a driver to perform the RAID function.
The RAID management data is all proprietary as far as I know - at least,
they didn't publish it ~10 years ago when we were testing this chipset,
so if you use the BIOS RAID feature to boot from a RAID set, you have to
use their OS drivers.
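(A hedged aside, not from the thread: on Linux, the dmraid tool
understands several of these proprietary BIOS-RAID metadata formats,
including Silicon Image's, so such sets can sometimes be read without
the vendor driver:)

    # List any BIOS "fake RAID" metadata found on the attached disks.
    dmraid -r

    # Activate the discovered RAID sets as device-mapper block devices.
    dmraid -ay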

> But I question the value of RAID versus, say, simply mirroring the
> important software on a nightly basis...


Depends what data availability you need. The cost of mirroring is worth
it for me - if a disk dies in the middle of my work, I don't have to
stop work and go and fix it.


--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#12 - The Natural Philosopher

On 07/02/16 12:36, Andrew Gabriel wrote:
[snip]

> Depends what data availability you need. The cost of mirroring is worth
> it for me - if a disk dies in the middle of my work, I don't have to
> stop work and go and fix it.


That is fair enough, but then you have to replace it with a new one of
the same basic type.


--
Bureaucracy defends the status quo long past the time the quo has lost
its status.

Laurence Peter
#13 - John Rumm

On 06/02/2016 15:46, Mike Tomlinson wrote:

> My immediate thought is that, by the time you've bought the hardware
> needed to achieve this (caddies, etc. especially, as you mention them)
> it would be cheaper to buy a few big drives. Buy just three and you can
> do RAID5 (=fault tolerance).


I can get basic USB2 caddies for about £3.50, so say £25 for 3 gig. For
the drives you shove in the main box, one does not even need to use the
box that comes with them, just the SATA header.



--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk                      |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk                     |
\=================================================================/
#14 - Adrian Caspersz

On 06/02/16 14:58, John Rumm wrote:

> I seem to be sinking under a pile of spare hard drives at the moment -
> typically 2.5" 500GB ones. It would be nice to find a way of making use
> of them *cheaply*. [snip]


Here's maybe a thought:

When digital TV first launched, there were queries about how much disk
space it would take to simultaneously record the output of all the TV
muxes.

How many days' continuous recording could you catch with your HDD stash?
I know it's the age of onDemand, but not all content is available that
way.

--
Adrian C
#15 - John Rumm

On 06/02/2016 15:55, Adrian Caspersz wrote:
[snip]

> How many days' continuous recording could you catch with your HDD stash?
> I know it's the age of onDemand, but not all content is available that
> way.


I have about 15 gigs worth of new 500G 2.5" drives, and perhaps 3 or 4
gigs worth of 3.5" ones. That won't buy you that much time if recording
all the muxes - especially the HD ones.



--
Cheers,

John.

/================================================== ===============\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\================================================= ================/


#16 - T i m

On Sat, 06 Feb 2016 14:58:34 +0000, John Rumm wrote:

> I seem to be sinking under a pile of spare hard drives at the moment -
> typically 2.5" 500GB ones. It would be nice to find a way of making use
> of them *cheaply*. [snip]



Yes and no (but probably more no as a solution for you). ;-)

My current server is an Dual Core Atom running 3 x 500G 2.5" SATA
drives on Windows Home Server V1. Two SATA are on board and there are
another 3 on a PCI card (1 used).

They used a technique that sounds like it would suit your needs (as
it still suits mine) in that you simply add drives to a pool (it's
easy to do because it's Windows <g>) and they can be any size or
interface. So, an old mobo with 4 SATA and 2 PATA ports could use 6
drives straight away. It was called Drive Extender, and the good thing
was the drives were just running straight NTFS, so each could be
read independently if required (unlike a single drive from a RAID 5
array).

As soon as the system detects a new drive it asks you how you want to
use it, either by adding it to the / a pool or as a backup drive. If
you have an existing pool of say 1.5TB and then you add another 500G
drive your pool then becomes 2TB.

Data redundancy is provided by folder mirroring where a mirrored
folder will be mirrored across two separate drives.

If a drive starts to play up (or you want to replace it with a bigger
one), you just remove it from the pool (all the data will
automatically be migrated to the remaining drives), you take it out,
fit the replacement and join it back into the pool.

Mine has been running every day for (checks) 1,812 days:

https://dl.dropboxusercontent.com/u/5772409/WHS_V1.jpg

However, it is woken up by the first (Windows / Mac) client that turns
on and goes to sleep after the last client has shut down, assuming
there is no ongoing network or CPU activity above a certain threshold
or an unfinished torrent etc.

It also backs up all the Windows clients every day, to the point where
if a client's drive fails catastrophically, I can replace the drive
and re-image the machine over the LAN in less than an hour and a few
clicks (and have done so a couple of times so far).

Or you can browse the list of backups available for each machine and
get a single file from, say, six months previous, depending on your
backup settings.

The backup system knows when it's backing up identical files from
different machines so only stores them once.

MS dropped WHS V1 (based on Windows Server 2003) for WHS 2011 (based on
Windows Server 2008 R2), but you can still find copies on the likes of
eBay.

I initially tried making a Linux server but gave up long before I even
considered the automatic (image) backup or all of the other features
that were so easy for me to install on WHS. I just wanted a NAS / file
server, not a new geeky hobby and was happy to pay the 45 or so quid
for the privilege.

Cheers, T i m

p.s. You can still get drive pooling on other Windows solutions (I
don't know which, I'm not a Windows fanatic) and you can also get
independent solutions like Drive Bender:

http://www.division-m.com/drivebender/
http://www.division-m.com/videos/drivebender-demo1/
#17 - Tim Watts

On 06/02/16 14:58, John Rumm wrote:
[snip]

> That then raises the question of software to drive it... How workable
> would the various MD style RAID admin tools and file systems be at
> coping with drives mounted on mixed hardware interfaces - say a mix of
> SATA and USB? Has anyone tried multiple budget SATA cards on stock PC
> hardware?


Linux MD (RAID) does not care what the interfaces are - it works at the
block device level, so as long as your devices appear, it will work fine.

But of course you are going to have to balance the sizes for RAID 1/5/6,
which might not be totally convenient. MD RAID can work with partitions
(it's just another block device), so that is one get-out; see the sketch
below.
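For instance (a minimal sketch, not from the thread; the device names
/dev/sda3, /dev/sdb1 and /dev/sdc1 are hypothetical, with sdc being a
USB-attached drive):

    # Build a 3-device RAID5 array from a mix of SATA partitions and a
    # USB drive; md only sees block devices, not the interfaces behind
    # them.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sda3 /dev/sdb1 /dev/sdc1

    # Put a filesystem on the array and watch the initial sync progress.
    mkfs.ext4 /dev/md0
    cat /proc/mdstat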

However, ZFS might be a more interesting choice.

For something like this, I would (and do) use Debian. ZFS is pretty
solid on Debian (but is not "native": you need to include an extra
repository - see http://zfsonlinux.org/debian.html ) and MD + LVM is
absolutely rock solid.
#18 - Andrew Gabriel

In article, Tim Watts writes:

> However, ZFS might be a more interesting choice.
>
> For something like this, I would (and do) use Debian. ZFS is pretty
> solid on Debian (but is not "native": you need to include an extra
> repository - see http://zfsonlinux.org/debian.html ) and MD + LVM is
> absolutely rock solid.


Just to point out (and I'm sure Tim knows this anyway), you don't
normally use MD/LVM with ZFS - it has its own volume management
integrated in. You can mix disk sizes in a zpool, but you ideally
use the same size in each top level vdev (stripe), as within a top
level vdev, it will treat all disks as though they are the size
of the smallest one.
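As a minimal sketch of what that means in practice (pool name and device
names are hypothetical): a pool built from two raidz1 vdevs, where each
vdev's usable size is governed by its smallest member, so like-sized
disks are grouped together.

    # Two top-level raidz1 vdevs; within each vdev, every disk is
    # treated as though it were the size of the smallest member.
    zpool create tank \
        raidz1 /dev/sdb /dev/sdc /dev/sdd \
        raidz1 /dev/sde /dev/sdf /dev/sdg

    # Show capacity broken down per top-level vdev.
    zpool list -v tank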

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#19 - David

On Sat, 06 Feb 2016 14:58:34 +0000, John Rumm wrote:

> I seem to be sinking under a pile of spare hard drives at the moment -
> typically 2.5" 500GB ones. [snip]


Watching with interest as I am slowly collecting 2.5" SATA HDDs as I
upgrade to SSDs.

I use
http://www.amazon.co.uk/gp/product/B0037SHEAQ?psc=1&redirect=true&ref_=oh_aui_detailpage_o08_s00
to fit two 2.5" drives into a 3.5" hole in a chassis.

A recommendation for a SATA3 add on card for PCIe would be good, as I am
running out of SATA3 ports on my main chassis.

For an old box, I assume SATA 2 would be sufficient because the bus
wouldn't provide SATA3 transfer speeds anyway (I have some recollection of
being told this on uk.comp.homebuilt some time back).

Cheers


Dave R

--
Windows 8.1 on PCSpecialist box
#20 - The Natural Philosopher

On 07/02/16 13:46, David wrote:

> For an old box, I assume SATA 2 would be sufficient because the bus
> wouldn't provide SATA3 transfer speeds anyway (I have some recollection
> of being told this on uk.comp.homebuilt some time back).


It's doubtful whether overall SATA3 makes a lot of difference to spinning
rust, although it for sure does to SSD!

You will probably be limited by the network anyway for NAS unless you
have gigabit.

I can easily saturate a 100Mbps link.

Hmm. Looking at the specs, SATA2 goes up to 2.4Gbps effective (3Gbps line
rate less the 8b/10b encoding overhead) - so it could saturate a gigabit
link too.

So:

- Not much CPU grunt needed.
- SATA2 fine.
- As many SATA ports as possible.
- Best ethernet you can.
- Enough RAM to provide sane caching; probably 512MB at least.
- OS to glue it all together - headless servers? Debian really.
- Rsync, software RAID or hardware RAID depending on application and
prejudice.
- STRONGLY RECOMMEND compiling up latest 'minidlna' if you have a smart
TV with dlna support, to make a place to dump ALL your videos (a starter
config is sketched below).

RIPPING DVDs to MP4s and putting them on the server makes for a vastly
easier experience than trying to find the right DVD.
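Something along these lines (a minimal sketch of /etc/minidlna.conf; the
paths and friendly name are made up for illustration):

    # /etc/minidlna.conf - starter config
    # 'V,' marks the tree as video-only; adjust the path to suit.
    media_dir=V,/srv/media/videos
    # Name the TV will show for the server.
    friendly_name=DIY-NAS
    # Where minidlna keeps its media database.
    db_dir=/var/cache/minidlna
    # Pick up newly ripped files automatically.
    inotify=yes
    port=8200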


--
"It is an established fact to 97% confidence limits that left wing
conspirators see right wing conspiracies everywhere"


#21 - Mike Tomlinson

In article, David wrote:

> A recommendation for a SATA3 add on card for PCIe would be good, as I am
> running out of SATA3 ports on my main chassis.


SATA3 is only needed for SSDs. You're wasting your time connecting hard
drives to SATA3 ports; SATA2 has plenty of bandwidth.

--
(\_/)
(='.'=) Bunny says: Windows 10? Nein danke!
(")_(")
#22 - dan


> I seem to be sinking under a pile of spare hard drives at the moment -
> typically 2.5" 500GB ones. [snip]


John,

ZFS is your friend! Here's what I'd do in your situation:

- Supermicro Atom (Rangeley) board with plenty of ECC RAM and lots of
SATA ports
- FreeNAS on a USB stick
- All the drives in a massive ZFS pool with redundancy (see the sketch
below)

Supermicro boards aren't cheap, but they will serve you well. Don't be
tempted to skimp on ECC RAM though.
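(On FreeNAS you would normally set this up through the web GUI, but the
equivalent pool at the command line, with hypothetical FreeBSD device
names, would look something like this:)

    # One raidz2 vdev across six drives: any two can fail without data
    # loss.
    zpool create -m /mnt/store store \
        raidz2 da0 da1 da2 da3 da4 da5

    zpool status store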

Dan


