In the quest to keep up with my file serving needs here at Crapple HQ, I've very recently looked to upgrade our current storage solution. The problem isn't performance - in fact, quite the contrary. It's the requirement for space, especially to hold high quality (and increasingly high definition) video, and, because I'm paranoid, mirroring of that data. In the past we had two servers that mirrored each other and were located in different parts of the HQ (offsite backup is something I'm considering for particularly sensitive stuff).
Anyhow, before, a big server did all the main file serving duties (documents, music, books) while my Qnap Linux box served video files from a 3x 1Tb RAID 5 array and did backup duties for the other stuff. I figured that videos can be re-acquired and mirroring such amounts of data would just be too expensive, so RAID 5 would suffice (RAID 6 would be better, but I only had three slots free on the Qnap 409).
Now what I've done is move the video duties from the Qnap to the main server, but not before upgrading the disk subsystem on that machine. Gone are the old drives in favour of four 2.5" SAS drives (connected to a Dell Perc 6 controller). The OS sits on an old 10,000 RPM WD Raptor 36Gb drive (I know, it's not a very good drive, but it's fast and reliable). I've also put two 750Gb WD 7,200 RPM drives in there for a network "public" share (patches/configuration stuff). Currently it's in RAID 0 as part of the migration, but that will be changed to RAID 1. The video array has now been expanded.
Previously the three 1Tb drives were as follows: 2x Western Digital Green Power units and 1x Samsung F1. I've had terrible reliability problems with the F1 - two out of three have failed on me. Granted, it's not a big sample size, but I don't want to take my chances. So to complement the two WD drives I bought three more of the newer WD Green Power units with 32MB cache and 333Gb platters. In the end it tops out at around 3.8Tb after journaling and redundancy - over double what I had before. The best thing is that I managed to source an ICY DOCK 5 port hot swap bay, which makes the thing easy as pie to maintain.
So, all in all, Jimbob has the following disk subsystem:
2x 146Gb 10,000 RPM SAS (RAID 0) for Photos
2x 73Gb 10,000 RPM SAS (RAID 0) for Documents and music
2x 750Gb 7,200 RPM SATA (RAID 0) for public network files
5x 1Tb 5,400 RPM SATA (RAID 5) for videos
1x 36Gb 10,000 RPM SATA for OS
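For the curious, the usable space of each array above follows the standard RAID formulas. Here's a quick sketch (raw capacity before filesystem and journaling overhead, which is what shaves the big video array down to around 3.8Tb in practice):

```python
# Rough usable capacity for common RAID levels (raw, before filesystem overhead).
def raid_capacity(level: int, drives: int, size_gb: int) -> int:
    """Approximate usable capacity in GB for a RAID array."""
    if level == 0:              # striping: every drive's space is usable
        return drives * size_gb
    if level == 1:              # mirroring: half the space survives
        return (drives // 2) * size_gb
    if level == 5:              # one drive's worth of space lost to parity
        return (drives - 1) * size_gb
    raise ValueError("unsupported RAID level")

print(raid_capacity(5, 5, 1000))  # 5x 1Tb RAID 5 -> 4000 GB raw usable
print(raid_capacity(0, 2, 750))   # 2x 750Gb RAID 0 -> 1500 GB
print(raid_capacity(5, 3, 500))   # the Qnap's 3x 500Gb RAID 5 -> 1000 GB
```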
The photos and documents shares are in RAID 0 because every 10 minutes they get backed up remotely to a RAID 5 array (in the Qnap). We just don't change files often enough for this to be a problem. The Qnap will have 3x WD 500Gb RE2 7,200 RPM SATA disks, giving around 1Tb of storage.
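The 10-minute backup is nothing fancy - a cron entry shuffling the shares over with rsync does the trick. Something along these lines (the paths and hostname here are illustrative, not my exact layout):

```
# /etc/crontab - every 10 minutes, mirror the RAID 0 shares to the Qnap.
# -a preserves permissions/timestamps, --delete keeps the mirror exact.
*/10 * * * *  root  rsync -a --delete /shares/photos /shares/documents qnap:/backup/
```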
Overall I'm quite happy with everything so far; let's hope the WD Green Power drives work well, as the power savings are something I'm looking forward to embracing (being green 'n all).
Recently I came to the conclusion that keeping a complete mirror of my videos volume will be almost unsustainable. So instead of going the RAID 51 route, I decided to install Linux on my QNAP 409, thanks to this very useful guide.
So now that I have a full-on (albeit slow) Linux system, I went on to install Samba, as you do. Of course with Debian it's pretty simple - apt-get install samba will do it. Anyhow, I noticed that out of the box the throughput was pretty terrible - 500k/sec on 100mbit/sec Ethernet. The Samba protocol has a lot of overhead (unlike FTP), so I wasn't expecting anything near wire-speed, but 500k/sec is pathetic and means I can't stream HD video. So I did some googling and found a site with some options for Samba - two of which were already in my original config file from back yonder.
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=8192 SO_SNDBUF=8192
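For anyone wanting to try this, that line goes in the [global] section of /etc/samba/smb.conf. Something like the below (the share definition is just an example, not my config), followed by a restart of the Samba daemon:

```
[global]
   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=8192 SO_SNDBUF=8192

[videos]
   path = /share/videos
   read only = yes
```

On Debian, /etc/init.d/samba restart picks up the change.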
This worked a treat - going from 500k/sec to a steady 7.5-8.5MB/sec. Unsurprisingly, CPU usage on the little QNAP has gone up (only to around 18% on the SMB host process, though), but it means I can finally stream videos without any trouble.
As a sidenote, some people have found that older kernels can cause speed issues. For the record I'm using 2.6.26 for the Orion arch.