I have been using a 2TB external hard drive attached to a USB port on my RT-N66U router as my NAS solution. The router only supports USB 2.0, and the R/W speeds are under 10MB/s. This works fine for streaming video, even with HD video playing on two of my HTPCs, but there is no redundancy. So I cannibalized a few parts from other builds, bought some Western Digital Reds, and built a NAS using ZFS.

First, some really long-winded thoughts on RAID5 and ZFS.

Both have options for mirroring, striping, and parity. I really wanted some hard numbers on read/write speed between the two, but there isn't a clear way to get accurate ones. The biggest problem is buffering. RAID5 was easy to get numbers on with fio, since it has a flag for writing directly to disk, bypassing the cache. That is not possible with ZFS, which routes everything through its own cache (the ARC), so all that data was unusable for a comparison (something I didn't find out until after completing the RAID5 tests. Oh well, good to be learning). I could have done large file transfers, which only really capture sequential reads and writes, but there are too many variables that impact those. I really wish I could have provided solid figures, since there are none comparing the two online. I will mention, though, that I did play around with large file transfers after basic setup on both and saw a noticeable increase in performance from ZFS. Also, the first transfers I tried were from my other PC, and both setups were limited on read speed by the network, with write speeds on RAID5 just dipping below that limit at 90MB/s. That could all have a lot to do with buffering, though.
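For reference, here is roughly what the RAID5 runs looked like. This is a sketch rather than my exact invocation (the /mnt/raid5 path and file name are just examples); --direct=1 is the flag that bypasses the cache, the very thing ZFS won't allow:

fio --name=seqwrite --filename=/mnt/raid5/fio-test --rw=write --bs=1M --size=4G --direct=1

Swapping --rw=write for --rw=read (or randwrite/randread) covers the other cases.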

ZFS also has other benefits I see over RAID. Caching is hugely integrated out of the box, and you can add flash memory to the pool as a read cache (L2ARC) or as a dedicated log device for the ZIL to speed up synchronous writes, for significant performance gains. Buying another SSD to try this was cost-prohibitive (and it is not recommended to partition the OS's SSD for it either), and as I already mentioned, performance is bottlenecked at the network anyway. There is also quite a bit of information about URE rates being an issue when rebuilding larger RAID5 arrays. There is a level of uncertainty in the community regarding this; here are both sides of the argument for those interested. Either way, research done at the University of Wisconsin leads me to believe ZFS is much less susceptible to UREs, which is another point for ZFS over RAID. I will likely move to double parity soon, and that will be my final point. With RAID you have to back up, destroy the array, and rebuild it with the new drives; with ZFS, adding drives to a pool is a simple zpool command (though converting raidz1 to raidz2 does still mean rebuilding the pool).
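If I do pick up spare SSDs, wiring them in would look something like this. The device names here are hypothetical, and note that a dedicated log device only helps synchronous writes:

zpool add pool1 cache sde
zpool add pool1 log sdf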

Next I’ll show off the build, code name “Minion”:

Fractal Design Mini
Gigabyte GA-B85M (55 bucks!)
Intel i3-4160
16GB G-Skill DDR3-1600
Samsung 850 EVO 120GB
3x Western Digital Red 2TB HDDs

OK, enough talking. Here’s my process on my freshly installed Debian 8.

I started by installing ZFSonLinux, as outlined in this link.

apt-get install lsb-release
wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
dpkg -i zfsonlinux_6_all.deb
apt-get update
apt-get install debian-zfs
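The debian-zfs package builds the kernel module through DKMS, so before touching any disks it's worth a quick sanity check that the module compiled and loads:

dkms status
modprobe zfs
lsmod | grep zfs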

Then create the pool, after identifying the devices with fdisk -l (the -f forces creation even if the disks have leftover partition tables):

zpool create -f pool1 raidz sdb sdc sdd
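A side note: sdX names can shuffle around between boots. ZFS copes with this (it tags the disks and scans for them on import), but if you prefer persistent names you can build the pool from /dev/disk/by-id paths instead. The ata-* names here are placeholders; ls -l /dev/disk/by-id shows the real ones:

zpool create -f pool1 raidz /dev/disk/by-id/ata-WDC_WD20EFRX-1 /dev/disk/by-id/ata-WDC_WD20EFRX-2 /dev/disk/by-id/ata-WDC_WD20EFRX-3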

Let's check it:

zpool status

  pool: pool1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

zpool list

NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool1  5.44T   336K  5.44T         -    0%     0%  1.00x  ONLINE  -
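One difference from a traditional RAID setup worth calling out: there is no mkfs or fstab entry here. ZFS created and mounted the filesystem at /pool1 on its own, which a quick df confirms:

df -h /pool1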

Cool. Looks good. Now let's get Samba and create some folders. I wanted a public read-only media folder and a backup folder behind a password prompt.

apt-get install samba
mkdir /pool1/Media
mkdir /pool1/Backup
smbpasswd -a foo
nano /etc/samba/smb.conf

I commented out the [homes] share, as that is not useful to me on this NAS; I am the only tech-savvy user in the house. Then I added this to the end:

[media]
path = /pool1/Media
available = yes
browseable = yes
public = yes
read only = yes

[backup]
path = /pool1/Backup
available = yes
valid users = foo
browseable = yes
public = no
writable = yes
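Finally, testparm sanity-checks the config, and Samba needs a restart to pick up the new shares (on Debian 8 the daemon is smbd):

testparm
systemctl restart smbd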