====== How we set up the filesystem on our flexbufs ======

===== The really short version =====

  * We use ZFS
  * 6 raidz's with 6 disks each
  * Use a 4k alignment for the disks
  * Set xattr=sa for each pool
  * Use vdev aliases for the disks

===== Create a vdev alias mapping =====

Make your vdev alias mapping first, so you can use F0-5 as disk names for your pool. This does not work by default; you need to create the alias mapping before the names exist. We do this because it makes it easy to find a disk in the system: F0 means slot zero in the front. Because we map the alias to the PCI path instead of to the disk itself, a replacement disk will get the correct vdev name automatically. The only thing to keep in mind is that the PCI path could change if you add a PCI device to the system.

Your first step is to find out where each disk sits in your system; a nice identifier is the serial number of the disk, so write these down together with the slot they are in. Install smartmontools on your system and set it up to send emails when disks start failing! When you have done that, use smartmontools to find out which device belongs to which path:

<code>
sudo smartctl -i /dev/disk/by-path/pci-0000\:83\:00.0-sas-phy1-lun-0
</code>

Then add this path to your vdev file with an alias name. Create the file as /etc/zfs/vdev_id.conf and do this for all disks. Here's an example:

<code>
alias F0 pci-0000:01:00.0-sas-phy3-lun-0
alias F1 pci-0000:01:00.0-sas-phy2-lun-0
alias F2 pci-0000:01:00.0-sas-phy1-lun-0
alias F3 pci-0000:01:00.0-sas-phy0-lun-0
alias F4 pci-0000:01:00.0-sas-phy7-lun-0
alias F5 pci-0000:01:00.0-sas-phy6-lun-0
alias F6 pci-0000:01:00.0-sas-phy5-lun-0
alias F7 pci-0000:01:00.0-sas-phy4-lun-0
alias F8 pci-0000:02:00.0-sas-phy3-lun-0
alias F9 pci-0000:02:00.0-sas-phy2-lun-0
alias F10 pci-0000:02:00.0-sas-phy1-lun-0
alias F11 pci-0000:02:00.0-sas-phy0-lun-0
alias F12 pci-0000:02:00.0-sas-phy7-lun-0
alias F13 pci-0000:02:00.0-sas-phy6-lun-0
alias F14 pci-0000:02:00.0-sas-phy5-lun-0
alias F15 pci-0000:02:00.0-sas-phy4-lun-0
</code>

One of our config files (note that the file name should be vdev_id.conf, not .txt): [[http://www.jive.nl/jivewiki/lib/exe/fetch.php?media=vdev_id.txt]]

Look closely and you will see there is some logic to it, so you don't have to probe all your disks.

After editing the file you need to update your vdev entries by running:

<code>
sudo udevadm trigger
</code>

If you want to check that it is correct, you can run the same smartctl command but with the vdev alias instead of the PCI path:

<code>
sudo smartctl -i /dev/disk/by-vdev/F0
</code>

===== Create a zfs pool =====

<code>
zpool create -o ashift=12 -m /mnt/disk0 disk0 raidz F0 F1 F2 F3 F4 F5
</code>

What does this mean?

  * zpool create is the command used to create a new pool.
  * -o sets an option; ashift=12 means 4096-byte disk alignment. Most disks these days use a sector size of 4096 bytes instead of 512 bytes. Since it is not possible to replace a 512-byte disk with a 4096-byte disk when the pool is aligned at 512 bytes, we recommend aligning the pool at 4096 bytes even if you are using 512-byte disks, to be future proof.
  * -m sets the mount point; by default the mount point is the pool name. We set it to /mnt/disk0.
  * disk0 is the pool name.
  * raidz is the raid type. raidz is the ZFS equivalent of raid5; we use it so we have 1 redundant disk per 6 disks.
  * F0-5 are the disk vdev names.

If the zpool create command complains that there is already a partition on the disk, add -f to the command. It will then happily destroy whatever is on the disks to create your pool. Be careful. Think twice.
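If you want to see what is already on a disk before forcing the create, something like the sketch below should show any existing partitions and filesystem signatures (F0 is just an example alias here; wipefs without options only lists signatures, it does not erase anything):

<code>
# list filesystems/partitions currently on the disk (assumes the F0 alias from above)
sudo lsblk -f /dev/disk/by-vdev/F0
# without options wipefs only prints existing signatures, it does not wipe
sudo wipefs /dev/disk/by-vdev/F0
</code>

If you are sure nothing on the disks is needed, run the create again with -f: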
<code>
zpool create -f -o ashift=12 -m /mnt/disk0 disk0 raidz F0 F1 F2 F3 F4 F5
</code>

Check if it all went well:

<code>
sudo zpool status
</code>

Then set xattr=sa. It is a Linux-specific setting, so there is a chance you cannot import the pool on a non-Linux system.

<code>
sudo zfs set xattr=sa poolname
</code>
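The short version above says xattr=sa should be set for each pool, so repeat this for every pool you create. If you want to double-check that the setting took effect, you can read it back; a small sketch, assuming disk0 as the pool name:

<code>
# show the current xattr setting and where it was set from
sudo zfs get xattr disk0
</code>

It should report the value sa with SOURCE local.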