ZFS
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. ZFS is scalable, and includes extensive protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z, native NFSv4 ACLs, and can be very precisely configured. (source: Wikipedia)
Pool
RAID options
| Description | Type | Behavior | Multiple VDEVs in single pool |
|---|---|---|---|
| Striped | single disk | No redundancy | Striped VDEVs, like RAID0 |
| Mirrored | mirror | Like RAID1 | Striped mirror, like RAID10 |
| RAIDZ | raidz | Like RAID5 | Nested RAIDZ, like RAID50/60 |
| RAIDZ2 | raidz2 | Like RAID6 | |
| RAIDZ3 | raidz3 | Triple parity | |
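As a sketch of the "multiple VDEVs" column: a pool built from two mirror VDEVs stripes writes across them, giving RAID10-like behavior. The device names below are placeholders.

```
# Hypothetical RAID10-style pool: two mirror VDEVs, striped by the pool
sudo zpool create -o ashift=12 tank10 \
    mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
    mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD
```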
Create
```
sudo zpool create -o ashift=12 testpool raidz1 /dev/disk/by-id/ata-ST2000DM001-1CH164_W1E5ETF9 /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E68GLR /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E6CQPW
```

or

```
sudo zpool create -o ashift=12 -O mountpoint=/srv/testpool testpool raidz1 /dev/disk/by-id/ata-ST2000DM001-1CH164_W1E5ETF9 /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E68GLR /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E6CQPW
```
- `-o ashift=12` — use 4K sectors (ashift is the sector size as a power of two, 2^12 = 4096)
- `-O mountpoint=/srv/testpool` — set a custom mount point
```
sudo zpool status
```
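For a quick health summary across all pools, these standard commands also help:

```
zpool status -x   # only prints details for pools with problems
zpool list        # one line per pool: size, allocation, health
```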
Add VDEV
```
sudo zpool add (-n) (-f) -o ashift=12 tank0 raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4
```
- `-n` — dry run; shows what the new configuration will look like
- `-f` — required if the disks are of different sizes
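A hedged sketch of growing the hypothetical tank10 pool from the RAID options section with a third mirror VDEV (placeholder disk names), dry run first:

```
sudo zpool add -n tank10 mirror /dev/disk/by-id/diskE /dev/disk/by-id/diskF   # preview only
sudo zpool add tank10 mirror /dev/disk/by-id/diskE /dev/disk/by-id/diskF
```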
Compression
```
sudo zfs set compression=lz4 testpool
sudo zfs get compression testpool
```
Example savings with lz4:

- VirtualBox VM: 48G → 34G
- Documents, photos, videos, etc.: 395G → 388G
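To check how well compression is doing on an existing pool or dataset, the compressratio property reports the achieved ratio:

```
zfs get compressratio testpool
zfs get -r compressratio testpool   # per child dataset
```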
Mount point
```
sudo zfs set mountpoint=/srv/testpool testpool
zfs get mountpoint
```
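To verify where a dataset actually ended up (paths assume the examples above):

```
zfs get mountpoint,mounted testpool
findmnt /srv/testpool   # confirm from the OS side
```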
Stats
```
zpool iostat testpool 1
zfs list -o space (pool/dataset)
```
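The -v flag breaks the I/O numbers down per VDEV and per disk:

```
zpool iostat -v testpool 1
```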
Scrub
```
zpool scrub testpool
```
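Scrubs are usually scheduled rather than run by hand. A sketch of a monthly cron entry; the path and schedule are assumptions, adjust for your distro:

```
# /etc/cron.d/zfs-scrub (hypothetical): scrub testpool at 02:00 on the 1st
0 2 1 * * root /usr/sbin/zpool scrub testpool
```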
Replace disk
- Offline the disk, if necessary, with the `zpool offline` command.
- Remove the disk to be replaced.
- Insert the replacement disk.
- Run the `zpool replace` command.
```
zpool offline testpool disk-old-id
zpool replace (-f) testpool disk-old-id /dev/disk/by-id/new
```
- `-f` — forces use of the new disk, even if it appears to be in use
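After the replace, the pool resilvers onto the new disk; progress shows up in the scan line of zpool status:

```
watch -n 10 zpool status testpool
```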
Destroy
```
sudo zpool destroy tank
```
Dataset
```
# Create
sudo zfs create testpool/test

# Remove
sudo zfs destroy (-r) testpool/test

# List
sudo zfs list

# Rename
sudo zfs rename testpool/test testpool/new_test
```
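Per-filesystem properties are set on datasets. A hedged example with two common ones on the testpool/test dataset:

```
sudo zfs set quota=10G testpool/test   # cap the dataset at 10G
sudo zfs set atime=off testpool/test   # skip access-time updates on reads
zfs get quota,atime testpool/test
```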
Snapshot
Snapshots aren't pointers; they are time stamps.
ZFS is a tree of blocks and each block has a set of metadata, one of which is its birth date. Any block with a birth date older than the snapshot time stamp is part of that snapshot.
It works something like this: your 10 MB file gets written to 80 128K blocks, and you take a snapshot. You then change 5 MB of the file, causing 40 new blocks to be written. ZFS then tries to free the 40 outdated blocks, and while doing so it checks whether each block is part of a snapshot, leaving it alone if it is. You now have a "snapshot" holding 5 MB of old blocks. [1]
```
# Create
sudo zfs snap testpool/test@snap

# Destroy
sudo zfs destroy testpool/test@snap

# List
sudo zfs list -t snapshot

# Clone
sudo zfs clone testpool/test@snap testpool/test/snap-clone
```
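A small demo of the accounting described above, assuming testpool/test is mounted at /testpool/test:

```
sudo dd if=/dev/urandom of=/testpool/test/file bs=1M count=10               # 10 MB file, 80 blocks
sudo zfs snap testpool/test@before
sudo dd if=/dev/urandom of=/testpool/test/file bs=1M count=5 conv=notrunc   # rewrite 5 MB in place
sudo zfs list -t snapshot -o name,used,refer   # USED grows toward ~5 MB as old blocks are freed
```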
Send and receive
```
# local
zfs send tank/data@snap1 | zfs recv spool/ds01

# allow regular user to send
sudo zfs allow username send,snapshot,mount tank/data
```
```
# remote
zfs send tank/data@snap1 | ssh host2 zfs recv spool/ds01

# allow regular user to receive
sudo zfs allow username receive,create,mount spool
```
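After the initial full send, later syncs are usually incremental; assuming snap1 already exists on the receiving side:

```
zfs snap tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | ssh host2 zfs recv spool/ds01
```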