USC2025+SE2 — Backups for the people!
We have started deploying a new backup server, leveraging the ZFS filesystem together with FreeBSD jails.
🤓
So, we’ve seen how to create a native jail using FreeBSD’s toolset, and we’ve fine-tuned a few of its settings, including mounting select directories from the host into the jail.
Is that really enough though? 🙃
We want ZFS inside our jail, period!
Since we use a dedicated ZFS dataset per jail, isn't that enough? Well, dataset management (and anything disk-related) is handled on the host. Practically speaking, this means that the root user inside the jail cannot alter dataset properties, nor create new ones.
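For the curious, here is a rough sketch of what dataset delegation can look like with FreeBSD's ZFS jail support; the dataset name `storage/jails/backup1` and the jail name `backup1` are made up for illustration:

```shell
# On the host: mark the dataset as manageable from within a jail.
# (Dataset and jail names are placeholders for this sketch.)
zfs set jailed=on storage/jails/backup1

# The jail must be allowed to mount ZFS filesystems, e.g. in /etc/jail.conf:
#   backup1 {
#       allow.mount;
#       allow.mount.zfs;
#       enforce_statfs = 1;
#   }

# Attach the dataset to the running jail
zfs jail backup1 storage/jails/backup1

# From inside the jail, root can now manage that subtree, e.g.:
#   zfs create storage/jails/backup1/data
#   zfs set compression=zstd storage/jails/backup1/data
```

Once the `jailed` property is set, the host itself stops auto-mounting the dataset, so this really is a hand-over rather than a shared view.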
So, we’ve seen how to create a native jail using FreeBSD’s toolset, meaning we have a brand-new system to configure!
Some jail-related specificities:
- vnet jails allow for virtualizing the entire network stack;
- let's copy /etc/resolv.conf and /etc/localtime from the host into the jail, so that it can issue DNS requests, and most importantly be on time ;)
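Concretely, assuming the jail's root filesystem lives under /usr/local/jails/backup1 (a path made up for this sketch), copying those two files boils down to:

```shell
# Copy the host's DNS resolver config and timezone into the jail root.
# The jail path is an assumption for this example.
JAILROOT=/usr/local/jails/backup1
cp /etc/resolv.conf "$JAILROOT/etc/resolv.conf"
cp /etc/localtime  "$JAILROOT/etc/localtime"
```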
FreeBSD jails were introduced in June 2000. They were the first open-source solution for lightweight virtualization, and proved to be foundational to the container revolution that took off later on, preceding the emergence of linux-vserver in October 2001, or LXC containers at the end of 2008.
The jail technology inspired Sun’s engineers, who refined and further elaborated on its concepts through the development of Solaris Zones in 2004, as this talk by Bryan Cantrill amusingly evokes.
The following example aims at illustrating the core concept of zfs send/recv:

```shell
zfs create storage/test-source
zfs snap storage/test-source@one
zfs send -v storage/test-source@one | zfs recv -v storage/test-destination
```

We now have a working independent copy of the dataset on the destination, ain't this cool? An incremental send then only ships the blocks that changed between two snapshots:

```shell
zfs snap storage/test-source@two
zfs send -v -i storage/test-source@one storage/test-source@two | zfs recv -v storage/test-destination
```

Run `zfs list -t snap` to check what's up.

Note: To track the progress of `zfs send | zfs recv`, one can use a well-known tool, pv, as suggested in the Solaris documentation.
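As a sketch, pv slots straight into the middle of the pipe (reusing the illustrative dataset names from above):

```shell
# Pipe the replication stream through pv for a live throughput readout.
# Dataset names reuse the illustrative example above.
zfs send storage/test-source@one | pv | zfs recv -v storage/test-destination
```

If you want a percentage rather than a raw byte count, pv accepts an expected size via `-s`, which you can take from the estimate printed by a `zfs send -nv` dry run.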
Most of us came from traditional storage systems, and had to wrap our minds around new concepts introduced by ZFS. Let's break it down:

A zpool combines multiple physical disks into a single storage pool, handling redundancy, caching, and data integrity at the block level. Instead of manually partitioning disks or setting up traditional RAID, ZFS automatically distributes data across the zpool.
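As a minimal sketch of that idea (the pool name and device names like ada0…ada3 are placeholders):

```shell
# Create a pool named "storage" out of two mirrored pairs (a striped mirror).
# Device names ada0..ada3 are placeholders for this sketch.
zpool create storage mirror ada0 ada1 mirror ada2 ada3

# The pool immediately exposes a filesystem; datasets carve it up further.
zfs create storage/backups
zpool status storage
```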
Using ZFS for NAS appliances storing large media files is a very common scenario. In this specific context, we may want to get:
…in that order!
We’ll try to figure out proper storage designs for 8 large hard disks in this specific context.
You may well want to read choosing the right ZFS pool layout in addition to getting familiar with ZFS concepts.
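To make this concrete, here is one possible sketch for such an 8-disk, large-media-files pool; the pool name and the da0…da7 device names are assumptions, and which layout "wins" depends on the priorities above:

```shell
# One raidz2 vdev across all 8 disks: roughly 6 disks' worth of usable space,
# survives any two disk failures. Favors capacity over IOPS, which suits
# large sequential media files. Device names are placeholders.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Alternative favoring performance: four mirrored pairs
# (more IOPS and faster resilvers, but only half the raw capacity).
# zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
```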