ZFS Snapshots and Clones
A snapshot is a read-only image of a file system. When a snapshot is taken, it is stored in such a way that further transactions on the file system are carried out only on the original file system, not on the snapshot itself. This makes it possible to roll back to a previous state.
The difference between a snapshot and a clone is that a clone is a writable, mountable copy of the file system. This capability allows us to store multiple copies of mostly-shared data in a very space-efficient way.
Clones and snapshots are not data copies but state copies, which means they use no space when created. Only when the original file system is modified are the differences stored.
If a rollback is performed, these differences are discarded and the space is freed up once again.
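To see where that space actually goes, the per-dataset accounting can be inspected with the space columns of zfs list (the pool and dataset names here match the examples below):

```shell
# Break down space usage for a dataset: USEDSNAP is space held only
# by snapshots, USEDDS is space used by the live dataset itself.
zfs list -o space pool/data01
```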
Note: Clones can only be created from an existing snapshot.
zfs send and zfs receive allow snapshots (and therefore clones) of file systems to be sent to another system, for example a development environment.
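A minimal sketch of that workflow, assuming a hypothetical host devhost reachable over SSH with a pool named devpool:

```shell
# Snapshot the file system, then stream the snapshot to another
# machine, where it is recreated as a dataset.
# "devhost", "devpool" and the snapshot name are illustrative.
zfs snapshot pool/data01@migrate
zfs send pool/data01@migrate | ssh devhost zfs receive devpool/data01
```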
To create a snapshot:
# zfs snapshot pool-name/filesystem-name@snapshot-name
To clone a snapshot:
# zfs clone snapshot-name filesystem-name
To roll back to a snapshot:
# zfs rollback pool-name/filesystem-name@snapshot-name
Examples:
- List current ZFS filesystems:
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool          149K  1.95G    31K  /pool
pool/data01    31K   512M    31K  /data01
- Create a snapshot from a ZFS file system:
# zfs snapshot pool/data01@snapshot01
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                     150K  1.95G    31K  /pool
pool/data01               31K   512M    31K  /data01
pool/data01@snapshot01      0      -    31K  -
- Roll back a ZFS file system to a previous state:
- First, let's make some modifications to the file system:
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                     152K  1.95G    31K  /pool
pool/data01               31K   512M    31K  /data01
pool/data01@snapshot01      0      -    31K  -
# cd /data01
# mkfile 64m file01
# ls -l
total 131095
-rw------T   1 root     root     67108864 Jun  1 10:42 file01
- Notice the difference in size of the snapshot:
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    63.7M  1.89G    31K  /pool
pool/data01             63.6M   448M  63.5M  /data01
pool/data01@snapshot01    19K      -    31K  -
- Now perform the rollback:
# zfs rollback pool/data01@snapshot01
- The file has disappeared, and the size of the snapshot has shrunk:
# ls -l
total 0
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                     180K  1.95G    31K  /pool
pool/data01               32K   512M    31K  /data01
pool/data01@snapshot01     1K      -    31K  -
- Remove a snapshot:
# zfs destroy pool/data01@snapshot01
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool          161K  1.95G    31K  /pool
pool/data01    31K   512M    31K  /data01
- Multiple snapshots can be taken at different times to provide several points in time to recover from:
- Create multiple snapshots:
# zfs snapshot pool/data01@snapshot01
# mkfile 64m /data01/file01
# zfs snapshot pool/data01@snapshot02
# mkfile 64m /data01/file02
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                     118M  1.84G    31K  /pool
pool/data01              117M   395M   117M  /data01
pool/data01@snapshot01    19K      -    31K  -
pool/data01@snapshot02    19K      -  64.0M  -
# ls -l /data01
total 262186
-rw------T   1 root     root     67108864 Jun  1 10:53 file01
-rw------T   1 root     root     67108864 Jun  1 10:53 file02
- In this scenario we can only roll back to the most recent snapshot. If we attempt to roll back to an older snapshot, we receive the following error:
# zfs rollback pool/data01@snapshot01
cannot rollback to 'pool/data01@snapshot01': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
pool/data01@snapshot02
- If we need to roll back to the first snapshot, we first have to roll back to the newer one, destroy it, and then roll back to the oldest snapshot:
# zfs rollback pool/data01@snapshot02
# ls -l /data01/
total 131093
-rw------T   1 root     root     67108864 Jun  1 10:53 file01
# zfs destroy pool/data01@snapshot02
# zfs rollback pool/data01@snapshot01
# ls -l /data01
total 0
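The same result can be achieved in one step with the -r option mentioned in the error message, which destroys the more recent snapshots as part of the rollback:

```shell
# Roll back past snapshot02 in one step; -r destroys any snapshots
# more recent than the rollback target.
zfs rollback -r pool/data01@snapshot01
```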
- Displaying snapshots:
# zfs list -t snapshot
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool/data01@snapshot01    18K      -    31K  -
pool/data01@snapshot02      0      -    31K  -
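When many snapshots accumulate, it helps to list them with their creation times, sorted oldest first (-o selects columns and -s sorts, both standard zfs list flags):

```shell
# List snapshots with creation time, sorted by creation
zfs list -t snapshot -o name,creation -s creation
```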
- Accessing snapshot contents: snapshots can be reached through the hidden .zfs/snapshot directory at the root of the file system's mount point. This allows end users to recover their files without system administrator intervention.
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                     220K  1.95G    31K  /pool
pool/data01               49K   512M    31K  /data01
pool/data01@snapshot01    18K      -    31K  -
pool/data01@snapshot02      0      -    31K  -
# cd /data01/.zfs/snapshot/
# ls -l
total 6
drwxr-xr-x   2 root     root           2 Jun  1 10:38 snapshot01
drwxr-xr-x   2 root     root           3 Jun  1 11:01 snapshot02
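For example, a user who accidentally deleted a file (file01 here is illustrative) could restore it with a plain copy; snapshot contents are read-only, so the copy must go back into the live file system:

```shell
# Restore a single file from a snapshot without admin intervention
cp /data01/.zfs/snapshot/snapshot01/file01 /data01/file01
```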
Each of these directories contains the directory/file structure that existed at the moment the snapshot was taken.
- Cloning a snapshot:
# zfs clone pool/data01@snapshot01 pool/data02
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    64.3M  1.89G    31K  /pool
pool/data01             64.1M   448M  64.0M  /data01
pool/data01@snapshot01    19K      -    31K  -
pool/data02                1K  1.89G    31K  /pool/data02
Snapshot pool/data01@snapshot01 has been copied and is writable on the pool/data02 clone.
- Removing a clone/snapshot:
# zfs destroy pool/data02
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    64.2M  1.89G    32K  /pool
pool/data01             64.1M   448M  64.0M  /data01
pool/data01@snapshot01    19K      -    31K  -
Note: If a snapshot has one or more clones, it cannot be destroyed unless the clones are destroyed first:
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    64.3M  1.89G    31K  /pool
pool/data01             64.1M   448M  64.0M  /data01
pool/data01@snapshot01    19K      -    31K  -
pool/data02                1K  1.89G    31K  /pool/data02
# zfs destroy pool/data01@snapshot01
cannot destroy 'pool/data01@snapshot01': snapshot has dependent clones
use '-R' to destroy the following datasets:
pool/data02
# zfs destroy pool/data02
# zfs destroy pool/data01@snapshot01
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool         64.2M  1.89G    31K  /pool
pool/data01  64.0M   448M  64.0M  /data01
- Promoting a clone:
Once a clone is in place, we can use it to replace the original dataset. We make the clone independent of the snapshot it was created from, then remove the snapshot(s) and the origin file system so that the clone replaces it.
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool         64.2M  1.89G    31K  /pool
pool/data01  64.0M   448M  64.0M  /data01
# ll /data01
total 131093
-rw------T   1 root     root     67108864 Jun  1 11:01 file01
# zfs snapshot pool/data01@snapshot01
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    64.2M  1.89G    31K  /pool
pool/data01             64.0M   448M  64.0M  /data01
pool/data01@snapshot01      0      -  64.0M  -
# zfs clone -o mountpoint=/data02 pool/data01@snapshot01 pool/data02
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    64.2M  1.89G    31K  /pool
pool/data01             64.0M   448M  64.0M  /data01
pool/data01@snapshot01      0      -  64.0M  -
pool/data02                1K  1.89G  64.0M  /data02
# zfs promote pool/data02
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool                    64.2M  1.89G    31K  /pool
pool/data01                 0   512M  64.0M  /data01
pool/data02             64.0M  1.89G  64.0M  /data02
pool/data02@snapshot01     1K      -  64.0M  -
# zfs destroy pool/data01
# zfs destroy pool/data02@snapshot01
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool         64.2M  1.89G    31K  /pool
pool/data02  64.0M  1.89G  64.0M  /data02
# ll /data01
total 0
# ll /data02
total 131093
-rw------T   1 root     root     67108864 Jun  1 11:01 file01
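If the promoted clone should also take over the original dataset's name and mount point, it can be renamed once the origin has been destroyed (continuing the example above):

```shell
# Give the promoted clone the original name and mount point
zfs rename pool/data02 pool/data01
zfs set mountpoint=/data01 pool/data01
```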