The idea of this post is to show how to use ZFS on removable media.
At first this may seem useless, but it can be powerful and effective for:
- Data safety;
- Data migration;
- Data mobility.
The procedure is reasonably well documented by Oracle.
Still, I think it deserves to be revisited once more.
I won't get into the underlying details of physical devices.
For more detail, refer to the Dynamically Configuring Devices documentation.
I've found deeper detail on another blog post that may also be interesting.
The bottom line of the physical issues will be some device name under /dev.
So for this post I'll be more than happy to exemplify with a 256 MB RAM disk.
In fact I have a somewhat overlapping but specific post on RAM ZFS pools.
As you know, a RAM disk isn't persistent across reboots.
Thus, it will do as a temporary (removable) device.
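For reference, in case you'd like to reproduce this setup, such a RAM disk could be created with ramdiskadm (the name d-01 below is just my choice, matching the device used throughout this post), which yields /dev/ramdisk/d-01:
# ramdiskadm -a d-01 256m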
The underlying device on this post will be:
/dev/ramdisk/d-01
If you plug a USB hard disk, the device name will be different, of course!
Depending on the situation, it's best to temporarily disable the Removable Media Services so that it won't interfere with manually managing removable devices, such as in this particular case.
# svcadm disable -t rmvolmgr
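When done with the manual management, the service can be re-enabled and its state checked (the -t above already makes the change temporary, so a reboot would also bring it back); for instance:
# svcadm enable rmvolmgr
# svcs rmvolmgr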
With the root or equivalent role one can find out device names with format:
$ echo | pfexec format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
...
Another very cool method (which I got to know from another blog) is:
(rmformat is also extensively described in the official documentation)
$ rmformat
Looking for devices...
1. Logical Node: /dev/rdsk/c3t1d0p0
Physical Node: /pci@0,0/pci1b0a,df@1f,2/cdrom@1,0
Connected Device: HL-DT-ST DVDRAM GH22NS40 NL02
Device Type: CD Reader
Bus:
Size:
Label:
Access permissions:
...
Once the physical device is known, decide which mountpoint to use.
I assume /mnt is not being used, but it could be any other mountpoint.
Also, it's probably good to have this removable ZFS pool encrypted!
(it could also be compressed, have quotas, reservations and so on...)
The ZFS pool creation command could be as follows:
# zpool create -R /mnt -O encryption=on Backup /dev/ramdisk/d-01
Enter passphrase for 'Backup':
Enter again:
# zpool list Backup
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
Backup 250M 136K 250M 0% 1.00x ONLINE /mnt
# zfs list /mnt
NAME USED AVAIL REFER MOUNTPOINT
Backup 100K 218M 33K /mnt
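As hinted above, other properties such as compression or a quota could be applied to this removable pool as well; a quick sketch with arbitrary values:
# zfs set compression=on Backup
# zfs set quota=200m Backup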
There are a few details to note that are not usually mentioned.
The -R option implies the altroot and cachefile ZFS pool properties.
The value of the cachefile shows that the ZFS pool isn't persistent across reboots.
# zpool get altroot,cachefile Backup
NAME PROPERTY VALUE SOURCE
Backup altroot /mnt local
Backup cachefile none local
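Just as an aside, if (contrary to this removable-media scenario) persistence across reboots were actually desired, the cachefile property could be reverted to its default by setting it to an empty string:
# zpool set cachefile="" Backup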
For the encryption, note the default behavior of prompting for a passphrase.
There are other options, but this seems particularly handy in this case.
Also note that, with encryption on, the default checksum changes to sha256-mac.
There are other values for the checksum property as well.
# zfs get encryption,checksum Backup
NAME PROPERTY VALUE SOURCE
Backup encryption on local
Backup checksum sha256-mac local
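Regarding the other encryption options mentioned above, a raw key file could be used instead of the passphrase prompt via the keysource property; a minimal sketch (the key path and file system name below are hypothetical):
# pktool genkey keystore=file outkey=/media/keys/backup.key keytype=aes keylen=256
# zfs create -o encryption=on -o keysource=raw,file:///media/keys/backup.key Backup/Secure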
Note:
250 MB out of 256 MB of the RAM disk is available to the ZFS pool.
This represents approximately 97.7% of the RAM disk space.
For this case, an overhead around 2.3% (not bad).
218 MB out of 250 MB of the ZFS pool is available to the ZFS file system.
This represents approximately 87.2% of the ZFS pool space.
For this case, an overhead around 12.8% (perhaps not bad).
Consider the total: 218 MB out of 256 MB.
As only around 85.2% is available, the total overhead is around 14.8%.
Now consider the 80% usage limit recommendation.
80% of 218 MB is around 174.4 MB, which is about 68.1% of 256 MB.
Now we can see that the effective overhead is pretty steep: 31.9%.
But ZFS is so good that this doesn't seem so bad after all.
And it's better than some competitors' 40% penalty.
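To keep an eye on the 80% recommendation in day-to-day use, the current capacity of the pool can be checked at any time (the capacity column reports the percentage of the pool space already in use); for instance:
# zpool list -o name,size,allocated,capacity Backup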
Now the removable and encrypted ZFS file system is ready for use.
All the power of ZFS is available; that's remarkable!
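For example, the data safety and migration use cases from the beginning of this post could be served by simply sending a snapshot of some local file system into the removable pool; a sketch (the source dataset name is hypothetical):
# zfs snapshot rpool/export/home@today
# zfs send rpool/export/home@today | zfs receive Backup/home-today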
When done it's time to remove the ZFS pool (and file system, of course).
This is easily done by simply exporting the ZFS pool.
The ZFS file system is automatically unmounted.
Also, the ZFS pool is forgotten by the system.
# zpool export Backup
# zpool list Backup
cannot open 'Backup': no such pool
# zfs list /mnt
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT/11.1.20.5 12.2G 191G 6.80G /
With a real removable storage device, now follow the procedure to disconnect it.
This procedure will vary according to the device, so I won't cover it.
No data is lost as long as the underlying device is "preserved".
That is, all the encrypted data still rests there for later use.
In this particular post, the underlying storage device is a RAM disk.
This means that /dev/ramdisk/d-01 will be intact until system shutdown.
Of course, for this post, I'm assuming the device won't be reused or manually destroyed.
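(For completeness, manually destroying the RAM disk, and therefore discarding all of the pool's data, would be a matter of:)
# ramdiskadm -d d-01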
Later (after any lapse of time, in the case of a real underlying storage device), when it's time to use the removable ZFS pool again, it suffices to reconnect the storage device to the system (a procedure I'm not covering) and import the ZFS pool.
After reconnecting the physical storage device, but right before importing the ZFS pool, it's possible (and recommended) to query which ZFS pools can be imported. By default, this query looks for storage devices under /dev/dsk, unless the -d option points somewhere else (/dev/ramdisk in this particular post).
# zpool import -d /dev/ramdisk
pool: Backup
id: 500609675252772792
state: ONLINE
action: The pool can be imported ...
config:
Backup ONLINE
/dev/ramdisk/d-01 ONLINE
To actually import the removable ZFS pool, an additional and important point is to use the -R option again, as this is a removable ZFS pool. Not doing so would cache the pool information for persistent remount at each boot, which is presumably not what I want or need.
If the purpose is just to browse and read already existing data, then -o readonly=on will ensure no data is accidentally or inadvertently modified.
By the way, as the removable ZFS pool is encrypted, the passphrase will be immediately prompted in order to unlock access to the stored data.
# zpool import -d /dev/ramdisk -R /mnt -o readonly=on Backup
Enter passphrase for 'Backup':
# zpool get readonly Backup
NAME PROPERTY VALUE SOURCE
Backup readonly on -
# touch /mnt/test
touch: cannot create /mnt/test: Read-only file system
# zpool export Backup
If, on the other hand, the intention is really read/write access:
# zpool import -d /dev/ramdisk -R /mnt Backup
Enter passphrase for 'Backup':
Now it's possible to write (data and metadata) to the ZFS pool:
# zfs create -o quota=10m Backup/Project
# zfs list -r Backup
NAME USED AVAIL REFER MOUNTPOINT
Backup 154K 218M 35K /mnt
Backup/Project 33K 9.97M 33K /mnt/Project
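Just to illustrate that the quota is in effect, a small test file could be written and the usage checked (the file name is arbitrary):
# mkfile 5m /mnt/Project/sample.dat
# zfs list -o name,used,quota Backup/Project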
When done using the ZFS pool:
# zpool export Backup
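One last practical note: if two pools happened to share the same name, the numeric id shown by zpool import (500609675252772792 in this case) could be used in place of the name; for instance:
# zpool import -d /dev/ramdisk -R /mnt 500609675252772792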
Removable ZFS pools:
One more cool Solaris feature!