Thanks to the ZFS FSAL, NFS-Ganesha is able to access and manipulate a ZFS filesystem. This access is not done through FUSE and the zfs-fuse project, but directly through a custom library.
This library, called libzfswrap, is distributed along with NFS-Ganesha and is mostly based on zfs-fuse. Packages for rpm-based and deb-based distributions are also available.
This document explains how to create and manage a zpool, and how to configure NFS-Ganesha to use the newly created zpool.
A set of tools has been built on top of libzfswrap to manage ZFS zpools. This set of tools is not yet complete, as some features are still missing; however, the classical zfs-fuse tools can also be used to manage the same zpools.
A zpool created and managed by libzfswrap is also visible to zfs-fuse, and vice versa.
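As a sketch of this interoperability, a pool created with lzw_zpool should be inspectable with the classical `zpool` tool shipped with zfs-fuse. This example assumes the zfs-fuse daemon is running and that the pool may first need to be imported:

```
# Import the pool created by libzfswrap, then list it with the zfs-fuse tool
root@localhost% zpool import tank
root@localhost% zpool list tank
```

The reverse also holds: a pool created with zfs-fuse's zpool command shows up in 'lzw_zpool list'.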
A zpool is a set of disks that can be grouped together, with or without redundancy, to form a logical volume. This pool of disks forms the first ZFS filesystem, which you can directly access and use in NFS-Ganesha.
Creating a zpool is really straightforward:
```
root@localhost% lzw_zpool create tank mirror /dev/sda /dev/sdb
```
This command creates a zpool called tank that forms a mirror of /dev/sda and /dev/sdb.
The third argument of the command is the type of zpool that will be created. Several types exist:
- mirror: each disk is a mirror of the others
- raidz: like a classical RAID5
- raidz[1..255]: like a RAID5 with n disks for the parity (raidz3 implies n=3)
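For instance, a double-parity pool (two disks' worth of parity) would be created as follows; the pool name and devices here are only illustrative:

```
root@localhost% lzw_zpool create tank2 raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
```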
To list the available zpools on the system, along with some information about them:
```
root@localhost% lzw_zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
pool  1016M   106K  1016M   0%  1.00x  ONLINE  -
tank  4.59G  2.02G  2.57G  43%  1.00x  ONLINE  -
```
You can specify the list of properties you want to get by providing their names, separated by commas, as the second argument:
```
root@localhost% lzw_zpool list name,size,health
NAME   SIZE  HEALTH
pool  1016M  ONLINE
tank  4.59G  ONLINE
```
To get more detailed information about the structure and the status of each disk in the pool, just use the status command:
```
root@localhost% lzw_zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        pool          ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            /dev/sda  ONLINE       0     0     0
            /dev/sdb  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            hda3    ONLINE       0     0     0
            hda4    ONLINE       0     0     0

errors: No known data errors
```
It is always possible to add a disk or a group of disks to a zpool:
```
root@localhost% lzw_zpool add pool raidz /dev/sdc /dev/sdd /dev/sde
root@localhost% lzw_zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        pool          ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            /dev/sda  ONLINE       0     0     0
            /dev/sdb  ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            /dev/sdc  ONLINE       0     0     0
            /dev/sdd  ONLINE       0     0     0
            /dev/sde  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            hda3    ONLINE       0     0     0
            hda4    ONLINE       0     0     0

errors: No known data errors
```
The 'add' command takes as arguments:
- 'pool': name of the pool
- 'raidz': type of the disk set to add
- the list of devices that form the disk set to add
A disk can also be detached from a mirror with the 'detach' command:

```
root@localhost% lzw_zpool detach pool /dev/sdb
root@localhost% lzw_zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        pool          ONLINE       0     0     0
          /dev/sda    ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            /dev/sdc  ONLINE       0     0     0
            /dev/sdd  ONLINE       0     0     0
            /dev/sde  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            hda3    ONLINE       0     0     0
            hda4    ONLINE       0     0     0

errors: No known data errors
```
To undo this operation just use the 'attach' command:
```
root@localhost% lzw_zpool attach pool /dev/sda /dev/sdb
```
This command takes as arguments:
- 'pool': name of the pool
- '/dev/sda': device to use as attachment point
- '/dev/sdb': device to attach to the previous argument
To destroy a zpool, only one command is needed:
```
root@localhost% lzw_zpool destroy pool
root@localhost% lzw_zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
tank  4.59G  2.02G  2.57G  43%  1.00x  ONLINE  -
```
To configure NFS-Ganesha to access a zpool, you must set some options in the ZFS configuration block of the configuration file.
The only parameter to set is the name of the pool that NFS-Ganesha must use.
```
ZFS
{
    # Zpool to use
    zpool = "tank";
}
```
Moreover, the Path variable in the Export block must be set to "/".
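As an illustration, a matching Export block might look like the following sketch. Only Path = "/" comes from this document; the Export_Id value and the exact set of supported options are assumptions that depend on your NFS-Ganesha version:

```
EXPORT
{
    # Unique identifier for this export (placeholder value)
    Export_Id = 1;

    # Must be "/" when using the ZFS FSAL
    Path = "/";
}
```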