diff --git a/.buildinfo b/.buildinfo new file mode 100644 index 000000000..1e87f69ad --- /dev/null +++ b/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: 2d51a8f7f9fbdf6800788999f711b3db +tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/404.html b/404.html new file mode 100644 index 000000000..cff9e6429 --- /dev/null +++ b/404.html @@ -0,0 +1,116 @@ + + + + + + + 404 Page not found. — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

404 Page not found.

+

Please use the left menu or the search box to find the page you are interested in.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/Checksums.html b/Basic Concepts/Checksums.html new file mode 100644 index 000000000..49ee06a71 --- /dev/null +++ b/Basic Concepts/Checksums.html @@ -0,0 +1,324 @@ + + + + + + + Checksums and Their Use in ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Checksums and Their Use in ZFS

+

End-to-end checksums are a key feature of ZFS and an important +differentiator between ZFS and other RAID implementations and filesystems. +Advantages of end-to-end checksums include:

+
    +
  • data corruption is detected upon reading from media

  • +
  • blocks that are detected as corrupt are automatically repaired if +possible, by using the RAID protection in suitably configured pools, +or redundant copies (see the zfs copies property)

  • +
  • periodic scrubs can check data to detect and repair latent media +degradation (bit rot) and corruption from other sources

  • +
  • checksums on ZFS replication streams, zfs send and +zfs receive, ensure the data received is not corrupted by +intervening storage or transport mechanisms

  • +
+
+

Checksum Algorithms

+

The checksum algorithms in ZFS can be changed for datasets (filesystems +or volumes). The checksum algorithm used for each block is stored in the +block pointer (metadata). The block checksum is calculated when the +block is written, so changing the algorithm only affects writes +occurring after the change.

+

The checksum algorithm for a dataset can be changed by setting the +checksum property:

+
zfs set checksum=sha256 pool_name/dataset_name
+
+
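To confirm the change took effect, the property can be read back together with its source; a minimal check reusing the placeholder names from the example above:
+
zfs get checksum pool_name/dataset_name
+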
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Checksum

Ok for dedup +and nopwrite?

Compatible with +other ZFS +implementations?

Notes

on

see notes

yes

on is a +shorthand for +fletcher4 +for non-deduped +datasets and +sha256 for +deduped +datasets

off

no

yes

Do not use +off

fletcher2

no

yes

Deprecated +implementation +of Fletcher +checksum, use +fletcher4 +instead

fletcher4

no

yes

Fletcher +algorithm, also +used for +zfs send +streams

sha256

yes

yes

Default for +deduped +datasets

noparity

no

yes

Do not use +noparity

sha512

yes

requires pool +feature +org.illumos:sha512

salted +sha512 +currently not +supported for +any filesystem +on the boot +pools

skein

yes

requires pool +feature +org.illumos:skein

salted +skein +currently not +supported for +any filesystem +on the boot +pools

edonr

see notes

requires pool +feature +org.illumos:edonr

salted +edonr +currently not +supported for +any filesystem +on the boot +pools

+

In an abundance of +caution, Edon-R requires +verification when used +with dedup, so it will +automatically use +verify.

+

blake3

yes

requires pool +feature +org.openzfs:blake3

salted +blake3 +currently not +supported for +any filesystem +on the boot +pools

+
+
+

Checksum Accelerators

+

ZFS has the ability to offload checksum operations to Intel +QuickAssist Technology (QAT) adapters.

+
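As a rough sketch, on Linux builds with QAT support the offload behaviour is controlled through module parameters; the parameter name below (zfs_qat_checksum_disable) is an assumption and should be verified against the module parameter list of your build:
+
cat /sys/module/zfs/parameters/zfs_qat_checksum_disable
+echo 1 > /sys/module/zfs/parameters/zfs_qat_checksum_disable   # disable QAT checksum offload
+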
+
+

Checksum Microbenchmarks

+

When the zfs.ko kernel module is loaded, some ZFS features run microbenchmarks +to determine the optimal algorithm for checksums. The results +of the microbenchmarks are observable in the /proc/spl/kstat/zfs +directory. The winning algorithm is reported as the “fastest” and +becomes the default. The default can be overridden by setting zfs module +parameters.

+ + + + + + + + + + + + + + + + + +

Checksum

Results Filename

zfs module parameter

Fletcher4

/proc/spl/kstat/zfs/fletcher_4_bench

zfs_fletcher_4_impl

all-other

/proc/spl/kstat/zfs/chksum_bench

zfs_blake3_impl, +zfs_sha256_impl, +zfs_sha512_impl

+
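For example, on Linux the benchmark results can be read directly and the chosen implementation pinned at runtime; a short sketch (the list of available implementations depends on the build and CPU features):
+
cat /proc/spl/kstat/zfs/fletcher_4_bench
+cat /proc/spl/kstat/zfs/chksum_bench
+echo fastest > /sys/module/zfs/parameters/zfs_fletcher_4_impl   # revert to automatic selection
+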
+
+

Disabling Checksums

+

While it may be tempting to disable checksums to improve CPU +performance, it is widely considered by the ZFS community to be an +extraordinarily bad idea. Don’t disable checksums.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/Feature Flags.html b/Basic Concepts/Feature Flags.html new file mode 100644 index 000000000..2aa79b1f5 --- /dev/null +++ b/Basic Concepts/Feature Flags.html @@ -0,0 +1,289 @@ + + + + + + + Feature Flags — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Feature Flags

+

ZFS on-disk formats were originally versioned with a single number, +which increased whenever the format changed. The numbered approach was +suitable when development of ZFS was driven by a single organisation.

+

For distributed development of OpenZFS, version numbering was +unsuitable. Any change to the number would have required agreement, +across all implementations, of each change to the on-disk format.

+

OpenZFS feature flags – an alternative to traditional version numbering +– allow a uniquely named pool property for each change to the on-disk +format. This approach supports:

+
    +
  • format changes that are independent

  • +
  • format changes that depend on each other.

  • +
+
+

Compatibility

+

Where all features that are used by a pool are supported by multiple +implementations of OpenZFS, the on-disk format is portable across those +implementations.

+
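To judge portability in practice, the feature flags in use on an existing pool and the features supported by the running OpenZFS build can be listed; a minimal sketch with a hypothetical pool named tank:
+
zpool get all tank | grep feature@
+zpool upgrade -v
+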

Features that are exclusive when enabled should be periodically ported +to all distributions.

+
+
+

Reference materials

+

ZFS Feature Flags +(Christopher Siden, 2012-01, in the Internet +Archive Wayback Machine) in particular: “… Legacy version numbers still +exist for pool versions 1-28 …”.

+

zpool-features(7) man page - OpenZFS

+

zpool-features (5) – illumos

+
+
+

Feature flags implementation per OS

+
+ZFS Feature Matrix + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature FlagRead-Only
Compatible
OpenZFS (Linux, FreeBSD 13+)FreeBSD pre OpenZFSIllumosJoyentNetBSDNexentaOmniOS CEOpenZFS on OS X
0.6.5.110.7.130.8.62.0.72.1.152.2.3master12.1.012.2.0mastermaster9.3main4.0.5-FPmasterr151046r151048master2.2.02.2.22.2.3rc4main
org.zfsonlinux:allocation_classesyesnonoyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesyes
com.delphix:async_destroyyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
org.openzfs:blake3nonononononoyesyesnononononononononononoyesyesyesyes
com.fudosecurity:block_cloningyesnononononoyesyesnononononononononononoyesyesyesyes
com.datto:bookmark_v2nononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.delphix:bookmark_writtennonononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:bookmarksyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.nexenta:class_of_storageyesnononononononononononononoyesyesnonononononono
org.openzfs:device_rebuildyesnononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:device_removalnononoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesyes
org.openzfs:draidnononononoyesyesyesnononononononononononoyesyesyesyes
org.illumos:edonrnoyes1yes1yes1yes1yes1yes1yesnonoyesyesnononoyesyesyesyesyesyesyesyes
com.delphix:embedded_datanoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
com.delphix:empty_bpobjyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:enabled_txgyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.datto:encryptionnononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.delphix:extensible_datasetnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.joyent:filesystem_limitsyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:head_errlognonononononoyesyesnononononononononononoyesyesyesyes
com.delphix:hole_birthnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
org.open-zfs:large_blocksnoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
org.zfsonlinux:large_dnodenonoyesyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesyes
com.delphix:livelistyesnononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:log_spacemapyesnononoyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
org.illumos:lz4_compressnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.nexenta:meta_devicesyesnononononononononononononoyesyesnonononononono
com.joyent:multi_vdev_crash_dumpnonoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:obsolete_countsyesnonoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesyes
org.zfsonlinux:project_quotayesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
org.openzfs:raidz_expansionnononononononoyesnonononononononononononoyesyesyes
com.delphix:redacted_datasetsnonononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:redaction_bookmarksnonononoyesyesyesyesnononononononononononoyesyesyesyes
com.delphix:redaction_list_spillnononononononoyesnononononononononononoyesyesyesyes
com.datto:resilver_deferyesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
org.illumos:sha512nonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
org.illumos:skeinnonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesyes
com.delphix:spacemap_histogramyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyes
com.delphix:spacemap_v2yesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesyes
org.zfsonlinux:userobj_accountingyesnoyesyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.nexenta:vdev_propertiesyesnononononononononononononoyesyesnonononononono
com.klarasystems:vdev_zaps_v2nonononononoyesyesnononononononononononoyesyesyesyes
com.nexenta:wbcnononononononononononononononoyesnonononononono
org.openzfs:zilsaxattryesnononononoyesyesnonoyesyesnonononoyesyesyesyesyesyesyes
com.delphix:zpool_checkpointyesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesyes
org.freebsd:zstd_compressnonononoyesyesyesyesnononononononononononoyesyesyesyes
+ +

Table is generated by parsing man pages for feature flags, and is entirely dependent on good, accurate documentation.
Last updated on 2024-03-28T09:44:55.376137Z using compatibility_matrix.py.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/RAIDZ.html b/Basic Concepts/RAIDZ.html new file mode 100644 index 000000000..8663b9060 --- /dev/null +++ b/Basic Concepts/RAIDZ.html @@ -0,0 +1,200 @@ + + + + + + + RAIDZ — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

RAIDZ

+

tl;dr: RAIDZ is effective for large block sizes and sequential workloads.

+
+

Introduction

+

RAIDZ is a variation on RAID-5 that allows for better distribution of parity +and eliminates the RAID-5 “write hole” (in which data and parity become +inconsistent after a power loss). +Data and parity are striped across all disks within a raidz group.

+

A raidz group can have single, double, or triple parity, meaning that the raidz +group can sustain one, two, or three failures, respectively, without losing any +data. The raidz1 vdev type specifies a single-parity raidz group; the raidz2 +vdev type specifies a double-parity raidz group; and the raidz3 vdev type +specifies a triple-parity raidz group. The raidz vdev type is an alias for +raidz1.

+

A raidz group of N disks of size X with P parity disks can hold +approximately (N-P)*X bytes and can withstand P devices failing without +losing data. The minimum number of devices in a raidz group is one more +than the number of parity disks. The recommended number is between 3 and 9 +to help increase performance.

+
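As an illustration, a raidz group is created by naming the vdev type followed by its member disks; the pool and device names below are hypothetical, and the two commands are alternatives rather than a sequence:
+
zpool create tank raidz1 sda sdb sdc                 # single parity, survives one failure
+zpool create tank raidz2 sda sdb sdc sdd sde         # double parity, survives two failures
+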
+
+

Space efficiency

+

Actual used space for a block in RAIDZ is based on several points:

+
    +
  • the minimal write size is the disk sector size (it can be set via the ashift vdev parameter)

  • +
  • stripe width in RAIDZ is dynamic: a stripe contains at least one part of a data block, and at most +as many parts as the number of disks minus the parity count

  • +
  • one block of data of recordsize size is +split into equal sector-size parts +and written across the stripes of the RAIDZ vdev

  • +
  • each stripe of data will hold a part of the block

  • +
  • in addition to the data, one, two, or three blocks of parity are written, +one per parity disk; so, for a raidz2 of 5 disks there will be 3 blocks of data and +2 blocks of parity

  • +
+

Due to these inputs, if recordsize is less than or equal to the sector size, +then RAIDZ’s parity overhead will be effectively equal to that of a mirror with the same redundancy. +For example, for a raidz1 of 3 disks with ashift=12 and recordsize=4K +we will allocate on disk:

+
    +
  • one 4K block of data

  • +
  • one 4K parity block

  • +
+

and the usable space ratio will be 50%, the same as with a double (two-way) mirror.

+

Another example, for ashift=12 and recordsize=128K on a raidz1 of 3 disks:

+
    +
  • total stripe width is 3

  • +
  • one stripe can have up to 2 data parts of 4K size because of the 1 parity block

  • +
  • we will have 128K/8k = 16 stripes with 8K of data and 4K of parity each

  • +
  • 16 stripes of 12K each means we write 192K to store 128K

  • +
+

so the usable space ratio in this case will be 66%.

+
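The two inputs that drive these calculations can be checked on a live system; a minimal sketch with hypothetical pool and dataset names:
+
zpool get ashift tank
+zfs get recordsize tank/dataset
+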

The more disks a RAIDZ vdev has, the wider the stripe and the greater the space +efficiency.

+

You can find actual parity cost per RAIDZ size here:

+

(source)

+
+
+

Performance considerations

+
+

Write

+

Because of the full stripe width, a single block write writes a part of the stripe to each disk. +As a result, in the worst case one RAIDZ vdev has the write IOPS of its single slowest disk.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/Troubleshooting.html b/Basic Concepts/Troubleshooting.html new file mode 100644 index 000000000..834f4402c --- /dev/null +++ b/Basic Concepts/Troubleshooting.html @@ -0,0 +1,226 @@ + + + + + + + Troubleshooting — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Troubleshooting

+
+

Todo

+

This page is a draft.

+
+

This page contains tips for troubleshooting ZFS on Linux and what info +developers might want for bug triage.

+ +
+
+

About Log Files

+

Log files can be very useful for troubleshooting. In some cases, +interesting information is stored in multiple log files that are +correlated to system events.

+

Pro tip: logging infrastructure tools like elasticsearch, fluentd, +influxdb, or splunk can simplify log analysis and event correlation.

+
+

Generic Kernel Log

+

Typically, Linux kernel log messages are available from dmesg -T, +/var/log/syslog, or wherever kernel log messages are sent (e.g. by +rsyslogd).

+
+
+

ZFS Kernel Module Debug Messages

+

The ZFS kernel modules use an internal log buffer for detailed logging +information. This log information is available in the pseudo file +/proc/spl/kstat/zfs/dbgmsg for ZFS builds where ZFS module parameter +zfs_dbgmsg_enable = +1

+
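For example, the debug buffer can be enabled and read at runtime on Linux (the parameter is off by default in release builds):
+
echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
+cat /proc/spl/kstat/zfs/dbgmsg
+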
+
+
+
+

Unkillable Process

+

Symptom: a zfs or zpool command appears hung, does not return, and +is not killable

+

Likely cause: kernel thread hung or panic

+

Log files of interest: Generic Kernel Log, +ZFS Kernel Module Debug Messages

+

Important information: if a kernel thread is stuck, then a backtrace of +the stuck thread may appear in the logs. In some cases, the stuck thread is +not logged until the deadman timer expires. See also debug +tunables

+
+
+
+

ZFS Events

+

ZFS uses an event-based messaging interface for communication of +important events to other consumers running on the system. The ZFS Event +Daemon (zed) is a userland daemon that listens for these events and +processes them. zed is extensible so you can write shell scripts or +other programs that subscribe to events and take action. For example, +the script usually installed at /etc/zfs/zed.d/all-syslog.sh writes +a formatted event message to syslog. See the man page for zed(8) +for more information.

+

A history of events is also available via the zpool events command. +This history begins at ZFS kernel module load and includes events from +any pool. These events are stored in RAM and limited in count to a value +determined by the kernel tunable +zfs_event_len_max. +zed has an internal throttling mechanism to prevent overconsumption +of system resources processing ZFS events.

+

More detailed information about events is observable using +zpool events -v. The contents of the verbose events are subject to +change, based on the event and information available at the time of the +event.

+
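A few common invocations (output varies with pool activity):
+
zpool events            # summary of recent events
+zpool events -v         # verbose output with per-event name/value pairs
+zpool events -f         # follow mode, print new events as they occur
+zpool events -c         # clear the in-memory event history
+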

Each event has a class identifier used for filtering event types. +Commonly seen events are those related to pool management with class +sysevent.fs.zfs.* including import, export, configuration updates, +and zpool history updates.

+

Events related to errors are reported with class ereport.*. These can +be invaluable for troubleshooting. Some faults can cause multiple +ereports as various layers of the software deal with the fault. For +example, on a simple pool without parity protection, a faulty disk could +cause an ereport.io during a read from the disk that results in an +ereport.fs.zfs.checksum at the pool level. These events are also +reflected by the error counters observed in zpool status. If you see +checksum or read/write errors in zpool status, then there should be +one or more corresponding ereports in the zpool events output.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/dRAID Howto.html b/Basic Concepts/dRAID Howto.html new file mode 100644 index 000000000..b459b0d59 --- /dev/null +++ b/Basic Concepts/dRAID Howto.html @@ -0,0 +1,351 @@ + + + + + + + dRAID — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dRAID

+
+

Note

+

This page describes functionality which was added for the +OpenZFS 2.1.0 release; it is not in the OpenZFS 2.0.0 release.

+
+
+

Introduction

+

dRAID is a variant of raidz that provides integrated distributed hot +spares which allow for faster resilvering while retaining the benefits +of raidz. A dRAID vdev is constructed from multiple internal raidz +groups, each with D data devices and P parity devices. These groups +are distributed over all of the children in order to fully utilize the +available disk performance. This is known as parity declustering and +it has been an active area of research. The image below is simplified, +but it helps illustrate this key difference between dRAID and raidz.

+

draid1

+

Additionally, a dRAID vdev must shuffle its child vdevs in such a way +that regardless of which drive has failed, the rebuild IO (both read +and write) will distribute evenly among all surviving drives. This +is accomplished by using carefully chosen precomputed permutation +maps. This has the advantage of both keeping pool creation fast and +making it impossible for the mapping to be damaged or lost.

+

Another way dRAID differs from raidz is that it uses a fixed stripe +width (padding as necessary with zeros). This allows a dRAID vdev to +be sequentially resilvered; however, the fixed stripe width significantly +affects both usable capacity and IOPS. For example, with the default +D=8 and 4k disk sectors the minimum allocation size is 32k. If using +compression, this relatively large allocation size can reduce the +effective compression ratio. When using ZFS volumes and dRAID the +default volblocksize property is increased to account for the allocation +size. If a dRAID pool will hold a significant amount of small blocks, +it is recommended to also add a mirrored special vdev to store those +blocks.

+
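A sketch of adding such a mirrored special vdev, using hypothetical device names; the special_small_blocks threshold shown is only an illustration, and a special vdev generally cannot be removed again from a pool with raidz or dRAID top-level vdevs, so plan its size up front:
+
# zpool add tank special mirror sdx sdy
+# zfs set special_small_blocks=32K tank
+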

In regard to IOPS, performance is similar to raidz, since for any +read all D data disks must be accessed. Delivered random IOPS can be +reasonably approximated as floor((N-S)/(D+P))*<single-drive-IOPS>.

+

In summary dRAID can provide the same level of redundancy and +performance as raidz, while also providing a fast integrated distributed +spare.

+
+
+

Create a dRAID vdev

+

A dRAID vdev is created like any other by using the zpool create +command and enumerating the disks which should be used.

+
# zpool create <pool> draid[1,2,3] <vdevs...>
+
+
+

Like raidz, the parity level is specified immediately after the draid +vdev type. However, unlike raidz, additional colon-separated options can be +specified. The most important of these is the :<spares>s option, which +controls the number of distributed hot spares to create. By default, no +spares are created. The :<data>d option can be specified to set the +number of data devices to use in each RAID stripe (D+P). When unspecified, +reasonable defaults are chosen.

+
# zpool create <pool> draid[<parity>][:<data>d][:<children>c][:<spares>s] <vdevs...>
+
+
+
    +
  • parity - The parity level (1-3). Defaults to one.

  • +
  • data - The number of data devices per redundancy group. In general +a smaller value of D will increase IOPS, improve the compression ratio, +and speed up resilvering at the expense of total usable capacity. +Defaults to 8, unless N-P-S is less than 8.

  • +
  • children - The expected number of children. Useful as a cross-check +when listing a large number of devices. An error is returned when the +provided number of children differs.

  • +
  • spares - The number of distributed hot spares. Defaults to zero.

  • +
+

For example, to create an 11 disk dRAID pool with 4+1 redundancy and a +single distributed spare the command would be:

+
# zpool create tank draid:4d:1s:11c /dev/sd[a-k]
+# zpool status tank
+
+  pool: tank
+ state: ONLINE
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        tank                  ONLINE       0     0     0
+          draid1:4d:11c:1s-0  ONLINE       0     0     0
+            sda               ONLINE       0     0     0
+            sdb               ONLINE       0     0     0
+            sdc               ONLINE       0     0     0
+            sdd               ONLINE       0     0     0
+            sde               ONLINE       0     0     0
+            sdf               ONLINE       0     0     0
+            sdg               ONLINE       0     0     0
+            sdh               ONLINE       0     0     0
+            sdi               ONLINE       0     0     0
+            sdj               ONLINE       0     0     0
+            sdk               ONLINE       0     0     0
+        spares
+          draid1-0-0          AVAIL
+
+
+

Note that the dRAID vdev name, draid1:4d:11c:1s, fully describes the +configuration, and all of the disks which are part of the dRAID are listed. +Furthermore, the logical distributed hot spare is shown as an available +spare disk.

+
+
+

Rebuilding to a Distributed Spare

+

One of the major advantages of dRAID is that it supports both sequential +and traditional healing resilvers. When performing a sequential resilver +to a distributed hot spare the performance scales with the number of disks +divided by the stripe width (D+P). This can greatly reduce resilver times +and restore full redundancy in a fraction of the usual time. For example, +the following graph shows the observed sequential resilver time in hours +for a 90 HDD based dRAID filled to 90% capacity.

+

draid-resilver

+

When using dRAID and a distributed spare, the process for handling a +failed disk is almost identical to raidz with a traditional hot spare. +When a disk failure is detected the ZFS Event Daemon (ZED) will start +rebuilding to a spare if one is available. The only difference is that +for dRAID a sequential resilver is started, while a healing resilver must +be used for raidz.

+
# echo offline >/sys/block/sdg/device/state
+# zpool replace -s tank sdg draid1-0-0
+# zpool status
+
+  pool: tank
+ state: DEGRADED
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver (draid1:4d:11c:1s-0) in progress since Tue Nov 24 14:34:25 2020
+        3.51T scanned at 13.4G/s, 1.59T issued 6.07G/s, 6.13T total
+        326G resilvered, 57.17% done, 00:03:21 to go
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        tank                  DEGRADED     0     0     0
+          draid1:4d:11c:1s-0  DEGRADED     0     0     0
+            sda               ONLINE       0     0     0  (resilvering)
+            sdb               ONLINE       0     0     0  (resilvering)
+            sdc               ONLINE       0     0     0  (resilvering)
+            sdd               ONLINE       0     0     0  (resilvering)
+            sde               ONLINE       0     0     0  (resilvering)
+            sdf               ONLINE       0     0     0  (resilvering)
+            spare-6           DEGRADED     0     0     0
+              sdg             UNAVAIL      0     0     0
+              draid1-0-0      ONLINE       0     0     0  (resilvering)
+            sdh               ONLINE       0     0     0  (resilvering)
+            sdi               ONLINE       0     0     0  (resilvering)
+            sdj               ONLINE       0     0     0  (resilvering)
+            sdk               ONLINE       0     0     0  (resilvering)
+        spares
+          draid1-0-0          INUSE     currently in use
+
+
+

While both types of resilvering achieve the same goal it’s worth taking +a moment to summarize the key differences.

+
    +
  • A traditional healing resilver scans the entire block tree. This +means the checksum for each block is available while it’s being +repaired and can be immediately verified. The downside is this +creates a random read workload which is not ideal for performance.

  • +
  • A sequential resilver instead scans the space maps in order to +determine what space is allocated and what must be repaired. +This rebuild process is not limited to block boundaries and can +sequentially read from the disks and make repairs using larger +I/Os. The price to pay for this performance improvement is that +the block checksums cannot be verified while resilvering. Therefore, +a scrub is started to verify the checksums after the sequential +resilver completes.

  • +
+

For a more in-depth explanation of the differences between sequential +and healing resilvering check out these sequential resilver slides +which were presented at the OpenZFS Developer Summit.

+
+
+

Rebalancing

+

Distributed spare space can be made available again by simply replacing +any failed drive with a new drive. This process is called rebalancing +and is essentially a resilver. When performing rebalancing a healing +resilver is recommended since the pool is no longer degraded. This +ensures all checksums are verified when rebuilding to the new disk +and eliminates the need to perform a subsequent scrub of the pool.

+
# zpool replace tank sdg sdl
+# zpool status
+
+  pool: tank
+ state: DEGRADED
+status: One or more devices is currently being resilvered.  The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Tue Nov 24 14:45:16 2020
+        6.13T scanned at 7.82G/s, 6.10T issued at 7.78G/s, 6.13T total
+        565G resilvered, 99.44% done, 00:00:04 to go
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        tank                  DEGRADED     0     0     0
+          draid1:4d:11c:1s-0  DEGRADED     0     0     0
+            sda               ONLINE       0     0     0  (resilvering)
+            sdb               ONLINE       0     0     0  (resilvering)
+            sdc               ONLINE       0     0     0  (resilvering)
+            sdd               ONLINE       0     0     0  (resilvering)
+            sde               ONLINE       0     0     0  (resilvering)
+            sdf               ONLINE       0     0     0  (resilvering)
+            spare-6           DEGRADED     0     0     0
+              replacing-0     DEGRADED     0     0     0
+                sdg           UNAVAIL      0     0     0
+                sdl           ONLINE       0     0     0  (resilvering)
+              draid1-0-0      ONLINE       0     0     0  (resilvering)
+            sdh               ONLINE       0     0     0  (resilvering)
+            sdi               ONLINE       0     0     0  (resilvering)
+            sdj               ONLINE       0     0     0  (resilvering)
+            sdk               ONLINE       0     0     0  (resilvering)
+        spares
+          draid1-0-0          INUSE     currently in use
+
+
+

After the resilvering completes the distributed hot spare is once again +available for use and the pool has been restored to its normal healthy +state.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Basic Concepts/index.html b/Basic Concepts/index.html new file mode 100644 index 000000000..6e7749a4b --- /dev/null +++ b/Basic Concepts/index.html @@ -0,0 +1,164 @@ + + + + + + + Basic Concepts — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Buildbot Options.html b/Developer Resources/Buildbot Options.html new file mode 100644 index 000000000..a85702ac1 --- /dev/null +++ b/Developer Resources/Buildbot Options.html @@ -0,0 +1,383 @@ + + + + + + + Buildbot Options — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Buildbot Options

+

There are a number of ways to control the ZFS Buildbot at a commit +level. This page provides a summary of various options that the ZFS +Buildbot supports and how it impacts testing. More detailed information +regarding its implementation can be found at the ZFS Buildbot Github +page.

+
+

Choosing Builders

+

By default, all commits in your ZFS pull request are compiled by the +BUILD builders. Additionally, the top commit of your ZFS pull request is +tested by TEST builders. However, there is the option to override which +types of builder should be used on a per-commit basis. In this case, you +can add +Requires-builders: <none|all|style|build|arch|distro|test|perf|coverage|unstable> +to your commit message. A comma-separated list of options can be +provided. Supported options are:

+
    +
  • all: This commit should be built by all available builders

  • +
  • none: This commit should not be built by any builders

  • +
  • style: This commit should be built by STYLE builders

  • +
  • build: This commit should be built by all BUILD builders

  • +
  • arch: This commit should be built by BUILD builders tagged as +‘Architectures’

  • +
  • distro: This commit should be built by BUILD builders tagged as +‘Distributions’

  • +
  • test: This commit should be built and tested by the TEST builders +(excluding the Coverage TEST builders)

  • +
  • perf: This commit should be built and tested by the PERF builders

  • +
  • coverage : This commit should be built and tested by the Coverage +TEST builders

  • +
  • unstable : This commit should be built and tested by the Unstable +TEST builders (currently only the Fedora Rawhide TEST builder)

  • +
+

A couple of examples on how to use Requires-builders: in commit +messages can be found below.

+
+

Preventing a commit from being built and tested.

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-builders: none
+
+
+
+
+

Submitting a commit to STYLE and TEST builders only.

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-builders: style test
+
+
+
+
+
+

Requiring SPL Versions

+

Currently, the ZFS Buildbot attempts to choose the correct SPL branch to +build based on a pull request’s base branch. In the cases where a +specific SPL version needs to be built, the ZFS buildbot supports +specifying an SPL version for pull request testing. By opening a pull +request against ZFS and adding Requires-spl: in a commit message, +you can instruct the buildbot to use a specific SPL version. Below are +examples of commit messages that specify the SPL version.

+
+

Build SPL from a specific pull request

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-spl: refs/pull/123/head
+
+
+
+
+

Build SPL branch spl-branch-name from zfsonlinux/spl repository

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-spl: spl-branch-name
+
+
+
+
+
+

Requiring Kernel Version

+

Currently, Kernel.org builders will clone and build the master branch of +Linux. In cases where a specific version of the Linux kernel needs to be +built, the ZFS buildbot supports specifying the Linux kernel to be built +via commit message. By opening a pull request against ZFS and adding +Requires-kernel: in a commit message, you can instruct the buildbot +to use a specific Linux kernel. Below is an example commit message that +specifies a specific Linux kernel tag.

+
+

Build Linux Kernel Version 4.14

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Requires-kernel: v4.14
+
+
+
+
+
+

Build Steps Overrides

+

Each builder will execute or skip build steps based on its default +preferences. In some scenarios, it might be possible to skip various +build steps. The ZFS buildbot supports overriding the defaults of all +builders in a commit message. The list of available overrides are:

+
    +
  • Build-linux: <Yes|No>: All builders should build Linux for this +commit

  • +
  • Build-lustre: <Yes|No>: All builders should build Lustre for this +commit

  • +
  • Build-spl: <Yes|No>: All builders should build the SPL for this +commit

  • +
  • Build-zfs: <Yes|No>: All builders should build ZFS for this +commit

  • +
  • Built-in: <Yes|No>: All Linux builds should build in SPL and ZFS

  • +
  • Check-lint: <Yes|No>: All builders should perform lint checks for +this commit

  • +
  • Configure-lustre: <options>: Provide <options> as configure +flags when building Lustre

  • +
  • Configure-spl: <options>: Provide <options> as configure +flags when building the SPL

  • +
  • Configure-zfs: <options>: Provide <options> as configure +flags when building ZFS

  • +
+

A couple of examples on how to use overrides in commit messages can be +found below.

+
+

Skip building the SPL and build Lustre without ldiskfs

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Build-lustre: Yes
+Configure-lustre: --disable-ldiskfs
+Build-spl: No
+
+
+
+
+

Build ZFS Only

+
This is a commit message
+
+This text is part of the commit message body.
+
+Signed-off-by: Contributor <contributor@email.com>
+Build-lustre: No
+Build-spl: No
+
+
+
+
+
+

Configuring Tests with the TEST File

+

At the top level of the ZFS source tree, there is the TEST +file which +contains variables that control if and how a specific test should run. +Below is a list of each variable and a brief description of what each +variable controls.

+
    +
  • TEST_PREPARE_WATCHDOG - Enables the Linux kernel watchdog

  • +
  • TEST_PREPARE_SHARES - Start NFS and Samba servers

  • +
  • TEST_SPLAT_SKIP - Determines if splat testing is skipped

  • +
  • TEST_SPLAT_OPTIONS - Command line options to provide to splat

  • +
  • TEST_ZTEST_SKIP - Determines if ztest testing is skipped

  • +
  • TEST_ZTEST_TIMEOUT - The length of time ztest should run

  • +
  • TEST_ZTEST_DIR - Directory where ztest will create vdevs

  • +
  • TEST_ZTEST_OPTIONS - Options to pass to ztest

  • +
  • TEST_ZTEST_CORE_DIR - Directory for ztest to store core dumps

  • +
  • TEST_ZIMPORT_SKIP - Determines if zimport testing is skipped

  • +
  • TEST_ZIMPORT_DIR - Directory used during zimport

  • +
  • TEST_ZIMPORT_VERSIONS - Source versions to test

  • +
  • TEST_ZIMPORT_POOLS - Names of the pools for zimport to use +for testing

  • +
  • TEST_ZIMPORT_OPTIONS - Command line options to provide to +zimport

  • +
  • TEST_XFSTESTS_SKIP - Determines if xfstest testing is skipped

  • +
  • TEST_XFSTESTS_URL - URL to download xfstest from

  • +
  • TEST_XFSTESTS_VER - Name of the tarball to download from +TEST_XFSTESTS_URL

  • +
  • TEST_XFSTESTS_POOL - Name of pool to create and used by +xfstest

  • +
  • TEST_XFSTESTS_FS - Name of dataset for use by xfstest

  • +
  • TEST_XFSTESTS_VDEV - Name of the vdev used by xfstest

  • +
  • TEST_XFSTESTS_OPTIONS - Command line options to provide to +xfstest

  • +
  • TEST_ZFSTESTS_SKIP - Determines if zfs-tests testing is +skipped

  • +
  • TEST_ZFSTESTS_DIR - Directory to store files and loopback devices

  • +
  • TEST_ZFSTESTS_DISKS - Space delimited list of disks that +zfs-tests is allowed to use

  • +
  • TEST_ZFSTESTS_DISKSIZE - File size of file based vdevs used by +zfs-tests

  • +
  • TEST_ZFSTESTS_ITERS - Number of times test-runner should +execute its set of tests

  • +
  • TEST_ZFSTESTS_OPTIONS - Options to provide zfs-tests

  • +
  • TEST_ZFSTESTS_RUNFILE - The runfile to use when running +zfs-tests

  • +
  • TEST_ZFSTESTS_TAGS - List of tags to provide to test-runner

  • +
  • TEST_ZFSSTRESS_SKIP - Determines if zfsstress testing is +skipped

  • +
  • TEST_ZFSSTRESS_URL - URL to download zfsstress from

  • +
  • TEST_ZFSSTRESS_VER - Name of the tarball to download from +TEST_ZFSSTRESS_URL

  • +
  • TEST_ZFSSTRESS_RUNTIME - Duration to run runstress.sh

  • +
  • TEST_ZFSSTRESS_POOL - Name of pool to create and use for +zfsstress testing

  • +
  • TEST_ZFSSTRESS_FS - Name of dataset for use during zfsstress +tests

  • +
  • TEST_ZFSSTRESS_FSOPT - File system options to provide to +zfsstress

  • +
  • TEST_ZFSSTRESS_VDEV - Directory to store vdevs for use during +zfsstress tests

  • +
  • TEST_ZFSSTRESS_OPTIONS - Command line options to provide to +runstress.sh

  • +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Building ZFS.html b/Developer Resources/Building ZFS.html new file mode 100644 index 000000000..311b852e9 --- /dev/null +++ b/Developer Resources/Building ZFS.html @@ -0,0 +1,379 @@ + + + + + + + Building ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Building ZFS

+
+

GitHub Repositories

+

The official source for OpenZFS is maintained at GitHub by the +openzfs organization. The primary +git repository for the project is the zfs repository.

+

There are two main components in this repository:

+
    +
  • ZFS: The ZFS repository contains a copy of the upstream OpenZFS +code which has been adapted and extended for Linux and FreeBSD. The +vast majority of the core OpenZFS code is self-contained and can be +used without modification.

  • +
  • SPL: The SPL is a thin shim layer which is responsible for +implementing the fundamental interfaces required by OpenZFS. It’s +this layer which allows OpenZFS to be used across multiple +platforms. SPL used to be maintained in a separate repository, but +was merged into the zfs +repository in the 0.8 major release.

  • +
+
+
+

Installing Dependencies

+

The first thing you’ll need to do is prepare your environment by +installing a full development tool chain. In addition, development +headers for both the kernel and the following packages must be +available. It is important to note that if the development kernel +headers for the currently running kernel aren’t installed, the modules +won’t compile properly.

+

The following dependencies should be installed to build the latest ZFS +2.1 release.

+
    +
  • RHEL/CentOS 7:

  • +
+
sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel git ncompress libcurl-devel
+sudo yum install --enablerepo=epel python-packaging dkms
+
+
+
    +
  • RHEL/CentOS 8, Fedora:

  • +
+
sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel git ncompress libcurl-devel
+sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms
+
+
+
    +
  • Debian, Ubuntu:

  • +
+
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging git libcurl4-openssl-dev debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx parallel
+
+
+
    +
  • FreeBSD:

  • +
+
pkg install autoconf automake autotools git gmake python devel/py-sysctl sudo
+
+
+
+
+

Build Options

+

There are two options for building OpenZFS; the correct one largely +depends on your requirements.

+
    +
  • Packages: Often it can be useful to build custom packages from +git which can be installed on a system. This is the best way to +perform integration testing with systemd, dracut, and udev. The +downside to using packages is that it greatly increases the time required +to build, install, and test a change.

  • +
  • In-tree: Development can be done entirely in the SPL/ZFS source +tree. This speeds up development by allowing developers to rapidly +iterate on a patch. When working in-tree developers can leverage +incremental builds, load/unload kernel modules, execute utilities, +and verify all their changes with the ZFS Test Suite.

  • +
+

The remainder of this page focuses on the in-tree option which is +the recommended method of development for the majority of changes. See +the custom packages page for additional +information on building custom packages.

+
+
+

Developing In-Tree

+
+

Clone from GitHub

+

Start by cloning the ZFS repository from GitHub. The repository has a +master branch for development and a series of *-release +branches for tagged releases. After checking out the repository your +clone will default to the master branch. Tagged releases may be built +by checking out zfs-x.y.z tags with matching version numbers or +matching release branches.

+
git clone https://github.com/openzfs/zfs
+
+
+
+
+

Configure and Build

+

For developers working on a change, always create a new topic branch +based off of master. This will make it easy to open a pull request with +your change later. The master branch is kept stable with extensive +regression testing of every pull +request before and after it’s merged. Every effort is made to catch +defects as early as possible and to keep them out of the tree. +Developers should be comfortable frequently rebasing their work against +the latest master branch.

+

In this example we’ll use the master branch and walk through a stock +in-tree build. Start by checking out the desired branch then build +the ZFS and SPL source in the traditional autotools fashion.

+
cd ./zfs
+git checkout master
+sh autogen.sh
+./configure
+make -s -j$(nproc)
+
+
+
+
tip: --with-linux=PATH and --with-linux-obj=PATH can be +passed to configure to specify a kernel installed in a non-default +location.
+
tip: --enable-debug can be passed to configure to enable all ASSERTs and +additional correctness tests.
+
+

Optional Build packages

+
make rpm #Builds RPM packages for CentOS/Fedora
+make deb #Builds RPM converted DEB packages for Debian/Ubuntu
+make native-deb #Builds native DEB packages for Debian/Ubuntu
+
+
+
+
tip: Native Debian packages build with pre-configured paths for +Debian and Ubuntu. It’s best not to override the paths during +configure.
+
tip: For native Debian packages, the KVERS, KSRC and KOBJ +environment variables can be exported to specify a kernel installed +in a non-default location.
+
+
+

Note

+

Support for native Debian packaging will be available starting from +openzfs-2.2 release.

+
+
+
+

Install

+

You can run zfs-tests.sh without installing ZFS, see below. If you +have reason to install ZFS after building it, pay attention to how your +distribution handles kernel modules. On Ubuntu, for example, the modules +from this repository install in the extra kernel module path, which +is not in the standard depmod search path. Therefore, for the +duration of your testing, edit /etc/depmod.d/ubuntu.conf and add +extra to the beginning of the search path.

+
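As a sketch, the edited /etc/depmod.d/ubuntu.conf might look like the line below; the exact default contents differ between Ubuntu releases, so simply prepend extra to whatever search line is already present:
+
search extra updates ubuntu built-in
+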

You may then install using +sudo make install; sudo ldconfig; sudo depmod. You’d uninstall with +sudo make uninstall; sudo ldconfig; sudo depmod.

+
+
+

Running zloop.sh and zfs-tests.sh

+

If you wish to run the ZFS Test Suite (ZTS), then ksh and a few +additional utilities must be installed.

+
    +
  • RHEL/CentOS 7:

  • +
+
sudo yum install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf
+sudo yum install --enablerepo=epel dbench
+
+
+
    +
  • RHEL/CentOS 8, Fedora:

  • +
+
sudo dnf install --skip-broken ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf
+sudo dnf install --skip-broken --enablerepo=epel dbench
+
+
+
    +
  • Debian:

  • +
+
sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-perf selinux-utils quota
+
+
+
    +
  • Ubuntu:

  • +
+
sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota
+
+
+
    +
  • FreeBSD:

  • +
+
pkg install base64 bash checkbashisms fio hs-ShellCheck ksh93 pamtester devel/py-flake8 sudo
+
+
+

There are a few helper scripts provided in the top-level scripts +directory designed to aid developers working with in-tree builds.

+
    +
  • zfs-helpers.sh: Certain functionality (i.e. /dev/zvol/) depends on +the ZFS provided udev helper scripts being installed on the system. +This script can be used to create symlinks on the system from the +installation location to the in-tree helper. These links must be in +place to successfully run the ZFS Test Suite. The -i and -r +options can be used to install and remove the symlinks.

  • +
+
sudo ./scripts/zfs-helpers.sh -i
+
+
+
    +
  • zfs.sh: The freshly built kernel modules can be loaded using +zfs.sh. This script can later be used to unload the kernel +modules with the -u option.

  • +
+
sudo ./scripts/zfs.sh
+
+
+
    +
  • zloop.sh: A wrapper to run ztest repeatedly with randomized +arguments. The ztest command is a user space stress test designed to +detect correctness issues by concurrently running a random set of +test cases. If a crash is encountered, the ztest logs, any associated +vdev files, and core file (if one exists) are collected and moved to +the output directory for analysis.

  • +
+
sudo ./scripts/zloop.sh
+
+
+
    +
  • zfs-tests.sh: A wrapper which can be used to launch the ZFS Test +Suite. Three loopback devices are created on top of sparse files +located in /var/tmp/ and used for the regression test. Detailed +directions for the ZFS Test Suite can be found in the +README +located in the top-level tests directory.

  • +
+
./scripts/zfs-tests.sh -vx
+
+
+

tip: The delegate tests will be skipped unless group read +permission is set on the zfs directory and its parents.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Custom Packages.html b/Developer Resources/Custom Packages.html new file mode 100644 index 000000000..83911d864 --- /dev/null +++ b/Developer Resources/Custom Packages.html @@ -0,0 +1,361 @@ + + + + + + + Custom Packages — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Custom Packages

+

The following instructions assume you are building from an official +release tarball +(version 0.8.0 or newer) or directly from the git +repository. Most users should not +need to do this and should preferentially use the distribution packages. +As a general rule the distribution packages will be more tightly +integrated, widely tested, and better supported. However, if your +distribution of choice doesn’t provide packages, or you’re a developer +and want to roll your own, here’s how to do it.

+

The first thing to be aware of is that the build system is capable of +generating several different types of packages. Which type of package +you choose depends on what’s supported on your platform and exactly what +your needs are.

+
    +
  • DKMS packages contain only the source code and scripts for +rebuilding the kernel modules. When the DKMS package is installed +kernel modules will be built for all available kernels. Additionally, +when the kernel is upgraded new kernel modules will be automatically +built for that kernel. This is particularly convenient for desktop +systems which receive frequent kernel updates. The downside is that +because the DKMS packages build the kernel modules from source a full +development environment is required which may not be appropriate for +large deployments.

  • +
  • kmods packages are binary kernel modules which are compiled +against a specific version of the kernel. This means that if you +update the kernel you must compile and install a new kmod package. If +you don’t frequently update your kernel, or if you’re managing a +large number of systems, then kmod packages are a good choice.

  • +
  • kABI-tracking kmod Packages are similar to standard binary kmods +and may be used with Enterprise Linux distributions like Red Hat and +CentOS. These distributions provide a stable kABI (Kernel Application +Binary Interface) which allows the same binary modules to be used +with new versions of the distribution provided kernel.

  • +
+

By default the build system will generate user packages and both DKMS +and kmod style kernel packages if possible. The user packages can be +used with either set of kernel packages and do not need to be rebuilt +when the kernel is updated. You can also streamline the build process by +building only the DKMS or kmod packages as shown below.

+

Be aware that when building directly from a git repository you must +first run the autogen.sh script to create the configure script. This +will require installing the GNU autotools packages for your +distribution. To perform any of the builds, you must install all the +necessary development tools and headers for your distribution.

+

It is important to note that if the development kernel headers for the +currently running kernel aren’t installed, the modules won’t compile +properly.

+ +
+

RHEL, CentOS and Fedora

+

Make sure that the required packages are installed to build the latest +ZFS 2.1 release:

+
    +
  • RHEL/CentOS 7:

  • +
+
sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel ncompress
+sudo yum install --enablerepo=epel dkms python-packaging
+
+
+
    +
  • RHEL/CentOS 8, Fedora:

  • +
+
sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel ncompress
+sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms
+
+
+
    +
  • RHEL/CentOS 9:

  • +
+
sudo dnf config-manager --set-enabled crb
+sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel
+sudo dnf install --skip-broken --enablerepo=epel python3-packaging dkms
+
+
+

Get the source code.

+
+

DKMS

+

Building rpm-based DKMS and user packages can be done as follows:

+
$ cd zfs
+$ ./configure
+$ make -j1 rpm-utils rpm-dkms
+$ sudo yum localinstall *.$(uname -p).rpm *.noarch.rpm
+
+
+
+
+

kmod

+

The key thing to know when building a kmod package is that a specific +Linux kernel must be specified. At configure time the build system will +make an educated guess as to which kernel you want to build against. +However, if configure is unable to locate your kernel development +headers, or you want to build against a different kernel, you must +specify the exact path with the --with-linux and --with-linux-obj +options.

+
$ cd zfs
+$ ./configure
+$ make -j1 rpm-utils rpm-kmod
+$ sudo yum localinstall *.$(uname -p).rpm
+
+
+
+
+

kABI-tracking kmod

+

The process for building kABI-tracking kmods is almost identical to +building normal kmods. However, it will only produce binaries which can +be used by multiple kernels if the distribution supports a stable kABI. +In order to request a kABI-tracking package, the --with-spec=redhat +option must be passed to configure.

+

NOTE: This type of package is not available for Fedora.

+
$ cd zfs
+$ ./configure --with-spec=redhat
+$ make -j1 rpm-utils rpm-kmod
+$ sudo yum localinstall *.$(uname -p).rpm
+
+
+
+
+
+

Debian and Ubuntu

+

Make sure that the required packages are installed:

+
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx libpam0g-dev
+
+
+

Get the source code.

+
+

kmod

+

The key thing to know when building a kmod package is that a specific +Linux kernel must be specified. At configure time the build system will +make an educated guess as to which kernel you want to build against. +However, if configure is unable to locate your kernel development +headers, or you want to build against a different kernel, you must +specify the exact path with the --with-linux and --with-linux-obj +options.

+

To build RPM converted Debian packages:

+
$ cd zfs
+$ ./configure --enable-systemd
+$ make -j1 deb-utils deb-kmod
+$ sudo apt-get install --fix-missing ./*.deb
+
+
+

Starting from openzfs-2.2 release, native Debian packages can be built +as follows:

+
$ cd zfs
+$ ./configure
+$ make native-deb-utils native-deb-kmod
+$ rm ../openzfs-zfs-dkms_*.deb
+$ rm ../openzfs-zfs-dracut_*.deb  # deb-based systems usually use initramfs
+$ sudo apt-get install --fix-missing ../*.deb
+
+
+

Native Debian packages build with pre-configured paths for Debian and +Ubuntu. It’s best not to override the paths during configure. +KVERS, KSRC and KOBJ environment variables can be exported +to specify the kernel installed in non-default location.

+
+
+

DKMS

+

Building RPM converted deb-based DKMS and user packages can be done as +follows:

+
$ cd zfs
+$ ./configure --enable-systemd
+$ make -j1 deb-utils deb-dkms
+$ sudo apt-get install --fix-missing ./*.deb
+
+
+

Starting from openzfs-2.2 release, native deb-based DKMS and user +packages can be built as follows:

+
$ sudo apt-get install dh-dkms
+$ cd zfs
+$ ./configure
+$ make native-deb-utils
+$ rm ../openzfs-zfs-dracut_*.deb  # deb-based systems usually use initramfs
+$ sudo apt-get install --fix-missing ../*.deb
+
+
+
+
+
+

Get the Source Code

+
+

Released Tarball

+

The released tarball contains the latest fully tested and released +version of ZFS. This is the preferred source code location for use in +production systems. If you want to use the official released tarballs, +then use the following commands to fetch and prepare the source.

+
$ wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-x.y.z.tar.gz
+$ tar -xzf zfs-x.y.z.tar.gz
+
+
+
+
+

Git Master Branch

+

The Git master branch contains the latest version of the software, and +will probably contain fixes that, for some reason, weren’t included in +the released tarball. This is the preferred source code location for +developers who intend to modify ZFS. If you would like to use the git +version, you can clone it from GitHub and prepare the source like this.

+
$ git clone https://github.com/zfsonlinux/zfs.git
+$ cd zfs
+$ ./autogen.sh
+
+
+

Once the source has been prepared you’ll need to decide what kind of +packages you’re building and jump to the appropriate section above. Note +that not all package types are supported for all platforms.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/Git and GitHub for beginners.html b/Developer Resources/Git and GitHub for beginners.html new file mode 100644 index 000000000..57d302d93 --- /dev/null +++ b/Developer Resources/Git and GitHub for beginners.html @@ -0,0 +1,315 @@ + + + + + + + Git and GitHub for beginners (ZoL edition) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Git and GitHub for beginners (ZoL edition)

+

This is a very basic rundown of how to use Git and GitHub to make +changes.

+

Recommended reading: ZFS on Linux +CONTRIBUTING.md

+
+

First time setup

+

If you’ve never used Git before, you’ll need a little setup to start +things off.

+
git config --global user.name "My Name"
+git config --global user.email myemail@noreply.non
+
+
+
+
+

Cloning the initial repository

+

The easiest way to get started is to click the fork icon at the top of +the main repository page. From there you need to download a copy of the +forked repository to your computer:

+
git clone https://github.com/<your-account-name>/zfs.git
+
+
+

This sets the “origin” repository to your fork. This will come in handy +when creating pull requests. To make it easy to pull in changes from the “upstream” +repository as they are made, it is very useful to establish the +upstream repository as another remote (man git-remote):

+
cd zfs
+git remote add upstream https://github.com/zfsonlinux/zfs.git
+
+
+
+
+

Preparing and making changes

+

In order to make changes, it is recommended to create a branch; this lets +you work on several unrelated changes at once. It is also not +recommended to make changes directly to the master branch unless you own the +repository.

+
git checkout -b my-new-branch
+
+
+

From here you can make your changes and move on to the next step.

+

Recommended reading: C Style and Coding Standards for +SunOS, +ZFS on Linux Developer +Resources, +OpenZFS Developer +Resources

+
+
+

Testing your patches before pushing

+

Before committing and pushing, you may want to test your patches. There +are several tests you can run against your branch, such as style +checking and functional tests. All pull requests go through these tests +before being merged into the main repository; testing locally +takes the load off the build/test servers. This step is optional but +highly recommended. Note that the test suite should be run on a virtual +machine or a host that does not currently use ZFS. You may need to +install shellcheck and flake8 for the checkstyle target to run +correctly.

+
sh autogen.sh
+./configure
+make checkstyle
+
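
Package names vary by distribution; on Debian- or Ubuntu-based systems, something along the following lines is usually enough to install the style checkers (shown as a rough example rather than an exhaustive list):

+
sudo apt install shellcheck flake8
+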
+
+

Recommended reading: Building +ZFS, ZFS Test +Suite +README

+
+
+

Committing your changes to be pushed

+

When you are done making changes to your branch there are a few more +steps before you can make a pull request.

+
git commit --all --signoff
+
+
+

This command opens an editor and commits all modified and deleted tracked files in your +branch. Here you need to describe your change and add a few things:

+
# Please enter the commit message for your changes. Lines starting
+# with '#' will be ignored, and an empty message aborts the commit.
+# On branch my-new-branch
+# Changes to be committed:
+#   (use "git reset HEAD <file>..." to unstage)
+#
+#   modified:   hello.c
+#
+
+
+

The first thing we need to add is the commit message. This is what is +displayed in the git log, and should be a short description of the +change. By the style guidelines, it has to be less than 72 characters in +length.

+

Underneath the commit message you can add more descriptive text about +your commit. The lines in this section have to be less than 72 +characters.

+

When you are done, the commit should look like this:

+
Add hello command
+
+This is a test commit with a descriptive commit message.
+This message can be more than one line as shown here.
+
+Signed-off-by: My Name <myemail@noreply.non>
+Closes #9998
+Issue #9999
+# Please enter the commit message for your changes. Lines starting
+# with '#' will be ignored, and an empty message aborts the commit.
+# On branch my-new-branch
+# Changes to be committed:
+#   (use "git reset HEAD <file>..." to unstage)
+#
+#   modified:   hello.c
+#
+
+
+

You can also reference issues and pull requests if you are filing a pull +request for an existing issue as shown above. Save and exit the editor +when you are done.

+
+
+

Pushing and creating the pull request

+

Home stretch. You’ve made your change and made the commit. Now it’s time +to push it.

+
git push --set-upstream origin my-new-branch
+
+
+

This should ask you for your GitHub credentials and upload your changes +to your repository.

+

The last step is to either go to your repository or the upstream +repository on GitHub and you should see a button for making a new pull +request for your recently committed branch.

+
+
+

Correcting issues with your pull request

+

Sometimes things don’t always go as planned and you may need to update +your pull request with a correction to either your commit message, or +your changes. This can be accomplished by re-pushing your branch. If you +need to make code changes or git add a file, you can do those now, +along with the following:

+
git commit --amend
+git push --force
+
+
+

This will return you to the commit editor screen and push your changes +over the top of the old ones. Do note that this will restart any +build/test jobs currently running for your pull request, and excessive pushing can +cause delays in the processing of all pull requests.

+
+
+

Maintaining your repository

+

When you wish to make changes in the future you will want to have an +up-to-date copy of the upstream repository to make your changes on. Here +is how you keep updated:

+
git checkout master
+git pull upstream master
+git push origin master
+
+
+

This will make sure you are on the master branch of the repository, grab +the changes from upstream, then push them back to your repository.

+
+
+

Final words

+

This is a very basic introduction to Git and GitHub, but it should get you +on your way to contributing to many open source projects. Not all +projects have style requirements and some may have different processes +for getting changes committed, so please refer to their documentation to +see if you need to do anything different. One topic we have not touched +on is the git rebase command, which is a little too advanced for +this wiki article.

+

Additional resources: Github Help, +Atlassian Git Tutorials

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/OpenZFS Exceptions.html b/Developer Resources/OpenZFS Exceptions.html new file mode 100644 index 000000000..b2794f5aa --- /dev/null +++ b/Developer Resources/OpenZFS Exceptions.html @@ -0,0 +1,1426 @@ + + + + + + + OpenZFS Exceptions — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

OpenZFS Exceptions

+

Commit exceptions are used to explicitly reference a given Linux commit. +These exceptions are useful for a variety of reasons.

+

This page is used to generate the +OpenZFS Tracking +page.

+
+

Format:

+
    +
  • <openzfs issue>|-|<comment> - The OpenZFS commit isn’t applicable +to Linux, or the OpenZFS -> ZFS on Linux commit matching is unable to +associate the related commits due to lack of information (denoted by +a -).

  • +
  • <openzfs issue>|<commit>|<comment> - The fix was merged to Linux +prior to there being an OpenZFS issue.

  • +
  • <openzfs issue>|!|<comment> - The commit is applicable but not +applied for the reason described in the comment.

  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

OpenZFS issue id

status/ZFS commit

comment

11453

!

check_disk() on illumos +isn’t available on ZoL / +OpenZFS 2.0

11276

da68988

11052

2efea7c

11051

3b61ca3

10853

8dc2197

10844

61c3391

10842

d10b2f1

10841

944a372

10809

ee36c70

10808

2ef0f8c

10701

0091d66

10601

cc99f27

10573

48d3eb4

10572

edc1e71

10566

ab7615d

10554

bec1067

10500

03916905

10449

379ca9c

10406

da2feb4

10154

    +
  • +
+

Not applicable to Linux

10067

    +
  • +
+

The only ZFS change was to +zfs remap, which was +removed on Linux.

9884

    +
  • +
+

Not applicable to Linux

9851

    +
  • +
+

Not applicable to Linux

9691

d9b4bf0

9683

    +
  • +
+

Not applicable to Linux due +to devids not being used

9680

    +
  • +
+

Applied and rolled back in +OpenZFS, additional changes +needed.

9672

29445fe3

9647

a448a25

9626

59e6e7ca

9635

    +
  • +
+

Not applicable to Linux

9623

22448f08

9621

305bc4b3

9539

5228cf01

9512

b4555c77

9487

48fbb9dd

9466

272b5d73

9440

f664f1e

Illumos ticket 9440 never +landed in openzfs/openzfs, +but in ZoL / OpenZFS 2.0

9433

0873bb63

9421

64c1dcef

9237

    +
  • +
+

Introduced by 8567 which +was never applied to Linux

9194

    +
  • +
+

Not applicable; the ‘-o +ashift=value’ option is +provided on Linux

9077

    +
  • +
+

Not applicable to Linux

9027

4a5d7f82

9018

3ec34e55

8984

!

WIP to support NFSv4 ACLs

8969

    +
  • +
+

Not applicable to Linux

8942

650258d7

8941

390d679a

8862

3b9edd7

8858

    +
  • +
+

Not applicable to Linux

8856

    +
  • +
+

Not applicable to Linux due +to Encryption (b525630)

8809

!

Adding libfakekernel needs +to be done by refactoring +existing code.

8727

b525630

8713

871e0732

8661

1ce23dca

8648

f763c3d1

8602

a032ac4

8601

d99a015

Equivalent fix included in +initial commit

8590

935e2c2

8569

    +
  • +
+

This change isn’t relevant +for Linux.

8567

    +
  • +
+

An alternate fix was +applied for Linux.

8552

935e2c2

8521

ee6370a7

8502

!

Apply when porting OpenZFS +7955

9485

1258bd7

8477

92e43c1

8454

    +
  • +
+

An alternate fix was +applied for Linux.

8423

50c957f

8408

5f1346c

8379

    +
  • +
+

This change isn’t relevant +for Linux.

8376

    +
  • +
+

This change isn’t relevant +for Linux.

8311

!

Need to assess +applicability to Linux.

8304

    +
  • +
+

This change isn’t relevant +for Linux.

8300

44f09cd

8265

    +
  • +
+

The large_dnode feature has +been implemented for Linux.

8168

78d95ea

8138

44f09cd

The spelling fix to the zfs +man page came in with the +mdoc conversion.

8108

    +
  • +
+

An equivalent Linux +specific fix was made.

8068

a1d477c24c

merged with zfs device +evacuation/removal

8064

    +
  • +
+

This change isn’t relevant +for Linux.

8022

e55ebf6

8021

7657def

8013

    +
  • +
+

The change is illumos +specific and not applicable +for Linux.

7982

    +
  • +
+

The change is illumos +specific and not applicable +for Linux.

7970

c30e58c

7956

cda0317

7955

!

Need to assess +applicability to Linux. If +porting, apply 8502.

7869

df7eecc

7816

    +
  • +
+

The change is illumos +specific and not applicable +for Linux.

7803

    +
  • +
+

This functionality is +provided by +update_vdev_config_dev_strs() +on Linux.

7801

0eef1bd

Commit f25efb3 in +openzfs/master has a small +change for linting which is +being ported.

7779

    +
  • +
+

The change isn’t relevant, +zfs_ctldir.c was +rewritten for Linux.

7740

32d41fb

7739

582cc014

7730

e24e62a

7710

    +
  • +
+

None of the illumos build +system is used under Linux.

7602

44f09cd

7591

541a090

7586

c443487

7570

    +
  • +
+

Due to differences in the +block layer all discards +are handled asynchronously +under Linux. This +functionality could be +ported but it’s unclear to +what purpose.

7542

    +
  • +
+

The Linux libshare code +differs significantly from +the upstream OpenZFS code. +Since this change doesn’t +address a Linux specific +issue it doesn’t need to be +ported. The eventual plan +is to retire all of the +existing libshare code and +use the ZED to more +flexibly control filesystem +sharing.

7512

    +
  • +
+

None of the illumos build +system is used under Linux.

7497

    +
  • +
+

DTrace isn’t readily +available under Linux.

7446

!

Need to assess +applicability to Linux.

7430

68cbd56

7402

690fe64

7345

058ac9b

7278

    +
  • +
+

Dynamic ARC tuning is +handled slightly +differently under Linux and +this case is covered by +arc_tuning_update()

7238

    +
  • +
+

zvol_swap test already +disabled in ZoL

7194

d7958b4

7164

b1b85c87

7041

33c0819

7016

d3c2ae1

6914

    +
  • +
+

Under Linux the +arc_meta_limit can be tuned +with the +zfs_arc_meta_limit_percent +module option.

6875

!

WIP to support NFSv4 ACLs

6843

f5f087e

6841

4254acb

6781

15313c5

6765

!

WIP to support NFSv4 ACLs

6764

!

WIP to support NFSv4 ACLs

6763

!

WIP to support NFSv4 ACLs

6762

!

WIP to support NFSv4 ACLs

6648

6bb24f4

6578

6bb24f4

6577

6bb24f4

6575

6bb24f4

6568

6bb24f4

6528

6bb24f4

6494

    +
  • +
+

The vdev_disk.c and +vdev_file.c files have +been reworked extensively +for Linux. The proposed +changes are not needed.

6468

6bb24f4

6465

6bb24f4

6434

472e7c6

6421

ca0bf58

6418

131cc95

6391

ee06391

6390

85802aa

6388

0de7c55

6386

485c581

6385

f3ad9cd

6369

6bb24f4

6368

2024041

6346

058ac9b

6334

1a04bab

6290

017da6

6250

    +
  • +
+

Linux handles crash dumps +in a fundamentally +different way than Illumos. +The proposed changes are +not needed.

6249

6bb24f4

6248

6bb24f4

6220

    +
  • +
+

The b_thawed debug code was +unused under Linux and +removed.

6209

    +
  • +
+

The Linux user space mutex +implementation is based on +pthread primitives.

6095

f866a4ea

6091

c11f100

6037

a8bd6dc

5984

480f626

5966

6bb24f4

5961

22872ff

5882

83e9986

5815

    +
  • +
+

This patch could be adapted +if needed to use equivalent +Linux functionality.

5770

c3275b5

5769

dd26aa5

5768

    +
  • +
+

The change isn’t relevant, +zfs_ctldir.c was +rewritten for Linux.

5766

4dd1893

5693

0f7d2a4

5692

!

This functionality should +be ported in such a way +that it can be integrated +with filefrag(8).

5684

6bb24f4

5503

0f676dc

Proposed patch in 5503 +never upstreamed, +alternative fix deployed +with OpenZFS 7072

5502

f0ed6c7

Proposed patch in 5502 +never upstreamed, +alternative fix deployed +in ZoL with commit f0ed6c7

5410

0bf8501

5409

b23d543

5379

    +
  • +
+

This particular issue never +impacted Linux due to the +need for a modified +zfs_putpage() +implementation.

5316

    +
  • +
+

The illumos idmap facility +isn’t available under +Linux. This patch could +still be applied to +minimize code delta or all +HAVE_IDMAP chunks could be +removed on Linux for better +readability.

5313

ec8501e

5312

!

This change should be made +but the ideal time to do it +is when the spl repository +is folded into the zfs +repository (planned for +0.8). At this time we’ll +want to clean up many of the +includes.

5219

ef56b07

5179

3f4058c

5154

9a49d3f

Illumos ticket 5154 never +landed in openzfs/openzfs, +alternative fix deployed +in ZoL with commit 9a49d3f

5149

    +
  • +
+

Equivalent Linux +functionality is provided +by the +zvol_max_discard_blocks +module option.

5148

    +
  • +
+

Discards are handled +differently under Linux, +there is no DKIOCFREE +ioctl.

5136

e8b96c6

4752

aa9af22

4745

411bf20

4698

4fcc437

4620

6bb24f4

4573

10b7549

4571

6e1b9d0

4570

b1d13a6

4391

78e2739

4465

cda0317

4263

6bb24f4

4242

    +
  • +
+

Neither vnodes nor their +associated events exist +under Linux.

4206

2820bc4

4188

2e7b765

4181

44f09cd

4161

    +
  • +
+

The Linux user space +reader/writer +implementation is based on +pthread primitives.

4128

!

The +ldi_ev_register_callbacks() +interface doesn’t exist +under Linux. It may be +possible to receive similar +notifications via the scsi +error handlers or possibly +a different interface.

4072

    +
  • +
+

None of the illumos build +system is used under Linux.

3998

417104bd

Illumos ticket 3998 never +landed in openzfs/openzfs, +alternative fix deployed +in ZoL.

3947

7f9d994

3928

    +
  • +
+

Neither vnodes nor their +associated events exist +under Linux.

3871

d1d7e268

3747

090ff09

3705

    +
  • +
+

The Linux implementation +uses the lz4 workspace kmem +cache to resolve the stack +issue.

3606

c5b247f

3580

    +
  • +
+

Linux provides generic +ioctl handlers to get/set +block device information.

3543

8dca0a9

3512

67629d0

3507

43a696e

3444

6bb24f4

3371

44f09cd

3311

6bb24f4

3301

    +
  • +
+

The Linux implementation of +vdev_disk.c does not +include this comment.

3258

9d81146

3254

!

WIP to support NFSv4 ACLs

3246

cc92e9d

2933

    +
  • +
+

None of the illumos build +system is used under Linux.

2897

fb82700

2665

32a9872

2130

460a021

1974

    +
  • +
+

This change was entirely +replaced in the ARC +restructuring.

1898

    +
  • +
+

The zfs_putpage() function +was rewritten to properly +integrate with the Linux +VM.

1700

    +
  • +
+

Not applicable to Linux, +the discard implementation +is entirely different.

1618

ca67b33

1337

2402458

1126

e43b290

763

3cee226

742

!

WIP to support NFSv4 ACLs

701

460a021

348

    +
  • +
+

The Linux implementation of +vdev_disk.c handles +this differently.

243

    +
  • +
+

Manual updates have been +made separately for Linux.

184

    +
  • +
+

The zfs_putpage() function +was rewritten to properly +integrate with the Linux +VM.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/OpenZFS Patches.html b/Developer Resources/OpenZFS Patches.html new file mode 100644 index 000000000..82083e956 --- /dev/null +++ b/Developer Resources/OpenZFS Patches.html @@ -0,0 +1,419 @@ + + + + + + + OpenZFS Patches — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

OpenZFS Patches

+

The ZFS on Linux project is an adaptation of the upstream OpenZFS +repository designed to work in +a Linux environment. This upstream repository acts as a location where +new features, bug fixes, and performance improvements from all the +OpenZFS platforms can be integrated. Each platform is responsible for +tracking the OpenZFS repository and merging the relevant improvements +back into their release.

+

For the ZFS on Linux project this tracking is managed through an +OpenZFS tracking +page. The page is updated regularly and shows a list of OpenZFS commits +and their status in regard to the ZFS on Linux master branch.

+

This page describes the process of applying outstanding OpenZFS commits +to ZFS on Linux and submitting those changes for inclusion. As a +developer this is a great way to familiarize yourself with ZFS on Linux +and to begin quickly making a valuable contribution to the project. The +following guide assumes you have a github +account, +are familiar with git, and are used to developing in a Linux +environment.

+
+

Porting OpenZFS changes to ZFS on Linux

+
+

Setup the Environment

+

Clone the source. Start by making a local clone of the +spl and +zfs repositories.

+
$ git clone -o zfsonlinux https://github.com/zfsonlinux/spl.git
+$ git clone -o zfsonlinux https://github.com/zfsonlinux/zfs.git
+
+
+

Add remote repositories. Using the GitHub web interface +fork the +zfs repository into your +personal GitHub account. Add your new zfs fork and the +openzfs repository as remotes +and then fetch both repositories. The OpenZFS repository is large and +the initial fetch may take some time over a slow connection.

+
$ cd zfs
+$ git remote add <your-github-account> git@github.com:<your-github-account>/zfs.git
+$ git remote add openzfs https://github.com/openzfs/openzfs.git
+$ git fetch --all
+
+
+

Build the source. Compile the spl and zfs master branches. These +branches are always kept stable and this is a useful verification that +you have a full build environment installed and all the required +dependencies are available. This may also speed up the compile time +later for small patches where incremental builds are an option.

+
$ cd ../spl
+$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
+$
+$ cd ../zfs
+$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
+
+
+
+
+

Pick a patch

+

Consult the OpenZFS +tracking page and +select a patch which has not yet been applied. For your first patch you +will want to select a small patch to familiarize yourself with the +process.

+
+
+

Porting a Patch

+

There are 2 methods:

+ +

Please read about manual merge first to learn the +whole process.

+
+

Cherry-pick

+

You can cherry-pick +on your own, +but we have made a special +script, +which tries to +cherry-pick the patch +automatically and generates the description.

+
    +
  1. Prepare environment:

  2. +
+

Mandatory git settings (add to ~/.gitconfig):

+
[merge]
+    renameLimit = 999999
+[user]
+    email = mail@yourmail.com
+    name = Your Name
+
+
+

Download the script:

+
wget https://raw.githubusercontent.com/zfsonlinux/zfs-buildbot/master/scripts/openzfs-merge.sh
+
+
+
    +
  1. Run:

  2. +
+
./openzfs-merge.sh -d path_to_zfs_folder -c openzfs_commit_hash
+
+
+

This command will fetch all repositories, create a new branch +autoport-ozXXXX (XXXX - OpenZFS issue number), try to cherry-pick, +and on success compile and run the cstyle check.

+

If it succeeds without any merge conflicts, switch to the autoport-ozXXXX +branch; it will contain a ready-to-pull commit. Congratulations, you can go +to step 7!

+

Otherwise you should go to step 2.

+
    +
  1. Resolve all merge conflicts manually. Easy method - install +Meld or any other diff tool and run +git mergetool.

  2. +
  3. Check all compile and cstyle errors (See Testing a +patch).

  4. +
  5. Commit your changes with any description.

  6. +
  7. Update commit description (last commit will be changed):

  8. +
+
./openzfs-merge.sh -d path_to_zfs_folder -g openzfs_commit_hash
+
+
+
    +
  1. Add any porting notes (if you have modified something): +git commit --amend

  2. +
  3. Push your commit to github: +git push <your-github-account> autoport-ozXXXX

  4. +
  5. Create a pull request to ZoL master branch.

  6. +
  7. Go to Testing a patch section.

  8. +
+
+
+

Manual merge

+

Create a new branch. It is important to create a new branch for +every commit you port to ZFS on Linux. This will allow you to easily +submit your work as a GitHub pull request and it makes it possible to +work on multiple OpenZFS changes concurrently. All development branches +need to be based off of the ZFS master branch and it’s helpful to name +the branches after the issue number you’re working on.

+
$ git checkout -b openzfs-<issue-nr> master
+
+
+

Generate a patch. One of the first things you’ll notice about the +ZFS on Linux repository is that it is laid out differently than the +OpenZFS repository. Organizationally it is much flatter; this is +possible because it only contains the code for OpenZFS, not an entire OS. +That means that in order to apply a patch from OpenZFS the path names in +the patch must be changed. A script called zfs2zol-patch.sed has been +provided to perform this translation. Use the git format-patch +command and this script to generate a patch.

+
$ git format-patch --stdout <commit-hash>^..<commit-hash> | \
+    ./scripts/zfs2zol-patch.sed >openzfs-<issue-nr>.diff
+
+
+

Apply the patch. In many cases the generated patch will apply +cleanly to the repository. However, it’s important to keep in mind the +zfs2zol-patch.sed script only translates the paths. There are often +additional reasons why a patch might not apply. In some cases hunks of +the patch may not be applicable to Linux and should be dropped. In other +cases a patch may depend on other changes which must be applied first. +The changes may also conflict with Linux specific modifications. In all +of these cases the patch will need to be manually modified to apply +cleanly while preserving its original intent.

+
$ git am ./openzfs-<commit-nr>.diff
+
+
+

Update the commit message. By using git format-patch to generate +the patch and then git am to apply it, the original comment and +authorship will be preserved. However, due to the formatting of the +OpenZFS commit you will likely find that the entire commit comment has +been squashed into the subject line. Use git commit --amend to +clean up the comment and be careful to follow these standard +guidelines.

+

The summary line of an OpenZFS commit is often very long and you should +truncate it to 50 characters. This is useful because it preserves the +correct formatting of the git log --pretty=oneline command. Make sure to +leave a blank line between the summary and body of the commit. Then +include the full OpenZFS commit message, wrapping any lines which exceed +72 characters. Finally, add a Ported-by tag with your contact +information and both an OpenZFS-issue and an OpenZFS-commit tag with +appropriate links. You’ll want to verify your commit contains all of the +following information:

+
    +
  • The subject line from the original OpenZFS patch in the form: +“OpenZFS <issue-nr> - short description”.

  • +
  • The original patch authorship should be preserved.

  • +
  • The OpenZFS commit message.

  • +
  • The following tags:

    +
      +
    • Authored by: Original patch author

    • +
    • Reviewed by: All OpenZFS reviewers from the original patch.

    • +
    • Approved by: All OpenZFS reviewers from the original patch.

    • +
    • Ported-by: Your name and email address.

    • +
    • OpenZFS-issue: https://www.illumos.org/issues/issue

    • +
    • OpenZFS-commit: https://github.com/openzfs/openzfs/commit/hash

    • +
    +
  • +
  • Porting Notes: An optional section describing any changes +required when porting.

  • +
+

For example, OpenZFS issue 6873 was applied to +Linux from this +upstream OpenZFS +commit.

+
OpenZFS 6873 - zfs_destroy_snaps_nvl leaks errlist
+
+Authored by: Chris Williamson <chris.williamson@delphix.com>
+Reviewed by: Matthew Ahrens <mahrens@delphix.com>
+Reviewed by: Paul Dagnelie <pcd@delphix.com>
+Ported-by: Denys Rtveliashvili <denys@rtveliashvili.name>
+
+lzc_destroy_snaps() returns an nvlist in errlist.
+zfs_destroy_snaps_nvl() should nvlist_free() it before returning.
+
+OpenZFS-issue: https://www.illumos.org/issues/6873
+OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ee06391
+
+
+
+
+
+

Testing a Patch

+

Build the source. Verify the patched source compiles without errors +and all warnings are resolved.

+
$ make -s -j$(nproc)
+
+
+

Run the style checker. Verify the patched source passes the style +checker; the command should return without printing any output.

+
$ make cstyle
+
+
+

Open a Pull Request. When your patch builds cleanly and passes the +style checks open a new pull +request. +The pull request will be queued for automated +testing. As part of the +testing the change is built for a wide range of Linux distributions and +a battery of functional and stress tests are run to detect regressions.

+
$ git push <your-github-account> openzfs-<issue-nr>
+
+
+

Fix any issues. Testing takes approximately 2 hours to fully +complete and the results are posted in the GitHub pull +request. All the tests +are expected to pass and you should investigate and resolve any test +failures. The test +scripts +are all available and designed to run locally in order to reproduce an +issue. Once you’ve resolved the issue, force-update the pull request to +trigger a new round of testing. Iterate until all the tests are passing.

+
# Fix issue, amend commit, force update branch.
+$ git commit --amend
+$ git push --force <your-github-account> openzfs-<issue-nr>
+
+
+
+
+

Merging the Patch

+

Review. Lastly one of the ZFS on Linux maintainers will make a final +review of the patch and may request additional changes. Once the +maintainer is happy with the final version of the patch they will add +their signed-off-by, merge it to the master branch, mark it complete on +the tracking page, and thank you for your contribution to the project!

+
+
+
+

Porting ZFS on Linux changes to OpenZFS

+

Often an issue will first be fixed in ZFS on Linux, or a new feature will be +developed there. Changes which are not Linux specific should be submitted +upstream to the OpenZFS GitHub repository for review. The process for +this is described in the OpenZFS +README.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Developer Resources/index.html b/Developer Resources/index.html new file mode 100644 index 000000000..b70ca6f7d --- /dev/null +++ b/Developer Resources/index.html @@ -0,0 +1,183 @@ + + + + + + + Developer Resources — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Alpine Linux/Root on ZFS.html b/Getting Started/Alpine Linux/Root on ZFS.html new file mode 100644 index 000000000..b33ad90a7 --- /dev/null +++ b/Getting Started/Alpine Linux/Root on ZFS.html @@ -0,0 +1,414 @@ + + + + + + + Alpine Linux Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Alpine Linux Root on ZFS

+

ZFSBootMenu

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+

UEFI support only

+

Only UEFI is supported by this guide.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.

  2. +
  3. Download latest extended variant of Alpine Linux +live image, +verify checksum +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using a wireless network and it is not shown, see the Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without an internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +

    Configurations set here will be copied verbatim to the installed system.

    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

    +

    Choose a strong root password, as it will be copied to the +installed system. However, authorized_keys is not copied.

    +
  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +

    It can be removed after reboot with setup-devd mdev && apk del eudev.

    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

If virtio is used as the disk bus, power off the VM and set serial numbers for the disks. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit the domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

Set the swap size in GB; set it to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install bootloader programs and partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 1MiB 4GiB \
    + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + set 1 esp on \
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup temporary encrypted swap for this installation only. This is +useful if the available memory is small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3
    +   mkswap /dev/mapper/"${i##*/}"-part3
    +   swapon /dev/mapper/"${i##*/}"-part3
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create root pool

    +
      +
    • Unencrypted:

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=none \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '%s ' "${i}-part2";
      +     done)
      +
      +
      +
    • +
    +
  8. +
  9. Create root system container:

    +
    +
    zfs create -o canmount=noauto -o mountpoint=legacy rpool/root
    +
    +
    +
    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o mountpoint=legacy rpool/home
    +mount -o X-mount.mkdir -t zfs rpool/root "${MNT}"
    +mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home
    +
    +
    +
  10. +
  11. Format and mount the ESPs. Only one of them is used as /boot; you need to set up mirroring afterwards

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    +done
    +
    +for i in ${DISK}; do
    + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot
    + break
    +done
    +
    +
    +
  12. +
+
+
+

System Configuration

+
    +
  1. Install system to disk

    +
    BOOTLOADER=none setup-disk -k lts -v "${MNT}"
    +
    +
    +

The error message about the ZFS kernel module can be ignored.

    +
  2. +
  3. Install rEFInd boot loader:

    +
    # from http://www.rodsbooks.com/refind/getting.html
    +# use Binary Zip File option
    +apk add curl
    +curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip
    +unzip refind
    +
    +mkdir -p "${MNT}"/boot/EFI/BOOT
    +find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \
    +| xargs -0I{} mv {} "${MNT}"/boot/EFI/BOOT/BOOTX64.EFI
    +rm -rf refind.zip refind-bin-0.14.0.2
    +
    +
    +
  4. +
  5. Add boot entry:

    +
    tee -a "${MNT}"/boot/refind-linux.conf <<EOF
    +"Alpine Linux" "root=ZFS=rpool/root"
    +EOF
    +
    +
    +
  6. +
  7. Unmount filesystems and create initial system snapshot:

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +zpool export -a
    +
    +
    +
  8. +
  9. Reboot

    +
    reboot
    +
    +
    +
  10. +
  11. Mount the other EFI system partitions, then set up a service for syncing +their contents (a minimal manual sync is sketched after this list).

  12. +
+
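
A minimal manual sync, assuming rsync is installed and the second disk's ESP is the placeholder device shown below; wrapping these commands in an OpenRC service or a periodic job is left to the reader:

+
apk add rsync
+ESP2=/dev/disk/by-id/nvme-BAR-part1   # placeholder: your other ESP
+MNT2=$(mktemp -d)
+mount -t vfat "${ESP2}" "${MNT2}"
+rsync -rt --delete /boot/ "${MNT2}"/
+umount "${MNT2}"
+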
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Alpine Linux/index.html b/Getting Started/Alpine Linux/index.html new file mode 100644 index 000000000..ea4a61d2d --- /dev/null +++ b/Getting Started/Alpine Linux/index.html @@ -0,0 +1,179 @@ + + + + + + + Alpine Linux — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Alpine Linux

+
+

Contents

+ +
+
+

Installation

+

Note: this is for installing ZFS on an existing Alpine +installation. To use ZFS as root file system, +see below.

+
    +
  1. Install ZFS package:

    +
    apk add zfs zfs-lts
    +
    +
    +
  2. +
  3. Load kernel module:

    +
    modprobe zfs
    +
    +
    +
  4. +
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Arch Linux/Root on ZFS.html b/Getting Started/Arch Linux/Root on ZFS.html new file mode 100644 index 000000000..fc737c773 --- /dev/null +++ b/Getting Started/Arch Linux/Root on ZFS.html @@ -0,0 +1,569 @@ + + + + + + + Arch Linux Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Arch Linux Root on ZFS

+

ZFSBootMenu

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+

UEFI support only

+

Only UEFI is supported by this guide.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.

  2. +
  3. Because the kernel of the latest Live CD might be incompatible with +ZFS, we will use Alpine Linux Extended, which ships with ZFS by +default.

    +

    Download latest extended variant of Alpine Linux +live image, +verify checksum +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using a wireless network and it is not shown, see the Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without an internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

If virtio is used as the disk bus, power off the VM and set serial numbers for the disks. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit the domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

Set the swap size in GB; set it to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 1MiB 4GiB \
    + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + set 1 esp on \
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup temporary encrypted swap for this installation only. This is +useful if the available memory is small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3
    +   mkswap /dev/mapper/"${i##*/}"-part3
    +   swapon /dev/mapper/"${i##*/}"-part3
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create root pool

    +
      +
    • Unencrypted:

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=none \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '%s ' "${i}-part2";
      +     done)
      +
      +
      +
    • +
    +
  8. +
  9. Create root system container:

    +
    +
    zfs create -o canmount=noauto -o mountpoint=legacy rpool/root
    +
    +
    +
    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o mountpoint=legacy rpool/home
    +mount -o X-mount.mkdir -t zfs rpool/root "${MNT}"
    +mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home
    +
    +
    +
  10. +
  11. Format and mount the ESPs. Only one of them is used as /boot; you need to set up mirroring afterwards

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    +done
    +
    +for i in ${DISK}; do
    + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot
    + break
    +done
    +
    +
    +
  12. +
+
+
+

System Configuration

+
    +
  1. Download and extract minimal Arch Linux root filesystem:

    +
    apk add curl
    +
    +curl --fail-early --fail -L \
    +https://america.archive.pkgbuild.com/iso/2024.01.01/archlinux-bootstrap-x86_64.tar.gz \
    +-o rootfs.tar.gz
    +curl --fail-early --fail -L \
    +https://america.archive.pkgbuild.com/iso/2024.01.01/archlinux-bootstrap-x86_64.tar.gz.sig \
    +-o rootfs.tar.gz.sig
    +
    +apk add gnupg
    +gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify rootfs.tar.gz.sig
    +
    +ln -s "${MNT}" "${MNT}"/root.x86_64
    +tar x  -C "${MNT}" -af rootfs.tar.gz root.x86_64
    +
    +
    +
  2. +
  3. Enable community repo

    +
    sed -i '/edge/d' /etc/apk/repositories
    +sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories
    +
    +
    +
  4. +
  5. Generate fstab:

    +
    apk add arch-install-scripts
    +genfstab -t PARTUUID "${MNT}" \
    +| grep -v swap \
    +| sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \
    +> "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    cp /etc/resolv.conf "${MNT}"/etc/resolv.conf
    +for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash
    +
    +
    +
  8. +
  9. Add archzfs repo to pacman config

    +
    pacman-key --init
    +pacman-key --refresh-keys
    +pacman-key --populate
    +
    +curl --fail-early --fail -L https://archzfs.com/archzfs.gpg \
    +|  pacman-key -a - --gpgdir /etc/pacman.d/gnupg
    +
    +pacman-key \
    +--lsign-key \
    +--gpgdir /etc/pacman.d/gnupg \
    +DDF7DB817396A49B2A2723F7403BD972F75D9D76
    +
    +tee -a /etc/pacman.d/mirrorlist-archzfs <<- 'EOF'
    +## See https://github.com/archzfs/archzfs/wiki
    +## France
    +#,Server = https://archzfs.com/$repo/$arch
    +
    +## Germany
    +#,Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch
    +#,Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch
    +
    +## India
    +#,Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch
    +
    +## United States
    +#,Server = https://zxcvfdsa.com/archzfs/$repo/$arch
    +EOF
    +
    +tee -a /etc/pacman.conf <<- 'EOF'
    +
    +#[archzfs-testing]
    +#Include = /etc/pacman.d/mirrorlist-archzfs
    +
    +#,[archzfs]
    +#,Include = /etc/pacman.d/mirrorlist-archzfs
    +EOF
    +
    +# this #, prefix is a workaround for ci/cd tests
    +# remove them
    +sed -i 's|#,||' /etc/pacman.d/mirrorlist-archzfs
    +sed -i 's|#,||' /etc/pacman.conf
    +sed -i 's|^#||' /etc/pacman.d/mirrorlist
    +
    +
    +
  10. +
  11. Install base packages:

    +
    pacman -Sy
    +pacman -S --noconfirm mg mandoc efibootmgr mkinitcpio
    +
    +kernel_compatible_with_zfs="$(pacman -Si zfs-linux \
    +| grep 'Depends On' \
    +| sed "s|.*linux=||" \
    +| awk '{ print $1 }')"
    +pacman -U --noconfirm https://america.archive.pkgbuild.com/packages/l/linux/linux-"${kernel_compatible_with_zfs}"-x86_64.pkg.tar.zst
    +
    +
    +
  12. +
  13. Install zfs packages:

    +
    pacman -S --noconfirm zfs-linux zfs-utils
    +
    +
    +
  14. +
  15. Configure mkinitcpio:

    +
    sed -i 's|filesystems|zfs filesystems|' /etc/mkinitcpio.conf
    +mkinitcpio -P
    +
    +
    +
  16. +
  17. For physical machine, install firmware

    +
    pacman -S linux-firmware intel-ucode amd-ucode
    +
    +
    +
  18. +
  19. Enable internet time synchronisation:

    +
    systemctl enable systemd-timesyncd
    +
    +
    +
  20. +
  21. Generate host id:

    +
    zgenhostid -f -o /etc/hostid
    +
    +
    +
  22. +
  23. Generate locales:

    +
    echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
    +locale-gen
    +
    +
    +
  24. +
  25. Set locale, keymap, timezone, hostname

    +
    rm -f /etc/localtime
    +systemd-firstboot \
    +--force \
    +--locale=en_US.UTF-8 \
    +--timezone=Etc/UTC \
    +--hostname=testhost \
    +--keymap=us
    +
    +
    +
  26. +
  27. Set root passwd

    +
    printf 'root:yourpassword' | chpasswd
    +
    +
    +
  28. +
+
+
+

Bootloader

+
    +
  1. Install rEFInd boot loader:

    +
    # from http://www.rodsbooks.com/refind/getting.html
    +# use Binary Zip File option
    +pacman -S --noconfirm unzip
    +curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip
    +
    +unzip refind.zip
    +mkdir -p /boot/EFI/BOOT
    +find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \
    +| xargs -0I{} mv {} /boot/EFI/BOOT/BOOTX64.EFI
    +rm -rf refind.zip refind-bin-0.14.0.2
    +
    +
    +
  2. +
  3. Add boot entry:

    +
    tee -a /boot/refind-linux.conf <<EOF
    +"Arch Linux" "root=ZFS=rpool/root rw zfs_import_dir=/dev/"
    +EOF
    +
    +
    +
  4. +
  5. Exit chroot

    +
    exit
    +
    +
    +
  6. +
  7. Unmount filesystems and create initial system snapshot

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +
    +
    +
  8. +
  9. Export all pools

    +
    zpool export -a
    +
    +
    +
  10. +
  11. Reboot

    +
    reboot
    +
    +
    +
  12. +
  13. Mount the other EFI system partitions, then set up a service for syncing +their contents (a minimal manual sync is sketched after this list).

  14. +
+
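
A minimal manual sync, assuming the second disk's ESP is the placeholder device shown below; a systemd service or pacman hook could automate this after kernel or bootloader updates:

+
pacman -S --noconfirm rsync
+ESP2=/dev/disk/by-id/nvme-BAR-part1   # placeholder: your other ESP
+MNT2=$(mktemp -d)
+mount -t vfat "${ESP2}" "${MNT2}"
+rsync -rt --delete /boot/ "${MNT2}"/
+umount "${MNT2}"
+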
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Arch Linux/index.html b/Getting Started/Arch Linux/index.html new file mode 100644 index 000000000..766a1b7c2 --- /dev/null +++ b/Getting Started/Arch Linux/index.html @@ -0,0 +1,209 @@ + + + + + + + Arch Linux — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Arch Linux

+
+

Contents

+ +
+
+

Support

+

Reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat.

+

If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @ne9z.

+
+
+

Overview

+

Due to license incompatibility, +ZFS is not available in the official Arch Linux repositories.

+

ZFS support is provided by the third-party archzfs repo.

+
+
+

Installation

+

See Archlinux Wiki.

+
+
+

Root on ZFS

+

ZFS can be used as the root file system for Arch Linux. +An installation guide is available.

+ +
+
+

Contribute

+
    +
  1. Fork and clone this repo.

  2. +
  3. Install the tools:

    +
    sudo pacman -S --needed python-pip make
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your "${PATH}", e.g. by adding this to ~/.bashrc:
    +[ -d "${HOME}"/.local/bin ] && export PATH="${HOME}"/.local/bin:"${PATH}"
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @ne9z.

  10. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Bookworm Root on ZFS.html b/Getting Started/Debian/Debian Bookworm Root on ZFS.html new file mode 100644 index 000000000..a14a6b209 --- /dev/null +++ b/Getting Started/Debian/Debian Bookworm Root on ZFS.html @@ -0,0 +1,1330 @@ + + + + + + + Debian Bookworm Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Bookworm Root on ZFS

+ +
+

Overview

+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Debian GNU/Linux Live CD. If prompted, login with the username +user and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo vi /etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
    +
    +
    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo apt install --yes openssh-server
    +
    +sudo systemctl restart ssh
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing from /dev/disk/by-id, use /dev/vda if you are using KVM with virtio. Also, when using /dev/vda, the partition device nodes used later will be named differently (e.g. /dev/vda4 rather than ${DISK}-part4). Otherwise, read the troubleshooting section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    If the disk was previously used with zfs:

    +
    wipefs -a $DISK
    +
    +
    +

    For flash-based storage, if the disk was previously used, you may wish to +do a full-disk discard (TRIM/UNMAP), which can improve performance:

    +
    blkdiscard -f $DISK
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -o compatibility=grub2 \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -O devices=off \
    +    -O acltype=posixacl -O xattr=sa \
    +    -O compression=lz4 \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O canmount=off -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    Note: GRUB does not support all zpool features (see +spa_feature_names in +grub-core/fs/zfs/zfs.c). +We create a separate zpool for /boot here, specifying the +-o compatibility=grub2 property which restricts the pool to only those +features that GRUB supports, allowing the root pool to use any/all features.

    +

    See the section on Compatibility feature sets in the zpool-features +man page for more information.

    +
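    Optional: as a quick sanity check, you can confirm that the compatibility restriction is active on the new pool (it should report grub2):

    +
    zpool get compatibility bpool
    +
    +
    +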

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      apt install --yes cryptsetup
      +
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
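    Optional: after creating the pool, you can verify that the property choices discussed above took effect:

    +
    zfs get acltype,xattr,normalization,relatime,compression rpool
    +
    +
    +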

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality was implemented in Ubuntu with the +zsys tool, though its dataset layout is more complicated, and zsys +is on life support. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
    +zfs mount rpool/ROOT/debian
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/debian
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
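    For example (a quick check, not required), you can see why the explicit mount was needed by inspecting the relevant properties:

    +
    zfs get canmount,mountpoint,mounted rpool/ROOT/debian bpool/BOOT/debian
    +
    +
    +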
  4. +
  5. Create datasets:

    +
    zfs create                     rpool/home
    +zfs create -o mountpoint=/root rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off     rpool/var
    +zfs create -o canmount=off     rpool/var/lib
    +zfs create                     rpool/var/log
    +zfs create                     rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
    +zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off rpool/usr
    +zfs create                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/var/lib/AccountsService
    +zfs create rpool/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/var/www
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
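    For example, if you created rpool/tmp and want to cap it (5G here is an arbitrary value; pick what suits your system):

    +
    zfs set quota=5G rpool/tmp
    +
    +
    +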

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
  6. +
  7. Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +
  8. +
  9. Install the minimal system:

    +
    debootstrap bookworm /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
  10. +
  11. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  12. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/network/interfaces.d/NAME
    +
    +
    +
    auto NAME
    +iface NAME inet dhcp
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
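    For example, a static configuration might look like this (the addresses below are placeholders; adjust them for your network):

    +
    auto NAME
    +iface NAME inet static
    +    address 192.168.1.100
    +    netmask 255.255.255.0
    +    gateway 192.168.1.1
    +
    +
    +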
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
    +deb-src http://deb.debian.org/debian bookworm main contrib non-free-firmware
    +
    +deb http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware
    +deb-src http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware
    +
    +deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
    +deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    apt update
    +
    +apt install --yes console-setup locales
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +
  10. +
  11. Install ZFS in the chroot environment for the new system:

    +
    apt install --yes dpkg-dev linux-headers-generic linux-image-generic
    +
    +apt install --yes zfs-initramfs
    +
    +echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve device and WARNING: Couldn't determine root device. They occur because cryptsetup does not support ZFS.

    +
  12. +
  13. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup cryptsetup-initramfs
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
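    For example, assuming the second disk's path is in a DISK2 variable (not set by this guide), the luks2 entry could be added with:

    +
    echo luks2 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK2}-part4) \
    +    none luks,discard,initramfs >> /etc/crypttab
    +
    +
    +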
  14. +
  15. Install an NTP service to synchronize time. This step is specific to Bookworm, which does not install the package during bootstrap. Although this step is not necessary for ZFS, it is useful in general, as significant local clock drift can cause login failures on many websites:

    +
    apt install systemd-timesyncd
    +
    +
    +
  16. +
  17. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      apt install --yes grub-pc
      +
      +
      +
    • +
    • Install GRUB for UEFI booting:

      +
      apt install dosfstools
      +
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +apt install --yes grub-efi-amd64 shim-signed
      +
      +
      +

      Notes:

      +
        +
      • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +

    Note: For some disk configurations (NVMe?), this service may fail with an error +indicating that the bpool cannot be found. If this happens, add +-d DISK-part3 (replace DISK with the correct device path) to the +zpool import command.

    +
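    For example, using the example disk path from this guide, the modified line would read:

    +
    ExecStart=/sbin/zpool import -N -o cachefile=none -d /dev/disk/by-id/scsi-SATA_disk1-part3 bpool
    +
    +
    +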
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  28. +
  29. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote +unlocking:

    +
    apt install --yes --no-install-recommends dropbear-initramfs
    +mkdir -p /etc/dropbear/initramfs
    +
    +# Optional: Convert OpenSSH server keys for Dropbear
    +for type in ecdsa ed25519 rsa ; do
    +    cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key
    +    ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key
    +    dropbearconvert openssh dropbear \
    +        /tmp/openssh.key \
    +        /etc/dropbear/initramfs/dropbear_${type}_host_key
    +done
    +rm /tmp/openssh.key
    +
    +# Add user keys in the same format as ~/.ssh/authorized_keys
    +vi /etc/dropbear/initramfs/authorized_keys
    +
    +# If using a static IP, set it for the initramfs environment:
    +vi /etc/initramfs-tools/initramfs.conf
    +# The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC
    +# For example:
    +# IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3
    +# HOSTNAME and NIC are optional.
    +
    +# Rebuild the initramfs (required when changing any of the above):
    +update-initramfs -u -k all
    +
    +
    +

    Notes:

    +
      +
    • Converting the server keys makes Dropbear use the same keys as OpenSSH, +avoiding host key mismatch warnings. Currently, dropbearconvert doesn’t +understand the new OpenSSH private key format, so the +keys need to be converted to the old PEM format first using +ssh-keygen. The downside of using the same keys for both OpenSSH and +Dropbear is that the OpenSSH keys are then available on-disk, unencrypted +in the initramfs.

    • +
    • Later, to use this functionality, SSH to the system (as root) while it is +prompting for the passphrase during the boot process. For ZFS native +encryption, run zfsunlock. For LUKS, run cryptroot-unlock.

    • +
    • You can optionally add command="/usr/bin/zfsunlock" or command="/bin/cryptroot-unlock" in front of the authorized_keys line to force the unlock command. This way, the unlock command runs automatically and is all that can be run. (An example line is shown after these notes.)

    • +
    +
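    For example, a forced-command line in /etc/dropbear/initramfs/authorized_keys might look like this (the key material and trailing comment are placeholders for your own public key):

    +
    command="/usr/bin/zfsunlock" ssh-ed25519 AAAAC3NzaC1...rest-of-your-public-key... you@workstation
    +
    +
    +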
  30. +
  31. Optional (but kindly requested): Install popcon

    +

    The popularity-contest package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.

    +
    apt install --yes popularity-contest
    +
    +
    +

    Choose Yes at the prompt.

    +
  32. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve device and WARNING: Couldn't determine root device. They occur because cryptsetup does not support ZFS.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=debian --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
  13. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/debian
    +zfs set canmount=noauto rpool/ROOT/debian
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
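    Optional: confirm the result; the mountpoints listed in these files should now start at / rather than /mnt:

    +
    cat /etc/zfs/zfs-list.cache/*
    +
    +
    +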
  14. +
+
+
+

Step 6: First Boot

+
    +
  1. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/debian@install
    +zfs snapshot rpool/ROOT/debian@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  2. +
  3. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  4. +
  5. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  6. +
  7. If this fails for rpool, mounting it on boot will fail and you will need to +zpool import -f rpool, then exit in the initramfs prompt.

  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +zfs create rpool/home/$username
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting:

      +
      dpkg-reconfigure grub-pc
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment debian-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 7: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest available algorithm. As this guide recommends ashift=12 (4 KiB blocks on disk), the common case of a 4 KiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 8: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +
    tasksel --new-install
    +
    +
    +

    Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 9: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/debian@install
    +sudo zfs destroy rpool/ROOT/debian@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
apt install --yes cryptsetup
+
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/debian
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
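For example (a sketch; adjust the module name if your hardware differs):

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+
+
+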

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
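For example, to wait 15 seconds (an arbitrary value; tune it for your hardware) and rebuild the initramfs so the setting is picked up at boot:

+
echo ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15 >> /etc/default/zfs
+update-initramfs -u -k all
+
+
+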
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+
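For libvirt, a sketch of the equivalent disk definition in the domain XML (the file path and serial value are placeholders):

+
<disk type='file' device='disk'>
+  <driver name='qemu' type='qcow2'/>
+  <source file='/var/lib/libvirt/images/disk1.qcow2'/>
+  <target dev='vda' bus='virtio'/>
+  <serial>1234567890</serial>
+</disk>
+
+
+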

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Bullseye Root on ZFS.html b/Getting Started/Debian/Debian Bullseye Root on ZFS.html new file mode 100644 index 000000000..2123afd1b --- /dev/null +++ b/Getting Started/Debian/Debian Bullseye Root on ZFS.html @@ -0,0 +1,1378 @@ + + + + + + + Debian Bullseye Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Bullseye Root on ZFS

+ +
+

Overview

+
+

Newer release available

+
    +
  • See Debian Bookworm Root on ZFS for +new installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Debian GNU/Linux Live CD. If prompted, login with the username +user and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo vi /etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bullseye main contrib
    +
    +
    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo apt install --yes openssh-server
    +
    +sudo systemctl restart ssh
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    If the disk was previously used with zfs:

    +
    wipefs -a $DISK
    +
    +
    +

    For flash-based storage, if the disk was previously used, you may wish to +do a full-disk discard (TRIM/UNMAP), which can improve performance:

    +
    blkdiscard -f $DISK
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on -d \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@livelist=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O devices=off \
    +    -O acltype=posixacl -O xattr=sa \
    +    -O compression=lz4 \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O canmount=off -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +
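    Optional: you can review which features ended up enabled on the new pool:

    +
    zpool get all bpool | grep feature@
    +
    +
    +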

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The device_rebuild feature should be safe to use (except on raidz, +which it is incompatible with), but the boot pool is small, so this does +not matter in practice.

    • +
    • The log_spacemap and spacemap_v2 features have been tested and +are safe to use. The boot pool is small, so these do not matter in +practice.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      apt install --yes cryptsetup
      +
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality was implemented in Ubuntu with the +zsys tool, though its dataset layout is more complicated, and zsys +is on life support. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
    +zfs mount rpool/ROOT/debian
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/debian
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                     rpool/home
    +zfs create -o mountpoint=/root rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off     rpool/var
    +zfs create -o canmount=off     rpool/var/lib
    +zfs create                     rpool/var/log
    +zfs create                     rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
    +zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off rpool/usr
    +zfs create                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/var/lib/AccountsService
    +zfs create rpool/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/var/www
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.
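
    For example, to cap the optional /tmp dataset at 4 GiB (the size here is purely illustrative):

    zfs set quota=4G rpool/tmp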

    +

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.
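
    For example, if you hypothetically moved /etc to its own rpool/etc dataset (not something this HOWTO does; the dataset name is illustrative), the setting would look like this:

    vi /etc/default/zfs
    +# Set: ZFS_INITRD_ADDITIONAL_DATASETS="rpool/etc"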

    +
  6. +
  7. Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +
  8. +
  9. Install the minimal system:

    +
    debootstrap bullseye /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
  10. +
  11. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  12. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/network/interfaces.d/NAME
    +
    +
    +
    auto NAME
    +iface NAME inet dhcp
    +
    +
    +

    Customize this file if the system is not a DHCP client.
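
    For example, a static configuration would look something like this (all addresses shown are illustrative):

    auto NAME
    +iface NAME inet static
    +    address 192.168.1.100
    +    netmask 255.255.255.0
    +    gateway 192.168.1.1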

    +
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian bullseye main contrib
    +deb-src http://deb.debian.org/debian bullseye main contrib
    +
    +deb http://deb.debian.org/debian-security bullseye-security main contrib
    +deb-src http://deb.debian.org/debian-security bullseye-security main contrib
    +
    +deb http://deb.debian.org/debian bullseye-updates main contrib
    +deb-src http://deb.debian.org/debian bullseye-updates main contrib
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +apt update
    +
    +apt install --yes console-setup locales
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +
  10. +
  11. Install ZFS in the chroot environment for the new system:

    +
    apt install --yes dpkg-dev linux-headers-generic linux-image-generic
    +
    +apt install --yes zfs-initramfs
    +
    +echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup does +not support ZFS.

    +
  12. +
  13. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup cryptsetup-initramfs
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  14. +
  15. Install an NTP service to synchronize time. +This step is specific to Bullseye which does not install the package during +bootstrap. +Although this step is not necessary for ZFS, it is useful for internet +browsing where local clock drift can cause login failures:

    +
    apt install systemd-timesyncd
    +timedatectl
    +
    +
    +

    You should now see “NTP service: active” in the above timedatectl +output.

    +
  16. +
  17. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      apt install --yes grub-pc
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB for UEFI booting:

      +
      apt install dosfstools
      +
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +apt install --yes grub-efi-amd64 shim-signed
      +
      +
      +

      Notes:

      +
        +
      • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +

    Note: For some disk configurations (NVMe?), this service may fail with an error +indicating that the bpool cannot be found. If this happens, add +-d DISK-part3 (replace DISK with the correct device path) to the +zpool import command.
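
    For example, the ExecStart line might then read (the device path shown is illustrative; use your disk's /dev/disk/by-id path):

    ExecStart=/sbin/zpool import -N -o cachefile=none \
    +    -d /dev/disk/by-id/scsi-SATA_disk1-part3 bpool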

    +
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  28. +
  29. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote +unlocking:

    +
    apt install --yes --no-install-recommends dropbear-initramfs
    +mkdir -p /etc/dropbear-initramfs
    +
    +# Optional: Convert OpenSSH server keys for Dropbear
    +for type in ecdsa ed25519 rsa ; do
    +    cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key
    +    ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key
    +    dropbearconvert openssh dropbear \
    +        /tmp/openssh.key \
    +        /etc/dropbear-initramfs/dropbear_${type}_host_key
    +done
    +rm /tmp/openssh.key
    +
    +# Add user keys in the same format as ~/.ssh/authorized_keys
    +vi /etc/dropbear-initramfs/authorized_keys
    +
    +# If using a static IP, set it for the initramfs environment:
    +vi /etc/initramfs-tools/initramfs.conf
    +# The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC
    +# For example:
    +# IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3
    +# HOSTNAME and NIC are optional.
    +
    +# Rebuild the initramfs (required when changing any of the above):
    +update-initramfs -u -k all
    +
    +
    +

    Notes:

    +
      +
    • Converting the server keys makes Dropbear use the same keys as OpenSSH, +avoiding host key mismatch warnings. Currently, dropbearconvert doesn’t +understand the new OpenSSH private key format, so the +keys need to be converted to the old PEM format first using +ssh-keygen. The downside of using the same keys for both OpenSSH and +Dropbear is that the OpenSSH keys are then available on-disk, unencrypted +in the initramfs.

    • +
    • Later, to use this functionality, SSH to the system (as root) while it is +prompting for the passphrase during the boot process. For ZFS native +encryption, run zfsunlock. For LUKS, run cryptroot-unlock.

    • +
    • You can optionally add command="/usr/bin/zfsunlock" or command="/bin/cryptroot-unlock" in front of the authorized_keys line to force the unlock command, as shown in the example after these notes. This way, the unlock command runs automatically and is all that can be run.

    • +
    +
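
    For example, a forced-command entry in /etc/dropbear-initramfs/authorized_keys might look like this (the key type, key material, and comment are placeholders):

    command="/usr/bin/zfsunlock" ssh-ed25519 AAAAC3Nza...rest-of-key... root@unlock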
  30. +
  31. Optional (but kindly requested): Install popcon

    +

    The popularity-contest package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.

    +
    apt install --yes popularity-contest
    +
    +
    +

    Choose Yes at the prompt.

    +
  32. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.
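
    For example, for a second disk (the device path is illustrative):

    grub-install /dev/disk/by-id/scsi-SATA_disk2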

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=debian --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
  13. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/debian
    +zfs set canmount=noauto rpool/ROOT/debian
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  14. +
+
+
+

Step 6: First Boot

+
    +
  1. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/debian@install
    +zfs snapshot rpool/ROOT/debian@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.
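
    For example, before a future upgrade (the snapshot name is arbitrary):

    zfs snapshot bpool/BOOT/debian@pre-upgrade
    +zfs snapshot rpool/ROOT/debian@pre-upgrade
    +
    +# Later, remove snapshots you no longer need, e.g.:
    +zfs destroy bpool/BOOT/debian@install
    +zfs destroy rpool/ROOT/debian@install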

    +
  2. +
  3. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  4. +
  5. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  6. +
  7. If this fails for rpool, mounting it on boot will fail and you will need to run zpool import -f rpool, then run exit at the initramfs prompt.

  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +zfs create rpool/home/$username
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting:

      +
      dpkg-reconfigure grub-pc
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment debian-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 7: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest available algorithm. As this guide recommends ashift=12 (4 KiB blocks on disk), the common case of a 4 KiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 8: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +
    tasksel --new-install
    +
    +
    +

    Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 9: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/debian@install
    +sudo zfs destroy rpool/ROOT/debian@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.
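
    For example, you could add a second layer of encryption with GnuPG before uploading the backup (assuming gnupg is installed; it will prompt for a passphrase):

    gpg --symmetric --output luks1-header.dat.gpg luks1-header.dat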

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
apt install --yes cryptsetup
+
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/debian
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.
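
For example:

echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all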

+

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.
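
For example, to wait 15 seconds (the value is illustrative; pick one long enough for your controller), then rebuild the initramfs so the setting takes effect:

echo ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15 >> /etc/default/zfs
+update-initramfs -u -k all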

+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Buster Root on ZFS.html b/Getting Started/Debian/Debian Buster Root on ZFS.html new file mode 100644 index 000000000..446fd2a77 --- /dev/null +++ b/Getting Started/Debian/Debian Buster Root on ZFS.html @@ -0,0 +1,1315 @@ + + + + + + + Debian Buster Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Buster Root on ZFS

+ +
+

Overview

+
+

Newer release available

+
    +
  • See Debian Bullseye Root on ZFS for +new installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Debian GNU/Linux Live CD. If prompted, login with the username +user and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo vi /etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian buster main contrib
    +deb http://deb.debian.org/debian buster-backports main contrib
    +
    +
    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo apt install --yes openssh-server
    +
    +sudo systemctl restart ssh
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64
    +
    +apt install --yes -t buster-backports --no-install-recommends zfs-dkms
    +
    +modprobe zfs
    +apt install --yes -t buster-backports zfsutils-linux
    +
    +
    +
      +
    • The dkms dependency is installed manually just so it comes from buster +and not buster-backports. This is not critical.

    • +
    • We need to get the module built and loaded before installing +zfsutils-linux or zfs-mount.service will fail to start.

    • +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.
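
    For example, for a second disk referenced by a DISK2 variable (shown here with the unencrypted / ZFS native encryption partition type):

    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK2
    +sgdisk     -n2:1M:+512M   -t2:EF00 $DISK2
    +sgdisk     -n3:0:+1G      -t3:BF01 $DISK2
    +sgdisk     -n4:0:0        -t4:BF00 $DISK2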

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      apt install --yes cryptsetup
      +
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality was implemented in Ubuntu with the +zsys tool, though its dataset layout is more complicated, and zsys +is on life support. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
    +zfs mount rpool/ROOT/debian
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/debian
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                                 rpool/home
    +zfs create -o mountpoint=/root             rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off                 rpool/var
    +zfs create -o canmount=off                 rpool/var/lib
    +zfs create                                 rpool/var/log
    +zfs create                                 rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to exclude these from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /opt on this system:

    +
    zfs create                                 rpool/opt
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create                                 rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off                 rpool/usr
    +zfs create                                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create                                 rpool/var/games
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create                                 rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create                                 rpool/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create                                 rpool/var/www
    +
    +
    +

    If this system will use GNOME:

    +
    zfs create                                 rpool/var/lib/AccountsService
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    +
    +
    +

    If this system will use NFS (locking):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
  6. +
  7. Install the minimal system:

    +
    debootstrap buster /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
  8. +
  9. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/network/interfaces.d/NAME
    +
    +
    +
    auto NAME
    +iface NAME inet dhcp
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://deb.debian.org/debian buster main contrib
    +deb-src http://deb.debian.org/debian buster main contrib
    +
    +deb http://security.debian.org/debian-security buster/updates main contrib
    +deb-src http://security.debian.org/debian-security buster/updates main contrib
    +
    +deb http://deb.debian.org/debian buster-updates main contrib
    +deb-src http://deb.debian.org/debian buster-updates main contrib
    +
    +
    +
    vi /mnt/etc/apt/sources.list.d/buster-backports.list
    +
    +
    +
    deb http://deb.debian.org/debian buster-backports main contrib
    +deb-src http://deb.debian.org/debian buster-backports main contrib
    +
    +
    +
    vi /mnt/etc/apt/preferences.d/90_zfs
    +
    +
    +
    Package: src:zfs-linux
    +Pin: release n=buster-backports
    +Pin-Priority: 990
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --rbind /dev  /mnt/dev
    +mount --rbind /proc /mnt/proc
    +mount --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +apt update
    +
    +apt install --yes console-setup locales
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +
  10. +
  11. Install ZFS in the chroot environment for the new system:

    +
    apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
    +
    +apt install --yes zfs-initramfs
    +
    +echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup does +not support ZFS.

    +
  12. +
  13. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  14. +
  15. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      apt install --yes grub-pc
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB for UEFI booting:

      +
      apt install dosfstools
      +
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +apt install --yes grub-efi-amd64 shim-signed
      +
      +
      +

      Notes:

      +
        +
      • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

      • +
      +
    • +
    +
  16. +
  17. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  18. +
  19. Set a root password:

    +
    passwd
    +
    +
    +
  20. +
  21. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +
  22. +
  23. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  24. +
  25. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  26. +
  27. Optional (but kindly requested): Install popcon

    +

    The popularity-contest package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.

    +
    apt install --yes popularity-contest
    +
    +
    +

    Choose Yes at the prompt.

    +
  28. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub-install +command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=debian --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
  13. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/debian
    +zfs set canmount=noauto rpool/ROOT/debian
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  14. +
+
+
+

Step 6: First Boot

+
    +
  1. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/debian@install
    +zfs snapshot rpool/ROOT/debian@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  2. +
  3. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  4. +
  5. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  8. +
  9. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +zfs create rpool/home/$username
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username
    +
    +
    +
  10. +
  11. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting:

      +
      dpkg-reconfigure grub-pc
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment debian-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  12. +
+
+
+

Step 7: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest available algorithm. As this guide recommends ashift=12 (4 KiB blocks on disk), the common case of a 4 KiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 8: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +
    tasksel --new-install
    +
    +
    +

    Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 9: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/debian@install
    +sudo zfs destroy rpool/ROOT/debian@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
apt install --yes cryptsetup
+
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/debian
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --rbind /dev  /mnt/dev
+mount --rbind /proc /mnt/proc
+mount --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
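
For reference, a minimal sketch of the two steps described above (the module name arcsas is taken from the paragraph; confirm it matches the driver you actually use):

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+
+
+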

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in the kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+
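
A sketch of that workaround, using 15 seconds as an arbitrary example value (tune X for your hardware) and regenerating the initramfs so the change is picked up at boot:

+
echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15' >> /etc/default/zfs
+update-initramfs -u -k all
+
+
+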

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian GNU Linux initrd documentation.html b/Getting Started/Debian/Debian GNU Linux initrd documentation.html new file mode 100644 index 000000000..6935ee60a --- /dev/null +++ b/Getting Started/Debian/Debian GNU Linux initrd documentation.html @@ -0,0 +1,250 @@ + + + + + + + Debian GNU Linux initrd documentation — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian GNU Linux initrd documentation

+
+

Supported boot parameters

+
    +
  • rollback=<on|yes|1> Do a rollback of specified snapshot.

  • +
  • zfs_debug=<on|yes|1> Debug the initrd script

  • +
  • zfs_force=<on|yes|1> Force importing the pool. Should not be +necessary.

  • +
  • zfs=<off|no|0> Don’t try to import ANY pool, mount ANY filesystem or +even load the module.

  • +
  • rpool=<pool> Use this pool for root pool.

  • +
  • bootfs=<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=ZFS=<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=zfs:<pool>/<dataset> Use this dataset for root filesystem.

  • +
  • root=zfs:AUTO Try to detect both pool and rootfs

  • +
+

In all these cases, <dataset> could also be <dataset>@<snapshot>.

+

The reason there are so many supported boot options to get the root +filesystem is that there are a lot of different ways to boot ZFS out +there, and I wanted to make sure I supported them all.

+
+
+
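
As an illustration, here are two example sets of kernel parameters built from the options above (the dataset name rpool/ROOT/debian matches the layout used elsewhere in this document):

+
# Boot a specific dataset:
+root=ZFS=rpool/ROOT/debian ro
+
+# Let the initrd detect the pool and root filesystem, with debugging enabled:
+root=zfs:AUTO zfs_debug=1 ro
+
+
+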

Pool imports

+
+

Import using /dev/disk/by-*

+

The initrd will, if the variable USE_DISK_BY_ID is set in the file +/etc/default/zfs, try to import using the /dev/disk/by-* links. It will try +to import in this order (see the example after the list):

+
    +
  1. /dev/disk/by-vdev

  2. +
  3. /dev/disk/by-*

  4. +
  5. /dev

  6. +
+
+
+
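
A sketch of that setting (the value yes is an example; the variable lives in the file named above):

+
# In /etc/default/zfs
+USE_DISK_BY_ID='yes'
+
+
+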

Import using cache file

+

If all of these imports fail (or if USE_DISK_BY_ID is unset), it will +then try to import using the cache file.

+
+
+

Last ditch attempt at importing

+

If that ALSO fails, it will try one more time, without any -d or -c +options.

+
+
+
+
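
Taken together, the three stages described above correspond roughly to the following commands (a sketch, not the exact initrd code; rpool is an example pool name):

+
zpool import -d /dev/disk/by-vdev -N rpool      # using the /dev/disk/by-* links
+zpool import -c /etc/zfs/zpool.cache -N rpool   # using the cache file
+zpool import -N rpool                           # last ditch, without -d or -c
+
+
+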

Booting

+
+

Booting from snapshot:

+

Enter the snapshot for the root= parameter like in this example:

+
linux   /BOOT/debian@/boot/vmlinuz-5.10.0-9-amd64 root=ZFS=rpool/ROOT/debian@some_snapshot ro
+
+
+

This will clone the snapshot rpool/ROOT/debian@some_snapshot into the +filesystem rpool/ROOT/debian_some_snapshot and use that as the root +filesystem. The original filesystem and snapshot are left alone in this +case.

+

BEWARE that it will first destroy, blindly, the +rpool/ROOT/debian_some_snapshot filesystem before trying to clone the +snapshot into it again. So if you’ve booted from the same snapshot +previously and made some changes in that root filesystem, they will be +undone by the destruction of the filesystem.

+
+
+
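
In terms of ordinary ZFS commands, the behavior described above is roughly equivalent to the following (a sketch, not the exact initrd script):

+
zfs destroy rpool/ROOT/debian_some_snapshot
+zfs clone rpool/ROOT/debian@some_snapshot rpool/ROOT/debian_some_snapshot
+
+
+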

Snapshot rollback

+

From version 0.6.4-1-3 it is now also possible to specify rollback=1 to +do a rollback of the snapshot instead of cloning it. BEWARE that +this will destroy all snapshots done after the specified snapshot!

+
+
+
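
This is roughly equivalent to the following command, where the -r is what destroys the more recent snapshots (a sketch of the effect, not the exact initrd code):

+
zfs rollback -r rpool/ROOT/debian@some_snapshot
+
+
+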

Select snapshot dynamically

+

From version 0.6.4-1-3 it is now also possible to specify a NULL +snapshot name (such as root=rpool/ROOT/debian@). If so, the initrd +script will discover all snapshots below that filesystem (without the @), +and output a list of snapshots for the user to choose from.

+
+
+

Booting from native encrypted filesystem

+

Although there is currently no support for native encryption in ZFS On +Linux, there is a patch floating around ‘out there’, and the initrd +supports loading the key and unlocking such encrypted filesystems.

+
+
+

Separated filesystems

+
+

Descended filesystems

+

If there are separate filesystems (for example a separate dataset for +/usr), the snapshot boot code will try to find the snapshot under each +filesystem and clone (or roll back) them.

+

Example:

+
rpool/ROOT/debian@some_snapshot
+rpool/ROOT/debian/usr@some_snapshot
+
+
+

These will create the following filesystems respectively (if not doing a +rollback):

+
rpool/ROOT/debian_some_snapshot
+rpool/ROOT/debian/usr_some_snapshot
+
+
+

The initrd code will use the mountpoint option (if any) in the original +(without the snapshot part) dataset to find where it should mount the +dataset. Or it will use the name of the dataset below the root +filesystem (rpool/ROOT/debian in this example) for the mount point.

+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/Debian Stretch Root on ZFS.html b/Getting Started/Debian/Debian Stretch Root on ZFS.html new file mode 100644 index 000000000..a10e43b7e --- /dev/null +++ b/Getting Started/Debian/Debian Stretch Root on ZFS.html @@ -0,0 +1,1077 @@ + + + + + + + Debian Stretch Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian Stretch Root on ZFS

+ +
+

Overview

+
+

Newer release available

+ +
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need massive amounts of +RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). ZFS native encryption has not yet been +released. With either option, all ZFS features are fully available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

LUKS encrypts almost everything: the OS, swap, home directories, and +anything else. The only unencrypted data is the bootloader, kernel, and +initrd. The system cannot boot without the passphrase being entered at +the console. Performance is good, but LUKS sits underneath ZFS, so if +multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+

1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the +username user and password live. Connect your system to the +Internet as appropriate (e.g. join your WiFi network).

+

1.2 Optional: Install and start the OpenSSH server in the Live CD +environment:

+

If you have a second system, using SSH to access the target system can +be convenient.

+
$ sudo apt update
+$ sudo apt install --yes openssh-server
+$ sudo systemctl restart ssh
+
+
+

Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

+

1.3 Become root:

+
$ sudo -i
+
+
+

1.4 Setup and update the repositories:

+
# echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list
+# echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list
+# apt update
+
+
+

1.5 Install ZFS in the Live CD environment:

+
# apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64
+# apt install --yes -t stretch-backports zfs-dkms
+# modprobe zfs
+
+
+
    +
  • The dkms dependency is installed manually just so it comes from +stretch and not stretch-backports. This is not critical.

  • +
+
+
+

Step 2: Disk Formatting

+

2.1 If you are re-using a disk, clear it as necessary:

+
If the disk was previously used in an MD array, zero the superblock:
+# apt install --yes mdadm
+# mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
+
+Clear the partition table:
+# sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
+
+
+

2.2 Partition your disk(s):

+
Run this if you need legacy (BIOS) booting:
+# sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1
+
+Run this for UEFI booting (for use now or in the future):
+# sgdisk     -n2:1M:+512M   -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1
+
+Run this for the boot pool:
+# sgdisk     -n3:0:+1G      -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1
+
+
+

Choose one of the following options:

+

2.2a Unencrypted:

+
# sgdisk     -n4:0:0        -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1
+
+
+

2.2b LUKS:

+
# sgdisk     -n4:0:0        -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
+
+
+

Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

+

Hints:

+
    +
  • ls -la /dev/disk/by-id will list the aliases.

  • +
  • Are you doing this in a virtual machine? If your virtual disk is +missing from /dev/disk/by-id, use /dev/vda if you are using +KVM with virtio; otherwise, read the +troubleshooting section.

  • +
  • If you are creating a mirror or raidz topology, repeat the +partitioning commands for all the disks which will be part of the +pool.

  • +
+

2.3 Create the boot pool:

+
# zpool create -o ashift=12 -d \
+      -o feature@async_destroy=enabled \
+      -o feature@bookmarks=enabled \
+      -o feature@embedded_data=enabled \
+      -o feature@empty_bpobj=enabled \
+      -o feature@enabled_txg=enabled \
+      -o feature@extensible_dataset=enabled \
+      -o feature@filesystem_limits=enabled \
+      -o feature@hole_birth=enabled \
+      -o feature@large_blocks=enabled \
+      -o feature@lz4_compress=enabled \
+      -o feature@spacemap_histogram=enabled \
+      -o feature@userobj_accounting=enabled \
+      -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
+      -O normalization=formD -O relatime=on -O xattr=sa \
+      -O mountpoint=/ -R /mnt \
+      bpool /dev/disk/by-id/scsi-SATA_disk1-part3
+
+
+

You should not need to customize any of the options for the boot pool.

+

GRUB does not support all of the zpool features. See +spa_feature_names in +grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

+

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks).

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

  • +
+

2.4 Create the root pool:

+

Choose one of the following options:

+

2.4a Unencrypted:

+
# zpool create -o ashift=12 \
+      -O acltype=posixacl -O canmount=off -O compression=lz4 \
+      -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+      -O mountpoint=/ -R /mnt \
+      rpool /dev/disk/by-id/scsi-SATA_disk1-part4
+
+
+

2.4b LUKS:

+
# apt install --yes cryptsetup
+# cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \
+      /dev/disk/by-id/scsi-SATA_disk1-part4
+# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# zpool create -o ashift=12 \
+      -O acltype=posixacl -O canmount=off -O compression=lz4 \
+      -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+      -O mountpoint=/ -R /mnt \
+      rpool /dev/mapper/luks1
+
+
+
    +
  • The use of ashift=12 is recommended here because many drives +today have 4KiB (or larger) physical sectors, even though they +present 512B logical sectors. Also, a future replacement drive may +have 4KiB physical sectors (in which case ashift=12 is desirable) +or 4KiB logical sectors (in which case ashift=12 is required). You can +check what sector sizes your drives report with the command shown after this list.

  • +
  • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires +ACLs

  • +
  • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only +filenames.

  • +
  • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s +documentation +for further information.

  • +
  • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI +applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain +controller. +Note that xattr=sa is Linux-specific +(https://openzfs.org/wiki/Platform_code_differences). +If you move your xattr=sa pool to another OpenZFS implementation +besides ZFS-on-Linux, extended attributes will not be readable +(though your data will be). If portability of extended attributes is +important to you, omit the -O xattr=sa above. Even if you do not +want xattr=sa for the whole pool, it is probably fine to use it +for /var/log.

  • +
  • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

  • +
  • For LUKS, the key size chosen is 512 bits. However, XTS mode requires +two keys, so the LUKS key is split in half. Thus, -s 512 means +AES-256.

  • +
  • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup +FAQ +for guidance.

  • +
+
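
To check what sector sizes your drives actually report (relevant to the ashift=12 note above), lsblk can show both the physical and logical values; the disk path below is the example name used throughout this guide:

+
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/disk/by-id/scsi-SATA_disk1
+
+
+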

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks). For LUKS, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will +have to create using cryptsetup.

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the +root pool is named rpool by default.

  • +
+
+
+

Step 3: System Installation

+

3.1 Create filesystem datasets to act as containers:

+
# zfs create -o canmount=off -o mountpoint=none rpool/ROOT
+# zfs create -o canmount=off -o mountpoint=none bpool/BOOT
+
+
+

On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality for APT is possible but currently +unimplemented. Even without such a tool, it can still be used for +manually created clones.

+

3.2 Create filesystem datasets for the root and boot filesystems:

+
# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
+# zfs mount rpool/ROOT/debian
+
+# zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian
+# zfs mount bpool/BOOT/debian
+
+
+

With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

+

3.3 Create datasets:

+
# zfs create                                 rpool/home
+# zfs create -o mountpoint=/root             rpool/home/root
+# zfs create -o canmount=off                 rpool/var
+# zfs create -o canmount=off                 rpool/var/lib
+# zfs create                                 rpool/var/log
+# zfs create                                 rpool/var/spool
+
+The datasets below are optional, depending on your preferences and/or
+software choices:
+
+If you wish to exclude these from snapshots:
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
+# chmod 1777 /mnt/var/tmp
+
+If you use /opt on this system:
+# zfs create                                 rpool/opt
+
+If you use /srv on this system:
+# zfs create                                 rpool/srv
+
+If you use /usr/local on this system:
+# zfs create -o canmount=off                 rpool/usr
+# zfs create                                 rpool/usr/local
+
+If this system will have games installed:
+# zfs create                                 rpool/var/games
+
+If this system will store local email in /var/mail:
+# zfs create                                 rpool/var/mail
+
+If this system will use Snap packages:
+# zfs create                                 rpool/var/snap
+
+If you use /var/www on this system:
+# zfs create                                 rpool/var/www
+
+If this system will use GNOME:
+# zfs create                                 rpool/var/lib/AccountsService
+
+If this system will use Docker (which manages its own datasets & snapshots):
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
+
+If this system will use NFS (locking):
+# zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
+
+A tmpfs is recommended later, but if you want a separate dataset for /tmp:
+# zfs create -o com.sun:auto-snapshot=false  rpool/tmp
+# chmod 1777 /mnt/tmp
+
+
+

The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data such as logs (in /var/log). This will be especially +important if/when a beadm or similar utility is integrated. The +com.sun:auto-snapshot property is used by some ZFS snapshot utilities +to exclude transient data.

+

If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +/tmp, as shown above. This keeps the /tmp data out of snapshots +of your root filesystem. It also allows you to set a quota on +rpool/tmp, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later.

+
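
For example, a quota could later be applied to the optional /tmp dataset like this (the 2G value is only an illustration):

+
# zfs set quota=2G rpool/tmp
+
+
+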

3.4 Install the minimal system:

+
# debootstrap stretch /mnt
+# zfs set devices=off rpool
+
+
+

The debootstrap command leaves the new system in an unconfigured +state. An alternative to using debootstrap is to copy the entirety +of a working system into the new ZFS root.

+
+
+
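
A minimal sketch of that alternative, assuming the working system is available read-only at the hypothetical path /sysroot and that rsync is installed (pseudo-filesystems such as /proc and /sys should not be copied):

+
# rsync -aAXH /sysroot/ /mnt/
+
+
+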

Step 4: System Configuration

+

4.1 Configure the hostname (change HOSTNAME to the desired +hostname).

+
# echo HOSTNAME > /mnt/etc/hostname
+
+# vi /mnt/etc/hosts
+Add a line:
+127.0.1.1       HOSTNAME
+or if the system has a real name in DNS:
+127.0.1.1       FQDN HOSTNAME
+
+
+

Hint: Use nano if you find vi confusing.

+

4.2 Configure the network interface:

+
Find the interface name:
+# ip addr show
+
+# vi /mnt/etc/network/interfaces.d/NAME
+auto NAME
+iface NAME inet dhcp
+
+
+

Customize this file if the system is not a DHCP client.

+

4.3 Configure the package sources:

+
# vi /mnt/etc/apt/sources.list
+deb http://deb.debian.org/debian stretch main contrib
+deb-src http://deb.debian.org/debian stretch main contrib
+deb http://security.debian.org/debian-security stretch/updates main contrib
+deb-src http://security.debian.org/debian-security stretch/updates main contrib
+deb http://deb.debian.org/debian stretch-updates main contrib
+deb-src http://deb.debian.org/debian stretch-updates main contrib
+
+# vi /mnt/etc/apt/sources.list.d/stretch-backports.list
+deb http://deb.debian.org/debian stretch-backports main contrib
+deb-src http://deb.debian.org/debian stretch-backports main contrib
+
+# vi /mnt/etc/apt/preferences.d/90_zfs
+Package: src:zfs-linux
+Pin: release n=stretch-backports
+Pin-Priority: 990
+
+
+

4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

+
# mount --rbind /dev  /mnt/dev
+# mount --rbind /proc /mnt/proc
+# mount --rbind /sys  /mnt/sys
+# chroot /mnt /bin/bash --login
+
+
+

Note: This is using --rbind, not --bind.

+

4.5 Configure a basic system environment:

+
# ln -s /proc/self/mounts /etc/mtab
+# apt update
+
+# apt install --yes locales
+# dpkg-reconfigure locales
+
+
+

Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available.

+
# dpkg-reconfigure tzdata
+
+
+

4.6 Install ZFS in the chroot environment for the new system:

+
# apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
+# apt install --yes zfs-initramfs
+
+
+

4.7 For LUKS installs only, setup crypttab:

+
# apt install --yes cryptsetup
+
+# echo luks1 UUID=$(blkid -s UUID -o value \
+      /dev/disk/by-id/scsi-SATA_disk1-part4) none \
+      luks,discard,initramfs > /etc/crypttab
+
+
+ +

Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

+

4.8 Install GRUB

+

Choose one of the following options:

+

4.8a Install GRUB for legacy (BIOS) booting

+
# apt install --yes grub-pc
+
+
+

Install GRUB to the disk(s), not the partition(s).

+

4.8b Install GRUB for UEFI booting

+
# apt install dosfstools
+# mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2
+# mkdir /boot/efi
+# echo PARTUUID=$(blkid -s PARTUUID -o value \
+      /dev/disk/by-id/scsi-SATA_disk1-part2) \
+      /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
+# mount /boot/efi
+# apt install --yes grub-efi-amd64 shim
+
+
+
    +
  • The -s 1 for mkdosfs is only necessary for drives which +present 4 KiB logical sectors (“4Kn” drives) to meet the minimum +cluster size (given the partition size of 512 MiB) for FAT32. It also +works fine on drives which present 512 B sectors.

  • +
+

Note: If you are creating a mirror or raidz topology, this step only +installs GRUB on the first disk. The other disk(s) will be handled +later.

+

4.9 Set a root password

+
# passwd
+
+
+

4.10 Enable importing bpool

+

This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

+
# vi /etc/systemd/system/zfs-import-bpool.service
+[Unit]
+DefaultDependencies=no
+Before=zfs-import-scan.service
+Before=zfs-import-cache.service
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=/sbin/zpool import -N -o cachefile=none bpool
+
+[Install]
+WantedBy=zfs-import.target
+
+# systemctl enable zfs-import-bpool.service
+
+
+

4.11 Optional (but recommended): Mount a tmpfs to /tmp

+

If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

+
# cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+# systemctl enable tmp.mount
+
+
+

4.12 Optional (but kindly requested): Install popcon

+

The popularity-contest package reports the list of packages installed +on your system. Showing that ZFS is popular may be helpful in terms of +long-term attention from the distro.

+
# apt install --yes popularity-contest
+
+
+

Choose Yes at the prompt.

+
+
+

Step 5: GRUB Installation

+

5.1 Verify that the ZFS boot filesystem is recognized:

+
# grub-probe /boot
+zfs
+
+
+

5.2 Refresh the initrd files:

+
# update-initramfs -u -k all
+update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64
+
+
+

Note: When using LUKS, this will print “WARNING could not determine +root device from /etc/fstab”. This is because cryptsetup does not +support +ZFS.

+

5.3 Workaround GRUB’s missing zpool-features support:

+
# vi /etc/default/grub
+Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
+
+
+

5.4 Optional (but highly recommended): Make debugging GRUB easier:

+
# vi /etc/default/grub
+Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
+Uncomment: GRUB_TERMINAL=console
+Save and quit.
+
+
+

Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

+

5.5 Update the boot configuration:

+
# update-grub
+Generating grub configuration file ...
+Found linux image: /boot/vmlinuz-4.9.0-8-amd64
+Found initrd image: /boot/initrd.img-4.9.0-8-amd64
+done
+
+
+

Note: Ignore errors from osprober, if present.

+

5.6 Install the boot loader

+

5.6a For legacy (BIOS) booting, install GRUB to the MBR:

+
# grub-install /dev/disk/by-id/scsi-SATA_disk1
+Installing for i386-pc platform.
+Installation finished. No error reported.
+
+
+

Do not reboot the computer until you get exactly that result message. +Note that you are installing GRUB to the whole disk, not a partition.

+

If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

+

5.6b For UEFI booting, install GRUB:

+
# grub-install --target=x86_64-efi --efi-directory=/boot/efi \
+      --bootloader-id=debian --recheck --no-floppy
+
+
+

5.7 Verify that the ZFS module is installed:

+
# ls /boot/grub/*/zfs.mod
+
+
+

5.8 Fix filesystem mount ordering

+

Until ZFS gains a systemd mount +generator, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +#5754) seem to be +with certain filesystems in /var, specifically /var/log and +/var/tmp. Setting these to use legacy mounting, and listing them +in /etc/fstab makes systemd aware that these are separate +mountpoints. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp +feature of systemd automatically use After=var-tmp.mount.

+

Until there is support for mounting /boot in the initramfs, we also +need to mount that, because it was marked canmount=noauto. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +/boot/efi.

+

rpool is guaranteed to be imported by the initramfs, so there is no +point in adding x-systemd.requires=zfs-import.target to those +filesystems.

+
For UEFI booting, unmount /boot/efi first:
+# umount /boot/efi
+
+Everything else applies to both BIOS and UEFI booting:
+
+# zfs set mountpoint=legacy bpool/BOOT/debian
+# echo bpool/BOOT/debian /boot zfs \
+      nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
+
+# zfs set mountpoint=legacy rpool/var/log
+# echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab
+
+# zfs set mountpoint=legacy rpool/var/spool
+# echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab
+
+If you created a /var/tmp dataset:
+# zfs set mountpoint=legacy rpool/var/tmp
+# echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+If you created a /tmp dataset:
+# zfs set mountpoint=legacy rpool/tmp
+# echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+
+
+

Step 6: First Boot

+

6.1 Snapshot the initial installation:

+
# zfs snapshot bpool/BOOT/debian@install
+# zfs snapshot rpool/ROOT/debian@install
+
+
+

In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

+

6.2 Exit from the chroot environment back to the LiveCD environment:

+
# exit
+
+
+

6.3 Run these commands in the LiveCD environment to unmount all +filesystems:

+
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+# zpool export -a
+
+
+

6.4 Reboot:

+
# reboot
+
+
+

6.5 Wait for the newly installed system to boot normally. Login as root.

+

6.6 Create a user account:

+
# zfs create rpool/home/YOURUSERNAME
+# adduser YOURUSERNAME
+# cp -a /etc/skel/.[!.]* /home/YOURUSERNAME
+# chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
+
+
+

6.7 Add your user account to the default set of groups for an +administrator:

+
# usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME
+
+
+

6.8 Mirror GRUB

+

If you installed to multiple disks, install GRUB on the additional +disks:

+

6.8a For legacy (BIOS) booting:

+
# dpkg-reconfigure grub-pc
+Hit enter until you get to the device selection screen.
+Select (using the space bar) all of the disks (not partitions) in your pool.
+
+
+

6.8b UEFI

+
# umount /boot/efi
+
+For the second and subsequent disks (increment debian-2 to -3, etc.):
+# dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
+     of=/dev/disk/by-id/scsi-SATA_disk2-part2
+# efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
+      -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
+
+# mount /boot/efi
+
+
+
+
+

Step 7: (Optional) Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. This issue is currently being investigated in: +https://github.com/zfsonlinux/zfs/issues/7734

+

7.1 Create a volume dataset (zvol) for use as a swap device:

+
# zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
+      -o logbias=throughput -o sync=always \
+      -o primarycache=metadata -o secondarycache=none \
+      -o com.sun:auto-snapshot=false rpool/swap
+
+
+

You can adjust the size (the 4G part) to your needs.

+

The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

+

7.2 Configure the swap device:

+

Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

+
# mkswap -f /dev/zvol/rpool/swap
+# echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
+# echo RESUME=none > /etc/initramfs-tools/conf.d/resume
+
+
+

The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

+

7.3 Enable the swap device:

+
# swapon -av
+
+
+
+
+

Step 8: Full Software Installation

+

8.1 Upgrade the minimal system:

+
# apt dist-upgrade --yes
+
+
+

8.2 Install a regular set of software:

+
# tasksel
+
+
+

Note: This will check “Debian desktop environment” and “print server” +by default. If you want a server installation, unselect those.

+

8.3 Optional: Disable log compression:

+

As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of /var/log, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in /etc/logrotate.d +by hand to comment out compress, or use this loop (copy-and-paste +highly recommended):

+
# for file in /etc/logrotate.d/* ; do
+    if grep -Eq "(^|[^#y])compress" "$file" ; then
+        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+    fi
+done
+
+
+

8.4 Reboot:

+
# reboot
+
+
+
+

Step 9: Final Cleanup

+

9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

+

9.2 Optional: Delete the snapshots of the initial installation:

+
$ sudo zfs destroy bpool/BOOT/debian@install
+$ sudo zfs destroy rpool/ROOT/debian@install
+
+
+

9.3 Optional: Disable the root password

+
$ sudo usermod -p '*' root
+
+
+

9.4 Optional: Re-enable the graphical boot process:

+

If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

+
$ sudo vi /etc/default/grub
+Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
+Comment out GRUB_TERMINAL=console
+Save and quit.
+
+$ sudo update-grub
+
+
+

Note: Ignore errors from osprober, if present.

+

9.5 Optional: For LUKS installs only, backup the LUKS header:

+
$ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+    --header-backup-file luks1-header.dat
+
+
+

Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption.

+

Hint: If you created a mirror or raidz topology, repeat this for +each LUKS volume (luks2, etc.).

+
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install +Environment.

+

This will automatically import your pool. Export it and re-import it to +get the mounts right:

+
For LUKS, first unlock the disk(s):
+# apt install --yes cryptsetup
+# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+Repeat for additional disks, if this is a mirror or raidz topology.
+
+# zpool export -a
+# zpool import -N -R /mnt rpool
+# zpool import -N -R /mnt bpool
+# zfs mount rpool/ROOT/debian
+# zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
# mount --rbind /dev  /mnt/dev
+# mount --rbind /proc /mnt/proc
+# mount --rbind /sys  /mnt/sys
+# chroot /mnt /bin/bash --login
+# mount /boot/efi
+# mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
# exit
+# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+# zpool export -a
+# reboot
+
+
+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run +update-initramfs -u -k all.

+

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in the kernel log. ZoL is unstable on systems that emit +this error message.

+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere +configuration. Doing this ensures that /dev/disk aliases are +created in the guest.

  • +
+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
$ sudo apt install ovmf
+$ sudo vi /etc/libvirt/qemu.conf
+Uncomment these lines:
+nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
+]
+$ sudo service libvirt-bin restart
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Debian/index.html b/Getting Started/Debian/index.html new file mode 100644 index 000000000..4fae27af9 --- /dev/null +++ b/Getting Started/Debian/index.html @@ -0,0 +1,209 @@ + + + + + + + Debian — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Debian

+ +
+

Installation

+

If you want to use ZFS as your root filesystem, see the Root on ZFS +links below instead.

+

ZFS packages are included in the contrib repository. The +backports repository +often provides newer releases of ZFS. You can use it as follows.

+

Add the backports repository:

+
vi /etc/apt/sources.list.d/bookworm-backports.list
+
+
+
deb http://deb.debian.org/debian bookworm-backports main contrib
+deb-src http://deb.debian.org/debian bookworm-backports main contrib
+
+
+
vi /etc/apt/preferences.d/90_zfs
+
+
+
Package: src:zfs-linux
+Pin: release n=bookworm-backports
+Pin-Priority: 990
+
+
+

Install the packages:

+
apt update
+apt install dpkg-dev linux-headers-generic linux-image-generic
+apt install zfs-dkms zfsutils-linux
+
+
+

Caution: If you are in a poorly configured environment (e.g. certain VM or container consoles), when apt attempts to pop up a message on first install, it may fail to notice a real console is unavailable, and instead appear to hang indefinitely. To circumvent this, you can prefix the apt install commands with DEBIAN_FRONTEND=noninteractive, like this:

+
DEBIAN_FRONTEND=noninteractive apt install zfs-dkms zfsutils-linux
+
+
+
+
+

Root on ZFS

+ +
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Fedora.html b/Getting Started/Fedora.html new file mode 100644 index 000000000..85c27d311 --- /dev/null +++ b/Getting Started/Fedora.html @@ -0,0 +1,116 @@ + + + + + + + Fedora — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Fedora

+

This page has been moved to here.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Fedora/Root on ZFS.html b/Getting Started/Fedora/Root on ZFS.html new file mode 100644 index 000000000..2cad76960 --- /dev/null +++ b/Getting Started/Fedora/Root on ZFS.html @@ -0,0 +1,605 @@ + + + + + + + Fedora Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Fedora Root on ZFS

+
+

Notes

+
    +
  • As an alternative to the below method of installing Fedora Linux on a ZFS root filesystem, you can use the unofficial script fedora-on-zfs, which is more automated and can generate a Fedora Linux installation that is closer to an official Fedora Linux configuration. The fedora-on-zfs script is different from the below method in that it uses one of Fedora’s official kickstarts (fedora-disk-minimal.ks, fedora-disk-workstation.ks, fedora-disk-kde.ks, etc.) to guide the installation, but with a few overrides to add the ZFS functionality. Bug reports should be submitted to Greg’s fedora-on-zfs GitHub repo.

  • +
+

ZFSBootMenu

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+
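
After the pool is created later in this guide, one way to review which feature flags it ended up with (and whether they are enabled, active, or disabled) is to list its feature@ properties:

+
zpool get all rpool | grep feature@
+
+
+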

UEFI support only

+

Only UEFI is supported by this guide.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.

  2. +
  3. Because the kernel of the latest Live CD might be incompatible with +ZFS, we will use Alpine Linux Extended, which ships with ZFS by +default.

    +

    Download the latest extended variant of the Alpine Linux +live image, +verify the checksum, +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using a wireless network and it is not shown, see the Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without an internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as the disk bus, power off the VM and set serial numbers for the disks. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit the domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

    Set the swap size in GB; set it to 1 if you don’t want swap to +take up too much space.

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 1MiB 4GiB \
    + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + set 1 esp on \
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Set up temporary encrypted swap for this installation only. This is +useful if the available memory is small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3
    +   mkswap /dev/mapper/"${i##*/}"-part3
    +   swapon /dev/mapper/"${i##*/}"-part3
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create root pool

    +
      +
    • Unencrypted:

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=none \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '%s ' "${i}-part2";
      +     done)
      +
      +
      +
    • +
    +
  8. +
  9. Create root system container:

    +
    +
    # dracut demands system root dataset to have non-legacy mountpoint
    +zfs create -o canmount=noauto -o mountpoint=/ rpool/root
    +
    +
    +
    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o mountpoint=legacy rpool/home
    +zfs mount rpool/root
    +mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home
    +
    +
    +
  10. +
  11. Format and mount the ESPs. Only one of them is mounted as /boot; you need to set up mirroring of their contents afterwards (see Post installation).

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    +done
    +
    +for i in ${DISK}; do
    + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot
    + break
    +done
    +
    +
    +
  12. +
+
+
+

System Configuration

+
    +
  1. Download and extract minimal Fedora root filesystem:

    +
    apk add curl
    +curl --fail-early --fail -L \
    +https://dl.fedoraproject.org/pub/fedora/linux/releases/39/Container/x86_64/images/Fedora-Container-Base-39-1.5.x86_64.tar.xz \
    +-o rootfs.tar.gz
    +curl --fail-early --fail -L \
    +https://dl.fedoraproject.org/pub/fedora/linux/releases/39/Container/x86_64/images/Fedora-Container-39-1.5-x86_64-CHECKSUM \
    +-o checksum
    +
    +# BusyBox sha256sum treats all lines in the checksum file
    +# as checksums and requires two spaces "  "
    +# between filename and checksum
    +
    +grep 'Container-Base' checksum \
    +| grep '^SHA256' \
    +| sed -E 's|.*= ([a-z0-9]*)$|\1  rootfs.tar.gz|' > ./sha256checksum
    +
    +sha256sum -c ./sha256checksum
    +
    +rootfs_tar=$(tar t -af rootfs.tar.gz | grep layer.tar)
    +rootfs_tar_dir=$(dirname "${rootfs_tar}")
    +tar x -af rootfs.tar.gz "${rootfs_tar}"
    +ln -s "${MNT}" "${MNT}"/"${rootfs_tar_dir}"
    +tar x  -C "${MNT}" -af "${rootfs_tar}"
    +unlink "${MNT}"/"${rootfs_tar_dir}"
    +
    +
    +
  2. +
  3. Enable community repo

    +
    sed -i '/edge/d' /etc/apk/repositories
    +sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories
    +
    +
    +
  4. +
  5. Generate fstab:

    +
    apk add arch-install-scripts
    +genfstab -t PARTUUID "${MNT}" \
    +| grep -v swap \
    +| sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \
    +> "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    cp /etc/resolv.conf "${MNT}"/etc/resolv.conf
    +for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash
    +
    +
    +
  8. +
  9. Unset all shell aliases, which can interfere with installation:

    +
    unalias -a
    +
    +
    +
  10. +
  11. Install base packages

    +
    dnf -y install @core kernel kernel-devel
    +
    +
    +
  12. +
  13. Install ZFS packages

    +
    dnf -y install \
    +https://zfsonlinux.org/fedora/zfs-release-2-4"$(rpm --eval "%{dist}"||true)".noarch.rpm
    +
    +dnf -y install zfs zfs-dracut
    +
    +
    +
  14. +
  15. Check whether ZFS modules are successfully built

    +
    tail -n10 /var/lib/dkms/zfs/**/build/make.log
    +
    +# ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_start_io_acct'
    +# ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_end_io_acct_remapped'
    +# make[4]:  [scripts/Makefile.modpost:138: /var/lib/dkms/zfs/2.1.9/build/module/Module.symvers] Error 1
    +# make[3]:  [Makefile:1977: modpost] Error 2
    +# make[3]: Leaving directory '/usr/src/kernels/6.2.9-100.fc36.x86_64'
    +# make[2]:  [Makefile:55: modules-Linux] Error 2
    +# make[2]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build/module'
    +# make[1]:  [Makefile:933: all-recursive] Error 1
    +# make[1]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build'
    +# make:  [Makefile:794: all] Error 2
    +
    +
    +

    If the build failed, you need to install a Long Term Support +kernel and its headers, then rebuild the ZFS module.

    +
    # this is a third-party repo!
    +# you have been warned.
    +#
    +# select a kernel from
    +# https://copr.fedorainfracloud.org/coprs/kwizart/
    +
    +dnf copr enable -y kwizart/kernel-longterm-VERSION
    +dnf install -y kernel-longterm kernel-longterm-devel
    +dnf remove -y kernel-core
    +
    +
    +

    ZFS modules will be built as part of the kernel installation. +Check build log again with tail command.

    +
  16. +
  17. Add zfs modules to dracut

    +
    echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +
    +
    +
  18. +
  19. Add other drivers to dracut:

    +
    if grep mpt3sas /proc/modules; then
    +  echo 'force_drivers+=" mpt3sas "'  >> /etc/dracut.conf.d/zfs.conf
    +fi
    +if grep virtio_blk /proc/modules; then
    +  echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
    +fi
    +
    +
    +
  20. +
  21. Build initrd

    +
    find -D exec /lib/modules -maxdepth 1 \
    +-mindepth 1 -type d \
    +-exec sh -vxc \
    +'if test -e "$1"/modules.dep;
    +   then kernel=$(basename "$1");
    +   dracut --verbose --force --kver "${kernel}";
    + fi' sh {} \;
    +
    +
    +
  22. +
  23. For SELinux, relabel filesystem on reboot:

    +
    fixfiles -F onboot
    +
    +
    +
  24. +
  25. Enable internet time synchronisation:

    +
    systemctl enable systemd-timesyncd
    +
    +
    +
  26. +
  27. Generate host id

    +
    zgenhostid -f -o /etc/hostid
    +
    +
    +
  28. +
  29. Install locale package, example for English locale:

    +
    dnf install -y glibc-minimal-langpack glibc-langpack-en
    +
    +
    +
  30. +
  31. Set locale, keymap, timezone, hostname

    +
    rm -f /etc/localtime
    +rm -f /etc/hostname
    +systemd-firstboot \
    +--force \
    +--locale=en_US.UTF-8 \
    +--timezone=Etc/UTC \
    +--hostname=testhost \
    +--keymap=us || true
    +
    +
    +
  32. +
  33. Set root passwd

    +
    printf 'root:yourpassword' | chpasswd
    +
    +
    +
  34. +
+
+
+

Bootloader

+
    +
  1. Install rEFInd boot loader:

    +
    # from http://www.rodsbooks.com/refind/getting.html
    +# use Binary Zip File option
    +curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip
    +
    +dnf install -y unzip
    +unzip refind.zip
    +mkdir -p /boot/EFI/BOOT
    +find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \
    +| xargs -0I{} mv {} /boot/EFI/BOOT/BOOTX64.EFI
    +rm -rf refind.zip refind-bin-0.14.0.2
    +
    +
    +
  2. +
  3. Add boot entry:

    +
    tee -a /boot/refind-linux.conf <<EOF
    +"Fedora" "root=ZFS=rpool/root"
    +EOF
    +
    +
    +
  4. +
  5. Exit chroot

    +
    exit
    +
    +
    +
  6. +
  7. Unmount filesystems and create an initial system snapshot. You can later create a boot environment from this snapshot. See the Root on ZFS maintenance page.

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +
    +
    +
  8. +
  9. Export all pools

    +
    zpool export -a
    +
    +
    +
  10. +
  11. Reboot

    +
    reboot
    +
    +
    +
  12. +
+
+
+

Post installation

+
    +
  1. Install package groups

    +
    dnf group list --hidden -v       # query package groups
    +dnf group install gnome-desktop
    +
    +
    +
  2. +
  3. Add new user, configure swap.

  4. +
  5. Mount other EFI system partitions then set up a service for syncing +their contents.

  6. +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Fedora/index.html b/Getting Started/Fedora/index.html new file mode 100644 index 000000000..a4a04cf0b --- /dev/null +++ b/Getting Started/Fedora/index.html @@ -0,0 +1,244 @@ + + + + + + + Fedora — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Fedora

+
+

Contents

+ +
+
+

Installation

+

Note: this is for installing ZFS on an existing Fedora installation. To use ZFS as the root file system, see below.

+
    +
  1. If zfs-fuse from the official Fedora repo is installed, remove it first. It is not maintained and should not be used under any circumstances:

    +
    rpm -e --nodeps zfs-fuse
    +
    +
    +
  2. +
  3. Add ZFS repo:

    +
    dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-4$(rpm --eval "%{dist}").noarch.rpm
    +
    +
    +

    A list of repos is available here.

    +
  4. +
  5. Install kernel headers:

    +
    dnf install -y kernel-devel
    +
    +
    +

    The kernel-devel package must be installed before the zfs package.

    +
  6. +
  7. Install ZFS packages:

    +
    dnf install -y zfs
    +
    +
    +
  8. +
  9. Load kernel module:

    +
    modprobe zfs
    +
    +
    +

    If the kernel module cannot be loaded, your kernel version might not yet be supported by OpenZFS.

    +

    An option is to install an LTS kernel from COPR, provided by a third party. Use it at your own risk:

    +
    # this is a third-party repo!
    +# you have been warned.
    +#
    +# select a kernel from
    +# https://copr.fedorainfracloud.org/coprs/kwizart/
    +
    +dnf copr enable -y kwizart/kernel-longterm-VERSION
    +dnf install -y kernel-longterm kernel-longterm-devel
    +
    +
    +

    Reboot into the new LTS kernel, then load the kernel module:

    +
    modprobe zfs
    +
    +
    +
  10. +
  11. By default ZFS kernel modules are loaded upon detecting a pool. +To always load the modules at boot:

    +
    echo zfs > /etc/modules-load.d/zfs.conf
    +
    +
    +
  12. +
  13. By default, ZFS may be removed by kernel package updates. To prevent this, lock the kernel version to only those supported by ZFS:

    +
    echo 'zfs' > /etc/dnf/protected.d/zfs.conf
    +
    +
    +
    +
    Pending non-kernel updates can still be applied:

    dnf update --exclude=kernel*

    +
    +
    +
  14. +
+
+
+

Testing Repo

+

The testing repository, which is disabled by default, contains the latest version of OpenZFS, which is under active development. These packages should not be used on production systems.

+
dnf config-manager --enable zfs-testing
+dnf install zfs
+
+
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/FreeBSD.html b/Getting Started/FreeBSD.html new file mode 100644 index 000000000..3a8b3e8fb --- /dev/null +++ b/Getting Started/FreeBSD.html @@ -0,0 +1,253 @@ + + + + + + + FreeBSD — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

FreeBSD

+

ZoF-logo

+
+

Installation on FreeBSD

+

OpenZFS is available pre-packaged as:

+
    +
  • the zfs-2.0-release branch, in the FreeBSD base system from FreeBSD 13.0-CURRENT forward

  • +
  • the master branch, in the FreeBSD ports tree as sysutils/openzfs and sysutils/openzfs-kmod from FreeBSD 12.1 forward

  • +
+

The rest of this document describes the use of OpenZFS either from ports/pkg or built manually from sources for development.

+

The ZFS utilities will be installed in /usr/local/sbin/, so make sure +your PATH gets adjusted accordingly.

+

To load the module at boot, put openzfs_load="YES" in +/boot/loader.conf, and remove zfs_load="YES" if migrating a ZFS +install.

+
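One way to do this is with sysrc, which edits loader.conf in place (a minimal sketch; the second line only applies if zfs_load was previously set):

sysrc -f /boot/loader.conf openzfs_load="YES"
sysrc -f /boot/loader.conf -x zfs_load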

Beware that the FreeBSD boot loader does not allow booting from root +pools with encryption active (even if it is not in use), so do not try +encryption on a pool you boot from.

+
+
+

Development on FreeBSD

+

The following dependencies are required to build OpenZFS on FreeBSD:

+
    +
  • FreeBSD sources in /usr/src or elsewhere specified by SYSDIR in env. +If you don’t have the sources installed you can install them with +git.

    +

    Install the source for FreeBSD 12:

    +
    git clone -b stable/12 https://git.FreeBSD.org/src.git /usr/src
    +
    +
    +

    Install source for FreeBSD Current:

    +
    git clone https://git.FreeBSD.org/src.git /usr/src
    +
    +
    +
  • +
  • Packages for build:

    +
    pkg install \
    +    autoconf \
    +    automake \
    +    autotools \
    +    git \
    +    gmake
    +
    +
    +
  • +
  • Optional packages for build:

    +
    pkg install python
    +pkg install devel/py-sysctl # needed for arcstat, arc_summary, dbufstat
    +
    +
    +
  • +
  • Packages for checks and tests:

    +
    pkg install \
    +    base64 \
    +    bash \
    +    checkbashisms \
    +    fio \
    +    hs-ShellCheck \
    +    ksh93 \
    +    pamtester \
    +    devel/py-flake8 \
    +    sudo
    +
    +
    +

    Your preferred python version may be substituted. The user for +running tests must have NOPASSWD sudo permission.

    +
  • +
+

To build and install:

+
# as user
+git clone https://github.com/openzfs/zfs
+cd zfs
+./autogen.sh
+env MAKE=gmake ./configure
+gmake -j`sysctl -n hw.ncpu`
+# as root
+gmake install
+
+
+

To use the OpenZFS kernel module when FreeBSD starts, edit /boot/loader.conf:

+

Replace the line:

+
zfs_load="YES"
+
+
+

with:

+
openzfs_load="YES"
+
+
+

The stock FreeBSD ZFS binaries are installed in /sbin. OpenZFS binaries are installed to /usr/local/sbin when installed from ports/pkg or manually from source. To use the OpenZFS binaries, adjust your path so /usr/local/sbin is listed before /sbin. Otherwise the native ZFS binaries will be used.

+

For example, change ~/.profile, ~/.bashrc, or ~/.cshrc from this:

+
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin
+
+
+

To this:

+
PATH=/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:~/bin
+
+
+

For rapid development it can be convenient to do a UFS install instead +of ZFS when setting up the work environment. That way the module can be +unloaded and loaded without rebooting.

+
reboot
+
+
+
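When developing on a UFS root as suggested above, a typical reload cycle might look like this (a sketch, assuming the module was installed as openzfs.ko by gmake install):

kldunload openzfs
kldload openzfs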

Though not required, WITHOUT_ZFS is a useful build option in FreeBSD +to avoid building and installing the legacy zfs tools and kmod - see +src.conf(5).

+

Some tests require fdescfs to be mounted on /dev/fd. This can be done temporarily with:

+
mount -t fdescfs fdescfs /dev/fd
+
+
+

or an entry can be added to /etc/fstab.

+
fdescfs /dev/fd fdescfs rw 0 0
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/NixOS/Root on ZFS.html b/Getting Started/NixOS/Root on ZFS.html new file mode 100644 index 000000000..03c11fc24 --- /dev/null +++ b/Getting Started/NixOS/Root on ZFS.html @@ -0,0 +1,386 @@ + + + + + + + NixOS Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

NixOS Root on ZFS

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

UEFI support only

+

Only UEFI is supported by this guide. Make sure your computer is +booted in UEFI mode.

+
+

Preparation

+
    +
  1. Download NixOS Live Image and boot from it.

    +
    sha256sum -c ./nixos-*.sha256
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  2. +
  3. Connect to the Internet.

  4. +
  5. Set root password or /root/.ssh/authorized_keys.

  6. +
  7. Start SSH server

    +
    systemctl restart sshd
    +
    +
    +
  8. +
  9. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  10. +
  11. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as disk bus, power off the VM and set serial numbers for disk. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  12. +
  13. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  14. +
  15. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  16. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 1MiB 4GiB \
    + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + set 1 esp on \
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup temporary encrypted swap for this installation only. This is +useful if the available memory is small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3
    +   mkswap /dev/mapper/"${i##*/}"-part3
    +   swapon /dev/mapper/"${i##*/}"-part3
    +done
    +
    +
    +
  4. +
  5. LUKS only: Setup encrypted LUKS container for root pool:

    +
    for i in ${DISK}; do
    +   # see PASSPHRASE PROCESSING section in cryptsetup(8)
    +   printf "YOUR_PASSWD" | cryptsetup luksFormat --type luks2 "${i}"-part2 -
    +   printf "YOUR_PASSWD" | cryptsetup luksOpen "${i}"-part2 luks-rpool-"${i##*/}"-part2 -
    +done
    +
    +
    +
  6. +
  7. Create root pool

    +
      +
    • Unencrypted

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=none \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '%s ' "${i}-part2";
      +     done)
      +
      +
      +
    • +
    • LUKS encrypted

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=none \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '/dev/mapper/luks-rpool-%s ' "${i##*/}-part2";
      +     done)
      +
      +
      +
    • +
    +

    If not using a multi-disk setup, remove mirror.

    +
  8. +
  9. Create root system container:

    +
    +
    zfs create -o canmount=noauto -o mountpoint=legacy rpool/root
    +
    +
    +
    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o mountpoint=legacy rpool/home
    +mount -o X-mount.mkdir -t zfs rpool/root "${MNT}"
    +mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home
    +
    +
    +
  10. +
  11. Format and mount the ESPs. Only one of them is mounted as /boot; you need to set up mirroring afterwards.

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    +done
    +
    +for i in ${DISK}; do
    + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot
    + break
    +done
    +
    +
    +
  12. +
+
+
+

System Configuration

+
    +
  1. Generate system configuration:

    +
    nixos-generate-config --root "${MNT}"
    +
    +
    +
  2. +
  3. Edit system configuration:

    +
    nano "${MNT}"/etc/nixos/hardware-configuration.nix
    +
    +
    +
  4. +
  5. Set networking.hostId:

    +
    networking.hostId = "abcd1234";
    +
    +
    +
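    A suitable value can be generated with the same command used elsewhere in this documentation:

    head -c4 /dev/urandom | od -A none -t x4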
  6. +
  7. If using LUKS, add the output from the following command to the system configuration:

    +
    tee <<EOF
    +  boot.initrd.luks.devices = {
    +EOF
    +
    +for i in ${DISK}; do echo \"luks-rpool-"${i##*/}-part2"\".device = \"${i}-part2\"\; ; done
    +
    +tee <<EOF
    +};
    +EOF
    +
    +
    +
  8. +
  9. Install system and apply configuration

    +
    nixos-install  --root "${MNT}"
    +
    +
    +

    Wait for the root password reset prompt to appear.

    +
  10. +
  11. Unmount filesystems

    +
    cd /
    +umount -Rl "${MNT}"
    +zpool export -a
    +
    +
    +
  12. +
  13. Reboot

    +
    reboot
    +
    +
    +
  14. +
  15. Set up networking, desktop and swap.

  16. +
  17. Mount other EFI system partitions then set up a service for syncing +their contents.

  18. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/NixOS/index.html b/Getting Started/NixOS/index.html new file mode 100644 index 000000000..12cfad169 --- /dev/null +++ b/Getting Started/NixOS/index.html @@ -0,0 +1,231 @@ + + + + + + + NixOS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

NixOS

+
+

Contents

+ +
+
+

Support

+

Reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat.

+

If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @ne9z.

+
+
+

Installation

+

Note: this is for installing ZFS on an existing NixOS installation. To use ZFS as the root file system, see below.

+

NixOS live image ships with ZFS support by default.

+

Note that you need to apply these settings even if you don’t need +to boot from ZFS. The kernel module ‘zfs.ko’ will not be available +to modprobe until you make these changes and reboot.

+
    +
  1. Edit /etc/nixos/configuration.nix and add the following +options:

    +
    boot.supportedFilesystems = [ "zfs" ];
    +boot.zfs.forceImportRoot = false;
    +networking.hostId = "yourHostId";
    +
    +
    +

    The hostId can be generated with:

    +
    head -c4 /dev/urandom | od -A none -t x4
    +
    +
    +
  2. +
  3. Apply configuration changes:

    +
    nixos-rebuild boot
    +
    +
    +
  4. +
  5. Reboot:

    +
    reboot
    +
    +
    +
  6. +
+
+
+

Root on ZFS

+ +
+
+

Contribute

+

You can contribute to this documentation. Fork this repo, edit the documentation, then open a pull request.

+
    +
  1. To test your changes locally, use the devShell in this repo:

    +
    git clone https://github.com/ne9z/nixos-live openzfs-docs-dev
    +cd openzfs-docs-dev
    +nix develop ./openzfs-docs-dev/#docs
    +
    +
    +
  2. +
  3. Inside the openzfs-docs repo, build pages:

    +
    make html
    +
    +
    +
  4. +
  5. Look for errors and warnings in the make output. If there are no errors:

    +
    xdg-open _build/html/index.html
    +
    +
    +
  6. +
  7. git commit --signoff to a branch, git push, and create a +pull request. Mention @ne9z.

  8. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/RHEL and CentOS.html b/Getting Started/RHEL and CentOS.html new file mode 100644 index 000000000..2d6ed27f2 --- /dev/null +++ b/Getting Started/RHEL and CentOS.html @@ -0,0 +1,116 @@ + + + + + + + RHEL and CentOS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

RHEL and CentOS

+

This page has been moved to RHEL-based distro.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/RHEL-based distro/Root on ZFS.html b/Getting Started/RHEL-based distro/Root on ZFS.html new file mode 100644 index 000000000..eaa578d13 --- /dev/null +++ b/Getting Started/RHEL-based distro/Root on ZFS.html @@ -0,0 +1,559 @@ + + + + + + + Rocky Linux Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Rocky Linux Root on ZFS

+

ZFSBootMenu

+

ZFSBootMenu is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details.

+

Customization

+

Unless stated otherwise, it is not recommended to customize system +configuration before reboot.

+

Only use well-tested pool features

+

You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, this comment.

+

UEFI support only

+

Only UEFI is supported by this guide.

+
+

Preparation

+
    +
  1. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.

  2. +
  3. Because the kernel of the latest Live CD might be incompatible with ZFS, we will use Alpine Linux Extended, which ships with ZFS by default.

    +

    Download latest extended variant of Alpine Linux +live image, +verify checksum +and boot from it.

    +
    gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc
    +
    +dd if=input-file of=output-file bs=1M
    +
    +
    +
  4. +
  5. Login as root user. There is no password.

  6. +
  7. Configure Internet

    +
    setup-interfaces -r
    +# You must use "-r" option to start networking services properly
    +# example:
    +network interface: wlan0
    +WiFi name:         <ssid>
    +ip address:        dhcp
    +<enter done to finish network config>
    +manual netconfig:  n
    +
    +
    +
  8. +
  9. If you are using wireless network and it is not shown, see Alpine +Linux wiki for +further details. wpa_supplicant can be installed with apk +add wpa_supplicant without internet connection.

  10. +
  11. Configure SSH server

    +
    setup-sshd
    +# example:
    +ssh server:        openssh
    +allow root:        "prohibit-password" or "yes"
    +ssh key:           "none" or "<public key>"
    +
    +
    +
  12. +
  13. Set root password or /root/.ssh/authorized_keys.

  14. +
  15. Connect from another computer

    +
    ssh root@192.168.1.91
    +
    +
    +
  16. +
  17. Configure NTP client for time synchronization

    +
    setup-ntp busybox
    +
    +
    +
  18. +
  19. Set up apk-repo. A list of available mirrors is shown. +Press space bar to continue

    +
    setup-apkrepos
    +
    +
    +
  20. +
  21. Throughout this guide, we use predictable disk names generated by +udev

    +
    apk update
    +apk add eudev
    +setup-devd udev
    +
    +
    +
  22. +
  23. Target disk

    +

    List available disks with

    +
    find /dev/disk/by-id/
    +
    +
    +

    If virtio is used as disk bus, power off the VM and set serial numbers for disk. +For QEMU, use -drive format=raw,file=disk2.img,serial=AaBb. +For libvirt, edit domain XML. See this page for examples.

    +

    Declare disk array

    +
    DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
    +
    +
    +

    For single disk installation, use

    +
    DISK='/dev/disk/by-id/disk1'
    +
    +
    +
  24. +
  25. Set a mount point

    +
    MNT=$(mktemp -d)
    +
    +
    +
  26. +
  27. Set partition size:

    +

    Set swap size in GB, set to 1 if you don’t want swap to +take up too much space

    +
    SWAPSIZE=4
    +
    +
    +

    Set how much space should be left at the end of the disk, minimum 1GB

    +
    RESERVE=1
    +
    +
    +
  28. +
  29. Install ZFS support from live media:

    +
    apk add zfs
    +
    +
    +
  30. +
  31. Install partition tool

    +
    apk add parted e2fsprogs cryptsetup util-linux
    +
    +
    +
  32. +
+
+
+

System Installation

+
    +
  1. Partition the disks.

    +

    Note: you must clear all existing partition tables and data structures from target disks.

    +

    For flash-based storage, this can be done by the blkdiscard command below:

    +
    partition_disk () {
    + local disk="${1}"
    + blkdiscard -f "${disk}" || true
    +
    + parted --script --align=optimal  "${disk}" -- \
    + mklabel gpt \
    + mkpart EFI 1MiB 4GiB \
    + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \
    + mkpart swap  -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
    + set 1 esp on \
    +
    + partprobe "${disk}"
    +}
    +
    +for i in ${DISK}; do
    +   partition_disk "${i}"
    +done
    +
    +
    +
  2. +
  3. Setup temporary encrypted swap for this installation only. This is +useful if the available memory is small:

    +
    for i in ${DISK}; do
    +   cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3
    +   mkswap /dev/mapper/"${i##*/}"-part3
    +   swapon /dev/mapper/"${i##*/}"-part3
    +done
    +
    +
    +
  4. +
  5. Load ZFS kernel module

    +
    modprobe zfs
    +
    +
    +
  6. +
  7. Create root pool

    +
      +
    • Unencrypted:

      +
      # shellcheck disable=SC2046
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -R "${MNT}" \
      +    -O acltype=posixacl \
      +    -O canmount=off \
      +    -O dnodesize=auto \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O xattr=sa \
      +    -O mountpoint=none \
      +    rpool \
      +    mirror \
      +   $(for i in ${DISK}; do
      +      printf '%s ' "${i}-part2";
      +     done)
      +
      +
      +
    • +
    +
  8. +
  9. Create root system container:

    +
    +
    # dracut demands system root dataset to have non-legacy mountpoint
    +zfs create -o canmount=noauto -o mountpoint=/ rpool/root
    +
    +
    +
    +

    Create system datasets, +manage mountpoints with mountpoint=legacy

    +
    zfs create -o mountpoint=legacy rpool/home
    +zfs mount rpool/root
    +mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home
    +
    +
    +
  10. +
  11. Format and mount the ESPs. Only one of them is mounted as /boot; you need to set up mirroring afterwards.

    +
    for i in ${DISK}; do
    + mkfs.vfat -n EFI "${i}"-part1
    +done
    +
    +for i in ${DISK}; do
    + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot
    + break
    +done
    +
    +
    +
  12. +
+
+
+

System Configuration

+
    +
  1. Download and extract the minimal Rocky Linux root filesystem:

    +
    apk add curl
    +curl --fail-early --fail -L \
    +https://dl.rockylinux.org/vault/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz \
    +-o rootfs.tar.gz
    +curl --fail-early --fail -L \
    +https://dl.rockylinux.org/vault/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz.CHECKSUM \
    +-o checksum
    +
    +# BusyBox sha256sum treats all lines in the checksum file
    +# as checksums and requires two spaces "  "
    +# between filename and checksum
    +
    +grep 'Container-Base' checksum \
    +| grep '^SHA256' \
    +| sed -E 's|.*= ([a-z0-9]*)$|\1  rootfs.tar.gz|' > ./sha256checksum
    +
    +sha256sum -c ./sha256checksum
    +
    +tar x  -C "${MNT}" -af rootfs.tar.gz
    +
    +
    +
  2. +
  3. Enable community repo

    +
    sed -i '/edge/d' /etc/apk/repositories
    +sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories
    +
    +
    +
  4. +
  5. Generate fstab:

    +
    apk add arch-install-scripts
    +genfstab -t PARTUUID "${MNT}" \
    +| grep -v swap \
    +| sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \
    +> "${MNT}"/etc/fstab
    +
    +
    +
  6. +
  7. Chroot

    +
    cp /etc/resolv.conf "${MNT}"/etc/resolv.conf
    +for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done
    +chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash
    +
    +
    +
  8. +
  9. Unset all shell aliases, which can interfere with installation:

    +
    unalias -a
    +
    +
    +
  10. +
  11. Install base packages

    +
    dnf -y install --allowerasing @core kernel-core
    +
    +
    +
  12. +
  13. Install ZFS packages:

    +
    dnf install -y https://zfsonlinux.org/epel/zfs-release-2-3"$(rpm --eval "%{dist}"|| true)".noarch.rpm
    +dnf config-manager --disable zfs
    +dnf config-manager --enable zfs-kmod
    +dnf install -y zfs zfs-dracut
    +
    +
    +
  14. +
  15. Add zfs modules to dracut:

    +
    echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf
    +
    +
    +
  16. +
  17. Add other drivers to dracut:

    +
    if grep mpt3sas /proc/modules; then
    +  echo 'force_drivers+=" mpt3sas "'  >> /etc/dracut.conf.d/zfs.conf
    +fi
    +if grep virtio_blk /proc/modules; then
    +  echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
    +fi
    +
    +
    +
  18. +
  19. Build initrd:

    +
    find -D exec /lib/modules -maxdepth 1 \
    +-mindepth 1 -type d \
    +-exec sh -vxc \
    +'if test -e "$1"/modules.dep;
    +   then kernel=$(basename "$1");
    +   dracut --verbose --force --kver "${kernel}";
    + fi' sh {} \;
    +
    +
    +
  20. +
  21. For SELinux, relabel filesystem on reboot:

    +
    fixfiles -F onboot
    +
    +
    +
  22. +
  23. Generate host id:

    +
    zgenhostid -f -o /etc/hostid
    +
    +
    +
  24. +
  25. Install locale package, example for English locale:

    +
    dnf install -y glibc-minimal-langpack glibc-langpack-en
    +
    +
    +
  26. +
  27. Set locale, keymap, timezone, hostname

    +
    rm -f /etc/localtime
    +systemd-firstboot \
    +--force \
    +--locale=en_US.UTF-8 \
    +--timezone=Etc/UTC \
    +--hostname=testhost \
    +--keymap=us
    +
    +
    +
  28. +
  29. Set root passwd

    +
    printf 'root:yourpassword' | chpasswd
    +
    +
    +
  30. +
+
+
+

Bootloader

+
    +
  1. Install rEFInd boot loader:

    +
    # from http://www.rodsbooks.com/refind/getting.html
    +# use Binary Zip File option
    +curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip
    +
    +dnf install -y unzip
    +unzip refind.zip
    +mkdir -p /boot/EFI/BOOT
    +find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \
    +| xargs -0I{} mv {} /boot/EFI/BOOT/BOOTX64.EFI
    +rm -rf refind.zip refind-bin-0.14.0.2
    +
    +
    +
  2. +
  3. Add boot entry:

    +
    tee -a /boot/refind-linux.conf <<EOF
    +"Rocky Linux" "root=ZFS=rpool/root"
    +EOF
    +
    +
    +
  4. +
  5. Exit chroot

    +
    exit
    +
    +
    +
  6. +
  7. Unmount filesystems and create an initial system snapshot. You can later create a boot environment from this snapshot. See the Root on ZFS maintenance page.

    +
    umount -Rl "${MNT}"
    +zfs snapshot -r rpool@initial-installation
    +
    +
    +
  8. +
  9. Export all pools

    +
    zpool export -a
    +
    +
    +
  10. +
  11. Reboot

    +
    reboot
    +
    +
    +
  12. +
+
+
+

Post installation

+
    +
  1. Install package groups

    +
    dnf group list --hidden -v       # query package groups
    +dnf group install gnome-desktop
    +
    +
    +
  2. +
  3. Add new user, configure swap.

  4. +
  5. Mount other EFI system partitions then set up a service for syncing +their contents.

  6. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/RHEL-based distro/index.html b/Getting Started/RHEL-based distro/index.html new file mode 100644 index 000000000..8d7962a3e --- /dev/null +++ b/Getting Started/RHEL-based distro/index.html @@ -0,0 +1,319 @@ + + + + + + + RHEL-based distro — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

RHEL-based distro

+
+

Contents

+ +

DKMS and kABI-tracking kmod style packages are provided for x86_64 RHEL- +and CentOS-based distributions from the OpenZFS repository. These packages +are updated as new versions are released. Only the repository for the current +minor version of each current major release is updated with new packages.

+

To simplify installation, a zfs-release package is provided which includes a zfs.repo configuration file and public signing key. All official OpenZFS packages are signed using this key, and by default yum or dnf will verify a package’s signature before allowing it to be installed. Users are strongly encouraged to verify the authenticity of the OpenZFS public key using the fingerprint listed here.

+
+
Key location: /etc/pki/rpm-gpg/RPM-GPG-KEY-openzfs (previously -zfsonlinux)
+
Current release packages: EL7, EL8, EL9
+
Archived release packages: see repo page
+
+
+
Signing key1 (EL8 and older, Fedora 36 and older) +pgp.mit.edu / +direct link
+
Fingerprint: C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
+
+
+
Signing key2 (EL9+, Fedora 37+) +pgp.mit.edu / +direct link
+
Fingerprint: 7DC7 299D CF7C 7FD9 CD87 701B A599 FD5E 9DB8 4141
+
+

For EL7 run:

+
yum install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
+
+
+

and for EL8 and 9:

+
dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
+
+
+

After installing the zfs-release package and verifying the public key +users can opt to install either the DKMS or kABI-tracking kmod style packages. +DKMS packages are recommended for users running a non-distribution kernel or +for users who wish to apply local customizations to OpenZFS. For most users +the kABI-tracking kmod packages are recommended in order to avoid needing to +rebuild OpenZFS for every kernel update.

+
+
+

DKMS

+

To install DKMS style packages issue the following commands. First add the +EPEL repository which provides DKMS by installing the epel-release +package, then the kernel-devel and zfs packages. Note that it is +important to make sure that the matching kernel-devel package is installed +for the running kernel since DKMS requires it to build OpenZFS.

+

For EL6 and 7, separately run:

+
yum install -y epel-release
+yum install -y kernel-devel
+yum install -y zfs
+
+
+

And for EL8 and newer, separately run:

+
dnf install -y epel-release
+dnf install -y kernel-devel
+dnf install -y zfs
+
+
+
+

Note

+

When switching from DKMS to kABI-tracking kmods first uninstall the +existing DKMS packages. This should remove the kernel modules for all +installed kernels, then the kABI-tracking kmods can be installed as +described in the section below.

+
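A minimal sketch of that first step, assuming the DKMS packages were installed via the zfs meta-package (which pulls in zfs-dkms):

dnf remove zfs zfs-dkms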
+
+
+

kABI-tracking kmod

+

By default the zfs-release package is configured to install DKMS style +packages so they will work with a wide range of kernels. In order to +install the kABI-tracking kmods the default repository must be switched +from zfs to zfs-kmod. Keep in mind that the kABI-tracking kmods are +only verified to work with the distribution-provided, non-Stream kernel.

+

For EL6 and 7 run:

+
yum-config-manager --disable zfs
+yum-config-manager --enable zfs-kmod
+yum install zfs
+
+
+

And for EL8 and newer:

+
dnf config-manager --disable zfs
+dnf config-manager --enable zfs-kmod
+dnf install zfs
+
+
+

By default the OpenZFS kernel modules are automatically loaded when a ZFS +pool is detected. If you would prefer to always load the modules at boot +time you can create such configuration in /etc/modules-load.d:

+
echo zfs >/etc/modules-load.d/zfs.conf
+
+
+
+

Note

+

When updating to a new EL minor release the existing kmod +packages may not work due to upstream kABI changes in the kernel. +The configuration of the current release package may have already made an +updated package available, but the package manager may not know to install +that package if the version number isn’t newer. When upgrading, users +should verify that the kmod-zfs package is providing suitable kernel +modules, reinstalling the kmod-zfs package if necessary.

+
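For example, a quick way to check, and reinstall if needed (a sketch; exact file paths differ between releases):

rpm -ql kmod-zfs | grep '\.ko'   # should list modules matching the running kernel's kABI
dnf reinstall kmod-zfs           # if the list looks empty or stale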
+
+
+

Previous minor EL releases

+

The current release package uses “${releasever}” rather than specifying a particular minor release as previous release packages did. Typically “${releasever}” will resolve to just the major version (e.g. 8), and the resulting repository URL will be aliased to the current minor version (e.g. 8.7), but you can specify --releasever to use previous repositories.

+
[vagrant@localhost ~]$ dnf list available --showduplicates kmod-zfs
+Last metadata expiration check: 0:00:08 ago on tor 31 jan 2023 17:50:05 UTC.
+Available Packages
+kmod-zfs.x86_64                          2.1.6-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.7-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.8-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.9-1.el8                          zfs-kmod
+[vagrant@localhost ~]$ dnf list available --showduplicates --releasever=8.6 kmod-zfs
+Last metadata expiration check: 0:16:13 ago on tor 31 jan 2023 17:34:10 UTC.
+Available Packages
+kmod-zfs.x86_64                          2.1.4-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.5-1.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.5-2.el8                          zfs-kmod
+kmod-zfs.x86_64                          2.1.6-1.el8                          zfs-kmod
+[vagrant@localhost ~]$
+
+
+

In the above example, the former packages were built for EL8.7, and the latter for EL8.6.

+
+
+

Testing Repositories

+

In addition to the primary zfs repository a zfs-testing repository +is available. This repository, which is disabled by default, contains +the latest version of OpenZFS which is under active development. These +packages are made available in order to get feedback from users regarding +the functionality and stability of upcoming releases. These packages +should not be used on production systems. Packages from the testing +repository can be installed as follows.

+

For EL6 and 7 run:

+
yum-config-manager --enable zfs-testing
+yum install kernel-devel zfs
+
+
+

And for EL8 and newer:

+
dnf config-manager --enable zfs-testing
+dnf install kernel-devel zfs
+
+
+
+

Note

+

Use zfs-testing for DKMS packages and zfs-testing-kmod +for kABI-tracking kmod packages.

+
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Slackware/Root on ZFS.html b/Getting Started/Slackware/Root on ZFS.html new file mode 100644 index 000000000..9d721a66a --- /dev/null +++ b/Getting Started/Slackware/Root on ZFS.html @@ -0,0 +1,292 @@ + + + + + + + Slackware Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Slackware Root on ZFS

+

This page shows some possible ways to configure Slackware to use zfs for the root filesystem.

+

There are countless different ways to achieve such setup, particularly with the flexibility that zfs allows. We’ll show only a simple recipe and give pointers for further customization.

+
+

Kernel considerations

+

For this mini-HOWTO we’ll be using the generic kernel and customize the stock initrd.

+

If you use the huge kernel, you may want to switch to the generic kernel first, and install both the kernel-generic and mkinitrd packages. This makes things easier since we’ll need an initrd.

+

If you absolutely do not want to use an initrd, see “Other options” further down.

+
+
+

The problem space

+

In order to have the root filesystem on zfs, two problems need to be addressed:

+
    +
  1. The boot loader needs to be able to load the kernel and its initrd.

  2. +
  3. The kernel (or, rather, the initrd) needs to be able to mount the zfs root filesystem and run /sbin/init.

  4. +
+

The second problem is relatively easy to deal with, and only requires slight modifications to the default Slackware initrd scripts.

+

For the first problem, however, a variety of scenarios are possible; on a PC, for example, you might be booting:

+
    +
  1. In UEFI mode, via an additional bootloader like elilo: here, the kernel and its initrd are on (read: have been copied to) the ESP, and the additional bootloader doesn’t need to understand zfs.

  2. +
  3. In UEFI mode, by booting the kernel straight from the firmware. All Slackware kernels are built with EFI_STUB=Y, so if you copy your kernel and initrd to the ESP and configure a boot entry with efibootmgr, you are all set (note that the kernel image must have a .efi extension).

  4. +
  5. In legacy BIOS mode, using lilo or grub or similar: lilo doesn’t understand zfs and even the latest grub understands it with some limitations (for example, no zstd compression). If you’re stuck with legacy BIOS mode, the best option is to put /boot on a separate partition that your loader understands (for example, ext4).

  6. +
+

If you are not using a PC, things will likely be quite different, so refer to relevant hardware documentation for your platform; on a Raspberry PI, for example, the firmware loads kernel and initrd from a FAT32 partition, so the situation is similar to a PC booting in UEFI mode.

+

The simplest setup, discussed in this recipe, is the one using UEFI. As said above, if you boot in legacy BIOS mode, you will have to ensure that the boot loader of your choice can load the kernel image.

+
+
+

Partition layout

+

Repartitioning an existing system disk in order to make room for a zfs root partition is left as an exercise to the reader (there’s nothing specific to zfs).

+

As a pointer: if you’re starting from a whole-disk ext4 filesystem, you could use resize2fs to shrink it to half of disk size and then relocate it to the second half of the disk with sfdisk. After that, you could create a ZFS partition before it, and copy stuff across using cp or rsync. This approach has the benefit of providing some kind of recovery mechanism in case stuff goes wrong. When you are happy about the final setup, you can then delete the ext4 partition and enlarge the ZFS one.

+

In any case you will want to have a rescue cdrom at hand, and one that supports zfs out of the box. A Ubuntu live CD will do.

+

For this recipe, we’ll be assuming that we’re booting in UEFI mode and there’s a single disk configured like this:

+
/dev/sda1 # EFI system partition
+/dev/sda2 # zfs pool (contains the "root" filesystem)
+
+
+

Since we are creating a zpool inside a disk partition (as opposed to using up a whole disk), make sure that the partition type is set correctly (for GPT, 54 or 67 are good choices).

+

When creating the zfs filesystem, you will want to set “mountpoint=legacy” so that the filesystem can be mounted with “mount” in a traditional way; Slackware startup and shutdown scripts expect that.

+

Back to our recipe, this is a working example:

+
zpool create -o ashift=12 -O mountpoint=none tank /dev/sda2
+zfs create -o mountpoint=legacy -o compression=zstd tank/root
+# add more as needed:
+# zfs create -o mountpoint=legacy [..] tank/home
+# zfs create -o mountpoint=legacy [..] tank/usr
+# zfs create -o mountpoint=legacy [..] tank/opt
+
+
+

Tweak options to taste; while “mountpoint=legacy” is required for the root filesystem, it is not required for any additional filesystems. In the example above we applied it to all of them, but that’s a matter of personal preference, as is setting “mountpoint=none” on the pool itself so it’s not mounted anywhere by default (do note that zpool’s “mountpoint=none” wants an uppercase “-O”).

+

You can check your setup with:

+
zpool list
+zfs list
+
+
+

Then, adjust /etc/fstab to something like this:

+
tank/root    /       zfs   defaults   0   0
+# add more as needed:
+# tank/home    /home   zfs   defaults   0   0
+# tank/usr     /usr    zfs   defaults   0   0
+# tank/opt     /opt    zfs   defaults   0   0
+
+
+

This allows us to mount and umount them as usual, once we have imported the pool with “zpool import tank”. Which leads us to…

+
+
+

Patch and rebuild the initrd

+

Since we’re using the generic kernel, we already have a usable /boot/initrd-tree/ (if you don’t, prepare one by running mkinitrd once).

+
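If you need to create it first, something like this should work (adjust the kernel version to taste):

mkinitrd -c -k "$(uname -r)"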

Copy the zfs userspace tools to it (/sbin/zfs isn’t strictly necessary, but may be handy for rescuing a system that refuses to boot):

+
install -m755 /sbin/zpool /sbin/zfs /boot/initrd-tree/sbin/
+
+
+

Modify /boot/initrd-tree/init; locate the first “case” statement that sets ROOTDEV; it reads:

+
root=/dev/*)
+  ROOTDEV=$(echo $ARG | cut -f2 -d=)
+;;
+root=LABEL=*)
+  ROOTDEV=$(echo $ARG | cut -f2- -d=)
+;;
+root=UUID=*)
+  ROOTDEV=$(echo $ARG | cut -f2- -d=)
+;;
+
+
+

Replace the three cases with:

+
root=*)
+  ROOTDEV=$(echo $ARG | cut -f2 -d=)
+;;
+
+
+

This allows us to specify something like “root=tank/root” (if you look carefully at the script, you will notice that you can collapse the /dev/*, LABEL=*, UUID=* and the newly-added cases into a single one).

+

Further down in the script, locate the section that handles RESUMEDEV (”# Resume state from swap”), and insert the following just before it:

+
# Support for zfs root filesystem:
+if [ x"$ROOTFS" = xzfs ]; then
+  POOL=${ROOTDEV%%/*}
+  echo "Importing zfs pool: $POOL"
+  zpool import -o cachefile=none -N $POOL
+fi
+
+
+

Finally, rebuild the initrd with something like:

+
mkinitrd -m zfs
+
+
+

It may make sense to use the “-o” option and create an initrd.gz in a different file, just in case. Look at /boot/README.initrd for more details.

+
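For example (the output file name here is just an illustration):

mkinitrd -m zfs -o /boot/initrd-zfs.gz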

Rebuilding the initrd should also copy in the necessary libraries (libzfs.so, etc.) under /lib/; verify it by running:

+
chroot /boot/initrd-tree /sbin/zpool --help
+
+
+

When you’re happy, remember to copy the new initrd.gz to the ESP partition.

+
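For example, assuming the ESP is mounted at /boot/efi and already holds the kernel:

cp /boot/initrd.gz /boot/efi/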

There are other ways to ensure that the zfs binaries and filesystem module are always built into the initrd - see man initrd.

+
+
+

Configure the boot loader

+

Any of these three options will do:

+
    +
  1. Append “rootfstype=zfs root=tank/root” to the boot loader configuration (e.g. elilo.conf or equivalent).

  2. +
  3. Modify /boot/initrd-tree/rootdev and /boot/initrd-tree/rootfs in the previous step, then rebuild the initrd.

  4. +
  5. When rebuilding the initrd, add “-f zfs -r tank/root”.

  6. +
+

If you’re using elilo, it should look something like this:

+
image=vmlinuz
+  label=linux
+  initrd=initrd.gz
+  append="root=tank/root rootfstype=zfs"
+
+
+

It should go without saying, but double-check that the file referenced by initrd is the one you just generated (e.g. if you’re using the ESP, make sure you copy the newly-built initrd to it).

+
+
+

Before rebooting

+

Make sure you have an emergency kernel around in case something goes wrong. If you upgrade the kernel or packages, make use of snapshots.

+
+
+

Other options

+

You can build zfs support right into the kernel. If you do so and do not want to use an initrd, you can embed a small initramfs in the kernel image that performs the “zpool import” step.

+
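A minimal /init for such an embedded initramfs could look roughly like this (a sketch only, assuming a busybox-style userland and the pool/dataset names used above):

#!/bin/sh
# mount the pseudo-filesystems needed by zpool/zfs
mount -t proc proc /proc
mount -t devtmpfs devtmpfs /dev
# import the pool without mounting datasets, then mount the root dataset
zpool import -N -o cachefile=none tank
mount -t zfs tank/root /mnt
# hand over to the real init
exec switch_root /mnt /sbin/init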
+
+

Snapshots and boot environments

+

The modifications above also allow you to create a clone of the root filesystem and boot into it; something like this should work:

+
zfs snapshot tank/root@mysnapshot
+zfs clone tank/root@mysnapshot tank/root-clone
+zfs set mountpoint=legacy tank/root-clone
+zfs promote tank/root-clone
+
+
+

Adjust boot parameters to mount “tank/root-clone” instead of “tank/root” (making a copy of the known-good kernel and initrd on the ESP is not a bad idea).

+
+
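With elilo, for example, an extra boot entry for the clone might look like:

image=vmlinuz
  label=linux-clone
  initrd=initrd.gz
  append="root=tank/root-clone rootfstype=zfs"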
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at #zfsonlinux on Libera Chat. If you have a bug report or feature request related to this HOWTO, please file a new issue and mention @a-biardi.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Slackware/index.html b/Getting Started/Slackware/index.html new file mode 100644 index 000000000..753979f08 --- /dev/null +++ b/Getting Started/Slackware/index.html @@ -0,0 +1,163 @@ + + + + + + + Slackware — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Slackware

+ +
+

Installation

+

In order to build and install the kernel modules and userspace tools, use the +openzfs SlackBuild script (for 15.0, it’s at https://slackbuilds.org/repository/15.0/system/openzfs/). No special options are required.

+
+
+

Root on ZFS

+

ZFS can be used as the root file system for Slackware. An installation guide is available here:

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.html b/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.html new file mode 100644 index 000000000..eb36279c2 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.html @@ -0,0 +1,1110 @@ + + + + + + + Ubuntu 18.04 Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 18.04 Root on ZFS

+ +
+

Overview

+
+

Newer release available

+
    +
  • See Ubuntu 20.04 Root on ZFS for new +installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need massive amounts of +RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). With either option, all ZFS features are fully +available. ZFS native encryption is not available in Ubuntu 18.04.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+

1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to +the Internet as appropriate (e.g. join your WiFi network). Open a +terminal (press Ctrl-Alt-T).

+

1.2 Setup and update the repositories:

+
sudo apt-add-repository universe
+sudo apt update
+
+
+

1.3 Optional: Install and start the OpenSSH server in the Live CD +environment:

+

If you have a second system, using SSH to access the target system can +be convenient:

+
passwd
+# There is no current password; hit enter at that prompt.
+sudo apt install --yes openssh-server
+
+
+

Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh ubuntu@IP.

+

1.4 Become root:

+
sudo -i
+
+
+

1.5 Install ZFS in the Live CD environment:

+
apt install --yes debootstrap gdisk zfs-initramfs
+
+
+
+
+

Step 2: Disk Formatting

+

2.1 Set a variable with the disk name:

+
DISK=/dev/disk/by-id/scsi-SATA_disk1
+
+
+

Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

+

Hints:

+
    +
  • ls -la /dev/disk/by-id will list the aliases.

  • +
  • Are you doing this in a virtual machine? If your virtual disk is +missing from /dev/disk/by-id, use /dev/vda if you are using +KVM with virtio; otherwise, read the +troubleshooting section.

  • +
  • For a mirror or raidz topology, use DISK1, DISK2, etc.

  • +
  • When choosing a boot pool size, consider how you will use the space. A kernel +and initrd may consume around 100M. If you have multiple kernels and take +snapshots, you may find yourself low on boot pool space, especially if you +need to regenerate your initramfs images, which may be around 85M each. Size +your boot pool appropriately for your needs.

  • +
+

2.2 If you are re-using a disk, clear it as necessary:

+

If the disk was previously used in an MD array, zero the superblock:

+
apt install --yes mdadm
+mdadm --zero-superblock --force $DISK
+
+
+

Clear the partition table:

+
sgdisk --zap-all $DISK
+
+
+

2.3 Partition your disk(s):

+

Run this if you need legacy (BIOS) booting:

+
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
+
+
+

Run this for UEFI booting (for use now or in the future):

+
sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
+
+
+

Run this for the boot pool:

+
sgdisk     -n3:0:+1G      -t3:BF01 $DISK
+
+
+

Choose one of the following options:

+

2.3a Unencrypted:

+
sgdisk     -n4:0:0        -t4:BF01 $DISK
+
+
+

2.3b LUKS:

+
sgdisk     -n4:0:0        -t4:8300 $DISK
+
+
+

If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

+

2.4 Create the boot pool:

+
zpool create -o ashift=12 -d \
+    -o feature@async_destroy=enabled \
+    -o feature@bookmarks=enabled \
+    -o feature@embedded_data=enabled \
+    -o feature@empty_bpobj=enabled \
+    -o feature@enabled_txg=enabled \
+    -o feature@extensible_dataset=enabled \
+    -o feature@filesystem_limits=enabled \
+    -o feature@hole_birth=enabled \
+    -o feature@large_blocks=enabled \
+    -o feature@lz4_compress=enabled \
+    -o feature@spacemap_histogram=enabled \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
+    -O normalization=formD -O relatime=on -O xattr=sa \
+    -O mountpoint=/ -R /mnt bpool ${DISK}-part3
+
+
+

You should not need to customize any of the options for the boot pool.

+

GRUB does not support all of the zpool features. See +spa_feature_names in +grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

+

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks).

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

  • +
+

Feature Notes:

+
    +
  • As a read-only compatible feature, the userobj_accounting feature should +be compatible in theory, but in practice, GRUB can fail with an “invalid +dnode type” error. This feature does not matter for /boot anyway.

  • +
+

2.5 Create the root pool:

+

Choose one of the following options:

+

2.5a Unencrypted:

+
zpool create -o ashift=12 \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 \
+    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+    -O mountpoint=/ -R /mnt rpool ${DISK}-part4
+
+
+

2.5b LUKS:

+
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
+cryptsetup luksOpen ${DISK}-part4 luks1
+zpool create -o ashift=12 \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 \
+    -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
+    -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
+
+
+

Notes:

+
    +
  • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

  • +
  • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs (see the example after these notes).

  • +
  • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only +filenames.

  • +
  • recordsize is unset (leaving it at the default of 128 KiB). If you want to +tune it (e.g. -O recordsize=1M), see these various blog +posts.

  • +
  • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s +documentation +for further information.

  • +
  • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI +applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain +controller. +Note that xattr=sa is +Linux-specific. +If you move your xattr=sa pool to another OpenZFS implementation +besides ZFS-on-Linux, extended attributes will not be readable +(though your data will be). If portability of extended attributes is +important to you, omit the -O xattr=sa above. Even if you do not +want xattr=sa for the whole pool, it is probably fine to use it +for /var/log.

  • +
  • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

  • +
  • For LUKS, the key size chosen is 512 bits. However, XTS mode requires +two keys, so the LUKS key is split in half. Thus, -s 512 means +AES-256.

  • +
  • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup +FAQ +for guidance.

  • +
+
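A quick way to see what sector sizes a drive reports is shown below. Note that some drives report 512 B for both values even when their internal geometry is larger, which is part of why ashift=12 is a safe default:

lsblk -o NAME,LOG-SEC,PHY-SEC ${DISK}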

Hints:

+
    +
  • If you are creating a mirror or raidz topology, create the pool using +zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4 +(or replace mirror with raidz, raidz2, or raidz3 and +list the partitions from additional disks). For LUKS, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will +have to create using cryptsetup.

  • +
  • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the +root pool is named rpool by default.

  • +
+
+
+

Step 3: System Installation

+

3.1 Create filesystem datasets to act as containers:

+
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
+zfs create -o canmount=off -o mountpoint=none bpool/BOOT
+
+
+

On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality has been implemented in Ubuntu 20.04 with the +zsys tool, though its dataset layout is more complicated. Even without +such a tool, the rpool/ROOT and bpool/BOOT containers can still be used +for manually created clones.

+

3.2 Create filesystem datasets for the root and boot filesystems:

+
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
+zfs mount rpool/ROOT/ubuntu
+
+zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
+zfs mount bpool/BOOT/ubuntu
+
+
+

With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

+

3.3 Create datasets:

+
zfs create                                 rpool/home
+zfs create -o mountpoint=/root             rpool/home/root
+zfs create -o canmount=off                 rpool/var
+zfs create -o canmount=off                 rpool/var/lib
+zfs create                                 rpool/var/log
+zfs create                                 rpool/var/spool
+
+
+

The datasets below are optional, depending on your preferences and/or +software choices.

+

If you wish to exclude these from snapshots:

+
zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
+zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
+chmod 1777 /mnt/var/tmp
+
+
+

If you use /opt on this system:

+
zfs create                                 rpool/opt
+
+
+

If you use /srv on this system:

+
zfs create                                 rpool/srv
+
+
+

If you use /usr/local on this system:

+
zfs create -o canmount=off                 rpool/usr
+zfs create                                 rpool/usr/local
+
+
+

If this system will have games installed:

+
zfs create                                 rpool/var/games
+
+
+

If this system will store local email in /var/mail:

+
zfs create                                 rpool/var/mail
+
+
+

If this system will use Snap packages:

+
zfs create                                 rpool/var/snap
+
+
+

If you use /var/www on this system:

+
zfs create                                 rpool/var/www
+
+
+

If this system will use GNOME:

+
zfs create                                 rpool/var/lib/AccountsService
+
+
+

If this system will use Docker (which manages its own datasets & +snapshots):

+
zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
+
+
+

If this system will use NFS (locking):

+
zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
+
+
+

A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

+
zfs create -o com.sun:auto-snapshot=false  rpool/tmp
+chmod 1777 /mnt/tmp
+
+
+

The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data. The com.sun:auto-snapshot property is used by some ZFS snapshot utilities to exclude transient data.
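You can list where that property has been set at any time (a purely optional check):

zfs get -r -s local com.sun:auto-snapshot rpool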

+

If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +/tmp, as shown above. This keeps the /tmp data out of snapshots +of your root filesystem. It also allows you to set a quota on +rpool/tmp, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later.
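For example, to cap the optional /tmp dataset at 5 GiB (the size is arbitrary):

zfs set quota=5G rpool/tmp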

+

3.4 Install the minimal system:

+
debootstrap bionic /mnt
+zfs set devices=off rpool
+
+
+

The debootstrap command leaves the new system in an unconfigured +state. An alternative to using debootstrap is to copy the entirety +of a working system into the new ZFS root.

+
+
+

Step 4: System Configuration

+

4.1 Configure the hostname:

+

Replace HOSTNAME with the desired hostname:

+
echo HOSTNAME > /mnt/etc/hostname
+vi /mnt/etc/hosts
+
+
+
Add a line:
+127.0.1.1       HOSTNAME
+or if the system has a real name in DNS:
+127.0.1.1       FQDN HOSTNAME
+
+
+

Hint: Use nano if you find vi confusing.

+

4.2 Configure the network interface:

+

Find the interface name:

+
ip addr show
+
+
+

Adjust NAME below to match your interface name:

+
vi /mnt/etc/netplan/01-netcfg.yaml
+
+
+
network:
+  version: 2
+  ethernets:
+    NAME:
+      dhcp4: true
+
+
+

Customize this file if the system is not a DHCP client.

+

4.3 Configure the package sources:

+
vi /mnt/etc/apt/sources.list
+
+
+
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
+deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
+deb http://archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse
+deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
+
+
+

4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

+
mount --rbind /dev  /mnt/dev
+mount --rbind /proc /mnt/proc
+mount --rbind /sys  /mnt/sys
+chroot /mnt /usr/bin/env DISK=$DISK bash --login
+
+
+

Note: This is using --rbind, not --bind.

+

4.5 Configure a basic system environment:

+
ln -s /proc/self/mounts /etc/mtab
+apt update
+
+
+

Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

+
dpkg-reconfigure locales
+dpkg-reconfigure tzdata
+
+
+

If you prefer nano over vi, install it:

+
apt install --yes nano
+
+
+

4.6 Install ZFS in the chroot environment for the new system:

+
apt install --yes --no-install-recommends linux-image-generic
+apt install --yes zfs-initramfs
+
+
+

Hint: For the HWE kernel, install linux-image-generic-hwe-18.04 +instead of linux-image-generic.

+

4.7 For LUKS installs only, setup /etc/crypttab:

+
apt install --yes cryptsetup
+
+echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
+    luks,discard,initramfs > /etc/crypttab
+
+
+

The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

+

Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.
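For example, an entry for a second disk might look like this, where DISK2 is a hypothetical variable holding the second disk’s /dev/disk/by-id path:

echo luks2 UUID=$(blkid -s UUID -o value ${DISK2}-part4) none \
    luks,discard,initramfs >> /etc/crypttab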

+

4.8 Install GRUB

+

Choose one of the following options:

+

4.8a Install GRUB for legacy (BIOS) booting:

+
apt install --yes grub-pc
+
+
+

Select (using the space bar) all of the disks (not partitions) in your pool.

+

4.8b Install GRUB for UEFI booting:

+
apt install dosfstools
+mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
+mkdir /boot/efi
+echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \
+    /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
+mount /boot/efi
+apt install --yes grub-efi-amd64-signed shim-signed
+
+
+

Notes:

+
    +
  • The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

  • +
  • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

  • +
+

4.9 (Optional): Remove os-prober:

+
apt purge --yes os-prober
+
+
+

This avoids error messages from update-grub. os-prober is only necessary +in dual-boot configurations.

+

4.10 Set a root password:

+
passwd
+
+
+

4.11 Enable importing bpool

+

This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

+
vi /etc/systemd/system/zfs-import-bpool.service
+
+
+
[Unit]
+DefaultDependencies=no
+Before=zfs-import-scan.service
+Before=zfs-import-cache.service
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=/sbin/zpool import -N -o cachefile=none bpool
+
+[Install]
+WantedBy=zfs-import.target
+
+
+
systemctl enable zfs-import-bpool.service
+
+
+

4.12 Optional (but recommended): Mount a tmpfs to /tmp

+

If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

+
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+systemctl enable tmp.mount
+
+
+

4.13 Setup system groups:

+
addgroup --system lpadmin
+addgroup --system sambashare
+
+
+
+
+

Step 5: GRUB Installation

+

5.1 Verify that the ZFS boot filesystem is recognized:

+
grub-probe /boot
+
+
+

5.2 Refresh the initrd files:

+
update-initramfs -c -k all
+
+
+

Note: When using LUKS, this will print “WARNING could not determine +root device from /etc/fstab”. This is because cryptsetup does not +support ZFS.

+

5.3 Workaround GRUB’s missing zpool-features support:

+
vi /etc/default/grub
+# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"
+
+
+

5.4 Optional (but highly recommended): Make debugging GRUB easier:

+
vi /etc/default/grub
+# Comment out: GRUB_TIMEOUT_STYLE=hidden
+# Set: GRUB_TIMEOUT=5
+# Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
+# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
+# Uncomment: GRUB_TERMINAL=console
+# Save and quit.
+
+
+

Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

+

5.5 Update the boot configuration:

+
update-grub
+
+
+

Note: Ignore errors from os-prober, if present.

+

5.6 Install the boot loader:

+

5.6a For legacy (BIOS) booting, install GRUB to the MBR:

+
grub-install $DISK
+
+
+

Note that you are installing GRUB to the whole disk, not a partition.

+

If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.
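For example, for a second disk (the device path is illustrative):

grub-install /dev/disk/by-id/scsi-SATA_disk2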

+

5.6b For UEFI booting, install GRUB:

+
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
+    --bootloader-id=ubuntu --recheck --no-floppy
+
+
+

It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

+

5.7 Fix filesystem mount ordering:

+

Until ZFS gains a systemd mount +generator, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +#5754) seem to be +with certain filesystems in /var, specifically /var/log and +/var/tmp. Setting these to use legacy mounting, and listing them +in /etc/fstab makes systemd aware that these are separate +mountpoints. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp +feature of systemd automatically use After=var-tmp.mount.
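If you want to confirm later (after the fstab entries below are in place and the system has booted from them) that systemd picked these up as mount units, a check along these lines should list them:

systemctl list-dependencies local-fs.target | grep -E 'var-(log|spool|tmp)\.mount'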

+

Until there is support for mounting /boot in the initramfs, we also +need to mount that, because it was marked canmount=noauto. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +/boot/efi.

+

rpool is guaranteed to be imported by the initramfs, so there is no +point in adding x-systemd.requires=zfs-import.target to those +filesystems.

+

For UEFI booting, unmount /boot/efi first:

+
umount /boot/efi
+
+
+

Everything else applies to both BIOS and UEFI booting:

+
zfs set mountpoint=legacy bpool/BOOT/ubuntu
+echo bpool/BOOT/ubuntu /boot zfs \
+    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
+
+zfs set mountpoint=legacy rpool/var/log
+echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab
+
+zfs set mountpoint=legacy rpool/var/spool
+echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+

If you created a /var/tmp dataset:

+
zfs set mountpoint=legacy rpool/var/tmp
+echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+

If you created a /tmp dataset:

+
zfs set mountpoint=legacy rpool/tmp
+echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
+
+
+
+
+

Step 6: First Boot

+

6.1 Snapshot the initial installation:

+
zfs snapshot bpool/BOOT/ubuntu@install
+zfs snapshot rpool/ROOT/ubuntu@install
+
+
+

In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.
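For example, a later upgrade cycle might look roughly like this (the pre-upgrade snapshot name is illustrative):

zfs snapshot bpool/BOOT/ubuntu@pre-upgrade
zfs snapshot rpool/ROOT/ubuntu@pre-upgrade
# ... perform the upgrade and confirm the system still works ...
zfs destroy bpool/BOOT/ubuntu@install
zfs destroy rpool/ROOT/ubuntu@install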

+

6.2 Exit from the chroot environment back to the LiveCD environment:

+
exit
+
+
+

6.3 Run these commands in the LiveCD environment to unmount all +filesystems:

+
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+zpool export -a
+
+
+

6.4 Reboot:

+
reboot
+
+
+

Wait for the newly installed system to boot normally. Login as root.

+

6.5 Create a user account:

+

Replace username with your desired username:

+
zfs create rpool/home/username
+adduser username
+
+cp -a /etc/skel/. /home/username
+chown -R username:username /home/username
+usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
+
+
+

6.6 Mirror GRUB

+

If you installed to multiple disks, install GRUB on the additional +disks:

+

6.6a For legacy (BIOS) booting:

+
dpkg-reconfigure grub-pc
+Hit enter until you get to the device selection screen.
+Select (using the space bar) all of the disks (not partitions) in your pool.
+
+
+

6.6b For UEFI booting:

+
umount /boot/efi
+
+
+

For the second and subsequent disks (increment ubuntu-2 to -3, etc.):

+
dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
+   of=/dev/disk/by-id/scsi-SATA_disk2-part2
+efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
+    -p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi'
+
+mount /boot/efi
+
+
+
+
+

Step 7: (Optional) Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. This issue is currently being investigated in: +https://github.com/zfsonlinux/zfs/issues/7734

+

7.1 Create a volume dataset (zvol) for use as a swap device:

+
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
+    -o logbias=throughput -o sync=always \
+    -o primarycache=metadata -o secondarycache=none \
+    -o com.sun:auto-snapshot=false rpool/swap
+
+
+

You can adjust the size (the 4G part) to your needs.

+

The compression algorithm is set to zle because it is the cheapest available algorithm. As this guide recommends ashift=12 (4 KiB blocks on disk), the common case of a 4 KiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.
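If you are curious, you can check later (once the swap zvol has seen some use) that compression is effectively a no-op for swap; the ratio normally stays very close to 1.00x:

zfs get compression,compressratio rpool/swap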

+

7.2 Configure the swap device:

+

Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

+
mkswap -f /dev/zvol/rpool/swap
+echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
+echo RESUME=none > /etc/initramfs-tools/conf.d/resume
+
+
+

The RESUME=none setting is necessary to disable resuming from hibernation. Resuming does not work anyway, as the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If resume is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear.

+

7.3 Enable the swap device:

+
swapon -av
+
+
+
+
+

Step 8: Full Software Installation

+

8.1 Upgrade the minimal system:

+
apt dist-upgrade --yes
+
+
+

8.2 Install a regular set of software:

+

Choose one of the following options:

+

8.2a Install a command-line environment only:

+
apt install --yes ubuntu-standard
+
+
+

8.2b Install a full GUI environment:

+
apt install --yes ubuntu-desktop
+vi /etc/gdm3/custom.conf
+# In the [daemon] section, add: InitialSetupEnable=false
+
+
+

Hint: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:

+
rm /mnt/etc/netplan/01-netcfg.yaml
+vi /etc/netplan/01-network-manager-all.yaml
+
+
+
network:
+  version: 2
+  renderer: NetworkManager
+
+
+

8.3 Optional: Disable log compression:

+

As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of /var/log, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in /etc/logrotate.d +by hand to comment out compress, or use this loop (copy-and-paste +highly recommended):

+
for file in /etc/logrotate.d/* ; do
+    if grep -Eq "(^|[^#y])compress" "$file" ; then
+        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+    fi
+done
+
+
+

8.4 Reboot:

+
reboot
+
+
+
+
+

Step 9: Final Cleanup

+

9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

+

9.2 Optional: Delete the snapshots of the initial installation:

+
sudo zfs destroy bpool/BOOT/ubuntu@install
+sudo zfs destroy rpool/ROOT/ubuntu@install
+
+
+

9.3 Optional: Disable the root password:

+
sudo usermod -p '*' root
+
+
+

9.4 Optional: Re-enable the graphical boot process:

+

If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

+
sudo vi /etc/default/grub
+# Uncomment: GRUB_TIMEOUT_STYLE=hidden
+# Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
+# Comment out: GRUB_TERMINAL=console
+# Save and quit.
+
+sudo update-grub
+
+
+

Note: Ignore errors from os-prober, if present.

+

9.5 Optional: For LUKS installs only, backup the LUKS header:

+
sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+    --header-backup-file luks1-header.dat
+
+
+

Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption.

+

Hint: If you created a mirror or raidz topology, repeat this for +each LUKS volume (luks2, etc.).

+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install +Environment.

+

For LUKS, first unlock the disk(s):

+
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs mount rpool/ROOT/ubuntu
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --rbind /dev  /mnt/dev
+mount --rbind /proc /mnt/proc
+mount --rbind /sys  /mnt/sys
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.
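For example (the 15-second value below is purely illustrative; tune it for your hardware), and note that the initramfs must be rebuilt for the change to take effect:

echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15' >> /etc/default/zfs
update-initramfs -u -k all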

+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run +update-initramfs -c -k all.
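For example (assuming the module really is named arcsas on your system):

echo arcsas >> /etc/initramfs-tools/modules
update-initramfs -c -k all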

+

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere +configuration. Doing this ensures that /dev/disk aliases are +created in the guest.

  • +
+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).
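A minimal sketch of a qemu invocation with two uniquely-serialized virtio disks (file names, memory size, and serial numbers are illustrative):

qemu-system-x86_64 -m 4096 -enable-kvm \
    -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890 \
    -device virtio-blk-pci,drive=disk1 \
    -drive if=none,id=disk2,file=disk2.qcow2,serial=1234567891 \
    -device virtio-blk-pci,drive=disk2

Inside the guest, such disks then show up as /dev/disk/by-id/virtio-1234567890 and so on, which is the kind of stable alias the DISK variable in this HOWTO should point at.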

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+
diff --git a/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.html b/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.html
new file mode 100644
index 000000000..4718ec24d

Ubuntu 20.04 Root on ZFS for Raspberry Pi

+ +
+

Overview

+
+

Newer release available

+ +
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

4 GiB of memory is recommended. Do not use deduplication, as it needs massive +amounts of RAM. +Enabling deduplication is a permanent change that cannot be easily reverted.

+

A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

WARNING: Encryption has not yet been tested on the Raspberry Pi.

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+

USB Disks

+

The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards:

+
    +
  1. The Raspberry Pi’s Bootloader EEPROM must be dated 2020-09-03 or later.

    +

    To check the bootloader version, power up the Raspberry Pi without an SD +card inserted or a USB boot device attached; the date will be on the +bootloader line. (If you do not see the bootloader line, the +bootloader is too old.) Alternatively, run sudo rpi-eeprom-update +on an existing OS on the Raspberry Pi (which on Ubuntu requires +apt install rpi-eeprom).

    +

    If needed, the bootloader can be updated from an existing OS on the +Raspberry Pi using rpi-eeprom-update -a and rebooting. +For other options, see Updating the Bootloader.

    +
  2. +
  3. The Raspberry Pi must be configured for USB boot. The bootloader will show a boot line; if the order includes 4, USB boot is enabled.

    +

    If not already enabled, it can be enabled from an existing OS on the +Raspberry Pi using rpi-eeprom-config -e: set BOOT_ORDER=0xf41 +and reboot to apply the change. On subsequent reboots, USB boot will be +enabled.

    +

    Otherwise, it can be enabled without an existing OS as follows:

    +
      +
    • Download the Raspberry Pi Imager Utility.

    • +
    • Flash the USB Boot image to a microSD card. The USB Boot image is listed under Bootloader in the Misc utility images folder.

    • +
    • Boot the Raspberry Pi from the microSD card. USB Boot should be enabled +automatically.

    • +
    +
  4. +
  5. U-Boot on Ubuntu 20.04 does not seem to support booting from USB on the Raspberry Pi. Ubuntu 20.10 may work. As a work-around, the Raspberry Pi bootloader is configured to directly boot Linux. For this to work, the Linux kernel must not be compressed. These instructions decompress the kernel and add a script to /etc/kernel/postinst.d to handle kernel upgrades.

  6. +
+
+
+
+

Step 1: Disk Formatting

+

The commands in this step are run on the system other than the Raspberry Pi.

+

This guide has you go to some extra work so that the stock ext4 partition can +be deleted.

+
    +
  1. Download and unpack the official image:

    +
    curl -O https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz
    +xz -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz
    +
    +# or combine them to decompress as you download:
    +curl https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz | \
    +    xz -d > ubuntu-20.04.4-preinstalled-server-arm64+raspi.img
    +
    +
    +
  2. +
  3. Dump the partition table for the image:

    +
    sfdisk -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img
    +
    +
    +

    That will output this:

    +
    label: dos
    +label-id: 0xddbefb06
    +device: ubuntu-20.04.4-preinstalled-server-arm64+raspi.img
    +unit: sectors
    +
    +<name>.img1 : start=        2048, size=      524288, type=c, bootable
    +<name>.img2 : start=      526336, size=     6285628, type=83
    +
    +
    +

    The important numbers are 524288 and 6285628. Store those in variables:

    +
    BOOT=524288
    +ROOT=6285628
    +
    +
    +
  4. +
  5. Create a partition script:

    +
    cat > partitions << EOF
    +label: dos
    +unit: sectors
    +
    +1 : start=  2048,  size=$BOOT,  type=c, bootable
    +2 : start=$((2048+BOOT)),  size=$ROOT, type=83
    +3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83
    +EOF
    +
    +
    +
  6. +
  7. Connect the disk:

    +

    Connect the disk to a machine other than the target Raspberry Pi. If any +filesystems are automatically mounted (e.g. by GNOME) unmount them. +Determine the device name. For SD, the device name is almost certainly +/dev/mmcblk0. For USB SSDs, the device name is /dev/sdX, where +X is a lowercase letter. lsblk can help determine the device name. +Set the DISK environment variable to the device name:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISK=/dev/sdX        # USB disk
    +
    +
    +

    Because partitions are named differently for /dev/mmcblk0 and /dev/sdX +devices, set a second variable used when working with partitions:

    +
    export DISKP=${DISK}p # microSD card
    +export DISKP=${DISK}  # USB disk ($DISKP == $DISK for /dev/sdX devices)
    +
    +
    +

    Hint: microSD cards connected using a USB reader also have /dev/sdX +names.

    +

    WARNING: The following steps destroy the existing data on the disk. Ensure +DISK and DISKP are correct before proceeding.

    +
  8. +
  9. Ensure swap partitions are not in use:

    +
    swapon -v
    +# If a partition is in use from the disk, disable it:
    +sudo swapoff THAT_PARTITION
    +
    +
    +
  10. +
  11. Clear old ZFS labels:

    +
    sudo zpool labelclear -f ${DISK}
    +
    +
    +

    If a ZFS label still exists from a previous system/attempt, expanding the +pool will result in an unbootable system.

    +

    Hint: If you do not already have the ZFS utilities installed, you can install them with sudo apt install zfsutils-linux. Alternatively, you can zero the entire disk with sudo dd if=/dev/zero of=${DISK} bs=1M status=progress.

    +
  12. +
  13. Delete existing partitions:

    +
    echo "label: dos" | sudo sfdisk ${DISK}
    +sudo partprobe
    +ls ${DISKP}*
    +
    +
    +

    Make sure there are no partitions, just the file for the disk itself. This +step is not strictly necessary; it exists to catch problems.

    +
  14. +
  15. Create the partitions:

    +
    sudo sfdisk $DISK < partitions
    +
    +
    +
  16. +
  17. Loopback mount the image:

    +
    IMG=$(sudo losetup -fP --show \
    +          ubuntu-20.04.4-preinstalled-server-arm64+raspi.img)
    +
    +
    +
  18. +
  19. Copy the bootloader data:

    +
    sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M
    +
    +
    +
  20. +
  21. Clear old label(s) from partition 2:

    +
    sudo wipefs -a ${DISKP}2
    +
    +
    +

    If a filesystem with the writable label from the Ubuntu image is still +present in partition 2, the system will not boot initially.

    +
  22. +
  23. Copy the root filesystem data:

    +
    # NOTE: the destination is p3, not p2.
    +sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync
    +
    +
    +
  24. +
  25. Unmount the image:

    +
    sudo losetup -d $IMG
    +
    +
    +
  26. +
  27. If setting up a USB disk:

    +

    Decompress the kernel:

    +
    sudo -sE
    +
    +MNT=$(mktemp -d /mnt/XXXXXXXX)
    +mkdir -p $MNT/boot $MNT/root
    +mount ${DISKP}1 $MNT/boot
    +mount ${DISKP}3 $MNT/root
    +
    +zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux
    +
    +
    +

    Modify boot config:

    +
    cat >> $MNT/boot/usercfg.txt << EOF
    +kernel=vmlinux
    +initramfs initrd.img followkernel
    +boot_delay
    +EOF
    +
    +
    +

    Create a script to automatically decompress the kernel after an upgrade:

    +
    cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF'
    +#!/bin/sh
    +
    +set -eu
    +
    +echo "Updating decompressed kernel..."
    +[ -e /boot/firmware/vmlinux ] && \
    +    cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak
    +vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX)
    +zcat -qf /boot/vmlinuz > "$vmlinuxtmp"
    +mv "$vmlinuxtmp" /boot/firmware/vmlinux
    +EOF
    +
    +chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel
    +
    +
    +

    Cleanup:

    +
    umount $MNT/*
    +rm -rf $MNT
    +exit
    +
    +
    +
  28. +
  29. Boot the Raspberry Pi.

    +

    Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) +with ubuntu as the username and password. If you are using SSH, note +that it takes a little bit for cloud-init to enable password logins on the +first boot. Set a new password when prompted and login again using that +password. If you have your local SSH configured to use ControlPersist, +you will have to kill the existing SSH process before logging in the second +time.

    +
  30. +
+
+
+

Step 2: Setup ZFS

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK and DISKP variables again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +
    +

    WARNING: Device names can change when moving a device to a different +computer or switching the microSD card from a USB reader to a built-in +slot. Double check the device name before continuing.

    +
  4. +
  5. Install ZFS:

    +
    apt update
    +
    +apt install pv zfs-initramfs
    +
    +
    +

    Note: Since this is the first boot, you may get Waiting for cache +lock because unattended-upgrades is running in the background. +Wait for it to finish.

    +
  6. +
  7. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    +

    WARNING: Encryption has not yet been tested on the Raspberry Pi.

    +
      +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O encryption=aes-256-gcm \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2
+cryptsetup luksOpen ${DISKP}2 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you do not want this, remove that option, but later add -o acltype=posixacl (note: lowercase “o”) to the zfs create for /var/log, as journald requires ACLs. Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the partition portion of the drive path (e.g. the 2 in ${DISKP}2). If you forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption defaults to aes-256-ccm, but the default has +changed upstream +to aes-256-gcm. AES-GCM seems to be generally preferred over AES-CCM, +is faster now, +and will be even faster in the future.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
  8. +
+
+
+

Step 3: System Installation

+
    +
  1. Create a filesystem dataset to act as a container:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +
    +
    +
  2. +
  3. Create a filesystem dataset for the root filesystem:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o canmount=noauto -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +zfs mount rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +
    +
    +

    If you want a separate dataset for /tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
  6. +
  7. Optional: Ignore synchronous requests:

    +

    microSD cards are relatively slow. If you want to increase performance +(especially when installing packages) at the cost of some safety, you can +disable flushing of synchronous requests (e.g. fsync(), O_[D]SYNC):

    +

    Choose one of the following options:

    +
      +
    • For the root filesystem, but not user data:

      +
      zfs set sync=disabled rpool/ROOT
      +
      +
      +
    • +
    • For everything:

      +
      zfs set sync=disabled rpool
      +
      +
      +
    • +
    +

    ZFS is transactional, so it will still be crash consistent. However, you +should leave sync at its default of standard if this system needs +to guarantee persistence (e.g. if it is a database or NFS server).

    +
  8. +
  9. Copy the system into the ZFS filesystems:

    +
    (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \
    +    pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \
    +    (cd /mnt ; tar -x)
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Stop zed:

    +
    systemctl stop zed
    +
    +
    +
  4. +
  5. Bind the virtual filesystems from the running environment to the new +ZFS environment and chroot into it:

    +
    mount --make-private --rbind /boot/firmware /mnt/boot/firmware
    +mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /run  /mnt/run
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +
  6. +
  7. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales
    +dpkg-reconfigure tzdata
    +
    +
    +
  8. +
  9. For LUKS installs only, setup /etc/crypttab:

    +
    # cryptsetup is already installed, but this marks it as manually
    +# installed so it is not automatically removed.
    +apt install --yes cryptsetup
    +
+echo luks1 UUID=$(blkid -s UUID -o value ${DISKP}2) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support ZFS.

    +
  10. +
  11. Optional: Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  12. +
  13. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system sambashare
    +
    +
    +
  14. +
  15. Patch a dependency loop:

    +

    For ZFS native encryption or LUKS:

    +
    apt install --yes curl patch
    +
    +curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \
    +    sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1)
    +
    +
    +

    Ignore the failure in Hunk #2 (say n twice).

    +

    This patch is from Bug #1875577 Encrypted swap won’t load on 20.04 with +zfs root.

    +
  16. +
  17. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Force a cache update:

    +
    zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    Verify that zed updated the cache by making sure this is not empty, +which will take a few seconds:

    +
    cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  18. +
  19. Remove old filesystem from /etc/fstab:

    +
    vi /etc/fstab
    +# Remove the old root filesystem line:
    +#   LABEL=writable / ext4 ...
    +
    +
    +
  20. +
  21. Configure kernel command line:

    +
    cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
    +sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \
    +    /boot/firmware/cmdline.txt
    +sed -i "s| fixrtc||" /boot/firmware/cmdline.txt
    +sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt
    +
    +
    +

    The fixrtc script is not compatible with ZFS and will cause the boot +to hang for 180 seconds.

    +

    The init_on_alloc=0 is to address performance regressions.

    +
  22. +
  23. Optional (but highly recommended): Make debugging booting easier:

    +
    sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt
    +
    +
    +
  24. +
  25. Reboot:

    +
    exit
    +reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as ubuntu.

    +
  26. +
+
+
+

Step 5: First Boot

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK variable again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +
    +
    +
  4. +
  5. Delete the ext4 partition and expand the ZFS partition:

    +
    sfdisk $DISK --delete 3
    +echo ", +" | sfdisk --no-reread -N 2 $DISK
    +
    +
    +

    Note: This does not automatically expand the pool. That will happen on reboot.

    +
  6. +
  7. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the system to boot normally. Login using the account you +created.

    +
  10. +
  11. Become root:

    +
    sudo -i
    +
    +
    +
  12. +
  13. Expand the ZFS pool:

    +

    Verify the pool expanded:

    +
    zfs list rpool
    +
    +
    +

    If it did not automatically expand, try to expand it manually:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +zpool online -e rpool ${DISKP}2
    +
    +
    +
  14. +
  15. Delete the ubuntu user:

    +
    deluser --remove-home ubuntu
    +
    +
    +
  16. +
+
+
+

Step 6: Full Software Installation

+
    +
  1. Optional: Remove cloud-init:

    +
    vi /etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    eth0:
    +      dhcp4: true
    +
    +
    +
    rm /etc/netplan/50-cloud-init.yaml
    +apt purge --autoremove ^cloud-init
    +rm -rf /etc/cloud
    +
    +
    +
  2. +
  3. Optional: Remove other storage packages:

    +
    apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \
    +    mdadm multipath-tools open-iscsi overlayroot xfsprogs
    +
    +
    +
  4. +
  5. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  6. +
  7. Optional: Install a full GUI environment:

    +
    apt install --yes ubuntu-desktop
    +echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt
    +
    +
    +

    Hint: If you are installing a full GUI environment, you will likely +want to remove cloud-init as discussed above but manage your network with +NetworkManager:

    +
    rm /etc/netplan/*.yaml
    +vi /etc/netplan/01-network-manager-all.yaml
    +
    +
    +
    network:
    +  version: 2
    +  renderer: NetworkManager
    +
    +
    +
  8. +
  9. Optional (but recommended): Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  10. +
  11. Reboot:

    +
    reboot
    +
    +
    +
  12. +
+
+
+

Step 7: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  4. +
+
+
diff --git a/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.html b/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.html
new file mode 100644
index 000000000..6d50c069d

Ubuntu 20.04 Root on ZFS

+ +
+

Newer release available

+
    +
  • See Ubuntu 22.04 Root on ZFS for new +installs. This guide is no longer receiving most updates. It continues +to exist for reference for existing installs that followed it.

  • +
+
+
+

Errata

+

If you previously installed using this guide, please apply these fixes if +applicable:

+
+

/boot/grub Not Mounted

+
+
Severity: Normal (previously Grave)
+
Fixed: 2020-12-05 (previously 2020-05-30)
+
+

For a mirror or raidz topology, /boot/grub is on a separate dataset. This was originally bpool/grub, then changed on 2020-05-30 to bpool/BOOT/ubuntu_UUID/grub to work around zsys setting canmount=off, which would result in /boot/grub not mounting. This work-around led to issues with snapshot restores. The underlying zsys issue was fixed and backported to 20.04, so it is now back to being bpool/grub.

+
    +
  • If you never applied the 2020-05-30 errata fix, then /boot/grub is +probably not mounting. Check that:

    +
    mount | grep /boot/grub
    +
    +
    +

    If it is mounted, everything is fine. Stop. Otherwise:

    +
    zfs set canmount=on bpool/grub
    +update-initramfs -c -k all
    +update-grub
    +
    +grub-install --target=x86_64-efi --efi-directory=/boot/efi \
    +    --bootloader-id=ubuntu --recheck --no-floppy
    +
    +
    +

    Run this for the additional disk(s), incrementing the “2” to “3” and so on +for both /boot/efi2 and ubuntu-2:

    +
    cp -a /boot/efi/EFI /boot/efi2
    +grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
    +    --bootloader-id=ubuntu-2 --recheck --no-floppy
    +
    +
    +

    Check that these have set prefix=($root)'/grub@':

    +
    grep prefix= \
    +    /boot/efi/EFI/ubuntu/grub.cfg \
    +    /boot/efi2/EFI/ubuntu-2/grub.cfg
    +
    +
    +
  • +
  • If you applied the 2020-05-30 errata fix, then you should revert the dataset +rename:

    +
    umount /boot/grub
    +zfs rename bpool/BOOT/ubuntu_UUID/grub bpool/grub
    +zfs set com.ubuntu.zsys:bootfs=no bpool/grub
    +zfs mount bpool/grub
    +
    +
    +
  • +
+
+
+

AccountsService Not Mounted

+
+
Severity: Normal
+
Fixed: 2020-05-28
+
+

The HOWTO previously had a typo in AccountsService (where Accounts is plural) +as AccountServices (where Services is plural). This means that AccountsService +data will be written to the root filesystem. This is only harmful in the event +of a rollback of the root filesystem that does not include a rollback of the +user data. Check it:

+
zfs list | grep Account
+
+
+

If the “s” is on “Accounts”, you are good. If it is on “Services”, fix it:

+
mv /var/lib/AccountsService /var/lib/AccountsService-old
+zfs list -r rpool
+# Replace the UUID twice below:
+zfs rename rpool/ROOT/ubuntu_UUID/var/lib/AccountServices \
+           rpool/ROOT/ubuntu_UUID/var/lib/AccountsService
+mv /var/lib/AccountsService-old/* /var/lib/AccountsService
+rmdir /var/lib/AccountsService-old
+
+
+
+
+
+

Overview

+
+

Ubuntu Installer

+

The Ubuntu installer has support for root-on-ZFS. +This HOWTO produces nearly identical results as the Ubuntu installer because of +bidirectional collaboration.

+

If you want a single-disk, unencrypted, desktop install, use the installer. It +is far easier and faster than doing everything by hand.

+

If you want a ZFS native encrypted, desktop install, you can trivially edit +the installer. +The -O recordsize=1M there is unrelated to encryption; omit that unless +you understand it. Make sure to use a password that is at least 8 characters +or this hack will crash the installer. Additionally, once the system is +installed, you should switch to encrypted swap:

+
swapon -v
+# Note the device, including the partition.
+
+ls -l /dev/disk/by-id/
+# Find the by-id name of the disk.
+
+sudo swapoff -a
+sudo vi /etc/fstab
+# Remove the swap entry.
+
+sudo apt install --yes cryptsetup
+
+# Replace DISK-partN as appropriate from above:
+echo swap /dev/disk/by-id/DISK-partN /dev/urandom \
+    swap,cipher=aes-xts-plain64:sha256,size=512 | sudo tee -a /etc/crypttab
+echo /dev/mapper/swap none swap defaults 0 0 | sudo tee -a /etc/fstab
+
+
+

Hopefully the installer will gain encryption support in +the future.

+

If you want to setup a mirror or raidz topology, use LUKS encryption, and/or +install a server (no desktop GUI), use this HOWTO.

+
+
+

Raspberry Pi

+

If you are looking to install on a Raspberry Pi, see +Ubuntu 20.04 Root on ZFS for Raspberry Pi.

+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the +Internet as appropriate (e.g. join your WiFi network). Open a terminal +(press Ctrl-Alt-T).

  2. +
  3. Setup and update the repositories:

    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    passwd
    +# There is no current password.
    +sudo apt install --yes openssh-server vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that ships in the Live CD environment) over +SSH.

    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh ubuntu@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +systemctl stop zed
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.
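
      For example, to estimate your needs, you can check how much space the kernels and initramfs images consume on an existing Ubuntu system (this assumes the standard /boot layout; sizes vary by kernel flavor):

      du -hc /boot/vmlinuz-* /boot/initrd.img-*
      +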

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition (e.g. a swap partition per this HOWTO):
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Create bootloader partition(s):

    +
    sgdisk     -n1:1M:+512M   -t1:EF00 $DISK
    +
    +# For legacy (BIOS) booting:
    +sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK
    +
    +
    +

    Note: While the Ubuntu installer uses an MBR label for legacy (BIOS) +booting, this HOWTO uses GPT partition labels for both UEFI and legacy +(BIOS) booting. This is simpler than having two options. It also +provides forward compatibility (future proofing). In other words, for +legacy (BIOS) booting, this will allow you to move the disk(s) to a new +system/motherboard in the future without having to rebuild the pool (and +restore your data from a backup). The ESP is created in both cases for +similar reasons. Additionally, the ESP is used for /boot/grub in +single-disk installs, as discussed below.

    +
  6. +
  7. Create a partition for swap:

    +

    Previous versions of this HOWTO put swap on a zvol. Ubuntu recommends +against this configuration due to deadlocks. There +is a bug report upstream.

    +

    Putting swap on a partition gives up the benefit of ZFS checksums (for your +swap). That is probably the right trade-off given the reports of ZFS +deadlocks with swap. If you are bothered by this, simply do not enable +swap.

    +

    Choose one of the following options if you want swap:

    +
      +
    • For a single-disk install:

      +
      sgdisk     -n2:0:+500M    -t2:8200 $DISK
      +
      +
      +
    • +
    • For a mirror or raidz topology:

      +
      sgdisk     -n2:0:+500M    -t2:FD00 $DISK
      +
      +
      +
    • +
    +

    Adjust the swap size to your needs. If you wish to enable hibernation +(which only works for unencrypted installs), the swap partition must be +at least as large as the system’s RAM.

    +
  8. +
  9. Create a boot pool partition:

    +
    sgdisk     -n3:0:+2G      -t3:BE00 $DISK
    +
    +
    +

    The Ubuntu installer uses 5% of the disk space constrained to a minimum of +500 MiB and a maximum of 2 GiB. Making this too small (and 500 MiB might +be too small) can result in an inability to upgrade the kernel.
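
    As a rough worked example of that rule (the disk size below is only an illustration):

    # Hypothetical sketch of the installer's sizing rule for a 100 GiB disk:
    +DISK_SIZE_MIB=102400
    +BPOOL_MIB=$(( DISK_SIZE_MIB * 5 / 100 ))    # 5% of the disk = 5120 MiB
    +[ "$BPOOL_MIB" -lt 500 ] && BPOOL_MIB=500   # minimum 500 MiB
    +[ "$BPOOL_MIB" -gt 2048 ] && BPOOL_MIB=2048 # maximum 2 GiB
    +echo "${BPOOL_MIB} MiB"                     # prints "2048 MiB" in this example
    +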

    +
  10. +
  11. Create a root pool partition:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  12. +
  13. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -o autotrim=on -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.
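
    If you want to double-check the result, you can list the feature flags on the new pool after creating it (anything not enabled above should report disabled):

    zpool get all bpool | grep feature@
    +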

    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The boot pool name is no longer arbitrary. It _must_ be bpool. +If you really want to rename it, edit /etc/grub.d/10_linux_zfs later, +after GRUB is installed (and run update-grub).

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  14. +
  15. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 -o autotrim=on \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 -o autotrim=on \
      +    -O encryption=aes-256-gcm \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 -o autotrim=on \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).
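
      To see what a drive reports, you can compare its logical and physical sector sizes, for example with lsblk (keep in mind that some drives misreport 512 B physical sectors):

      lsblk -o NAME,LOG-SEC,PHY-SEC $DISK
      +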

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs. +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption defaults to aes-256-ccm, but the default has +changed upstream +to aes-256-gcm. AES-GCM seems to be generally preferred over AES-CCM, +is faster now, +and will be even faster in the future.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.
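
      If you want to confirm this, you can inspect the LUKS header after formatting (the exact field names differ between LUKS1 and LUKS2):

      cryptsetup luksDump ${DISK}-part4 | grep -Ei 'cipher|bits'
      +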

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  16. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID
    +
    +
    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +chmod 700 /mnt/root
    +
    +
    +

    For a mirror or raidz topology, create a dataset for /boot/grub:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/ROOT/ubuntu_$UUID/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.
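
    For example, to cap the separate /tmp dataset at 5 GiB (the value is only an illustration; pick whatever limit suits your system):

    zfs set quota=5G rpool/ROOT/ubuntu_$UUID/tmp
    +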

    +
  6. +
  7. Install the minimal system:

    +
    debootstrap focal /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.
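
    If you go the copy route, a minimal sketch (run from the working system you are copying, preserving ACLs, xattrs, and hard links; the exclusions are only a starting point and should be adjusted for your setup) might look like:

    rsync -aAXHv / /mnt/ \
    +    --exclude="/dev/*" --exclude="/proc/*" --exclude="/sys/*" \
    +    --exclude="/run/*" --exclude="/tmp/*" --exclude="/mnt/*" --exclude="/media/*"
    +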

    +
  8. +
  9. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    NAME:
    +      dhcp4: true
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse
    +deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +

    Install your preferred text editor:

    +
    apt install --yes nano
    +
    +apt install --yes vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that is installed by debootstrap) over +SSH.

    +
  10. +
  11. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not support +ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  12. +
  13. Create the EFI filesystem:

    +

    Perform these steps for both UEFI and legacy (BIOS) booting:

    +
    apt install --yes dosfstools
    +
    +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1
    +mkdir /boot/efi
    +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \
    +    /boot/efi vfat defaults 0 0 >> /etc/fstab
    +mount /boot/efi
    +
    +
    +

    For a mirror or raidz topology, repeat the mkdosfs for the additional +disks, but do not repeat the other commands.

    +

    Note: The -s 1 for mkdosfs is only necessary for drives which +present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster +size (given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

    +
  14. +
  15. Put /boot/grub on the EFI System Partition:

    +

    For a single-disk install only:

    +
    mkdir /boot/efi/grub /boot/grub
    +echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab
    +mount /boot/grub
    +
    +
    +

    This allows GRUB to write to /boot/grub (since it is on a FAT-formatted +ESP instead of on ZFS), which means that /boot/grub/grubenv and the +recordfail feature works as expected: if the boot fails, the normally +hidden GRUB menu will be shown on the next boot. For a mirror or raidz +topology, we do not want GRUB writing to the EFI System Partition. This is +because we duplicate it at install without a mechanism to update the copies +when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, +we keep /boot/grub on the boot pool for the mirror or raidz topologies. +This preserves correct mirroring/raidz behavior, at the expense of being +able to write to /boot/grub/grubenv and thus the recordfail +behavior.

    +
  16. +
  17. Install GRUB/Linux/ZFS in the chroot environment for the new system:

    +

    Choose one of the following options:

    +
      +
    • Install GRUB/Linux/ZFS for legacy (BIOS) booting:

      +
      apt install --yes grub-pc linux-image-generic zfs-initramfs zsys
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB/Linux/ZFS for UEFI booting:

      +
      apt install --yes \
      +    grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \
      +    shim-signed zfs-initramfs zsys
      +
      +
      +

      Notes:

      +
        +
      • Ignore any error messages saying ERROR: Couldn't resolve device and +WARNING: Couldn't determine root device. cryptsetup does not +support ZFS.

      • +
      • Ignore any error messages saying Module zfs not found and +couldn't connect to zsys daemon. The first seems to occur due to a +version mismatch between the Live CD kernel and the chroot environment, +but this is irrelevant since the module is already loaded. The second +may be caused by the first but either way is irrelevant since zed +is started manually later.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later. For some reason, +grub-efi-amd64 does not prompt for install_devices here, but does +after a reboot.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Configure swap:

    +

    Choose one of the following options if you want swap:

    +
      +
    • For an unencrypted single-disk install:

      +
      mkswap -f ${DISK}-part2
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +    none swap discard 0 0 >> /etc/fstab
      +swapon -a
      +
      +
      +
    • +
    • For an unencrypted mirror or raidz topology:

      +
      apt install --yes mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +mkswap -f /dev/md0
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \
      +    none swap discard 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) single-disk install:

      +
      apt install --yes cryptsetup
      +
      +echo swap ${DISK}-part2 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) mirror or raidz +topology:

      +
      apt install --yes cryptsetup mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +echo swap /dev/md0 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    +
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system lxd
    +addgroup --system sambashare
    +
    +
    +
  28. +
  29. Patch a dependency loop:

    +

    For ZFS native encryption or LUKS:

    +
    apt install --yes curl patch
    +
    +curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \
    +    sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1)
    +
    +
    +

    Ignore the failure in Hunk #2 (say n twice).

    +

    This patch is from Bug #1875577 Encrypted swap won’t load on 20.04 with +zfs root.

    +
  30. +
  31. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  32. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. cryptsetup +does not support ZFS.

    +
  4. +
  5. Disable memory zeroing:

    +
    vi /etc/default/grub
    +# Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Save and quit (or see the next step).
    +
    +
    +

    This is to address performance regressions.

    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Comment out: GRUB_TIMEOUT_STYLE=hidden
    +# Set: GRUB_TIMEOUT=5
    +# Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
    +# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +

    Choose one of the following options:

    +
      +
    • For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +

      Note that you are installing GRUB to the whole disk, not a partition.

      +

      If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

      +
    • +
    • For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=ubuntu --recheck --no-floppy
      +
      +
      +
    • +
    +
  12. +
  13. Disable grub-initrd-fallback.service

    +

    For a mirror or raidz topology:

    +
    systemctl mask grub-initrd-fallback.service
    +
    +
    +

    This is the service for /boot/grub/grubenv which does not work on +mirrored or raidz topologies. Disabling this keeps it from blocking +subsequent mounts of /boot/grub if that mount ever fails.

    +

    Another option would be to set RequiresMountsFor=/boot/grub via a +drop-in unit, but that is more work to do here for no reason. Hopefully +this bug +will be fixed upstream.
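
    For reference, that drop-in would look roughly like this (an untested sketch; masking the service as above is the simpler choice):

    mkdir -p /etc/systemd/system/grub-initrd-fallback.service.d
    +cat > /etc/systemd/system/grub-initrd-fallback.service.d/override.conf << EOF
    +[Unit]
    +RequiresMountsFor=/boot/grub
    +EOF
    +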

    +
  14. +
  15. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on bpool/BOOT/ubuntu_$UUID
    +zfs set canmount=on rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  16. +
  17. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  18. +
  19. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  20. +
  21. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  22. +
+
+
+

Step 6: First Boot

+
    +
  1. Install GRUB to additional disks:

    +

    For a UEFI mirror or raidz topology only:

    +
    dpkg-reconfigure grub-efi-amd64
    +
    +Select (using the space bar) all of the ESP partitions (partition 1 on
    +each of the pool disks).
    +
    +
    +
  2. +
  3. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  4. +
+
+
+

Step 7: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +

    Choose one of the following options:

    +
      +
    • Install a command-line environment only:

      +
      apt install --yes ubuntu-standard
      +
      +
      +
    • +
    • Install a full GUI environment:

      +
      apt install --yes ubuntu-desktop
      +
      +
      +

      Hint: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:

      +
      rm /etc/netplan/01-netcfg.yaml
      +vi /etc/netplan/01-network-manager-all.yaml
      +
      +
      +
      network:
      +  version: 2
      +  renderer: NetworkManager
      +
      +
      +
    • +
    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 8: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  4. +
  5. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  6. +
  7. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Uncomment: GRUB_TIMEOUT_STYLE=hidden
    +# Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  8. +
  9. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.
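
    For example, you could wrap it in a layer of symmetric encryption with GnuPG before uploading it (the cipher choice is only a suggestion):

    gpg --symmetric --cipher-algo AES256 luks1-header.dat
    +# This produces luks1-header.dat.gpg; upload that and keep the passphrase safe.
    +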

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  10. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+# Replace “UUID” as appropriate; use zfs list to find it:
+zfs mount rpool/ROOT/ubuntu_UUID
+zfs mount bpool/BOOT/ubuntu_UUID
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.
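
For example (assuming the module is named exactly arcsas):

echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+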

+

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in the kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.
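
For example, to wait 30 seconds (the value is only an illustration; pick one long enough for your controller), set the variable and then regenerate the initramfs, which is likely needed for the setting to be picked up at boot:

echo "ZFS_INITRD_PRE_MOUNTROOT_SLEEP=30" >> /etc/default/zfs
+update-initramfs -u -k all
+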

+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.html b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.html new file mode 100644 index 000000000..24330d653 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.html @@ -0,0 +1,1051 @@ + + + + + + + Ubuntu 22.04 Root on ZFS for Raspberry Pi — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 22.04 Root on ZFS for Raspberry Pi

+ +
+

Overview

+
+

Note

+

These are beta instructions. The author still needs to test them. +Additionally, it may be possible to use U-Boot now, which would eliminate +some of the customizations.

+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

4 GiB of memory is recommended. Do not use deduplication, as it needs massive +amounts of RAM. +Enabling deduplication is a permanent change that cannot be easily reverted.

+

A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

WARNING: Encryption has not yet been tested on the Raspberry Pi.

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+

USB Disks

+

The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards:

+
    +
  1. The Raspberry Pi’s Bootloader EEPROM must be dated 2020-09-03 or later.

    +

    To check the bootloader version, power up the Raspberry Pi without an SD +card inserted or a USB boot device attached; the date will be on the +bootloader line. (If you do not see the bootloader line, the +bootloader is too old.) Alternatively, run sudo rpi-eeprom-update +on an existing OS on the Raspberry Pi (which on Ubuntu requires +apt install rpi-eeprom).

    +

    If needed, the bootloader can be updated from an existing OS on the +Raspberry Pi using rpi-eeprom-update -a and rebooting. +For other options, see Updating the Bootloader.

    +
  2. +
  3. The Raspberry Pi must be configured for USB boot. The bootloader will show a +boot line; if the order includes 4, USB boot is enabled.

    +

    If not already enabled, it can be enabled from an existing OS on the +Raspberry Pi using rpi-eeprom-config -e: set BOOT_ORDER=0xf41 +and reboot to apply the change. On subsequent reboots, USB boot will be +enabled.

    +

    Otherwise, it can be enabled without an existing OS as follows:

    +
      +
    • Download the Raspberry Pi Imager Utility.

    • +
    • Flash the USB Boot image to a microSD card. The USB Boot image is +listed under Bootload in the Misc utility images folder.

    • +
    • Boot the Raspberry Pi from the microSD card. USB Boot should be enabled +automatically.

    • +
    +
  4. +
  5. U-Boot on Ubuntu 20.04 does not seem to support the Raspberry Pi USB. +Ubuntu 20.10 may work. As a +work-around, the Raspberry Pi bootloader is configured to directly boot +Linux. For this to work, the Linux kernel must not be compressed. These +instructions decompress the kernel and add a script to +/etc/kernel/postinst.d to handle kernel upgrades.

  6. +
+
+
+
+

Step 1: Disk Formatting

+

The commands in this step are run on the system other than the Raspberry Pi.

+

This guide has you do some extra work so that the stock ext4 partition can +be deleted.

+
    +
  1. Download and unpack the official image:

    +
    curl -O https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz
    +xz -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz
    +
    +# or combine them to decompress as you download:
    +curl https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz | \
    +    xz -d > ubuntu-22.04.1-preinstalled-server-arm64+raspi.img
    +
    +
    +
  2. +
  3. Dump the partition table for the image:

    +
    sfdisk -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img
    +
    +
    +

    That will output this:

    +
    label: dos
    +label-id: 0x638274e3
    +device: ubuntu-22.04.1-preinstalled-server-arm64+raspi.img
    +unit: sectors
    +
    +<name>.img1 : start=        2048, size=      524288, type=c, bootable
    +<name>.img2 : start=      526336, size=     7193932, type=83
    +
    +
    +

    The important numbers are 524288 and 7193932. Store those in variables:

    +
    BOOT=524288
    +ROOT=7193932
    +
    +
    +
  4. +
  5. Create a partition script:

    +
    cat > partitions << EOF
    +label: dos
    +unit: sectors
    +
    +1 : start=  2048,  size=$BOOT,  type=c, bootable
    +2 : start=$((2048+BOOT)),  size=$ROOT, type=83
    +3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83
    +EOF
    +
    +
    +
  6. +
  7. Connect the disk:

    +

    Connect the disk to a machine other than the target Raspberry Pi. If any +filesystems are automatically mounted (e.g. by GNOME), unmount them. +Determine the device name. For SD, the device name is almost certainly +/dev/mmcblk0. For USB SSDs, the device name is /dev/sdX, where +X is a lowercase letter. lsblk can help determine the device name. +Set the DISK environment variable to the device name:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISK=/dev/sdX        # USB disk
    +
    +
    +

    Because partitions are named differently for /dev/mmcblk0 and /dev/sdX +devices, set a second variable used when working with partitions:

    +
    export DISKP=${DISK}p # microSD card
    +export DISKP=${DISK}  # USB disk ($DISKP == $DISK for /dev/sdX devices)
    +
    +
    +

    Hint: microSD cards connected using a USB reader also have /dev/sdX +names.

    +

    WARNING: The following steps destroy the existing data on the disk. Ensure +DISK and DISKP are correct before proceeding.

    +
  8. +
  9. Ensure swap partitions are not in use:

    +
    swapon -v
    +# If a partition is in use from the disk, disable it:
    +sudo swapoff THAT_PARTITION
    +
    +
    +
  10. +
  11. Clear old ZFS labels:

    +
    sudo zpool labelclear -f ${DISK}
    +
    +
    +

    If a ZFS label still exists from a previous system/attempt, expanding the +pool will result in an unbootable system.

    +

    Hint: If you do not already have the ZFS utilities installed, you can +install them with: sudo apt install zfsutils-linux Alternatively, you +can zero the entire disk with: +sudo dd if=/dev/zero of=${DISK} bs=1M status=progress

    +
  12. +
  13. Delete existing partitions:

    +
    echo "label: dos" | sudo sfdisk ${DISK}
    +sudo partprobe
    +ls ${DISKP}*
    +
    +
    +

    Make sure there are no partitions, just the file for the disk itself. This +step is not strictly necessary; it exists to catch problems.

    +
  14. +
  15. Create the partitions:

    +
    sudo sfdisk $DISK < partitions
    +
    +
    +
  16. +
  17. Loopback mount the image:

    +
    IMG=$(sudo losetup -fP --show \
    +          ubuntu-22.04.1-preinstalled-server-arm64+raspi.img)
    +
    +
    +
  18. +
  19. Copy the bootloader data:

    +
    sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M
    +
    +
    +
  20. +
  21. Clear old label(s) from partition 2:

    +
    sudo wipefs -a ${DISKP}2
    +
    +
    +

    If a filesystem with the writable label from the Ubuntu image is still +present in partition 2, the system will not boot initially.

    +
  22. +
  23. Copy the root filesystem data:

    +
    # NOTE: the destination is p3, not p2.
    +sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync
    +
    +
    +
  24. +
  25. Unmount the image:

    +
    sudo losetup -d $IMG
    +
    +
    +
  26. +
  27. If setting up a USB disk:

    +

    Decompress the kernel:

    +
    sudo -sE
    +
    +MNT=$(mktemp -d /mnt/XXXXXXXX)
    +mkdir -p $MNT/boot $MNT/root
    +mount ${DISKP}1 $MNT/boot
    +mount ${DISKP}3 $MNT/root
    +
    +zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux
    +
    +
    +

    Modify boot config:

    +
    cat >> $MNT/boot/usercfg.txt << EOF
    +kernel=vmlinux
    +initramfs initrd.img followkernel
    +boot_delay
    +EOF
    +
    +
    +

    Create a script to automatically decompress the kernel after an upgrade:

    +
    cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF'
    +#!/bin/sh
    +
    +set -eu
    +
    +echo "Updating decompressed kernel..."
    +[ -e /boot/firmware/vmlinux ] && \
    +    cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak
    +vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX)
    +zcat -qf /boot/vmlinuz > "$vmlinuxtmp"
    +mv "$vmlinuxtmp" /boot/firmware/vmlinux
    +EOF
    +
    +chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel
    +
    +
    +

    Cleanup:

    +
    umount $MNT/*
    +rm -rf $MNT
    +exit
    +
    +
    +
  28. +
  29. Boot the Raspberry Pi.

    +

    Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) +with ubuntu as the username and password. If you are using SSH, note +that it takes a little bit for cloud-init to enable password logins on the +first boot. Set a new password when prompted and login again using that +password. If you have your local SSH configured to use ControlPersist, +you will have to kill the existing SSH process before logging in the second +time.
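
    For example (assuming IP is the Pi’s address and you are using the default control socket settings), you can tell the master connection to exit:

    ssh -O exit ubuntu@IP
    +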

    +
  30. +
+
+
+

Step 2: Setup ZFS

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK and DISKP variables again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +
    +

    WARNING: Device names can change when moving a device to a different +computer or switching the microSD card from a USB reader to a built-in +slot. Double check the device name before continuing.

    +
  4. +
  5. Install ZFS:

    +
    apt update
    +
    +apt install pv zfs-initramfs
    +
    +
    +

    Note: Since this is the first boot, you may get Waiting for cache +lock because unattended-upgrades is running in the background. +Wait for it to finish.

    +
  6. +
  7. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    +

    WARNING: Encryption has not yet been tested on the Raspberry Pi.

    +
      +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISKP}2
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2
      +cryptsetup luksOpen ${DISKP}2 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs. +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the partition portion of the drive path (here, the 2 in ${DISKP}2). If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
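
    Regarding the LUKS key-size note above: if you chose the LUKS option and want +to confirm the cipher and key size that were actually configured, the LUKS +header can be inspected (a verification step, not part of the original +procedure):

    +
    cryptsetup luksDump ${DISKP}2 | grep -iE 'cipher|bits'
    +
    +
    +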
  8. +
+
+
+

Step 3: System Installation

+
    +
  1. Create a filesystem dataset to act as a container:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +
    +
    +
  2. +
  3. Create a filesystem dataset for the root filesystem:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o canmount=noauto -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +zfs mount rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
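
    If you want to double-check how this dataset is configured, the relevant +properties can be inspected (purely a verification step):

    +
    zfs get canmount,mountpoint,mounted rpool/ROOT/ubuntu_$UUID
    +
    +
    +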
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +chmod 700 /mnt/root
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/cache
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs
    +zfs create rpool/ROOT/ubuntu_$UUID/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If desired (the Ubuntu installer creates these):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +
    +

    For a mirror or raidz topology, create a dataset for /boot/grub:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on the /tmp dataset, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
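
    For example, if you created the optional /tmp dataset and want to cap its +size, a quota can be set on it (the 2G value here is only illustrative):

    +
    zfs set quota=2G rpool/ROOT/ubuntu_$UUID/tmp
    +
    +
    +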

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
  6. +
  7. Optional: Ignore synchronous requests:

    +

    microSD cards are relatively slow. If you want to increase performance +(especially when installing packages) at the cost of some safety, you can +disable flushing of synchronous requests (e.g. fsync(), O_[D]SYNC):

    +

    Choose one of the following options:

    +
      +
    • For the root filesystem, but not user data:

      +
      zfs set sync=disabled rpool/ROOT
      +
      +
      +
    • +
    • For everything:

      +
      zfs set sync=disabled rpool
      +
      +
      +
    • +
    +

    ZFS is transactional, so it will still be crash consistent. However, you +should leave sync at its default of standard if this system needs +to guarantee persistence (e.g. if it is a database or NFS server).

    +
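
    If you disabled sync during the install and later want the default behavior +back, the property can be reset so it reverts to standard (a follow-up step +not in the original procedure):

    +
    # Run whichever matches the dataset you changed earlier:
    +zfs inherit sync rpool/ROOT
    +zfs inherit sync rpool
    +
    +
    +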
  8. +
  9. Copy the system into the ZFS filesystems:

    +
    (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \
    +    pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \
    +    (cd /mnt ; tar -x)
    +
    +
    +
  10. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Stop zed:

    +
    systemctl stop zed
    +
    +
    +
  4. +
  5. Bind the virtual filesystems from the running environment to the new +ZFS environment and chroot into it:

    +
    mount --make-private --rbind /boot/firmware /mnt/boot/firmware
    +mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /run  /mnt/run
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +
  6. +
  7. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales
    +dpkg-reconfigure tzdata
    +
    +
    +
  8. +
  9. For LUKS installs only, setup /etc/crypttab:

    +
    # cryptsetup is already installed, but this marks it as manually
    +# installed so it is not automatically removed.
    +apt install --yes cryptsetup
    +
    +echo luks1 UUID=$(blkid -s UUID -o value ${DISKP}2) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not +support ZFS.

    +
  10. +
  11. Optional: Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  12. +
  13. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system sambashare
    +
    +
    +
  14. +
  15. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Force a cache update:

    +
    zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    Verify that zed updated the cache by making sure this is not empty, +which will take a few seconds:

    +
    cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  16. +
  17. Remove old filesystem from /etc/fstab:

    +
    vi /etc/fstab
    +# Remove the old root filesystem line:
    +#   LABEL=writable / ext4 ...
    +
    +
    +
  18. +
  19. Configure kernel command line:

    +
    cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
    +sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \
    +    /boot/firmware/cmdline.txt
    +sed -i "s| fixrtc||" /boot/firmware/cmdline.txt
    +sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt
    +
    +
    +

    The fixrtc script is not compatible with ZFS and will cause the boot +to hang for 180 seconds.

    +

    Setting init_on_alloc=0 addresses performance regressions.

    +
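
    If you want to confirm the result of the edits above, simply review the file +(verification only):

    +
    cat /boot/firmware/cmdline.txt
    +
    +
    +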
  20. +
  21. Optional (but highly recommended): Make debugging booting easier:

    +
    sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt
    +
    +
    +
  22. +
  23. Reboot:

    +
    exit
    +reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as ubuntu.

    +
  24. +
+
+
+

Step 5: First Boot

+
    +
  1. Become root:

    +
    sudo -i
    +
    +
    +
  2. +
  3. Set the DISK variable again:

    +
    DISK=/dev/mmcblk0    # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +
    +
    +
  4. +
  5. Delete the ext4 partition and expand the ZFS partition:

    +
    sfdisk $DISK --delete 3
    +echo ", +" | sfdisk --no-reread -N 2 $DISK
    +
    +
    +

    Note: This does not automatically expand the pool. That will happen +on reboot.

    +
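
    If you want to confirm that the partition itself now spans the disk before +rebooting, the partition layout can be listed (verification only):

    +
    lsblk -o NAME,SIZE $DISK
    +
    +
    +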
  6. +
  7. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the system to boot normally. Login using the account you +created.

    +
  10. +
  11. Become root:

    +
    sudo -i
    +
    +
    +
  12. +
  13. Expand the ZFS pool:

    +

    Verify the pool expanded:

    +
    zfs list rpool
    +
    +
    +

    If it did not automatically expand, try to expand it manually:

    +
    DISK=/dev/mmcblk0    # microSD card
    +DISKP=${DISK}p       # microSD card
    +
    +DISK=/dev/sdX        # USB disk
    +DISKP=${DISK}        # USB disk
    +
    +zpool online -e rpool ${DISKP}2
    +
    +
    +
  14. +
  15. Delete the ubuntu user:

    +
    deluser --remove-home ubuntu
    +
    +
    +
  16. +
+
+
+

Step 6: Full Software Installation

+
    +
  1. Optional: Remove cloud-init:

    +
    vi /etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    eth0:
    +      dhcp4: true
    +
    +
    +
    rm /etc/netplan/50-cloud-init.yaml
    +apt purge --autoremove ^cloud-init
    +rm -rf /etc/cloud
    +
    +
    +
  2. +
  3. Optional: Remove other storage packages:

    +
    apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \
    +    mdadm multipath-tools open-iscsi overlayroot xfsprogs
    +
    +
    +
  4. +
  5. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  6. +
  7. Optional: Install a full GUI environment:

    +
    apt install --yes ubuntu-desktop
    +echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt
    +
    +
    +

    Hint: If you are installing a full GUI environment, you will likely +want to remove cloud-init as discussed above and manage your network with +NetworkManager instead:

    +
    rm /etc/netplan/*.yaml
    +vi /etc/netplan/01-network-manager-all.yaml
    +
    +
    +
    network:
    +  version: 2
    +  renderer: NetworkManager
    +
    +
    +
  8. +
  9. Optional (but recommended): Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  10. +
  11. Reboot:

    +
    reboot
    +
    +
    +
  12. +
+
+
+

Step 7: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  4. +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.html b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.html new file mode 100644 index 000000000..559b4c634 --- /dev/null +++ b/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.html @@ -0,0 +1,1374 @@ + + + + + + + Ubuntu 22.04 Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu 22.04 Root on ZFS

+ +
+

Overview

+
+

Ubuntu Installer

+

The Ubuntu installer still has ZFS support, but it was almost removed for +22.04 +and it no longer installs zsys. At +the moment, this HOWTO still uses zsys, but that will probably be removed +in the near future.

+
+
+

Raspberry Pi

+

If you are looking to install on a Raspberry Pi, see +Ubuntu 22.04 Root on ZFS for Raspberry Pi.

+
+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @rlaager.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo apt install python3-pip
    +
    +pip3 install -r docs/requirements.txt
    +
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request. Mention @rlaager.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the Ubuntu Live CD. From the GRUB boot menu, select Try or Install Ubuntu. +On the Welcome page, select your preferred language and Try Ubuntu. +Connect your system to the Internet as appropriate (e.g. join your WiFi network). +Open a terminal (press Ctrl-Alt-T).

  2. +
  3. Setup and update the repositories:

    +
    sudo apt update
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    passwd
    +# There is no current password.
    +sudo apt install --yes openssh-server vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that ships in the Live CD environment) over +SSH.

    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh ubuntu@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    apt install --yes debootstrap gdisk zfsutils-linux
    +
    +systemctl stop zed
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    • For a mirror or raidz topology, use DISK1, DISK2, etc.

    • +
    • When choosing a boot pool size, consider how you will use the space. A +kernel and initrd may consume around 100M. If you have multiple kernels +and take snapshots, you may find yourself low on boot pool space, +especially if you need to regenerate your initramfs images, which may be +around 85M each. Size your boot pool appropriately for your needs.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    Ensure swap partitions are not in use:

    +
    swapoff --all
    +
    +
    +

    If the disk was previously used in an MD array:

    +
    apt install --yes mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition (e.g. a swap partition per this HOWTO):
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    If the disk was previously used with zfs:

    +
    wipefs -a $DISK
    +
    +
    +

    For flash-based storage, if the disk was previously used, you may wish to +do a full-disk discard (TRIM/UNMAP), which can improve performance:

    +
    blkdiscard -f $DISK
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Create bootloader partition(s):

    +
    sgdisk     -n1:1M:+512M   -t1:EF00 $DISK
    +
    +# For legacy (BIOS) booting:
    +sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK
    +
    +
    +

    Note: While the Ubuntu installer uses an MBR label for legacy (BIOS) +booting, this HOWTO uses GPT partition labels for both UEFI and legacy +(BIOS) booting. This is simpler than having two options. It also +provides forward compatibility (future proofing). In other words, for +legacy (BIOS) booting, this will allow you to move the disk(s) to a new +system/motherboard in the future without having to rebuild the pool (and +restore your data from a backup). The ESP is created in both cases for +similar reasons. Additionally, the ESP is used for /boot/grub in +single-disk installs, as discussed below.

    +
  6. +
  7. Create a partition for swap:

    +

    Previous versions of this HOWTO put swap on a zvol. Ubuntu recommends +against this configuration due to deadlocks. There +is a bug report upstream.

    +

    Putting swap on a partition gives up the benefit of ZFS checksums (for your +swap). That is probably the right trade-off given the reports of ZFS +deadlocks with swap. If you are bothered by this, simply do not enable +swap.

    +

    Choose one of the following options if you want swap:

    +
      +
    • For a single-disk install:

      +
      sgdisk     -n2:0:+500M    -t2:8200 $DISK
      +
      +
      +
    • +
    • For a mirror or raidz topology:

      +
      sgdisk     -n2:0:+500M    -t2:FD00 $DISK
      +
      +
      +
    • +
    +

    Adjust the swap size to your needs. If you wish to enable hibernation +(which only works for unencrypted installs), the swap partition must be +at least as large as the system’s RAM.

    +
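
    If you plan to hibernate, you can check the installed RAM first and size the +swap partition to at least that amount (adjust the 500M size in the sgdisk +commands above accordingly; this check is not part of the original procedure):

    +
    free -h | awk '/^Mem:/{print $2}'
    +
    +
    +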
  8. +
  9. Create a boot pool partition:

    +
    sgdisk     -n3:0:+2G      -t3:BE00 $DISK
    +
    +
    +

    The Ubuntu installer uses 5% of the disk space constrained to a minimum of +500 MiB and a maximum of 2 GiB. Making this too small (and 500 MiB might +be too small) can result in an inability to upgrade the kernel.

    +
  10. +
  11. Create a root pool partition:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  12. +
  13. Create the boot pool:

    +
    zpool create \
    +    -o ashift=12 \
    +    -o autotrim=on \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o compatibility=grub2 \
    +    -o feature@livelist=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O devices=off \
    +    -O acltype=posixacl -O xattr=sa \
    +    -O compression=lz4 \
    +    -O normalization=formD \
    +    -O relatime=on \
    +    -O canmount=off -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    Ignore the warnings about the features “not in specified ‘compatibility’ +feature set.”

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +
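
    If you are curious which features ended up enabled under the grub2 +compatibility setting, the pool can be inspected (verification only):

    +
    zpool get compatibility bpool
    +zpool get all bpool | grep feature@
    +
    +
    +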

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The boot pool name is no longer arbitrary. It _must_ be bpool. +If you really want to rename it, edit /etc/grub.d/10_linux_zfs later, +after GRUB is installed (and run update-grub).

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The device_rebuild feature should be safe to use (except on raidz, +which it is incompatible with), but the boot pool is small, so this does +not matter in practice.

    • +
    • The log_spacemap and spacemap_v2 features have been tested and +are safe to use. The boot pool is small, so these do not matter in +practice.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer feature should be safe, but the boot pool is small +enough that it is unlikely to be necessary.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  14. +
  15. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o ashift=12 \
      +    -o autotrim=on \
      +    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
      +    -O compression=lz4 \
      +    -O normalization=formD \
      +    -O relatime=on \
      +    -O canmount=off -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs. +Also, disabling ACLs apparently breaks umask handling with NFSv4.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
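
    Regarding the ashift note above: if you want to see what sector sizes a drive +reports before creating the pool, one way to check is shown below (keep in +mind that some drives report 512 B physical sectors even when the media uses +4 KiB sectors):

    +
    lsblk -o NAME,PHY-SEC,LOG-SEC ${DISK}
    +
    +
    +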

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  16. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +
    +zfs create -o mountpoint=/ \
    +    -o com.ubuntu.zsys:bootfs=yes \
    +    -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID
    +
    +
    +
  4. +
  5. Create datasets:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/usr
    +zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
    +    rpool/ROOT/ubuntu_$UUID/var
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib
    +zfs create rpool/ROOT/ubuntu_$UUID/var/log
    +zfs create rpool/ROOT/ubuntu_$UUID/var/spool
    +
    +zfs create -o canmount=off -o mountpoint=/ \
    +    rpool/USERDATA
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
    +    -o canmount=on -o mountpoint=/root \
    +    rpool/USERDATA/root_$UUID
    +chmod 700 /mnt/root
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to separate these to exclude them from snapshots:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/cache
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs
    +zfs create rpool/ROOT/ubuntu_$UUID/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If desired (the Ubuntu installer creates these):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/games
    +
    +
    +

    If this system will have a GUI:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService
    +zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/snap
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create rpool/ROOT/ubuntu_$UUID/var/www
    +
    +
    +

    For a mirror or raidz topology, create a dataset for /boot/grub:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.ubuntu.zsys:bootfs=no \
    +    rpool/ROOT/ubuntu_$UUID/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on the /tmp dataset, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +

    Note: If you separate a directory required for booting (e.g. /etc) +into its own dataset, you must add it to +ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs. Datasets +with canmount=off (like rpool/usr above) do not matter for this.

    +
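
    As a purely hypothetical illustration (only needed if you actually split out +such a directory), the variable would be set in /etc/default/zfs along these +lines:

    +
    # Hypothetical example; use your actual dataset name.
    +ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/ubuntu_UUID/etc"
    +
    +
    +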
  6. +
  7. Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +
  8. +
  9. Install the minimal system:

    +
    debootstrap jammy /mnt
    +
    +
    +

    The debootstrap command leaves the new system in an unconfigured state. +An alternative to using debootstrap is to copy the entirety of a +working system into the new ZFS root.

    +
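
    If you go the copy route instead, one way (a sketch only; WORKING_SYSTEM is a +placeholder, and you may need additional excludes for your setup) is to pull +the working system’s root filesystem over SSH from the Live CD environment:

    +
    # -x stays on the source root filesystem, similar to tar's
    +# --one-file-system in the Raspberry Pi variant of this HOWTO.
    +rsync -aHAXx --info=progress2 root@WORKING_SYSTEM:/ /mnt/
    +
    +
    +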
  10. +
  11. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  12. +
+
+
+

Step 4: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    hostname HOSTNAME
    +hostname > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +
    Add a line:
    +127.0.1.1       HOSTNAME
    +or if the system has a real name in DNS:
    +127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Configure the network interface:

    +

    Find the interface name:

    +
    ip addr show
    +
    +
    +

    Adjust NAME below to match your interface name:

    +
    vi /mnt/etc/netplan/01-netcfg.yaml
    +
    +
    +
    network:
    +  version: 2
    +  ethernets:
    +    NAME:
    +      dhcp4: true
    +
    +
    +

    Customize this file if the system is not a DHCP client.

    +
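
    For example, a static configuration might look roughly like this (the +addresses shown are placeholders; adjust NAME and the values for your +network):

    +
    network:
    +  version: 2
    +  ethernets:
    +    NAME:
    +      addresses: [192.168.1.10/24]
    +      routes:
    +        - to: default
    +          via: 192.168.1.1
    +      nameservers:
    +        addresses: [192.168.1.1]
    +
    +
    +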
  4. +
  5. Configure the package sources:

    +
    vi /mnt/etc/apt/sources.list
    +
    +
    +
    deb http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu jammy-updates main restricted universe multiverse
    +deb http://archive.ubuntu.com/ubuntu jammy-backports main restricted universe multiverse
    +deb http://security.ubuntu.com/ubuntu jammy-security main restricted universe multiverse
    +
    +
    +
  6. +
  7. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  8. +
  9. Configure a basic system environment:

    +
    apt update
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    dpkg-reconfigure locales tzdata keyboard-configuration console-setup
    +
    +
    +

    Install your preferred text editor:

    +
    apt install --yes nano
    +
    +apt install --yes vim
    +
    +
    +

    Installing the full vim package fixes terminal problems that occur when +using the vim-tiny package (that is installed by debootstrap) over +SSH.

    +
  10. +
  11. For LUKS installs only, setup /etc/crypttab:

    +
    apt install --yes cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    +    none luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around for the fact that cryptsetup does not +support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  12. +
  13. Create the EFI filesystem:

    +

    Perform these steps for both UEFI and legacy (BIOS) booting:

    +
    apt install --yes dosfstools
    +
    +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1
    +mkdir /boot/efi
    +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \
    +    /boot/efi vfat defaults 0 0 >> /etc/fstab
    +mount /boot/efi
    +
    +
    +

    For a mirror or raidz topology, repeat the mkdosfs for the additional +disks, but do not repeat the other commands.

    +

    Note: The -s 1 for mkdosfs is only necessary for drives which +present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster +size (given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

    +
  14. +
  15. Put /boot/grub on the EFI System Partition:

    +

    For a single-disk install only:

    +
    mkdir /boot/efi/grub /boot/grub
    +echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab
    +mount /boot/grub
    +
    +
    +

    This allows GRUB to write to /boot/grub (since it is on a FAT-formatted +ESP instead of on ZFS), which means that /boot/grub/grubenv and the +recordfail feature work as expected: if the boot fails, the normally +hidden GRUB menu will be shown on the next boot. For a mirror or raidz +topology, we do not want GRUB writing to the EFI System Partition. This is +because we duplicate it at install without a mechanism to update the copies +when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, +we keep /boot/grub on the boot pool for the mirror or raidz topologies. +This preserves correct mirroring/raidz behavior, at the expense of being +able to write to /boot/grub/grubenv and thus the recordfail +behavior.

    +
  16. +
  17. Install GRUB/Linux/ZFS in the chroot environment for the new system:

    +

    Choose one of the following options:

    +
      +
    • Install GRUB/Linux/ZFS for legacy (BIOS) booting:

      +
      apt install --yes grub-pc linux-image-generic zfs-initramfs zsys
      +
      +
      +

      Select (using the space bar) all of the disks (not partitions) in your +pool.

      +
    • +
    • Install GRUB/Linux/ZFS for UEFI booting:

      +
      apt install --yes \
      +    grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \
      +    shim-signed zfs-initramfs zsys
      +
      +
      +

      Notes:

      +
        +
      • Ignore any error messages saying ERROR: Couldn't resolve device and +WARNING: Couldn't determine root device. These occur because cryptsetup does +not support ZFS.

      • +
      • Ignore any error messages saying Module zfs not found and +couldn't connect to zsys daemon. The first seems to occur due to a +version mismatch between the Live CD kernel and the chroot environment, +but this is irrelevant since the module is already loaded. The second +may be caused by the first but either way is irrelevant since zed +is started manually later.

      • +
      • For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later. For some reason, +grub-efi-amd64 does not prompt for install_devices here, but does +after a reboot.

      • +
      +
    • +
    +
  18. +
  19. Optional: Remove os-prober:

    +
    apt purge --yes os-prober
    +
    +
    +

    This avoids error messages from update-grub. os-prober is only +necessary in dual-boot configurations.

    +
  20. +
  21. Set a root password:

    +
    passwd
    +
    +
    +
  22. +
  23. Configure swap:

    +

    Choose one of the following options if you want swap:

    +
      +
    • For an unencrypted single-disk install:

      +
      mkswap -f ${DISK}-part2
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \
      +    none swap discard 0 0 >> /etc/fstab
      +swapon -a
      +
      +
      +
    • +
    • For an unencrypted mirror or raidz topology:

      +
      apt install --yes mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +mkswap -f /dev/md0
      +echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \
      +    none swap discard 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) single-disk install:

      +
      apt install --yes cryptsetup
      +
      +echo swap ${DISK}-part2 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    • For an encrypted (LUKS or ZFS native encryption) mirror or raidz +topology:

      +
      apt install --yes cryptsetup mdadm
      +
      +# Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
      +# raid-devices if necessary and specify the actual devices.
      +mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
      +    --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
      +echo swap /dev/md0 /dev/urandom \
      +      swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
      +echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
      +
      +
      +
    • +
    +
  24. +
  25. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  26. +
  27. Setup system groups:

    +
    addgroup --system lpadmin
    +addgroup --system lxd
    +addgroup --system sambashare
    +
    +
    +
  28. +
  29. Optional: Install SSH:

    +
    apt install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  30. +
+
+
+

Step 5: GRUB Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub-probe /boot
    +
    +
    +
  2. +
  3. Refresh the initrd files:

    +
    update-initramfs -c -k all
    +
    +
    +

    Note: Ignore any error messages saying ERROR: Couldn't resolve +device and WARNING: Couldn't determine root device. These occur because +cryptsetup does not support ZFS.

    +
  4. +
  5. Disable memory zeroing:

    +
    vi /etc/default/grub
    +# Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Save and quit (or see the next step).
    +
    +
    +

    This is to address performance regressions.

    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Comment out: GRUB_TIMEOUT_STYLE=hidden
    +# Set: GRUB_TIMEOUT=5
    +# Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
    +# Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Install the boot loader:

    +

    Choose one of the following options:

    +
      +
    • For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub-install $DISK
      +
      +
      +

      Note that you are installing GRUB to the whole disk, not a partition.

      +

      If you are creating a mirror or raidz topology, repeat the +grub-install command for each disk in the pool.

      +
    • +
    • For UEFI booting, install GRUB to the ESP:

      +
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=ubuntu --recheck --no-floppy
      +
      +
      +
    • +
    +
  12. +
  13. Disable grub-initrd-fallback.service

    +

    For a mirror or raidz topology:

    +
    systemctl mask grub-initrd-fallback.service
    +
    +
    +

    This is the service for /boot/grub/grubenv which does not work on +mirrored or raidz topologies. Disabling this keeps it from blocking +subsequent mounts of /boot/grub if that mount ever fails.

    +

    Another option would be to set RequiresMountsFor=/boot/grub via a +drop-in unit, but that is more work to do here for no reason. Hopefully +this bug +will be fixed upstream.

    +
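
    For reference, if you did want the drop-in approach instead of masking, it +would look roughly like this (a sketch, not part of the original procedure):

    +
    mkdir -p /etc/systemd/system/grub-initrd-fallback.service.d
    +cat > /etc/systemd/system/grub-initrd-fallback.service.d/override.conf <<EOF
    +[Unit]
    +RequiresMountsFor=/boot/grub
    +EOF
    +
    +
    +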
  14. +
  15. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on bpool/BOOT/ubuntu_$UUID
    +zfs set canmount=on rpool/ROOT/ubuntu_$UUID
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Once the files have data, stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  16. +
  17. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  18. +
  19. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  20. +
  21. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  22. +
+
+
+

Step 6: First Boot

+
    +
  1. Install GRUB to additional disks:

    +

    For a UEFI mirror or raidz topology only:

    +
    dpkg-reconfigure grub-efi-amd64
    +
    +Select (using the space bar) all of the ESP partitions (partition 1 on
    +each of the pool disks).
    +
    +
    +
  2. +
  3. Create a user account:

    +

    Replace YOUR_USERNAME with your desired username:

    +
    username=YOUR_USERNAME
    +
    +UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null |
    +    tr -dc 'a-z0-9' | cut -c-6)
    +ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
    +zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
    +    -o canmount=on -o mountpoint=/home/$username \
    +    rpool/USERDATA/${username}_$UUID
    +adduser $username
    +
    +cp -a /etc/skel/. /home/$username
    +chown -R $username:$username /home/$username
    +usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username
    +
    +
    +
  4. +
+
+
+

Step 7: Full Software Installation

+
    +
  1. Upgrade the minimal system:

    +
    apt dist-upgrade --yes
    +
    +
    +
  2. +
  3. Install a regular set of software:

    +

    Choose one of the following options:

    +
      +
    • Install a command-line environment only:

      +
      apt install --yes ubuntu-standard
      +
      +
      +
    • +
    • Install a full GUI environment:

      +
      apt install --yes ubuntu-desktop
      +
      +
      +

      Hint: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:

      +
      rm /etc/netplan/01-netcfg.yaml
      +vi /etc/netplan/01-network-manager-all.yaml
      +
      +
      +
      network:
      +  version: 2
      +  renderer: NetworkManager
      +
      +
      +
    • +
    +
  4. +
  5. Optional: Disable log compression:

    +

    As /var/log is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. Also, +if you are making snapshots of /var/log, logrotate’s compression will +actually waste space, as the uncompressed data will live on in the +snapshot. You can edit the files in /etc/logrotate.d by hand to comment +out compress, or use this loop (copy-and-paste highly recommended):

    +
    for file in /etc/logrotate.d/* ; do
    +    if grep -Eq "(^|[^#y])compress" "$file" ; then
    +        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    +    fi
    +done
    +
    +
    +
  6. +
  7. Reboot:

    +
    reboot
    +
    +
    +
  8. +
+
+
+

Step 8: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  4. +
  5. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    sudo vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +sudo systemctl restart ssh
    +
    +
    +
  6. +
  7. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Uncomment: GRUB_TIMEOUT_STYLE=hidden
    +# Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-grub
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  8. +
  9. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  10. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+# Replace “UUID” as appropriate; use zfs list to find it:
+zfs mount rpool/ROOT/ubuntu_UUID
+zfs mount bpool/BOOT/ubuntu_UUID
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+mount -t tmpfs tmpfs /mnt/run
+mkdir /mnt/run/lock
+chroot /mnt /bin/bash --login
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+
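
In concrete terms, that amounts to something like the following (assuming the +module really is named arcsas on your system):

+
echo arcsas >> /etc/initramfs-tools/modules
+update-initramfs -c -k all
+
+
+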

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
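
For example, to wait 15 seconds (the value is arbitrary; tune it to your +hardware), set the variable and rebuild the initramfs so the change is +included:

+
# Or edit the existing ZFS_INITRD_PRE_MOUNTROOT_SLEEP line in /etc/default/zfs.
+echo 'ZFS_INITRD_PRE_MOUNTROOT_SLEEP=15' >> /etc/default/zfs
+update-initramfs -u -k all
+
+
+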
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo apt install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/Ubuntu/index.html b/Getting Started/Ubuntu/index.html new file mode 100644 index 000000000..94b9caa87 --- /dev/null +++ b/Getting Started/Ubuntu/index.html @@ -0,0 +1,183 @@ + + + + + + + Ubuntu — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Ubuntu

+ +
+

Installation

+
+

Note

+

If you want to use ZFS as your root filesystem, see the +Root on ZFS links below instead.

+
+

On Ubuntu, ZFS is included in the default Linux kernel packages. +To install the ZFS utilities, first make sure universe is enabled in +/etc/apt/sources.list:

+
deb http://archive.ubuntu.com/ubuntu <CODENAME> main universe
+
+
+

Then install zfsutils-linux:

+
apt update
+apt install zfsutils-linux
+
+
+
+
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/index.html b/Getting Started/index.html new file mode 100644 index 000000000..3b56217a5 --- /dev/null +++ b/Getting Started/index.html @@ -0,0 +1,262 @@ + + + + + + + Getting Started — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Getting Started

+

To get started with OpenZFS, refer to the documentation provided for your distribution. It covers the recommended installation method and any distribution-specific information. First-time OpenZFS users are encouraged to check out Aaron Toponce’s excellent documentation.

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/openSUSE/index.html b/Getting Started/openSUSE/index.html new file mode 100644 index 000000000..36f83cd51 --- /dev/null +++ b/Getting Started/openSUSE/index.html @@ -0,0 +1,174 @@ + + + + + + + openSUSE — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

openSUSE

+ +
+

Installation

+

If you want to use ZFS as your root filesystem, see the Root on ZFS +links below instead.

+

ZFS packages are not included in the official openSUSE repositories, but the openSUSE filesystems project repository provides packages for various filesystems, including OpenZFS.

+

openSUSE has three main distribution branches: Tumbleweed, Leap, and SLE. ZFS packages are available for all three.

+
+ +
+

Root on ZFS

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/openSUSE/openSUSE Leap Root on ZFS.html b/Getting Started/openSUSE/openSUSE Leap Root on ZFS.html new file mode 100644 index 000000000..d6e71e587 --- /dev/null +++ b/Getting Started/openSUSE/openSUSE Leap Root on ZFS.html @@ -0,0 +1,1442 @@ + + + + + + + openSUSE Leap Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

openSUSE Leap Root on ZFS

+ +
+

Overview

+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
  • This is not an official openSUSE HOWTO page. This document will be updated if openSUSE adds Root on ZFS support in the future. Also, openSUSE’s default system installer, YaST2, does not support ZFS. The method used in this page, setting up the system with zypper and without YaST2, is based on installation methods developed from the experience of people in the community. For more information, please look at the external links.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @Zaryob.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo zypper install python3-pip
    +pip3 install -r docs/requirements.txt
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
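If you choose ZFS native encryption, you can verify the encryption state of the root pool later (after it has been created in Step 2) with, for example:

zfs get encryption,keylocation,keystatus rpool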
+
+

Notes

+
    +
  • You can use the unofficial LroZ (Linux Root On ZFS) script, which is based on this guide and automates most of the steps.

  • +
+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the openSUSE Live CD. If prompted, login with the username +linux without password. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Check your openSUSE Leap release:

    +
    lsb_release -d
    +Description:    openSUSE Leap {$release}
    +
    +
    +
  4. +
  5. Setup and update the repositories:

    +
    sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/$(lsb_release -rs)/filesystems.repo
    +sudo zypper refresh   # Refresh all repositories
    +
    +
    +
  6. +
  7. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo zypper install openssh-server
    +sudo systemctl restart sshd.service
    +
    +
    +

    Hint: You can find your IP address with ip addr show scope global | grep inet. Then, from your main machine, connect with ssh user@IP. Do not forget to set a password for the user with passwd.

    +
  8. +
  9. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  10. +
  11. Become root:

    +
    sudo -i
    +
    +
    +
  12. +
  13. Install ZFS in the Live CD environment:

    +
    zypper install zfs zfs-kmp-default
    +zypper install gdisk dkms
    +modprobe zfs
    +
    +
    +
  14. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    If the disk was previously used in an MD array:

    +
    zypper install mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    Hints:

    +
      +
    • If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool.

    • +
    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +
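    If you want to double-check which features ended up active on the boot pool after creating it, one way (assuming the bpool name used above) is:

    zpool get all bpool | grep feature@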

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer should be safe but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      zypper install cryptsetup
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required).

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
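    Regarding the ashift note above: if you are unsure of your drive’s sector sizes, you can inspect them before creating the pool, for example with:

    lsblk -o NAME,PHY-SEC,LOG-SEC /dev/disk/by-id/scsi-SATA_disk1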

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    • If you want to use the GRUB bootloader, you must set:

      +
      -o feature@async_destroy=enabled \
      +-o feature@bookmarks=enabled \
      +-o feature@embedded_data=enabled \
      +-o feature@empty_bpobj=enabled \
      +-o feature@enabled_txg=enabled \
      +-o feature@extensible_dataset=enabled \
      +-o feature@filesystem_limits=enabled \
      +-o feature@hole_birth=enabled \
      +-o feature@large_blocks=enabled \
      +-o feature@lz4_compress=enabled \
      +-o feature@spacemap_histogram=enabled \
      +-o feature@zpool_checkpoint=enabled \
      +
      +
      +

      for your root pool. This is relevant for GRUB 2.04 and Leap 15.3. Do not run zpool upgrade on this pool, or you will lose the ability to use the grub2-install command.

      +
    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality has been implemented in Ubuntu 20.04 with +the zsys tool, though its dataset layout is more complicated. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
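    As an illustration only (not part of this install), a manually created boot-environment clone could later look something like:

    zfs snapshot rpool/ROOT/suse@before-upgrade
    zfs clone -o canmount=noauto -o mountpoint=/ \
        rpool/ROOT/suse@before-upgrade rpool/ROOT/suse-new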
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse
    +zfs mount rpool/ROOT/suse
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/suse
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                                 rpool/home
    +zfs create -o mountpoint=/root             rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off                 rpool/var
    +zfs create -o canmount=off                 rpool/var/lib
    +zfs create                                 rpool/var/log
    +zfs create                                 rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to exclude these from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /opt on this system:

    +
    zfs create                                 rpool/opt
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create                                 rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off                 rpool/usr
    +zfs create                                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create                                 rpool/var/games
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create                                 rpool/var/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create                                 rpool/var/snap
    +
    +
    +

    If this system will use Flatpak packages:

    +
    zfs create                                 rpool/var/lib/flatpak
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create                                 rpool/var/www
    +
    +
    +

    If this system will use GNOME:

    +
    zfs create                                 rpool/var/lib/AccountsService
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    +
    +
    +

    If this system will use NFS (locking):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
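    For example, if you created rpool/tmp and want to cap its size (the 2G value is only an example):

    zfs set quota=2G rpool/tmp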
  6. +
  7. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs -p
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  8. +
+
+
+

Step 4. Install System

+
    +
  1. Add repositories into chrooting directory:

    +
    zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/non-oss  non-oss
    +zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/oss oss
    +zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/oss  update-oss
    +zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/non-oss update-nonoss
    +
    +
    +
  2. +
  3. Generate repository indexes:

    +
    zypper --root /mnt refresh
    +
    +
    +

    You will get a signing key fingerprint prompt; answer a to always trust the key and continue:

    +
    New repository or package signing key received:
    +
    +Repository:       oss
    +Key Name:         openSUSE Project Signing Key <opensuse@opensuse.org>
    +Key Fingerprint:  22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284
    +Key Created:      Mon May  5 11:37:40 2014
    +Key Expires:      Thu May  2 11:37:40 2024
    +Rpm Name:         gpg-pubkey-3dbdc284-53674dd4
    +
    +Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r):
    +
    +
    +
  4. +
  5. Install openSUSE Leap with zypper:

    +

    If you install the base pattern, zypper will install busybox-grep, which masks the default kernel package. For that reason, the enhanced_base pattern is recommended if you are new to openSUSE. However, enhanced_base pulls in extra packages that can be unwelcome on a server. Choose one of the following:

    +
      +
    1. Install base packages of openSUSE Leap with zypper (Recommended for server):

      +
      zypper --root /mnt install -t pattern base
      +
      +
      +
    2. +
    3. Install enhanced base of openSUSE Leap with zypper (Recommended for desktop):

      +
      zypper --root /mnt install -t pattern enhanced_base
      +
      +
      +
    4. +
    +
  6. +
  7. Install openSUSE zypper package system into chroot:

    +
    zypper --root /mnt install zypper
    +
    +
    +
  8. +
  9. Recommended: Install openSUSE yast2 system into chroot:

    +
    zypper --root /mnt install yast2
    +zypper --root /mnt install -t pattern yast2_basis
    +
    +
    +

    This makes it easier for beginners to configure the network and other settings.

    +
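    For example, after the first boot you can typically open the network configuration module with:

    yast2 lan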
  10. +
+

To install a desktop environment, see the openSUSE wiki

+
+
+

Step 5: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    echo HOSTNAME > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +

    Add a line:

    +
    127.0.1.1       HOSTNAME
    +
    +
    +

    or if the system has a real name in DNS:

    +
    127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Copy network information:

    +
    rm /mnt/etc/resolv.conf
    +cp /etc/resolv.conf /mnt/etc/
    +
    +
    +

    You will reconfigure the network with yast2 later.

    +
  4. +
  5. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  6. +
  7. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +zypper refresh
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    locale -a
    +
    +
    +

    The output must include these locales:

    +
      +
    • C

    • +
    • C.utf8

    • +
    • en_US.utf8

    • +
    • POSIX

    • +
    +

    Find your locale in the output of locale -a, then set it with the following command.

    +
    localectl set-locale LANG=en_US.UTF-8
    +
    +
    +
  8. +
  9. Optional: Reinstallation for stability:

    +

    This may be needed after installation, as some packages can have minor errors. Do this if you wish. Since openSUSE has no command like dpkg-reconfigure, zypper install -f is the stated alternative, but it will reinstall the packages.

    +
    zypper install -f permissions-config iputils ca-certificates  ca-certificates-mozilla pam shadow dbus libutempter0 suse-module-tools util-linux
    +
    +
    +
  10. +
  11. Install kernel:

    +
    zypper install kernel-default kernel-firmware
    +
    +
    +

    Note: If you installed the base pattern, you need to remove busybox-grep in order to install the kernel-default package.

    +
  12. +
  13. Install ZFS in the chroot environment for the new system:

    +
    zypper install lsb-release
    +zypper addrepo https://download.opensuse.org/repositories/filesystems/`lsb_release -rs`/filesystems.repo
    +zypper refresh   # Refresh all repositories
    +zypper install zfs zfs-kmp-default
    +
    +
    +
  14. +
  15. For LUKS installs only, setup /etc/crypttab:

    +
    zypper install cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around because cryptsetup does not support ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
  16. +
  17. For LUKS installs only, fix cryptsetup naming for ZFS:

    +
    echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
    +ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules
    +
    +
    +
  18. +
  19. Recommended: Generate and setup hostid:

    +
    cd /root
    +zypper install wget
    +wget https://github.com/openzfs/zfs/files/4537537/genhostid.sh.gz
    +gzip -d genhostid.sh.gz
    +chmod +x genhostid.sh
    +zgenhostid `/root/genhostid.sh`
    +
    +
    +

    Check that the generated hostid and the system hostid match:

    +
    /root/genhostid.sh
    +hostid
    +
    +
    +
  20. +
  21. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      zypper install grub2-x86_64-pc
      +
      +
      +

      If your processor is 32-bit, use grub2-i386-pc instead of the x86_64 package.

      +
    • +
    • Install GRUB for UEFI booting:

      +
      zypper install grub2-x86_64-efi dosfstools os-prober
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \
      +    /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +
      +
      +

      Notes:

      +
        +
      • +
        The -s 1 for mkdosfs is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.

        +
        +
        +
      • +
      • +
        For a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later.

        +
        +
        +
      • +
      +
    • +
    +
  22. +
  23. Optional: Remove os-prober:

    +
    zypper remove os-prober
    +
    +
    +

    This avoids error messages from update-bootloader. os-prober is only +necessary in dual-boot configurations.

    +
  24. +
  25. Set a root password:

    +
    passwd
    +
    +
    +
  26. +
  27. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/usr/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +
  28. +
  29. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  30. +
+
+
+

Step 6: Kernel Installation

+
    +
  1. Add the zfs module to dracut:

    +
    echo 'zfs'>> /etc/modules-load.d/zfs.conf
    +
    +
    +
  2. +
  3. The kernel version on the Live CD can differ from the currently installed version. Get the kernel version of your new OS:

    +
    kernel_version=$(find /boot/vmlinuz-* | grep -Eo '[[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\-.*-default')
    +
    +
    +
  4. +
  5. Refresh kernel files:

    +
    kernel-install add "$kernel_version" /boot/vmlinuz-"$kernel_version"
    +
    +
    +
  6. +
  7. Refresh the initrd files:

    +
    mkinitrd
    +
    +
    +

    Note: After some installations, the LUKS partition may not be seen by dracut, which results in “Failure occurred during following action: configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED”. To fix this issue, check the cryptsetup installation. Note: Although we add the zfs module configuration under /etc/modules-load.d, if dracut does not pick it up, add it to dracut by force: dracut --kver $(uname -r) --force --add-drivers "zfs"

    +
  8. +
+
+
+

Step 7: Grub2 Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub2-probe /boot
    +
    +
    +

    The output must be zfs

    +
  2. +
  3. If you are having trouble with the grub2-probe command, do this:

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
    +export ZPOOL_VDEV_NAME_PATH=YES
    +
    +
    +

    then go back to the grub2-probe step.

    +
  4. +
  5. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  6. +
  7. Update the boot configuration:

    +
    update-bootloader
    +
    +
    +

    Note: Ignore errors from os-prober, if present. Note: If you have had trouble with the GRUB 2 installation, consider using systemd-boot instead (see Step 8). Note: If this command does not give any output, generate a classic grub.cfg with the following command: grub2-mkconfig -o /boot/grub2/grub.cfg

    +
  8. +
  9. Check that /boot/grub2/grub.cfg has a menuentry with root=ZFS=rpool/ROOT/suse, like this:

    +
    linux   /boot@/vmlinuz-5.3.18-150300.59.60-default root=ZFS=rpool/ROOT/suse
    +
    +
    +

    If not, change /etc/default/grub:

    +
    GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse"
    +
    +
    +

    and repeat the previous step.

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub2-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub2-install command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub2-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=opensuse --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
+
+
+

Step 8: Systemd-Boot Installation

+

Warning: This will break your YaST2 bootloader configuration. Only use this if you cannot fix the problem you are having with GRUB 2. This section exists because, in some cases, GRUB 2 does not see the rpool pool.

+
    +
  1. Install systemd-boot:

    +
    bootctl install
    +
    +
    +

    Note: Only if the previous command replied “Failed to get machine id: No medium found”, you need to run:

    +
    +

    systemd-machine-id-setup

    +
    +

    and then repeat the systemd-boot installation.

    +
  2. +
  3. Configure the boot loader:

    +
    tee -a /boot/efi/loader/loader.conf << EOF
    +default openSUSE_Leap.conf
    +timeout 5
    +console-mode auto
    +EOF
    +
    +
    +
  4. +
  5. Write Entries:

    +
    tee -a /boot/efi/loader/entries/openSUSE_Leap.conf << EOF
    +title   openSUSE Leap
    +linux   /EFI/openSUSE/vmlinuz
    +initrd  /EFI/openSUSE/initrd
    +options root=zfs:rpool/ROOT/suse boot=zfs
    +EOF
    +
    +
    +
  6. +
  7. Copy files into EFI:

    +
    mkdir /boot/efi/EFI/openSUSE
    +cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE
    +
    +
    +
  8. +
  9. Update systemd-boot variables:

    +
    bootctl update
    +
    +
    +
  10. +
+
+
+

Step 9: Filesystem Configuration

+
    +
  1. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/suse
    +zfs set canmount=noauto rpool/ROOT/suse
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
  2. +
+
+
+

Step 10: First Boot

+
    +
  1. Optional: Install SSH:

    +
    zypper install -y openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  2. +
  3. Optional: Snapshot the initial installation:

    +
    zfs snapshot -r bpool/BOOT/suse@install
    +zfs snapshot -r rpool/ROOT/suse@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
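    For example, a dated recursive snapshot before a future upgrade might look like this (the naming scheme is only a suggestion):

    zfs snapshot -r rpool/ROOT/suse@$(date +%Y-%m-%d)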
  4. +
  5. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  6. +
  7. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace username with your desired username:

    +
    zfs create rpool/home/username
    +adduser username
    +
    +cp -a /etc/skel/. /home/username
    +chown -R username:username /home/username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting: Check whether the system is booted in EFI mode:

      +
      efibootmgr -v
      +
      +
      +

      This must return a message containing legacy_boot

      +

      Then reconfigure grub:

      +
      grub2-install $DISK
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment opensuse-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 11: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
  6. +
+
+
+

Step 12: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/suse@install
    +sudo zfs destroy rpool/ROOT/suse@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +systemctl restart sshd
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-bootloader
    +
    +
    +

    Note: Ignore errors from osprober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
zypper install cryptsetup
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/suse
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+

Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo zypper install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.html b/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.html new file mode 100644 index 000000000..a241dc7bf --- /dev/null +++ b/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.html @@ -0,0 +1,1389 @@ + + + + + + + openSUSE Tumbleweed Root on ZFS — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

openSUSE Tumbleweed Root on ZFS

+ +
+

Overview

+
+

Caution

+
    +
  • This HOWTO uses a whole physical disk.

  • +
  • Do not use these instructions for dual-booting.

  • +
  • Backup your data. Any existing data will be lost.

  • +
  • This is not an official openSUSE HOWTO page. This document will be updated if openSUSE adds Root on ZFS support in the future. Also, openSUSE’s default system installer, YaST2, does not support ZFS. The method used in this page, setting up the system with zypper and without YaST2, is based on installation methods developed from the experience of people in the community. For more information, please look at the external links.

  • +
+
+
+

System Requirements

+ +

Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need massive amounts of RAM. Enabling +deduplication is a permanent change that cannot be easily reverted.

+
+
+

Support

+

If you need help, reach out to the community using the Mailing Lists or IRC at +#zfsonlinux on Libera Chat. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention @Zaryob.

+
+
+

Contributing

+
    +
  1. Fork and clone: https://github.com/openzfs/openzfs-docs

  2. +
  3. Install the tools:

    +
    sudo zypper install python3-pip
    +pip3 install -r docs/requirements.txt
    +# Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
    +PATH=$HOME/.local/bin:$PATH
    +
    +
    +
  4. +
  5. Make your changes.

  6. +
  7. Test:

    +
    cd docs
    +make html
    +sensible-browser _build/html/index.html
    +
    +
    +
  8. +
  9. git commit --signoff to a branch, git push, and create a pull +request.

  10. +
+
+
+

Encryption

+

This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available.

+

Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance.

+

ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in /etc/fstab, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once.

+

LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk.

+
+
+
+

Step 1: Prepare The Install Environment

+
    +
  1. Boot the openSUSE Live CD. If prompted, login with the username +live and password live. Connect your system to the Internet as +appropriate (e.g. join your WiFi network). Open a terminal.

  2. +
  3. Setup and update the repositories:

    +
    sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo
    +sudo zypper refresh  # Refresh all repositories
    +
    +
    +
  4. +
  5. Optional: Install and start the OpenSSH server in the Live CD environment:

    +

    If you have a second system, using SSH to access the target system can be +convenient:

    +
    sudo zypper install openssh-server
    +sudo systemctl restart sshd.service
    +
    +
    +

    Hint: You can find your IP address with +ip addr show scope global | grep inet. Then, from your main machine, +connect with ssh user@IP.

    +
  6. +
  7. Disable automounting:

    +

    If the disk has been used before (with partitions at the same offsets), +previous filesystems (e.g. the ESP) will automount if not disabled:

    +
    gsettings set org.gnome.desktop.media-handling automount false
    +
    +
    +
  8. +
  9. Become root:

    +
    sudo -i
    +
    +
    +
  10. +
  11. Install ZFS in the Live CD environment:

    +
    zypper install zfs zfs-kmp-default
    +zypper install gdisk
    +modprobe zfs
    +
    +
    +
  12. +
+
+
+

Step 2: Disk Formatting

+
    +
  1. Set a variable with the disk name:

    +
    DISK=/dev/disk/by-id/scsi-SATA_disk1
    +
    +
    +

    Always use the long /dev/disk/by-id/* aliases with ZFS. Using the +/dev/sd* device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool.

    +

    Hints:

    +
      +
    • ls -la /dev/disk/by-id will list the aliases.

    • +
    • Are you doing this in a virtual machine? If your virtual disk is missing +from /dev/disk/by-id, use /dev/vda if you are using KVM with +virtio; otherwise, read the troubleshooting +section.

    • +
    +
  2. +
  3. If you are re-using a disk, clear it as necessary:

    +

    If the disk was previously used in an MD array:

    +
    zypper install mdadm
    +
    +# See if one or more MD arrays are active:
    +cat /proc/mdstat
    +# If so, stop them (replace ``md0`` as required):
    +mdadm --stop /dev/md0
    +
    +# For an array using the whole disk:
    +mdadm --zero-superblock --force $DISK
    +# For an array using a partition:
    +mdadm --zero-superblock --force ${DISK}-part2
    +
    +
    +

    Clear the partition table:

    +
    sgdisk --zap-all $DISK
    +
    +
    +

    If you get a message about the kernel still using the old partition table, +reboot and start over (except that you can skip this step).

    +
  4. +
  5. Partition your disk(s):

    +

    Run this if you need legacy (BIOS) booting:

    +
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
    +
    +
    +

    Run this for UEFI booting (for use now or in the future):

    +
    sgdisk     -n2:1M:+512M   -t2:EF00 $DISK
    +
    +
    +

    Run this for the boot pool:

    +
    sgdisk     -n3:0:+1G      -t3:BF01 $DISK
    +
    +
    +

    Choose one of the following options:

    +
      +
    • Unencrypted or ZFS native encryption:

      +
      sgdisk     -n4:0:0        -t4:BF00 $DISK
      +
      +
      +
    • +
    • LUKS:

      +
      sgdisk     -n4:0:0        -t4:8309 $DISK
      +
      +
      +
    • +
    +

    If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool.

    +
  6. +
  7. Create the boot pool:

    +
    zpool create \
    +    -o cachefile=/etc/zfs/zpool.cache \
    +    -o ashift=12 -d \
    +    -o feature@async_destroy=enabled \
    +    -o feature@bookmarks=enabled \
    +    -o feature@embedded_data=enabled \
    +    -o feature@empty_bpobj=enabled \
    +    -o feature@enabled_txg=enabled \
    +    -o feature@extensible_dataset=enabled \
    +    -o feature@filesystem_limits=enabled \
    +    -o feature@hole_birth=enabled \
    +    -o feature@large_blocks=enabled \
    +    -o feature@lz4_compress=enabled \
    +    -o feature@spacemap_histogram=enabled \
    +    -o feature@zpool_checkpoint=enabled \
    +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    +    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    +    -O mountpoint=/boot -R /mnt \
    +    bpool ${DISK}-part3
    +
    +
    +

    You should not need to customize any of the options for the boot pool.

    +

    GRUB does not support all of the zpool features. See spa_feature_names +in grub-core/fs/zfs/zfs.c. +This step creates a separate boot pool for /boot with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB.

    +

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    bpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part3 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part3
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. The bpool convention originated in this HOWTO.

    • +
    +

    Feature Notes:

    +
      +
    • The allocation_classes feature should be safe to use. However, unless +one is using it (i.e. a special vdev), there is no point to enabling +it. It is extremely unlikely that someone would use this feature for a +boot pool. If one cares about speeding up the boot pool, it would make +more sense to put the whole pool on the faster disk rather than using it +as a special vdev.

    • +
    • The project_quota feature has been tested and is safe to use. This +feature is extremely unlikely to matter for the boot pool.

    • +
    • The resilver_defer feature should be safe, but the boot pool is small enough +that it is unlikely to be necessary.

    • +
    • The spacemap_v2 feature has been tested and is safe to use. The boot +pool is small, so this does not matter in practice.

    • +
    • As a read-only compatible feature, the userobj_accounting feature +should be compatible in theory, but in practice, GRUB can fail with an +“invalid dnode type” error. This feature does not matter for /boot +anyway.

    • +
    +
  8. +
  9. Create the root pool:

    +

    Choose one of the following options:

    +
      +
    • Unencrypted:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • ZFS native encryption:

      +
      zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O encryption=on \
      +    -O keylocation=prompt -O keyformat=passphrase \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool ${DISK}-part4
      +
      +
      +
    • +
    • LUKS:

      +
      zypper install cryptsetup
      +cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
      +cryptsetup luksOpen ${DISK}-part4 luks1
      +zpool create \
      +    -o cachefile=/etc/zfs/zpool.cache \
      +    -o ashift=12 \
      +    -O acltype=posixacl -O canmount=off -O compression=lz4 \
      +    -O dnodesize=auto -O normalization=formD -O relatime=on \
      +    -O xattr=sa -O mountpoint=/ -R /mnt \
      +    rpool /dev/mapper/luks1
      +
      +
      +
    • +
    +

    Notes:

    +
      +
    • The use of ashift=12 is recommended here because many drives +today have 4 KiB (or larger) physical sectors, even though they +present 512 B logical sectors. Also, a future replacement drive may +have 4 KiB physical sectors (in which case ashift=12 is desirable) +or 4 KiB logical sectors (in which case ashift=12 is required). A quick way to check what sector sizes your drives report is shown after these notes.

    • +
    • Setting -O acltype=posixacl enables POSIX ACLs globally. If you +do not want this, remove that option, but later add +-o acltype=posixacl (note: lowercase “o”) to the zfs create +for /var/log, as journald requires ACLs

    • +
    • Setting normalization=formD eliminates some corner cases relating +to UTF-8 filename normalization. It also implies utf8only=on, +which means that only UTF-8 filenames are allowed. If you care to +support non-UTF-8 filenames, do not use this option. For a discussion +of why requiring UTF-8 filenames may be a bad idea, see The problems +with enforced UTF-8 only filenames.

    • +
    • recordsize is unset (leaving it at the default of 128 KiB). If you +want to tune it (e.g. -O recordsize=1M), see these various blog +posts.

    • +
    • Setting relatime=on is a middle ground between classic POSIX +atime behavior (with its significant performance impact) and +atime=off (which provides the best performance by completely +disabling atime updates). Since Linux 2.6.30, relatime has been +the default for other filesystems. See RedHat’s documentation +for further information.

    • +
    • Setting xattr=sa vastly improves the performance of extended +attributes. +Inside ZFS, extended attributes are used to implement POSIX ACLs. +Extended attributes can also be used by user-space applications. +They are used by some desktop GUI applications. +They can be used by Samba to store Windows ACLs and DOS attributes; +they are required for a Samba Active Directory domain controller. +Note that xattr=sa is Linux-specific. If you move your +xattr=sa pool to another OpenZFS implementation besides ZFS-on-Linux, +extended attributes will not be readable (though your data will be). If +portability of extended attributes is important to you, omit the +-O xattr=sa above. Even if you do not want xattr=sa for the whole +pool, it is probably fine to use it for /var/log.

    • +
    • Make sure to include the -part4 portion of the drive path. If you +forget that, you are specifying the whole disk, which ZFS will then +re-partition, and you will lose the bootloader partition(s).

    • +
    • ZFS native encryption now +defaults to aes-256-gcm.

    • +
    • For LUKS, the key size chosen is 512 bits. However, XTS mode requires two +keys, so the LUKS key is split in half. Thus, -s 512 means AES-256.

    • +
    • Your passphrase will likely be the weakest link. Choose wisely. See +section 5 of the cryptsetup FAQ +for guidance.

    • +
    +
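
    Regarding the ashift note above, one way to see what sector sizes your drives report (purely informational) is:

    +
    lsblk -o NAME,PHY-SEC,LOG-SEC
    +
    +
    +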

    Hints:

    +
      +
    • If you are creating a mirror topology, create the pool using:

      +
      zpool create \
      +    ... \
      +    rpool mirror \
      +    /dev/disk/by-id/scsi-SATA_disk1-part4 \
      +    /dev/disk/by-id/scsi-SATA_disk2-part4
      +
      +
      +
    • +
    • For raidz topologies, replace mirror in the above command with +raidz, raidz2, or raidz3 and list the partitions from +the additional disks.

    • +
    • When using LUKS with mirror or raidz topologies, use +/dev/mapper/luks1, /dev/mapper/luks2, etc., which you will have +to create using cryptsetup.

    • +
    • The pool name is arbitrary. If changed, the new name must be used +consistently. On systems that can automatically install to ZFS, the root +pool is named rpool by default.

    • +
    +
  10. +
+
+
+

Step 3: System Installation

+
    +
  1. Create filesystem datasets to act as containers:

    +
    zfs create -o canmount=off -o mountpoint=none rpool/ROOT
    +zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    +
    +
    +

    On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through pkg image-update or +beadm. Similar functionality has been implemented in Ubuntu 20.04 with +the zsys tool, though its dataset layout is more complicated. Even +without such a tool, the rpool/ROOT and bpool/BOOT containers can still +be used for manually created clones. That said, this HOWTO assumes a single +filesystem for /boot for simplicity.

    +
  2. +
  3. Create filesystem datasets for the root and boot filesystems:

    +
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse
    +zfs mount rpool/ROOT/suse
    +
    +zfs create -o mountpoint=/boot bpool/BOOT/suse
    +
    +
    +

    With ZFS, it is not normally necessary to use a mount command (either +mount or zfs mount). This situation is an exception because of +canmount=noauto.

    +
  4. +
  5. Create datasets:

    +
    zfs create                                 rpool/home
    +zfs create -o mountpoint=/root             rpool/home/root
    +chmod 700 /mnt/root
    +zfs create -o canmount=off                 rpool/var
    +zfs create -o canmount=off                 rpool/var/lib
    +zfs create                                 rpool/var/log
    +zfs create                                 rpool/var/spool
    +
    +
    +

    The datasets below are optional, depending on your preferences and/or +software choices.

    +

    If you wish to exclude these from snapshots:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/cache
    +zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp
    +chmod 1777 /mnt/var/tmp
    +
    +
    +

    If you use /opt on this system:

    +
    zfs create                                 rpool/opt
    +
    +
    +

    If you use /srv on this system:

    +
    zfs create                                 rpool/srv
    +
    +
    +

    If you use /usr/local on this system:

    +
    zfs create -o canmount=off                 rpool/usr
    +zfs create                                 rpool/usr/local
    +
    +
    +

    If this system will have games installed:

    +
    zfs create                                 rpool/var/games
    +
    +
    +

    If this system will store local email in /var/mail:

    +
    zfs create                                 rpool/var/spool/mail
    +
    +
    +

    If this system will use Snap packages:

    +
    zfs create                                 rpool/var/snap
    +
    +
    +

    If this system will use Flatpak packages:

    +
    zfs create                                 rpool/var/lib/flatpak
    +
    +
    +

    If you use /var/www on this system:

    +
    zfs create                                 rpool/var/www
    +
    +
    +

    If this system will use GNOME:

    +
    zfs create                                 rpool/var/lib/AccountsService
    +
    +
    +

    If this system will use Docker (which manages its own datasets & +snapshots):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    +
    +
    +

    If this system will use NFS (locking):

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/nfs
    +
    +
    +

    Mount a tmpfs at /run:

    +
    mkdir /mnt/run
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +
    +

    A tmpfs is recommended later, but if you want a separate dataset for +/tmp:

    +
    zfs create -o com.sun:auto-snapshot=false  rpool/tmp
    +chmod 1777 /mnt/tmp
    +
    +
    +

    The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data.

    +

    If you do nothing extra, /tmp will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for /tmp, +as shown above. This keeps the /tmp data out of snapshots of your root +filesystem. It also allows you to set a quota on rpool/tmp, if you want +to limit the maximum space used. Otherwise, you can use a tmpfs (RAM +filesystem) later.

    +
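
    Optional: review the dataset layout you just created:

    +
    zfs list -o name,canmount,mountpoint -r rpool bpool
    +
    +
    +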
  6. +
  7. Copy in zpool.cache:

    +
    mkdir /mnt/etc/zfs -p
    +cp /etc/zfs/zpool.cache /mnt/etc/zfs/
    +
    +
    +
  8. +
+
+
+

Step 4: Install System

+
    +
  1. Add repositories to the chroot directory:

    +
    zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/non-oss/ non-oss
    +zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/oss/ oss
    +
    +
    +
  2. +
  3. Generate repository indexes:

    +
    zypper --root /mnt refresh
    +
    +
    +

    You will get a fingerprint exception; press a to trust the key always and continue:

    +
    New repository or package signing key received:
    +
    +Repository:       oss
    +Key Name:         openSUSE Project Signing Key <opensuse@opensuse.org>
    +Key Fingerprint:  22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284
    +Key Created:      Mon May  5 11:37:40 2014
    +Key Expires:      Thu May  2 11:37:40 2024
    +Rpm Name:         gpg-pubkey-3dbdc284-53674dd4
    +
    +Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r):
    +
    +
    +
  4. +
  5. Install openSUSE Tumbleweed with zypper:

    +

    If you install the base pattern, zypper will install busybox-grep, which masks the default kernel package. +For this reason, installing the enhanced_base pattern is recommended if you are new to openSUSE. However, enhanced_base +pulls in extra packages that may be unwanted on a server, so choose one of the following:

    +
      +
    1. Install base packages of openSUSE Tumbleweed with zypper (Recommended for server):

      +
      zypper --root /mnt install -t pattern base
      +
      +
      +
    2. +
    3. Install enhanced base of openSUSE Tumbleweed with zypper (Recommended for desktop):

      +
      zypper --root /mnt install -t pattern enhanced_base
      +
      +
      +
    4. +
    +
  6. +
  7. Install openSUSE zypper package system into chroot:

    +
    zypper --root /mnt install zypper
    +
    +
    +
  8. +
  9. Recommended: Install openSUSE yast2 system into chroot:

    +
    zypper --root /mnt install yast2
    +
    +
    +
  10. +
+
+
+

Note

+

If your /etc/resolv.conf file is empty, run this command:

+

echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf

+
+

This makes it easier for beginners to configure the network and other settings.

+
+

To install a desktop environment, see the openSUSE wiki

+
+
+

Step 5: System Configuration

+
    +
  1. Configure the hostname:

    +

    Replace HOSTNAME with the desired hostname:

    +
    echo HOSTNAME > /mnt/etc/hostname
    +vi /mnt/etc/hosts
    +
    +
    +

    Add a line:

    +
    127.0.1.1       HOSTNAME
    +
    +
    +

    or if the system has a real name in DNS:

    +
    127.0.1.1       FQDN HOSTNAME
    +
    +
    +

    Hint: Use nano if you find vi confusing.

    +
  2. +
  3. Copy network information:

    +
    cp /etc/resolv.conf /mnt/etc
    +
    +
    +

    You will reconfigure the network with yast2 later.

    +
    +

    Note

    +

    If your /etc/resolv.conf file is empty, run this command:

    +

    echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf

    +
    +
  4. +
  5. Bind the virtual filesystems from the LiveCD environment to the new +system and chroot into it:

    +
    mount --make-private --rbind /dev  /mnt/dev
    +mount --make-private --rbind /proc /mnt/proc
    +mount --make-private --rbind /sys  /mnt/sys
    +mount -t tmpfs tmpfs /mnt/run
    +mkdir /mnt/run/lock
    +
    +chroot /mnt /usr/bin/env DISK=$DISK bash --login
    +
    +
    +

    Note: This is using --rbind, not --bind.

    +
  6. +
  7. Configure a basic system environment:

    +
    ln -s /proc/self/mounts /etc/mtab
    +zypper refresh
    +
    +
    +

    Even if you prefer a non-English system language, always ensure that +en_US.UTF-8 is available:

    +
    locale -a
    +
    +
    +

    The output must include these locales:

    +
      +
    • C

    • +
    • C.UTF-8

    • +
    • en_US.utf8

    • +
    • POSIX

    • +
    +

    Find your locale in the output of locale -a, then set it with the following command:

    +
    localectl set-locale LANG=en_US.UTF-8
    +
    +
    +
  8. +
  9. Optional: Reinstallation for stability:

    +

    Some packages may end up with minor errors after installation. If you wish, reinstall them. +Since there is no command like dpkg-reconfigure in openSUSE, zypper install -f is the closest +alternative, but note that it reinstalls the listed packages.

    +
    zypper install -f permissions-config iputils ca-certificates  ca-certificates-mozilla pam shadow dbus-1 libutempter0 suse-module-tools util-linux
    +
    +
    +
  10. +
  11. Install kernel:

    +
    zypper install kernel-default kernel-firmware
    +
    +
    +
    +

    Note

    +

    If you installed the base pattern, you need to remove busybox-grep in order to install the kernel-default package.

    +
    +
  12. +
  13. Install ZFS in the chroot environment for the new system:

    +
    zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo
    +zypper refresh   # Refresh all repositories
    +zypper install zfs
    +
    +
    +
  14. +
  15. For LUKS installs only, setup /etc/crypttab:

    +
    zypper install cryptsetup
    +
    +echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \
    +    luks,discard,initramfs > /etc/crypttab
    +
    +
    +

    The use of initramfs is a work-around because cryptsetup does not support +ZFS.

    +

    Hint: If you are creating a mirror or raidz topology, repeat the +/etc/crypttab entries for luks2, etc. adjusting for each disk.

    +
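
    For example, the entry for a second disk could look like the following sketch, assuming a hypothetical +DISK2 variable holding that disk’s /dev/disk/by-id path:

    +
    # DISK2 is a hypothetical variable for the second disk's /dev/disk/by-id path.
    +echo luks2 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK2}-part4) none \
    +    luks,discard,initramfs >> /etc/crypttab
    +
    +
    +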
  16. +
  17. For LUKS installs only, fix cryptsetup naming for ZFS:

    +
    echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
    +ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules
    +
    +
    +
  18. +
  19. Install GRUB

    +

    Choose one of the following options:

    +
      +
    • Install GRUB for legacy (BIOS) booting:

      +
      zypper install grub2-i386-pc
      +
      +
      +
    • +
    • Install GRUB for UEFI booting:

      +
      zypper install grub2-x86_64-efi dosfstools os-prober
      +mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
      +mkdir /boot/efi
      +echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \
      +   /boot/efi vfat defaults 0 0 >> /etc/fstab
      +mount /boot/efi
      +
      +
      +

      Notes:

      +
        +
      • +
        The -s 1 for mkdosfs is only necessary for drives which present +4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size +(given the partition size of 512 MiB) for FAT32. It also works fine on +drives which present 512 B sectors.

        +
        +
        +
      • +
      • +
        For a mirror or raidz topology, this step only installs GRUB on the +first disk. The other disk(s) will be handled later.

        +
        +
        +
      • +
      +
    • +
    +
  20. +
  21. Optional: Remove os-prober:

    +
    zypper remove os-prober
    +
    +
    +

    This avoids error messages from update-bootloader. os-prober is only +necessary in dual-boot configurations.

    +
  22. +
  23. Set a root password:

    +
    passwd
    +
    +
    +
  24. +
  25. Enable importing bpool

    +

    This ensures that bpool is always imported, regardless of whether +/etc/zfs/zpool.cache exists, whether it is in the cachefile or not, +or whether zfs-import-scan.service is enabled.

    +
    vi /etc/systemd/system/zfs-import-bpool.service
    +
    +
    +
    [Unit]
    +DefaultDependencies=no
    +Before=zfs-import-scan.service
    +Before=zfs-import-cache.service
    +
    +[Service]
    +Type=oneshot
    +RemainAfterExit=yes
    +ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    +# Work-around to preserve zpool cache:
    +ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    +ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
    +
    +[Install]
    +WantedBy=zfs-import.target
    +
    +
    +
    systemctl enable zfs-import-bpool.service
    +
    +
    +
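
    To double-check that the unit is registered (optional), the following should print enabled:

    +
    systemctl is-enabled zfs-import-bpool.service
    +
    +
    +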
  26. +
  27. Optional (but recommended): Mount a tmpfs to /tmp

    +

    If you chose to create a /tmp dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put /tmp on a +tmpfs (RAM filesystem) by enabling the tmp.mount unit.

    +
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    +systemctl enable tmp.mount
    +
    +
    +
  28. +
+
+
+

Step 6: Kernel Installation

+
    +
  1. Add zfs module into dracut:

    +
    echo 'zfs'>> /etc/modules-load.d/zfs.conf
    +
    +
    +
  2. +
  3. Refresh kernel files:

    +
    kernel-install add $(uname -r) /boot/vmlinuz-$(uname -r)
    +
    +
    +
  4. +
  5. Refresh the initrd files:

    +
    mkinitrd
    +
    +
    +

    Note: After some installations, dracut cannot see the LUKS partition and +prints “Failure occurred during the following action: +configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED”. To fix this +issue, check your cryptsetup installation. +Note: Although the zfs module is listed in /etc/modules-load.d, if dracut does not pick it up, add it by force: +dracut --kver $(uname -r) --force --add-drivers "zfs"

    +
  6. +
+
+
+

Step 7: Grub2 Installation

+
    +
  1. Verify that the ZFS boot filesystem is recognized:

    +
    grub2-probe /boot
    +
    +
    +

    The output must be zfs

    +
  2. +
  3. If you have trouble with the grub2-probe command, do the following:

    +
    echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
    +export ZPOOL_VDEV_NAME_PATH=YES
    +
    +
    +

    then go back to the grub2-probe step.

    +
  4. +
  5. Workaround GRUB’s missing zpool-features support:

    +
    vi /etc/default/grub
    +# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse"
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Make debugging GRUB easier:

    +
    vi /etc/default/grub
    +# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
    +# Uncomment: GRUB_TERMINAL=console
    +# Save and quit.
    +
    +
    +

    Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired.

    +
  8. +
  9. Update the boot configuration:

    +
    update-bootloader
    +
    +
    +

    Note: Ignore errors from os-prober, if present. +Note: If you have had trouble with the grub2 installation, consider using systemd-boot instead (see Step 8). +Note: If this command does not give any output, fall back to classic grub.cfg generation with the following command: +grub2-mkconfig -o /boot/grub2/grub.cfg

    +
  10. +
  11. Install the boot loader:

    +
      +
    1. For legacy (BIOS) booting, install GRUB to the MBR:

      +
      grub2-install $DISK
      +
      +
      +
    2. +
    +

    Note that you are installing GRUB to the whole disk, not a partition.

    +

    If you are creating a mirror or raidz topology, repeat the grub2-install +command for each disk in the pool.

    +
      +
    1. For UEFI booting, install GRUB to the ESP:

      +
      grub2-install --target=x86_64-efi --efi-directory=/boot/efi \
      +    --bootloader-id=opensuse --recheck --no-floppy
      +
      +
      +

      It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later.

      +
    2. +
    +
  12. +
+
+
+

Step 8: Systemd-Boot Installation

+

Warning: This will break your YaST2 bootloader configuration. Only use this section if you +cannot fix the problem you are having with grub2. It exists +because grub2 sometimes fails to see the rpool pool.

+
    +
  1. Install systemd-boot:

    +
    bootctl install
    +
    +
    +
  2. +
  3. Configure bootloader configuration:

    +
    tee -a /boot/efi/loader/loader.conf << EOF
    +default openSUSE_Tumbleweed.conf
    +timeout 5
    +console-mode auto
    +EOF
    +
    +
    +
  4. +
  5. Write Entries:

    +
    tee -a /boot/efi/loader/entries/openSUSE_Tumbleweed.conf << EOF
    +title   openSUSE Tumbleweed
    +linux   /EFI/openSUSE/vmlinuz
    +initrd  /EFI/openSUSE/initrd
    +options root=zfs=rpool/ROOT/suse boot=zfs
    +EOF
    +
    +
    +
  6. +
  7. Copy files into EFI:

    +
    mkdir /boot/efi/EFI/openSUSE
    +cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE
    +
    +
    +
  8. +
  9. Update systemd-boot variables:

    +
    bootctl update
    +
    +
    +
  10. +
+
+
+

Step 9: Filesystem Configuration

+
    +
  1. Fix filesystem mount ordering:

    +

    We need to activate zfs-mount-generator. This makes systemd aware of +the separate mountpoints, which is important for things like /var/log +and /var/tmp. In turn, rsyslog.service depends on var-log.mount +by way of local-fs.target and services using the PrivateTmp feature +of systemd automatically use After=var-tmp.mount.

    +
    mkdir /etc/zfs/zfs-list.cache
    +touch /etc/zfs/zfs-list.cache/bpool
    +touch /etc/zfs/zfs-list.cache/rpool
    +ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
    +zed -F &
    +
    +
    +

    Verify that zed updated the cache by making sure these are not empty:

    +
    cat /etc/zfs/zfs-list.cache/bpool
    +cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +

    If either is empty, force a cache update and check again:

    +
    zfs set canmount=on     bpool/BOOT/suse
    +zfs set canmount=noauto rpool/ROOT/suse
    +
    +
    +

    If they are still empty, stop zed (as below), start zed (as above) and try +again.

    +

    Stop zed:

    +
    fg
    +Press Ctrl-C.
    +
    +
    +

    Fix the paths to eliminate /mnt:

    +
    sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
    +
    +
    +
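
    As a quick check, the mountpoints listed in the cache files should now reference / rather than /mnt:

    +
    cat /etc/zfs/zfs-list.cache/rpool
    +
    +
    +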
  2. +
+
+
+

Step 10: First Boot

+
    +
  1. Optional: Install SSH:

    +
    zypper install --yes openssh-server
    +
    +vi /etc/ssh/sshd_config
    +# Set: PermitRootLogin yes
    +
    +
    +
  2. +
  3. Optional: Snapshot the initial installation:

    +
    zfs snapshot bpool/BOOT/suse@install
    +zfs snapshot rpool/ROOT/suse@install
    +
    +
    +

    In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space.

    +
  4. +
  5. Exit from the chroot environment back to the LiveCD environment:

    +
    exit
    +
    +
    +
  6. +
  7. Run these commands in the LiveCD environment to unmount all +filesystems:

    +
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
    +    xargs -i{} umount -lf {}
    +zpool export -a
    +
    +
    +
  8. +
  9. Reboot:

    +
    reboot
    +
    +
    +

    Wait for the newly installed system to boot normally. Login as root.

    +
  10. +
  11. Create a user account:

    +

    Replace username with your desired username:

    +
    zfs create rpool/home/username
    +adduser username
    +
    +cp -a /etc/skel/. /home/username
    +chown -R username:username /home/username
    +usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
    +
    +
    +
  12. +
  13. Mirror GRUB

    +

    If you installed to multiple disks, install GRUB on the additional +disks.

    +
      +
    • For legacy (BIOS) booting: +Check to be sure we are using EFI mode:

      +
      efibootmgr -v
      +
      +
      +

      This must return a message containing legacy_boot

      +

      Then reconfigure grub:

      +
      grub2-install $DISK
      +
      +
      +

      Hit enter until you get to the device selection screen. +Select (using the space bar) all of the disks (not partitions) in your pool.

      +
    • +
    • For UEFI booting:

      +
      umount /boot/efi
      +
      +
      +

      For the second and subsequent disks (increment opensuse-2 to -3, etc.):

      +
      dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
      +   of=/dev/disk/by-id/scsi-SATA_disk2-part2
      +efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
      +    -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi'
      +
      +mount /boot/efi
      +
      +
      +
    • +
    +
  14. +
+
+
+

Step 11: Optional: Configure Swap

+

Caution: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is a bug report upstream.

+
    +
  1. Create a volume dataset (zvol) for use as a swap device:

    +
    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    +    -o logbias=throughput -o sync=always \
    +    -o primarycache=metadata -o secondarycache=none \
    +    -o com.sun:auto-snapshot=false rpool/swap
    +
    +
    +

    You can adjust the size (the 4G part) to your needs.

    +

    The compression algorithm is set to zle because it is the cheapest +available algorithm. As this guide recommends ashift=12 (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior.

    +
  2. +
  3. Configure the swap device:

    +

    Caution: Always use long /dev/zvol aliases in configuration +files. Never use a short /dev/zdX device name.

    +
    mkswap -f /dev/zvol/rpool/swap
    +echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
    +echo RESUME=none > /etc/initramfs-tools/conf.d/resume
    +
    +
    +

    The RESUME=none is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear.

    +
  4. +
  5. Enable the swap device:

    +
    swapon -av
    +
    +
    +
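
    Optionally, confirm that the zvol-backed swap is active:

    +
    swapon --show
    +free -h
    +
    +
    +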
  6. +
+
+
+

Step 12: Final Cleanup

+
    +
  1. Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally.

  2. +
  3. Optional: Delete the snapshots of the initial installation:

    +
    sudo zfs destroy bpool/BOOT/suse@install
    +sudo zfs destroy rpool/ROOT/suse@install
    +
    +
    +
  4. +
  5. Optional: Disable the root password:

    +
    sudo usermod -p '*' root
    +
    +
    +
  6. +
  7. Optional (but highly recommended): Disable root SSH logins:

    +

    If you installed SSH earlier, revert the temporary change:

    +
    vi /etc/ssh/sshd_config
    +# Remove: PermitRootLogin yes
    +
    +systemctl restart sshd
    +
    +
    +
  8. +
  9. Optional: Re-enable the graphical boot process:

    +

    If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer.

    +
    sudo vi /etc/default/grub
    +# Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
    +# Comment out GRUB_TERMINAL=console
    +# Save and quit.
    +
    +sudo update-bootloader
    +
    +
    +

    Note: Ignore errors from os-prober, if present.

    +
  10. +
  11. Optional: For LUKS installs only, backup the LUKS header:

    +
    sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    +    --header-backup-file luks1-header.dat
    +
    +
    +

    Store that backup somewhere safe (e.g. cloud storage). It is protected by +your LUKS passphrase, but you may wish to use additional encryption.

    +

    Hint: If you created a mirror or raidz topology, repeat this for each +LUKS volume (luks2, etc.).

    +
  12. +
+
+
+

Troubleshooting

+
+

Rescuing using a Live CD

+

Go through Step 1: Prepare The Install Environment.

+

For LUKS, first unlock the disk(s):

+
zypper install cryptsetup
+cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
+# Repeat for additional disks, if this is a mirror or raidz topology.
+
+
+

Mount everything correctly:

+
zpool export -a
+zpool import -N -R /mnt rpool
+zpool import -N -R /mnt bpool
+zfs load-key -a
+zfs mount rpool/ROOT/suse
+zfs mount -a
+
+
+

If needed, you can chroot into your installed environment:

+
mount --make-private --rbind /dev  /mnt/dev
+mount --make-private --rbind /proc /mnt/proc
+mount --make-private --rbind /sys  /mnt/sys
+chroot /mnt /bin/bash --login
+mount /boot/efi
+mount -a
+
+
+

Do whatever you need to do to fix your system.

+

When done, cleanup:

+
exit
+mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+    xargs -i{} umount -lf {}
+zpool export -a
+reboot
+
+
+
+
+

Areca

+

Systems that require the arcsas blob driver should add it to the +/etc/initramfs-tools/modules file and run update-initramfs -c -k all.

+

Upgrade or downgrade the Areca driver if something like +RIP: 0010:[<ffffffff8101b316>]  [<ffffffff8101b316>] native_read_tsc+0x6/0x20 +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message.

+
+
+

MPT2SAS

+

Most problem reports for this tutorial involve mpt2sas hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware.

+

The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.

+

Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool.

+
+
+

QEMU/KVM/XEN

+

Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890).

+

To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:

+
sudo zypper install ovmf
+sudo vi /etc/libvirt/qemu.conf
+
+
+

Uncomment these lines:

+
nvram = [
+   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
+   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+]
+
+
+
sudo systemctl restart libvirtd.service
+
+
+
+
+

VMware

+
    +
  • Set disk.EnableUUID = "TRUE" in the vmx file or vsphere configuration. +Doing this ensures that /dev/disk aliases are created in the guest.

  • +
+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/License.html b/License.html new file mode 100644 index 000000000..02b086d91 --- /dev/null +++ b/License.html @@ -0,0 +1,152 @@ + + + + + + + License — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

License

+
    +
  • The OpenZFS software is licensed under the Common Development and Distribution License +(CDDL) unless otherwise noted.

  • +
  • The OpenZFS documentation content is licensed under a Creative Commons Attribution-ShareAlike +license (CC BY-SA 3.0) +unless otherwise noted.

  • +
  • OpenZFS is an associated project of SPI (Software in the Public Interest). SPI is a 501(c)(3) nonprofit +organization which handles the donations, finances, and legal holdings of the project.

  • +
+
+

Note

+

The Linux Kernel is licensed under the GNU General Public License +Version 2 (GPLv2). While +both licenses (the CDDL and the GPLv2) are free and open source licenses, they are +restrictive licenses. The combination of them causes problems because it +prevents using pieces of code exclusively available under one license +with pieces of code exclusively available under the other in the same binary. +In the case of the Linux Kernel, this prevents us from distributing OpenZFS +as part of the Linux Kernel binary. However, there is nothing in either license +that prevents distributing it in the form of a binary module or in the form +of source code.

+

Additional reading and opinions:

+ +
+

CC BY-SA 3.0: Creative Commons License

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Async Write.html b/Performance and Tuning/Async Write.html new file mode 100644 index 000000000..9e35ebd29 --- /dev/null +++ b/Performance and Tuning/Async Write.html @@ -0,0 +1,159 @@ + + + + + + + Async Writes — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Async Writes

+

The number of concurrent operations issued for the async write I/O class +follows a piece-wise linear function defined by a few adjustable points.

+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+
+

Until the amount of dirty data exceeds a minimum percentage of the dirty +data allowed in the pool, the I/O scheduler will limit the number of +concurrent operations to the minimum. As that threshold is crossed, the +number of concurrent operations issued increases linearly to the maximum +at the specified maximum percentage of the dirty data allowed in the +pool.

+
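
On Linux, the current values of these tunables can typically be inspected under +/sys/module/zfs/parameters (a quick check; exact paths and availability may vary by +platform and OpenZFS version):

+
grep . /sys/module/zfs/parameters/zfs_vdev_async_write_min_active \
+    /sys/module/zfs/parameters/zfs_vdev_async_write_max_active \
+    /sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent \
+    /sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent \
+    /sys/module/zfs/parameters/zfs_dirty_data_max
+
+
+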

Ideally, the amount of dirty data on a busy pool will stay in the sloped +part of the function between +zfs_vdev_async_write_active_min_dirty_percent and +zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the maximum +percentage, this indicates that the rate of incoming data is greater +than the rate that the backend storage can handle. In this case, we must +further throttle incoming writes, as described in the next section.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Hardware.html b/Performance and Tuning/Hardware.html new file mode 100644 index 000000000..6079f054c --- /dev/null +++ b/Performance and Tuning/Hardware.html @@ -0,0 +1,970 @@ + + + + + + + Hardware — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Hardware

+ +
+

Introduction

+

Storage before ZFS involved rather expensive hardware that was unable to +protect against silent corruption and did not scale very well. The +introduction of ZFS has enabled people to use far less expensive +hardware than previously used in the industry with superior scaling. +This page attempts to provide some basic guidance to people buying +hardware for use in ZFS-based servers and workstations.

+

Hardware that adheres to this guidance will enable ZFS to reach its full +potential for performance and reliability. Hardware that does not adhere +to it will serve as a handicap. Unless otherwise stated, such handicaps +apply to all storage stacks and are by no means specific to ZFS. Systems +built using competing storage stacks will also benefit from these +suggestions.

+
+
+

BIOS / CPU microcode updates

+

Running the latest BIOS and CPU microcode is highly recommended.

+
+

Background

+

Computer microprocessors are very complex designs that often have bugs, +which are called errata. Modern microprocessors are designed to utilize +microcode. This puts part of the hardware design into quasi-software +that can be patched without replacing the entire chip. Errata are often +resolved through CPU microcode updates. These are often bundled in BIOS +updates. In some cases, the BIOS interactions with the CPU through +machine registers can be modified to fix things with the same microcode. +If a newer microcode is not bundled as part of a BIOS update, it can +often be loaded by the operating system bootloader or the operating +system itself.

+
+
+
+

ECC Memory

+

Bit flips can have fairly dramatic consequences for all computer +filesystems and ZFS is no exception. No technique used in ZFS (or any +other filesystem) is capable of protecting against bit flips. +Consequently, ECC Memory is highly recommended.

+
+

Background

+

Ordinary background radiation will randomly flip bits in computer +memory, which causes undefined behavior. These are known as “bit flips”. +Each bit flip can have any of four possible consequences depending on +which bit is flipped:

+
    +
  • Bit flips can have no effect.

    +
      +
    • Bit flips that have no effect occur in unused memory.

    • +
    +
  • +
  • Bit flips can cause runtime failures.

    +
      +
    • This is the case when a bit flip occurs in something read from +disk.

    • +
    • Failures are typically observed when program code is altered.

    • +
    • If the bit flip is in a routine within the system’s kernel or +/sbin/init, the system will likely crash. Otherwise, reloading the +affected data can clear it. This is typically achieved by a +reboot.

    • +
    +
  • +
  • It can cause data corruption.

    +
      +
    • This is the case when the bit is in use by data being written to +disk.

    • +
    • If the bit flip occurs before ZFS’ checksum calculation, ZFS will +not realize that the data is corrupt.

    • +
    • If the bit flip occurs after ZFS’ checksum calculation, but before +write-out, ZFS will detect it, but it might not be able to correct +it.

    • +
    +
  • +
  • It can cause metadata corruption.

    +
      +
    • This is the case when a bit flips in an on-disk structure being +written to disk.

    • +
    • If the bit flip occurs before ZFS’ checksum calculation, ZFS will +not realize that the metadata is corrupt.

    • +
    • If the bit flip occurs after ZFS’ checksum calculation, but before +write-out, ZFS will detect it, but it might not be able to correct +it.

    • +
    • Recovery from such an event will depend on what was corrupted. In +the worst case, a pool could be rendered unimportable.

      +
        +
      • All filesystems have poor reliability in their absolute worst +case bit-flip failure scenarios. Such scenarios should be +considered extraordinarily rare.

      • +
      +
    • +
    +
  • +
+
+
+
+

Drive Interfaces

+
+

SAS versus SATA

+

ZFS depends on the block device layer for storage. Consequently, ZFS is +affected by the same things that affect other filesystems, such as +driver support and non-working hardware. Consequently, there are a few +things to note:

+
    +
  • Never place SATA disks into a SAS expander without a SAS interposer.

    +
      +
    • If you do this and it does work, it is the exception, rather than +the rule.

    • +
    +
  • +
  • Do not expect SAS controllers to be compatible with SATA port +multipliers.

    +
      +
    • This configuration is typically not tested.

    • +
    • The disks could be unrecognized.

    • +
    +
  • +
  • Support for SATA port multipliers is inconsistent across OpenZFS +platforms

    +
      +
    • Linux drivers generally support them.

    • +
    • Illumos drivers generally do not support them.

    • +
    • FreeBSD drivers are somewhere between Linux and Illumos in terms +of support.

    • +
    +
  • +
+
+
+

USB Hard Drives and/or Adapters

+

These have problems involving sector size reporting, SMART passthrough, +the ability to set ERC and other areas. ZFS will perform as well on such +devices as they are capable of allowing, but try to avoid them. They +should not be expected to have the same up-time as SAS and SATA drives +and should be considered unreliable.

+
+
+
+

Controllers

+

The ideal storage controller for ZFS has the following attributes:

+
    +
  • Driver support on major OpenZFS platforms

    +
      +
    • Stability is important.

    • +
    +
  • +
  • High per-port bandwidth

    +
      +
    • PCI Express interface bandwidth divided by the number of ports

    • +
    +
  • +
  • Low cost

    +
      +
    • Support for RAID, Battery Backup Units and hardware write caches +is unnecessary.

    • +
    +
  • +
+

Marc Bevand’s blog post From 32 to 2 ports: Ideal SATA/SAS Controllers +for ZFS & Linux MD RAID contains an +excellent list of storage controllers that meet these criteria. He +regularly updates it as newer controllers become available.

+
+

Hardware RAID controllers

+

Hardware RAID controllers should not be used with ZFS. While ZFS will +likely be more reliable than other filesystems on Hardware RAID, it will +not be as reliable as it would be on its own.

+
    +
  • Hardware RAID will limit opportunities for ZFS to perform self +healing on checksum failures. When ZFS does RAID-Z or mirroring, a +checksum failure on one disk can be corrected by treating the disk +containing the sector as bad for the purpose of reconstructing the +original information. This cannot be done when a RAID controller +handles the redundancy, unless ZFS stores a duplicate copy, which happens when +the corruption involves metadata, when the copies flag +is set, or when the RAID array is part of a mirror/raid-z vdev within ZFS.

  • +
  • Sector size information is not necessarily passed correctly by +hardware RAID on RAID 1. Sector size information cannot be passed +correctly on RAID 5/6. +Hardware RAID 1 is more likely to experience read-modify-write +overhead from partial sector writes while Hardware RAID 5/6 will almost +certainly suffer from partial stripe writes (i.e. the RAID write +hole). ZFS using the disks natively allows it to obtain the +sector size information reported by the disks to avoid +read-modify-write on sectors, while ZFS avoids partial stripe writes +on RAID-Z by design from using copy-on-write.

    +
      +
    • There can be sector alignment problems on ZFS when a drive +misreports its sector size. Such drives are typically NAND-flash +based solid state drives and older SATA drives from the advanced +format (4K sector size) transition before Windows XP EoL occurred. +This can be manually corrected at +vdev creation.

    • +
    • It is possible for the RAID header to cause misalignment of sector +writes on RAID 1 by starting the array within a sector on an +actual drive, such that manual correction of sector alignment at +vdev creation does not solve the problem.

    • +
    +
  • +
  • RAID controller failures can require that the controller be replaced with +the same model, or in less extreme cases, a model from the same +manufacturer. Using ZFS by itself allows any controller to be used.

  • +
  • If a hardware RAID controller’s write cache is used, an additional +failure point is introduced that can only be partially mitigated by +additional complexity from adding flash to save data in power loss +events. The data can still be lost if the battery fails when it is +required to survive a power loss event or there is no flash and power +is not restored in a timely manner. The loss of the data in the write +cache can severely damage anything stored on a RAID array when many +outstanding writes are cached. In addition, all writes are stored in +the cache rather than just synchronous writes that require a write +cache, which is inefficient, and the write cache is relatively small. +ZFS allows synchronous writes to be written directly to flash, which +should provide similar acceleration to hardware RAID and the ability +to accelerate many more in-flight operations.

  • +
  • Behavior during RAID reconstruction when silent corruption damages +data is undefined. There are reports of RAID 5 and 6 arrays being +lost during reconstruction when the controller encounters silent +corruption. ZFS’ checksums allow it to avoid this situation by +determining whether enough information exists to reconstruct data. If +not, the file is listed as damaged in zpool status and the +system administrator has the opportunity to restore it from a backup.

  • +
  • IO response times will be reduced whenever the OS blocks on IO +operations because the system CPU blocks on a much weaker embedded +CPU used in the RAID controller. This lowers IOPS relative to what +ZFS could have achieved.

  • +
  • The controller’s firmware is an additional layer of complexity that +cannot be inspected by arbitrary third parties. The ZFS source code +is open source and can be inspected by anyone.

  • +
  • If multiple RAID arrays are formed by the same controller and one +fails, the identifiers provided by the arrays exposed to the OS might +become inconsistent. Giving the drives directly to the OS allows this +to be avoided via naming that maps to a unique port or unique drive +identifier.

    +
      +
    • e.g. If you have arrays A, B, C and D; array B dies, the +interaction between the hardware RAID controller and the OS might +rename arrays C and D to look like arrays B and C respectively. +This can fault pools verbatim imported from the cachefile.

    • +
    • Not all RAID controllers behave this way. This issue has +been observed on both Linux and FreeBSD when system administrators +used single drive RAID 0 arrays, however. It has also been observed +with controllers from different vendors.

    • +
    +
  • +
+

One might be inclined to use single-drive RAID 0 arrays in order to +use a RAID controller like an HBA, but this is not recommended for many +of the reasons listed for other hardware RAID types. It is best to use an +HBA instead of a RAID controller, for both performance and reliability.

+
+
+
+

Hard drives

+
+

Sector Size

+

Historically, all hard drives had 512-byte sectors, with the exception +of some SCSI drives that could be modified to support slightly larger +sectors. In 2009, the industry migrated from 512-byte sectors to +4096-byte “Advanced Format” sectors. Since Windows XP is not compatible +with 4096-byte sectors or drives larger than 2TB, some of the first +advanced format drives implemented hacks to maintain Windows XP +compatibility.

+
    +
  • The first advanced format drives on the market misreported their +sector size as 512-bytes for Windows XP compatibility. As of 2013, it +is believed that such hard drives are no longer in production. +Advanced format hard drives made during or after this time should +report their true physical sector size.

  • +
  • Drives storing 2TB and smaller might have a jumper that can be set to +map all sectors off by 1. This is to provide proper alignment for +Windows XP, which started its first partition at sector 63. This +jumper setting should be off when using such drives with ZFS.

  • +
+

As of 2014, there are still 512-byte and 4096-byte drives on the market, +but they are known to properly identify themselves unless behind a USB +to SATA controller. Replacing a 512-byte sector drive with a 4096-byte +sector drive in a vdev created with 512-byte sector drives will +adversely affect performance. Replacing a 4096-byte sector drive with a +512-byte sector drive will have no negative effect on performance.

+
+
+

Error recovery control

+

ZFS is said to be able to use cheap drives. This was true when it was +introduced and hard drives supported Error recovery control. Since ZFS’ +introduction, error recovery control has been removed from low-end +drives from certain manufacturers, most notably Western Digital. +Consistent performance requires hard drives that support error recovery +control.

+
+

Background

+

Hard drives store data using small polarized regions on a magnetic surface. +Reading from and/or writing to this surface poses a few reliability +problems. One is that imperfections in the surface can corrupt bits. +Another is that vibrations can cause drive heads to miss their targets. +Consequently, hard drive sectors are composed of three regions:

+
    +
  • A sector number

  • +
  • The actual data

  • +
  • ECC

  • +
+

The sector number and ECC enable hard drives to detect and respond to +such events. When either event occurs during a read, hard drives will +retry the read many times until they either succeed or conclude that the +data cannot be read. The latter case can take a substantial amount of +time and consequently, IO to the drive will stall.

+

Enterprise hard drives and some consumer hard drives implement a feature +called Time-Limited Error Recovery (TLER) by Western Digital, Error +Recovery Control (ERC) by Seagate and Command Completion Time Limit by +Hitachi and Samsung, which permits the time drives are willing to spend +on such events to be limited by the system administrator.

+

Drives that lack such functionality can be expected to have arbitrarily +high limits. Several minutes is not impossible. Drives with this +functionality typically default to 7 seconds. ZFS does not currently +adjust this setting on drives. However, it is advisable to write a +script to set the error recovery time to a low value, such as 0.1 +seconds until ZFS is modified to control it. This must be done on every +boot.

+
+
+
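
As a sketch (the device name is illustrative, and this only applies to drives that support SCT ERC), +smartctl can set the read and write recovery timers in units of 100 ms, so a value of 1 corresponds +to the 0.1 seconds mentioned above:

+
smartctl -l scterc,1,1 /dev/sda
+
+
+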
+

RPM Speeds

+

High RPM drives have lower seek times, which is historically regarded as +being desirable. They increase cost and sacrifice storage density in +order to achieve what is typically no more than a factor of 6 +improvement over their lower RPM counterparts.

+

To provide some numbers, a 15k RPM drive from a major manufacturer is +rated for 3.4 millisecond average read and 3.9 millisecond average +write. Presumably, this number assumes that the target sector is at most +half the number of drive tracks away from the head and half the disk +away. Being even further away is worst-case 2 times slower. Manufacturer +numbers for 7200 RPM drives are not available, but they average 13 to 16 +milliseconds in empirical measurements. 5400 RPM drives can be expected +to be slower.

+

ARC and ZIL are able to mitigate much of the benefit of lower seek +times. Far larger increases in IOPS performance can be obtained by +adding additional RAM for ARC, L2ARC devices and SLOG devices. Even +higher increases in performance can be obtained by replacing hard drives +with solid state storage entirely. Such things are typically more cost +effective than high RPM drives when considering IOPS.

+
+
+

Command Queuing

+

Drives with command queues are able to reorder IO operations to increase +IOPS. This is called Native Command Queuing on SATA and Tagged Command +Queuing on PATA/SCSI/SAS. ZFS stores objects in metaslabs and it can use +several metaslabs at any given time. Consequently, ZFS is not only +designed to take advantage of command queuing, but good ZFS performance +requires command queuing. Almost all drives manufactured within the past +10 years can be expected to support command queuing. The exceptions are:

+
    +
  • Consumer PATA/IDE drives

  • +
  • First generation SATA drives, which used IDE to SATA translation +chips, from 2003 to 2004.

  • +
  • SATA drives operating under IDE emulation that was configured in the +system BIOS.

  • +
+

Each OpenZFS system has different methods for checking whether command +queuing is supported. On Linux, hdparm -I /path/to/device | grep +Queue is used. On FreeBSD, camcontrol identify $DEVICE is used.

+
+
+
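
For example, on Linux (the device path is illustrative), a drive that supports NCQ will show a queue +depth line:

+
hdparm -I /dev/sda | grep -i queue
+
+
+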
+

NAND Flash SSDs

+

As of 2014, Solid state storage is dominated by NAND-flash and most +articles on solid state storage focus on it exclusively. As of 2014, the +most popular form of flash storage used with ZFS involve drives with +SATA interfaces. Enterprise models with SAS interfaces are beginning to +become available.

+

As of 2017, solid state storage using NAND-flash with PCI-E interfaces +is widely available on the market. Such devices are predominantly enterprise +drives that utilize an NVMe interface, which has lower overhead than the +ATA used in SATA or the SCSI used in SAS. There is also an interface known +as M.2 that is primarily used by consumer SSDs, although not necessarily +limited to them. It can provide electrical connectivity for multiple +buses, such as SATA, PCI-E and USB. M.2 SSDs appear to use either SATA +or NVMe.

+
+

NVMe low level formatting

+

Many NVMe SSDs support both 512-byte sectors and 4096-byte sectors. They +often ship with 512-byte sectors, which are less performant than +4096-byte sectors. Some also support metadata for T10/DIF CRC to try to +improve reliability, although this is unnecessary with ZFS.

+

NVMe drives should be +formatted +to use 4096-byte sectors without metadata prior to being given to ZFS +for best performance unless they indicate that 512-byte sectors are as +performant as 4096-byte sectors, although this is unlikely. Lower +numbers in the Rel_Perf of Supported LBA Sizes from smartctl -a +/dev/$device_namespace (for example smartctl -a /dev/nvme1n1) +indicate higher performance low level formats, with 0 being the best. +The current formatting will be marked by a plus sign under the format +Fmt.

+

You may format a drive using nvme format /dev/nvme1n1 -l $ID. The $ID +corresponds to the Id field value from the Supported LBA Sizes SMART +information.

+
+
+
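
With nvme-cli, the supported LBA formats (and which one is currently in use) can also be listed in a +human-readable form, for example:

+
nvme id-ns /dev/nvme1n1 -H | grep "LBA Format"
+
+
+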

Power Failure Protection

+
+

Background

+

On-flash data structures are highly complex and traditionally have been +highly vulnerable to corruption. In the past, such corruption would +result in the loss of *all* drive data and an event such as a PSU +failure could result in multiple drives simultaneously failing. Since +the drive firmware is not available for review, the traditional +conclusion was that all drives that lack hardware features to avoid +power failure events cannot be trusted, which was found to be the case +multiple times in the +past [1] [2] [3]. +Discussion of power failures bricking NAND flash SSDs appears to have +vanished from literature following the year 2015. SSD manufacturers now +claim that firmware power loss protection is robust enough to provide +equivalent protection to hardware power loss protection. Kingston is one +example. +Firmware power loss protection is used to guarantee the protection of +flushed data and the drives’ own metadata, which is all that filesystems +such as ZFS need.

+

However, those that either need or want strong guarantees that firmware +bugs are unlikely to be able to brick drives following power loss events +should continue to use drives that provide hardware power loss +protection. The basic concept behind how hardware power failure +protection works has been documented by +Intel +for those who wish to read about the details. As of 2020, use of +hardware power loss protection is now a feature solely of enterprise +SSDs that attempt to protect unflushed data in addition to drive +metadata and flushed data. This additional protection beyond protecting +flushed data and the drive metadata provides no additional benefit to +ZFS, but it does not hurt it.

+

It should also be noted that drives in data centers and laptops are +unlikely to experience power loss events, reducing the usefulness of +hardware power loss protection. This is especially the case in +datacenters where redundant power, UPS power and the use of IPMI to do +forced reboots should prevent most drives from experiencing power loss +events.

+

Lists of drives that provide hardware power loss protection are +maintained below for those who need/want it. Since ZFS, like other +filesystems, only requires power failure protection for flushed data and +drive metadata, older drives that only protect these things are included +on the lists.

+
+
+

NVMe drives with power failure protection

+

A non-exhaustive list of NVMe drives with power failure protection is as +follows:

+
    +
  • Intel 750

  • +
  • Intel DC P3500/P3600/P3608/P3700

  • +
  • Micron 7300/7400/7450 PRO/MAX

  • +
  • Samsung PM963 (M.2 form factor)

  • +
  • Samsung PM1725/PM1725a

  • +
  • Samsung XS1715

  • +
  • Toshiba ZD6300

  • +
  • Seagate Nytro 5000 M.2 (XP1920LE30002 tested; read notes below +before buying)

    +
      +
    • Inexpensive 22110 M.2 enterprise drive using consumer MLC that is optimized for read-mostly workloads. It is not a good choice for a SLOG device, which is a write-mostly workload.

    • +
    • The manual for this drive specifies airflow requirements. If the drive does not receive sufficient airflow from case fans, it will overheat at idle. Its thermal throttling will severely degrade performance such that write throughput will be limited to 1/10 of the specification and read latencies will reach several hundred milliseconds. Under continuous load, the device will continue to become hotter until it suffers a “degraded reliability” event where all data on at least one NVMe namespace is lost. The NVMe namespace is then unusable until a secure erase is done. Even with sufficient airflow under normal circumstances, data loss is possible under load following the failure of fans in an enterprise environment. Anyone deploying this into production in an enterprise environment should be mindful of this failure mode.

    • +
    • Those who wish to use this drive in a low airflow situation can work around this failure mode by placing a passive heatsink such as this on the NAND flash controller. It is the chip under the sticker closest to the capacitors. This was tested by placing the heatsink over the sticker (as removing it was considered undesirable). The heatsink will prevent the drive from overheating to the point of data loss, but it will not fully alleviate the overheating situation under load without active airflow. A scrub will cause it to overheat after a few hundred gigabytes are read. However, the thermal throttling will quickly cool the drive from 76 degrees Celsius to 74 degrees Celsius, restoring performance.

      +
        +
      • It might be possible to use the heatsink in an enterprise +environment to provide protection against data loss following +fan failures. However, this was not evaluated. Furthermore, +operating temperatures for consumer NAND flash should be at or +above 40 degrees Celsius for long term data integrity. +Therefore, the use of a heatsink to provide protection against +data loss following fan failures in an enterprise environment +should be evaluated before deploying drives into production to +ensure that the drive is not overcooled.

      • +
      +
    • +
    +
  • +
+
+
+

SAS drives with power failure protection

+

A non-exhaustive list of SAS drives with power failure protection is as +follows:

+
    +
  • Samsung PM1633/PM1633a

  • +
  • Samsung SM1625

  • +
  • Samsung PM853T

  • +
  • Toshiba PX05SHB***/PX04SHB***/PX04SHQ***

  • +
  • Toshiba PX05SLB***/PX04SLB***/PX04SLQ***

  • +
  • Toshiba PX05SMB***/PX04SMB***/PX04SMQ***

  • +
  • Toshiba PX05SRB***/PX04SRB***/PX04SRQ***

  • +
  • Toshiba PX05SVB***/PX04SVB***/PX04SVQ***

  • +
+
+
+

SATA drives with power failure protection

+

A non-exhaustive list of SATA drives with power failure protection is as +follows:

+
    +
  • Crucial MX100/MX200/MX300

  • +
  • Crucial M500/M550/M600

  • +
  • Intel 320

    +
      +
    • Early reports claimed that the 330 and 335 had power failure +protection too, but they do +not.

    • +
    +
  • +
  • Intel 710

  • +
  • Intel 730

  • +
  • Intel DC S3500/S3510/S3610/S3700/S3710

  • +
  • Kingston DC500R/DC500M

  • +
  • Micron 5210 Ion

    +
      +
    • First QLC drive on the list. High capacity with a low price per +gigabyte.

    • +
    +
  • +
  • Samsung PM863/PM863a

  • +
  • Samsung SM843T (do not confuse with SM843)

  • +
  • Samsung SM863/SM863a

  • +
  • Samsung 845DC Evo

  • +
  • Samsung 845DC Pro

    + +
  • +
  • Toshiba HK4E/HK3E2

  • +
  • Toshiba HK4R/HK3R2/HK3R

  • +
+
+
+

Criteria/process for inclusion into these lists

+

These lists have been compiled on a volunteer basis by OpenZFS contributors (mainly Richard Yao) from trustworthy sources of information. The lists are intended to be vendor neutral and are not intended to benefit any particular manufacturer. Any perceived bias toward any manufacturer is caused by a lack of awareness and a lack of time to research additional options. Confirmation of the presence of adequate power loss protection by a reliable source is the only requirement for inclusion into these lists. Adequate power loss protection means that the drive must protect both its own internal metadata and all flushed data. Protection of unflushed data is irrelevant and therefore not a requirement. ZFS only expects storage to protect flushed data. Consequently, solid state drives whose power loss protection only protects flushed data are sufficient for ZFS to ensure that data remains safe.

+

Anyone who believes an unlisted drive to provide adequate power failure protection may contact the Mailing Lists with a request for inclusion and substantiation for the claim that power failure protection is provided. Examples of substantiation include pictures of drive internals showing the presence of capacitors, statements by well-regarded independent review sites such as Anandtech, and manufacturer specification sheets. The latter are accepted on the honor system until a manufacturer is found to misstate reality on the protection of the drives’ own internal metadata structures and/or the protection of flushed data. Thus far, all manufacturers have been honest.

+
+
+
+

Flash pages

+

The smallest unit on a NAND chip that can be written is a flash page. The first NAND-flash SSDs on the market had 4096-byte pages. Further complicating matters is that the page size has been doubled twice since then. NAND-flash SSDs should report these pages as being sectors, but so far, all of them incorrectly report 512-byte sectors for Windows XP compatibility. The consequence is that we have a similar situation to what we had with early advanced format hard drives.

+

As of 2014, most NAND-flash SSDs on the market have 8192-byte page +sizes. However, models using 128-Gbit NAND from certain manufacturers +have a 16384-byte page size. Maximum performance requires that vdevs be +created with correct ashift values (13 for 8192-byte and 14 for +16384-byte). However, not all OpenZFS platforms support this. The Linux +port supports ashift=13, while others are limited to ashift=12 +(4096-byte).

+

As of 2017, NAND-flash SSDs are tuned for 4096-byte IOs. Matching the +flash page size is unnecessary and ashift=12 is usually the correct +choice. Public documentation on flash page size is also nearly +non-existent.
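For example, the ashift can be set explicitly when a pool is created; the pool name and device below are placeholders, not recommendations:

zpool create -o ashift=12 tank /dev/nvme0n1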

+
+
+

ATA TRIM / SCSI UNMAP

+

It should be noted that this is a separate case from +discard on zvols or hole punching on filesystems. Those work regardless +of whether ATA TRIM / SCSI UNMAP is sent to the actual block devices.

+
+

ATA TRIM Performance Issues

+

The ATA TRIM command in SATA 3.0 and earlier is a non-queued command. +Issuing a TRIM command on a SATA drive conforming to SATA 3.0 or earlier +will cause the drive to drain its IO queue and stop servicing requests +until it finishes, which hurts performance. SATA 3.1 removed this +limitation, but very few SATA drives on the market are conformant to +SATA 3.1 and it is difficult to distinguish them from SATA 3.0 drives. +At the same time, SCSI UNMAP has no such problems.

+
+
+
+
+

Optane / 3D XPoint SSDs

+

These are SSDs with far better latencies and write endurance than NAND +flash SSDs. They are byte addressable, such that ashift=9 is fine for +use on them. Unlike NAND flash SSDs, they do not require any special +power failure protection circuitry for reliability. There is also no +need to run TRIM on them. However, they cost more per GB than NAND flash +(as of 2020). The enterprise models make excellent SLOG devices. Here is +a list of models that are known to perform well:

+ +

Note that SLOG devices rarely have more than 4GB in use at any given +time, so the smaller sized devices are generally the best choice in +terms of cost, with larger sizes giving no benefit. Larger sizes could +be a good choice for other vdev types, depending on performance needs +and cost considerations.

+
+
+

Power

+

Ensuring that computers are properly grounded is highly recommended. +There have been cases in user homes where machines experienced random +failures when plugged into power receptacles that had open grounds (i.e. +no ground wire at all). This can cause random failures on any computer +system, whether it uses ZFS or not.

+

Power should also be relatively stable. Large dips in voltage from brownouts are best avoided through the use of UPS units or line conditioners. Systems subject to unstable power that do not shut down outright can exhibit undefined behavior. PSUs with longer hold-up times should be able to provide partial protection against this, but hold-up times are often undocumented and are not a substitute for a UPS or line conditioner.

+
+

PWR_OK signal

+

PSUs are supposed to deassert a PWR_OK signal to indicate that provided voltages are no longer within the rated specification. This should force an immediate shutdown. However, the system clock of a developer workstation was observed to deviate significantly from the expected value during a series of ~1 second brownouts. This machine did not use a UPS at the time. However, the PWR_OK mechanism should have protected against this. The observation of the PWR_OK signal failing to force a shutdown with adverse consequences (to the system clock in this case) suggests that the PWR_OK mechanism is not a strict guarantee.

+
+
+

PSU Hold-up Times

+

A PSU hold-up time is the amount of time that a PSU can continue to output power at maximum output within standard voltage tolerances following the loss of input power. This is important for supporting UPS units because the transfer time taken by a standard UPS to supply power from its battery can leave machines without power for “5-12 ms”. Intel’s ATX Power Supply design guide specifies a hold-up time of 17 milliseconds at maximum continuous output. The hold-up time is an inverse function of how much power is being output by the PSU, with lower power output increasing hold-up times.

+

Capacitor aging in PSUs will lower the hold-up time below what it was when new, which could cause reliability issues as equipment ages. Machines using substandard PSUs with hold-up times below the specification therefore require higher-end UPS units for protection to ensure that the transfer time does not exceed the hold-up time. A hold-up time below the transfer time during a transfer to battery power can cause undefined behavior should the PWR_OK signal not become deasserted to force the machine to power off.

+

If in doubt, use a double conversion UPS unit. Double conversion UPS units always run off the battery, such that the transfer time is 0. This is unless they are high efficiency models that are hybrids between standard UPS units and double conversion UPS units, although these are reported to have much lower transfer times than standard UPS units. You could also contact your PSU manufacturer for the hold-up time specification, but if reliability for years is a requirement, you should use a higher-end UPS with a low transfer time.

+

Note that double conversion units are at most 94% efficient unless they +support a high efficiency mode, which adds latency to the time to +transition to battery power.

+
+
+

UPS batteries

+

The lead acid batteries in UPS units generally need to be replaced +regularly to ensure that they provide power during power outages. For +home systems, this is every 3 to 5 years, although this varies with +temperature [4]. For +enterprise systems, contact your vendor.

+

Footnotes

+ +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Module Parameters.html b/Performance and Tuning/Module Parameters.html new file mode 100644 index 000000000..3eb985add --- /dev/null +++ b/Performance and Tuning/Module Parameters.html @@ -0,0 +1,13854 @@ + + + + + + + Module Parameters — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Module Parameters

+

Most of the ZFS kernel module parameters are accessible in the SysFS +/sys/module/zfs/parameters directory. Current values can be observed +by

+
cat /sys/module/zfs/parameters/PARAMETER
+
+
+

Many of these can be changed by writing new values. These are denoted by +Change|Dynamic in the PARAMETER details below.

+
echo NEWVALUE >> /sys/module/zfs/parameters/PARAMETER
+
+
+

If the parameter is not dynamically adjustable, an error can occur and +the value will not be set. It can be helpful to check the permissions +for the PARAMETER file in SysFS.

+

In some cases, the parameter must be set prior to loading the kernel +modules or it is desired to have the parameters set automatically at +boot time. For many distros, this can be accomplished by creating a file +named /etc/modprobe.d/zfs.conf containing a text line for each +module parameter using the format:

+
# change PARAMETER for workload XZY to solve problem PROBLEM_DESCRIPTION
+# changed by YOUR_NAME on DATE
+options zfs PARAMETER=VALUE
+
+
+

Some parameters related to ZFS operations are located in kernel modules other than the zfs module. These are documented in the individual parameter descriptions. Unless otherwise noted, the tunable applies to the zfs kernel module. For example, the icp kernel module parameters are visible in the /sys/module/icp/parameters directory and can be set by default at boot time by changing the /etc/modprobe.d/icp.conf file.
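As an illustrative sketch, such a file follows the same format as the zfs.conf example above (PARAMETER and VALUE are placeholders for an actual icp module parameter and its value):

# /etc/modprobe.d/icp.conf
options icp PARAMETER=VALUE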

+

See the man page for modprobe.d for more information.

+
+

Manual Pages

+

The zfs(4) and spl(4) man +pages (previously zfs- and spl-module-parameters(5), respectively, +prior to OpenZFS 2.1) contain brief descriptions of +the module parameters. Alas, man pages are not as suitable for quick +reference as documentation pages. This page is intended to be a better +cross-reference and capture some of the wisdom of ZFS developers and +practitioners.

+
+
+

ZFS Module Parameters

+

The ZFS kernel module, zfs.ko, parameters are detailed below.

+

To observe the list of parameters along with a short synopsis of each +parameter, use the modinfo command:

+
modinfo zfs
+
+
+
+
+

Tags

+

The list of parameters is quite large and resists hierarchical +representation. To assist in finding relevant information +quickly, each module parameter has a “Tags” row with keywords for +frequent searches.

+
+

ABD

+ +
+
+

allocation

+ +
+
+

ARC

+ +
+
+

channel_programs

+ +
+
+

checkpoint

+ +
+
+

checksum

+ +
+
+

compression

+ +
+
+

CPU

+ +
+
+

dataset

+ +
+
+

dbuf_cache

+ +
+
+

debug

+ +
+
+

dedup

+ +
+
+

delay

+ +
+
+

delete

+ +
+
+

discard

+ +
+
+

disks

+ +
+
+

DMU

+ +
+
+

encryption

+ +
+
+

filesystem

+ +
+
+

fragmentation

+ +
+
+

HDD

+ +
+
+

hostid

+ +
+
+

import

+ +
+
+

L2ARC

+ +
+
+

memory

+ +
+
+

metadata

+ +
+
+

metaslab

+ +
+
+

mirror

+ +
+
+

MMP

+ +
+
+

panic

+ +
+
+

prefetch

+ +
+
+

QAT

+ +
+
+

raidz

+ +
+
+

receive

+ +
+
+

remove

+ +
+
+

resilver

+ +
+
+

scrub

+ +
+
+

send

+ +
+
+

snapshot

+ +
+
+

SPA

+ +
+
+

special_vdev

+ +
+
+

SSD

+ +
+
+

taskq

+ +
+
+

trim

+ +
+
+

vdev

+ +
+
+

vdev_cache

+ +
+
+

vdev_initialize

+ +
+
+

vdev_removal

+ +
+
+

volume

+ +
+
+

write_throttle

+ +
+
+

zed

+ +
+
+

ZIL

+ +
+
+

ZIO_scheduler

+ +
+
+
+

Index

+ +
+
+

Module Parameters

+
+

ignore_hole_birth

+

When set, the hole_birth optimization will not be used and all holes will always be sent by zfs send. In the source code, ignore_hole_birth is an alias for, and a SysFS PARAMETER for, send_holes_without_birth_time.
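For example, the current setting can be checked and changed at runtime through SysFS, following the generic pattern shown earlier:

cat /sys/module/zfs/parameters/ignore_hole_birth
echo 1 >> /sys/module/zfs/parameters/ignore_hole_birth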

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

ignore_hole_birth

Notes

Tags

send

When to change

Enable if you suspect your datasets are +affected by a bug in hole_birth during +zfs send operations

Data Type

boolean

Range

0=disabled, 1=enabled

Default

1 (hole birth optimization is ignored)

Change

Dynamic

Versions Affected

TBD

+
+
+

l2arc_exclude_special

+

Controls whether buffers present on special vdevs are eligible for +caching into L2ARC.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_exclude_special

Notes

Tags

ARC, +L2ARC, +special_vdev,

When to change

If cache and special devices exist and caching +data on special devices in L2ARC is not desired

Data Type

boolean

Range

0=disabled, 1=enabled

Default

0

Change

Dynamic

Versions Affected

TBD

+
+
+

l2arc_feed_again

+

Turbo L2ARC cache warm-up. When the L2ARC is cold the fill interval will +be set to aggressively fill as fast as possible.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_feed_again

Notes

Tags

ARC, L2ARC

When to change

If cache devices exist and it is desired to +fill them as fast as possible

Data Type

boolean

Range

0=disabled, 1=enabled

Default

1

Change

Dynamic

Versions Affected

TBD

+
+
+

l2arc_feed_min_ms

+

Minimum time period for aggressively feeding the L2ARC. The L2ARC feed +thread wakes up once per second (see +l2arc_feed_secs) to look for data to feed into +the L2ARC. l2arc_feed_min_ms only affects the turbo L2ARC cache +warm-up and allows the aggressiveness to be adjusted.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_feed_min_ms

Notes

Tags

ARC, L2ARC

When to change

If cache devices exist, l2arc_feed_again is enabled, and the feed is too aggressive, then this tunable can be adjusted to reduce the impact of the fill

Data Type

uint64

Units

milliseconds

Range

0 to (1000 * l2arc_feed_secs)

Default

200

Change

Dynamic

Versions Affected

0.6 and later

+
+
+

l2arc_feed_secs

+

Seconds between waking the L2ARC feed thread. One feed thread works for +all cache devices in turn.

+

If the pool that owns a cache device is imported readonly, then the feed +thread is delayed 5 * l2arc_feed_secs before +moving onto the next cache device. If multiple pools are imported with +cache devices and one pool with cache is imported readonly, the L2ARC +feed rate to all caches can be slowed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_feed_secs

Notes

Tags

ARC, L2ARC

When to change

Do not change

Data Type

uint64

Units

seconds

Range

1 to UINT64_MAX

Default

1

Change

Dynamic

Versions Affected

0.6 and later

+
+
+

l2arc_headroom

+

How far through the ARC lists to search for L2ARC cacheable content, +expressed as a multiplier of l2arc_write_max

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_headroom

Notes

Tags

ARC, L2ARC

When to change

If the rate of change in the ARC is faster than +the overall L2ARC feed rate, then increasing +l2arc_headroom can increase L2ARC efficiency. +Setting the value too large can cause the L2ARC +feed thread to consume more CPU time looking +for data to feed.

Data Type

uint64

Units

unit

Range

0 to UINT64_MAX

Default

2

Change

Dynamic

Versions Affected

0.6 and later

+
+
+

l2arc_headroom_boost

+

Percentage scale for l2arc_headroom when L2ARC +contents are being successfully compressed before writing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_headroom_boost

Notes

Tags

ARC, L2ARC

When to change

If average compression efficiency is greater than 2:1, then increasing l2arc_headroom_boost can increase the L2ARC feed rate

Data Type

uint64

Units

percent

Range

100 to UINT64_MAX, when set to 100, the +L2ARC headroom boost feature is effectively +disabled

Default

200

Change

Dynamic

Versions Affected

all

+
+
+

l2arc_nocompress

+

Disable writing compressed data to cache devices. Disabling allows the +legacy behavior of writing decompressed data to cache devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_nocompress

Notes

Tags

ARC, L2ARC

When to change

When testing compressed L2ARC feature

Data Type

boolean

Range

0=store compressed blocks in cache device, +1=store uncompressed blocks in cache device

Default

0

Change

Dynamic

Versions Affected

deprecated in v0.7.0 by new compressed ARC +design

+
+
+

l2arc_meta_percent

+

Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers are not evicted on memory pressure, too large an amount of headers on a system with an irrationally large L2ARC can render it slow or unusable. This parameter limits L2ARC writes and rebuild in order to enforce that limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_meta_percent

Notes

Tags

ARC, L2ARC

When to change

When the workload really requires an enormous L2ARC.

Data Type

int

Range

0 to 100

Default

33

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_mfuonly

+

Controls whether only MFU metadata and data are cached from ARC into L2ARC. +This may be desirable to avoid wasting space on L2ARC when reading/writing +large amounts of data that are not expected to be accessed more than once. +By default both MRU and MFU data and metadata are cached in the L2ARC.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_mfuonly

Notes

Tags

ARC, L2ARC

When to change

When accessing a large amount of data only +once.

Data Type

boolean

Range

0=store MRU and MFU blocks in cache device, +1=store MFU blocks in cache device

Default

0

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_noprefetch

+

Disables writing prefetched, but unused, buffers to cache devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_noprefetch

Notes

Tags

ARC, L2ARC, +prefetch

When to change

Setting to 0 can increase L2ARC hit rates for +workloads where the ARC is too small for a read +workload that benefits from prefetching. Also, +if the main pool devices are very slow, setting +to 0 can improve some workloads such as +backups.

Data Type

boolean

Range

0=write prefetched but unused buffers to cache +devices, 1=do not write prefetched but unused +buffers to cache devices

Default

1

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

l2arc_norw

+

Disables writing to cache devices while they are being read.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_norw

Notes

Tags

ARC, L2ARC

When to change

In the early days of SSDs, some devices did not +perform well when reading and writing +simultaneously. Modern SSDs do not have these +issues.

Data Type

boolean

Range

0=read and write simultaneously, 1=avoid writes +when reading for antique SSDs

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

l2arc_rebuild_blocks_min_l2size

+

The minimum required size (in bytes) of an L2ARC device in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC. For L2ARC devices less than 1 GB, the overhead involved offsets most of the benefit, so log blocks are not written for cache devices smaller than this.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_rebuild_blocks_min_l2size

Notes

Tags

ARC, +L2ARC

When to change

The cache device is small and +the pool is frequently imported.

Data Type

bytes

Range

0 to UINT64_MAX

Default

1,073,741,824

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_rebuild_enabled

+

Rebuild the persistent L2ARC when importing a pool.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_rebuild_enabled

Notes

Tags

ARC, L2ARC

When to change

If there are problems importing a pool or +attaching an L2ARC device.

Data Type

boolean

Range

0=disable persistent L2ARC rebuild, +1=enable persistent L2ARC rebuild

Default

1

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_trim_ahead

+

Once the cache device has been filled, TRIM ahead of the current write size l2arc_write_max on L2ARC devices by this percentage. This can speed up future writes depending on the performance characteristics of the cache device.

+

When set to 100%, TRIM twice the space required to accommodate upcoming writes. A minimum of 64 MB will be trimmed. If set, it enables TRIM of the whole L2ARC device when it is added to a pool. By default, this option is disabled since it can put significant stress on the underlying storage devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_trim_ahead

Notes

Tags

ARC, L2ARC

When to change

Consider setting for cache devices which efficiently handle TRIM commands.

Data Type

ulong

Units

percent of l2arc_write_max

Range

0 to 100

Default

0

Change

Dynamic

Versions Affected

v2.0 and later

+
+
+

l2arc_write_boost

+

Until the ARC fills, the L2ARC fill rate l2arc_write_max is increased by l2arc_write_boost.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_write_boost

Notes

Tags

ARC, L2ARC

When to change

To fill the cache devices more aggressively +after pool import.

Data Type

uint64

Units

bytes

Range

0 to UINT64_MAX

Default

8,388,608

Change

Dynamic

Versions Affected

all

+
+
+

l2arc_write_max

+

Maximum number of bytes to be written to each cache device for each +L2ARC feed thread interval (see l2arc_feed_secs). +The actual limit can be adjusted by +l2arc_write_boost. By default +l2arc_feed_secs is 1 second, delivering a maximum +write workload to cache devices of 8 MiB/sec.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

l2arc_write_max

Notes

Tags

ARC, L2ARC

When to change

If the cache devices can sustain the write +workload, increasing the rate of cache device +fill when workloads generate new data at a rate +higher than l2arc_write_max can increase L2ARC +hit rate

Data Type

uint64

Units

bytes

Range

1 to UINT64_MAX

Default

8,388,608

Change

Dynamic

Versions Affected

all

+
+
+

metaslab_aliquot

+

Sets the metaslab granularity. Nominally, ZFS will try to allocate this +amount of data to a top-level vdev before moving on to the next +top-level vdev. This is roughly similar to what would be referred to as +the “stripe size” in traditional RAID arrays.

+

When tuning for HDDs, it can be more efficient to have a few larger, +sequential writes to a device rather than switching to the next device. +Monitoring the size of contiguous writes to the disks relative to the +write throughput can be used to determine if increasing +metaslab_aliquot can help. For modern devices, it is unlikely that +decreasing metaslab_aliquot from the default will help.

+

If there is only one top-level vdev, this tunable is not used.
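One way to observe the size of the writes actually issued to the pool devices, as suggested above, is the request size histogram from zpool iostat; the pool name here is a placeholder:

# print request size histograms for the pool every 5 seconds
zpool iostat -r tank 5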

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_aliquot

Notes

Tags

allocation, +metaslab, vdev

When to change

If write performance increases as devices more +efficiently write larger, contiguous blocks

Data Type

uint64

Units

bytes

Range

0 to UINT64_MAX

Default

524,288

Change

Dynamic

Versions Affected

all

+
+
+

metaslab_bias_enabled

+

Enables metaslab group biasing based on a top-level vdev’s utilization +relative to the pool. Nominally, all top-level devs are the same size +and the allocation is spread evenly. When the top-level vdevs are not of +the same size, for example if a new (empty) top-level is added to the +pool, this allows the new top-level vdev to get a larger portion of new +allocations.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_bias_enabled

Notes

Tags

allocation, +metaslab, vdev

When to change

If a new top-level vdev is added and you do +not want to bias new allocations to the new +top-level vdev

Data Type

boolean

Range

0=spread evenly across top-level vdevs, +1=bias spread to favor less full top-level +vdevs

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_metaslab_segment_weight_enabled

+

Enables metaslab allocation based on largest free segment rather than +total amount of free space. The goal is to avoid metaslabs that exhibit +free space fragmentation: when there is a lot of small free spaces, but +few larger free spaces.

+

If zfs_metaslab_segment_weight_enabled is enabled, then +metaslab_fragmentation_factor_enabled +is ignored.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_metaslab_segment_weight_enabled

Notes

Tags

allocation, +metaslab

When to change

When testing allocation and +fragmentation

Data Type

boolean

Range

0=do not consider metaslab +fragmentation, 1=avoid metaslabs +where free space is highly +fragmented

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_metaslab_switch_threshold

+

When using segment-based metaslab selection (see +zfs_metaslab_segment_weight_enabled), +continue allocating from the active metaslab until +zfs_metaslab_switch_threshold worth of free space buckets have been +exhausted.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_metaslab_switch_threshold

Notes

Tags

allocation, +metaslab

When to change

When testing allocation and +fragmentation

Data Type

uint64

Units

free spaces

Range

0 to UINT64_MAX

Default

2

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

metaslab_debug_load

+

When enabled, all metaslabs are loaded into memory during pool import. +Nominally, metaslab space map information is loaded and unloaded as +needed (see metaslab_debug_unload)

+

It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory.

+

Enabling metaslab_debug_load can increase pool import time.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_debug_load

Notes

Tags

allocation, +memory, +metaslab

When to change

When RAM is plentiful and pool import time is +not a consideration

Data Type

boolean

Range

0=load metaslab info as needed, 1=load all metaslab info at pool import

Default

0

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslab_debug_unload

+

When enabled, prevents metaslab information from being dynamically +unloaded from RAM. Nominally, metaslab space map information is loaded +and unloaded as needed (see +metaslab_debug_load)

+

It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory.

+

Enabling metaslab_debug_unload consumes RAM that would otherwise be +freed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_debug_unload

Notes

Tags

allocation, +memory, +metaslab

When to change

When RAM is plentiful and the penalty for +dynamically reloading metaslab info from +the pool is high

Data Type

boolean

Range

0=dynamically unload metaslab info, +1=unload metaslab info only upon pool +export

Default

0

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslab_fragmentation_factor_enabled

+

Enable use of the fragmentation metric in computing metaslab weights.

+

In version v0.7.0, if +zfs_metaslab_segment_weight_enabled +is enabled, then metaslab_fragmentation_factor_enabled is ignored.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_fragmentation_factor_enabled

Notes

Tags

allocation, +metaslab

When to change

To test metaslab fragmentation

Data Type

boolean

Range

0=do not consider metaslab free +space fragmentation, 1=try to +avoid fragmented metaslabs

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslabs_per_vdev

+

When a vdev is added, it will be divided into approximately, but no more +than, this number of metaslabs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslabs_per_vdev

Notes

Tags

allocation, +metaslab, vdev

When to change

When testing metaslab allocation

Data Type

uint64

Units

metaslabs

Range

16 to UINT64_MAX

Default

200

Change

Prior to pool creation or adding new top-level +vdevs

Versions Affected

all

+
+
+

metaslab_preload_enabled

+

Enable metaslab group preloading. Each top-level vdev has a metaslab +group. By default, up to 3 copies of metadata can exist and are +distributed across multiple top-level vdevs. +metaslab_preload_enabled allows the corresponding metaslabs to be +preloaded, thus improving allocation efficiency.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_preload_enabled

Notes

Tags

allocation, +metaslab

When to change

When testing metaslab allocation

Data Type

boolean

Range

0=do not preload metaslab info, +1=preload up to 3 metaslabs

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

metaslab_lba_weighting_enabled

+

Modern HDDs have uniform bit density and constant angular velocity. +Therefore, the outer recording zones are faster (higher bandwidth) than +the inner zones by the ratio of outer to inner track diameter. The +difference in bandwidth can be 2:1, and is often available in the HDD +detailed specifications or drive manual. For HDDs when +metaslab_lba_weighting_enabled is true, write allocation preference +is given to the metaslabs representing the outer recording zones. Thus +the allocation to metaslabs prefers faster bandwidth over free space.

+

If the devices are not rotational, yet misrepresent themselves to the OS +as rotational, then disabling metaslab_lba_weighting_enabled can +result in more even, free-space-based allocation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_lba_weighting_enabled

Notes

Tags

allocation, +metaslab, +HDD, SSD

When to change

disable if using only SSDs and +version v0.6.4 or earlier

Data Type

boolean

Range

0=do not use LBA weighting, 1=use +LBA weighting

Default

1

Change

Dynamic

Verification

The rotational setting described by a block device can be observed in sysfs at /sys/block/DISK_NAME/queue/rotational

Versions Affected

prior to v0.6.5, the check for +non-rotation media did not exist

+
+
+

spa_config_path

+

By default, the zpool import command searches for pool information +in the zpool.cache file. If the pool to be imported has an entry in +zpool.cache then the devices do not have to be scanned to determine +if they are pool members. The path to the cache file is spa_config_path.

+

For more information on zpool import and the -o cachefile and +-d options, see the man page for zpool(8)

+

See also zfs_autoimport_disable
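For example, a pool can be imported by scanning a device directory instead of relying on the cache file, or imported while setting its cachefile property; the pool name is a placeholder:

# scan devices under /dev/disk/by-id rather than using zpool.cache
zpool import -d /dev/disk/by-id tank
# import and disable use of a cache file for this pool
zpool import -o cachefile=none tank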

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_config_path

Notes

Tags

import

When to change

If creating a non-standard distribution and the +cachefile property is inconvenient

Data Type

string

Default

/etc/zfs/zpool.cache

Change

Dynamic, applies only to the next invocation of +zpool import

Versions Affected

all

+
+
+

spa_asize_inflation

+

Multiplication factor used to estimate actual disk consumption from the +size of data being written. The default value is a worst case estimate, +but lower values may be valid for a given pool depending on its +configuration. Pool administrators who understand the factors involved +may wish to specify a more realistic inflation factor, particularly if +they operate close to quota or capacity limits.

+

The worst case space requirement for allocation is single-sector +max-parity RAIDZ blocks, in which case the space requirement is exactly +4 times the size, accounting for a maximum of 3 parity blocks. This is +added to the maximum number of ZFS copies parameter (copies max=3). +Additional space is required if the block could impact deduplication +tables. Altogether, the worst case is 24.

+

If the estimation is not correct, then quotas or out-of-space conditions +can lead to optimistic expectations of the ability to allocate. +Applications are typically not prepared to deal with such failures and +can misbehave.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_asize_inflation

Notes

Tags

allocation, SPA

When to change

If the allocation requirements for the +workload are well known and quotas are used

Data Type

uint64

Units

unit

Range

1 to 24

Default

24

Change

Dynamic

Versions Affected

v0.6.3 and later

+
+
+

spa_load_verify_data

+

An extreme rewind import (see zpool import -X) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal skips non-metadata blocks. It can +be toggled once the import has started to stop or start the traversal of +non-metadata blocks. See also +spa_load_verify_metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_data

Notes

Tags

allocation, SPA

When to change

At the risk of data integrity, to speed +extreme import of large pool

Data Type

boolean

Range

0=do not verify data upon pool import, +1=verify pool data upon import

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

spa_load_verify_metadata

+

An extreme rewind import (see zpool import -X) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal is not performed. It can be toggled +once the import has started to stop or start the traversal. See +spa_load_verify_data

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_metadata

Notes

Tags

import

When to change

At the risk of data integrity, to speed +extreme import of large pool

Data Type

boolean

Range

0=do not verify metadata upon pool +import, 1=verify pool metadata upon +import

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

spa_load_verify_maxinflight

+

Maximum number of concurrent I/Os during the data verification performed +during an extreme rewind import (see zpool import -X)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_maxinflight

Notes

Tags

import

When to change

During an extreme rewind import, to +match the concurrent I/O capabilities +of the pool devices

Data Type

int

Units

I/Os

Range

1 to MAX_INT

Default

10,000

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

spa_slop_shift

+

Normally, the last 3.2% (1/(2^spa_slop_shift)) of pool space is reserved to ensure the pool doesn’t run completely out of space due to unaccounted changes (e.g. to the MOS). This also limits the worst-case time to allocate space. When less than this amount of free space exists, most ZPL operations (e.g. write, create) return an out-of-space error (ENOSPC).

+

Changing spa_slop_shift affects the currently loaded ZFS module and all imported pools. spa_slop_shift is not stored on disk. Beware that importing full pools on systems with a larger spa_slop_shift can lead to over-full conditions.

+

The minimum SPA slop space is limited to 128 MiB. +The maximum SPA slop space is limited to 128 GiB.
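As a worked example of the formula above: with the default spa_slop_shift of 5, 1/(2^5) = 3.125% (roughly 3.2%) of pool space is reserved, while raising it to 6 halves the reservation to 1/(2^6) ≈ 1.56%, subject to the 128 MiB minimum and 128 GiB maximum noted above.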

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_slop_shift

Notes

Tags

allocation, SPA

When to change

For large pools, when 3.2% may be too +conservative and more usable space is desired, +consider increasing spa_slop_shift

Data Type

int

Units

shift

Range

1 to MAX_INT, however the practical upper limit +is 15 for a system with 4TB of RAM

Default

5

Change

Dynamic

Versions Affected

v0.6.5 and later (max. slop space since v2.1.0)

+
+
+

zfetch_array_rd_sz

+

If prefetching is enabled, do not prefetch blocks larger than +zfetch_array_rd_sz size.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_array_rd_sz

Notes

Tags

prefetch

When to change

To allow prefetching when using large block sizes

Data Type

unsigned long

Units

bytes

Range

0 to MAX_ULONG

Default

1,048,576 (1 MiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfetch_max_distance

+

Limits the maximum number of bytes to prefetch per stream.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_max_distance

Notes

Tags

prefetch

When to change

Consider increasing for read workloads that use large blocks and exhibit high prefetch hit ratios

Data Type

uint

Units

bytes

Range

0 to UINT_MAX

Default

8,388,608

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

zfetch_max_streams

+

Maximum number of prefetch streams per file.

+

For version v0.7.0 and later, when prefetching small files the number of prefetch streams is automatically reduced to prevent the streams from overlapping.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_max_streams

Notes

Tags

prefetch

When to change

If the workload benefits from prefetching and +has more than zfetch_max_streams +concurrent reader threads

Data Type

uint

Units

streams

Range

1 to MAX_UINT

Default

8

Change

Dynamic

Versions Affected

all

+
+
+

zfetch_min_sec_reap

+

Prefetch streams that have not been accessed in zfetch_min_sec_reap seconds are automatically stopped.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfetch_min_sec_reap

Notes

Tags

prefetch

When to change

To test prefetch efficiency

Data Type

uint

Units

seconds

Range

0 to MAX_UINT

Default

2

Change

Dynamic

Versions Affected

all

+
+
+

zfs_arc_dnode_limit_percent

+

Percentage of ARC metadata space that can be used for dnodes.

+

The value calculated for zfs_arc_dnode_limit_percent can be +overridden by zfs_arc_dnode_limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_dnode_limit_percent

Notes

Tags

ARC

When to change

Consider increasing if arc_prune +is using excessive system time and +/proc/spl/kstat/zfs/arcstats +shows arc_dnode_size is near or +over arc_dnode_limit

Data Type

int

Units

percent of arc_meta_limit

Range

0 to 100

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_dnode_limit

+

When the number of bytes consumed by dnodes in the ARC exceeds +zfs_arc_dnode_limit bytes, demand for new metadata can take from the +space consumed by dnodes.

+

The default value of 0 indicates that zfs_arc_dnode_limit_percent of the ARC meta buffers may be used for dnodes.

+

zfs_arc_dnode_limit is similar to +zfs_arc_meta_prune which serves a similar +purpose for metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_dnode_limit

Notes

Tags

ARC

When to change

Consider increasing if arc_prune is using +excessive system time and +/proc/spl/kstat/zfs/arcstats shows +arc_dnode_size is near or over +arc_dnode_limit

Data Type

uint64

Units

bytes

Range

0 to MAX_UINT64

Default

0 (uses zfs_arc_dnode_limit_percent)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_dnode_reduce_percent

+

Percentage of ARC dnodes to try to evict in response to demand for +non-metadata when the number of bytes consumed by dnodes exceeds +zfs_arc_dnode_limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_dnode_reduce_percent

Notes

Tags

ARC

When to change

Testing dnode cache efficiency

Data Type

uint64

Units

percent of the dnode space used above zfs_arc_dnode_limit

Range

0 to 100

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_average_blocksize

+

The ARC’s buffer hash table is sized based on the assumption of an +average block size of zfs_arc_average_blocksize. The default of 8 +KiB uses approximately 1 MiB of hash table per 1 GiB of physical memory +with 8-byte pointers.
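As a rough check of the sizing claim above: 1 GiB divided by the default 8 KiB average block size gives 131,072 hash table entries, and 131,072 × 8-byte pointers ≈ 1 MiB of hash table per 1 GiB of physical memory.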

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_average_blocksize

Notes

Tags

ARC, memory

When to change

For workloads where the known average +blocksize is larger, increasing +zfs_arc_average_blocksize can +reduce memory usage

Data Type

int

Units

bytes

Range

512 to 16,777,216

Default

8,192

Change

Prior to zfs module load

Versions Affected

all

+
+
+

zfs_arc_evict_batch_limit

+

Number of ARC headers to evict per sublist before proceeding to another sublist. This batch-style operation prevents entire sublists from being evicted at once but comes at a cost of additional unlocking and locking.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_evict_batch_limit

Notes

Tags

ARC

When to change

Testing ARC multilist features

Data Type

int

Units

count of ARC headers

Range

1 to INT_MAX

Default

10

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_grow_retry

+

When the ARC is shrunk due to memory demand, do not retry growing the +ARC for zfs_arc_grow_retry seconds. This operates as a damper to +prevent oscillating grow/shrink cycles when there is memory pressure.

+

If zfs_arc_grow_retry = 0, the internal default of 5 seconds is +used.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_grow_retry

Notes

Tags

ARC, memory

When to change

TBD

Data Type

int

Units

seconds

Range

1 to MAX_INT

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_lotsfree_percent

+

Throttle ARC memory consumption, effectively throttling I/O, when free +system memory drops below this percentage of total system memory. +Setting zfs_arc_lotsfree_percent to 0 disables the throttle.

+

The memory_throttle_count counter in /proc/spl/kstat/zfs/arcstats can indicate throttle activity.
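For example, the counter can be inspected directly:

grep memory_throttle_count /proc/spl/kstat/zfs/arcstats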

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_lotsfree_percent

Notes

Tags

ARC, memory

When to change

TBD

Data Type

int

Units

percent

Range

0 to 100

Default

10

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_max

+

Maximum size of ARC in bytes.

+

If set to 0 then the maximum size of ARC +is determined by the amount of system memory installed:

+
    +
  • Linux: 1/2 of system memory

  • +
  • FreeBSD: the larger of all_system_memory - 1GB and 5/8 × all_system_memory

  • +
+

zfs_arc_max can be changed dynamically with some caveats. It cannot +be set back to 0 while running and reducing it below the current ARC +size will not cause the ARC to shrink without memory pressure to induce +shrinking.
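As an illustrative sketch (the 8 GiB value is an arbitrary placeholder, not a recommendation), the limit can be raised at runtime and the result verified against c_max:

echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max
grep c_max /proc/spl/kstat/zfs/arcstats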

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_max

Notes

Tags

ARC, memory

When to change

Reduce if ARC competes too much with other +applications, increase if ZFS is the primary +application and can use more RAM

Data Type

uint64

Units

bytes

Range

67,108,864 to RAM size in bytes

Default

0 (see description above, OS-dependent)

Change

Dynamic (see description above)

Verification

c column in arcstats.py or +/proc/spl/kstat/zfs/arcstats entry +c_max

Versions Affected

all

+
+
+

zfs_arc_meta_adjust_restarts

+

The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_adjust_restarts

Notes

Tags

ARC

When to change

Testing ARC metadata adjustment feature

Data Type

int

Units

restarts

Range

0 to INT_MAX

Default

4,096

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_meta_limit

+

Sets the maximum allowed size of metadata buffers in the ARC. When zfs_arc_meta_limit is reached, metadata buffers are reclaimed, even if the overall c_max has not been reached.

+

In version v0.7.0, with a default value = 0, +zfs_arc_meta_limit_percent is used to set arc_meta_limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_limit

Notes

Tags

ARC

When to change

For workloads where the metadata to data ratio +in the ARC can be changed to improve ARC hit +rates

Data Type

uint64

Units

bytes

Range

0 to c_max

Default

0

Change

Dynamic, except that it cannot be set back to +0 for a specific percent of the ARC; it must +be set to an explicit value

Verification

/proc/spl/kstat/zfs/arcstats entry +arc_meta_limit

Versions Affected

all

+
+
+

zfs_arc_meta_limit_percent

+

Sets the limit to ARC metadata, arc_meta_limit, as a percentage of +the maximum size target of the ARC, c_max

+

Prior to version v0.7.0, the +zfs_arc_meta_limit was used to set the limit +as a fixed size. zfs_arc_meta_limit_percent provides a more +convenient interface for setting the limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_limit_percent

Notes

Tags

ARC

When to change

For workloads where the metadata to +data ratio in the ARC can be changed +to improve ARC hit rates

Data Type

uint64

Units

percent of c_max

Range

0 to 100

Default

75

Change

Dynamic

Verification

/proc/spl/kstat/zfs/arcstats entry +arc_meta_limit

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_meta_min

+

The minimum allowed size in bytes that metadata buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to metadata.

+

When evicting data from the ARC, if the metadata_size is less than +arc_meta_min then data is evicted instead of metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_min

Notes

Tags

ARC

When to change

Data Type

uint64

Units

bytes

Range

16,777,216 to c_max

Default

0 (use internal default 16 MiB)

Change

Dynamic

Verification

/proc/spl/kstat/zfs/arcstats entry arc_meta_min

Versions Affected

all

+
+
+

zfs_arc_meta_prune

+

zfs_arc_meta_prune sets the number of dentries and znodes to be scanned looking for entries which can be dropped. This provides a mechanism to ensure the ARC can honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. Pruning may be required when the ARC size drops to arc_meta_limit because dentries and znodes can pin buffers in the ARC. Increasing this value will cause the dentry and znode caches to be pruned more aggressively and the arc_prune thread to become more active. Setting zfs_arc_meta_prune to 0 will disable pruning.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_prune

Notes

Tags

ARC

When to change

TBD

Data Type

uint64

Units

entries

Range

0 to INT_MAX

Default

10,000

Change

Dynamic

Verification

Prune activity is counted by the +/proc/spl/kstat/zfs/arcstats entry +arc_prune

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_meta_strategy

+

Defines the strategy for ARC metadata eviction (meta reclaim strategy). +A value of 0 (META_ONLY) will evict only the ARC metadata. A value of 1 +(BALANCED) indicates that additional data may be evicted if required in +order to evict the requested amount of metadata.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_meta_strategy

Notes

Tags

ARC

When to change

Testing ARC metadata eviction

Data Type

int

Units

enum

Range

0=evict metadata only, 1=also evict data +buffers if they can free metadata buffers +for eviction

Default

1 (BALANCED)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_min

+

Minimum ARC size limit. When the ARC is asked to shrink, it will stop +shrinking at c_min as tuned by zfs_arc_min.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min

Notes

Tags

ARC

When to change

If the primary focus of the system is ZFS, then +increasing can ensure the ARC gets a minimum +amount of RAM

Data Type

uint64

Units

bytes

Range

33,554,432 to c_max

Default

For kernel: greater of 33,554,432 (32 MiB) and +memory size / 32. For user-land: greater of +33,554,432 (32 MiB) and c_max / 2.

Change

Dynamic

Verification

/proc/spl/kstat/zfs/arcstats entry +c_min

Versions Affected

all

+
+
+

zfs_arc_min_prefetch_ms

+

Minimum time prefetched blocks are locked in the ARC.

+

A value of 0 represents the default of 1 second. However, once changed, +dynamically setting to 0 will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min_prefetch_ms

Notes

Tags

ARC, prefetch

When to change

TBD

Data Type

int

Units

milliseconds

Range

1 to INT_MAX

Default

0 (use internal default of 1000 ms)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_arc_min_prescient_prefetch_ms

+

Minimum time “prescient prefetched” blocks are locked in the ARC. These +blocks are meant to be prefetched fairly aggressively ahead of the code +that may use them.

+

A value of 0 represents the default of 6 seconds. However, once changed, +dynamically setting to 0 will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min_prescient_prefetch_ms

Notes

Tags

ARC, +prefetch

When to change

TBD

Data Type

int

Units

milliseconds

Range

1 to INT_MAX

Default

0 (use internal default of 6000 +ms)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_multilist_num_sublists

+

To allow more fine-grained locking, each ARC state contains a series of lists (sublists) for both data and metadata objects. Locking is performed at the sublist level. This parameter controls the number of sublists per ARC state, and also applies to other uses of the multilist data structure.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multilist_num_sublists

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

lists

Range

1 to INT_MAX

Default

0 (internal value is greater of number +of online CPUs or 4)

Change

Prior to zfs module load

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_overflow_shift

+

The ARC size is considered to be overflowing if it exceeds the current ARC target size (/proc/spl/kstat/zfs/arcstats entry c) by a threshold determined by zfs_arc_overflow_shift. The threshold is calculated as a fraction of the ARC target size c: c >> zfs_arc_overflow_shift

+

The default value of 8 causes the ARC to be considered to be overflowing if it exceeds the target size by 1/256th (approximately 0.4%) of the target size.

+

When the ARC is overflowing, new buffer allocations are stalled until +the reclaim thread catches up and the overflow condition no longer +exists.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_overflow_shift

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

8

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_arc_p_min_shift

+

arc_p_min_shift is used to shift the ARC target size (/proc/spl/kstat/zfs/arcstats entry c) when calculating both the minimum and maximum most recently used (MRU) target size (/proc/spl/kstat/zfs/arcstats entry p)

+

A value of 0 represents the default setting of arc_p_min_shift = 4. +However, once changed, dynamically setting zfs_arc_p_min_shift to 0 +will not return to the default.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_p_min_shift

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

0 (internal default = 4)

Change

Dynamic

Verification

Observe changes to +/proc/spl/kstat/zfs/arcstats entry p

Versions Affected

all

+
+
+

zfs_arc_p_dampener_disable

+

When data is being added to the ghost lists, the MRU target size is +adjusted. The amount of adjustment is based on the ratio of the MRU/MFU +sizes. When enabled, the ratio is capped to 10, avoiding large +adjustments.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_p_dampener_disable

Notes

Tags

ARC

When to change

Testing ARC ghost list behaviour

Data Type

boolean

Range

0=avoid large adjustments, 1=permit +large adjustments

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_arc_shrink_shift

+

arc_shrink_shift is used to adjust the ARC target sizes when a large reduction is required. The current ARC target size, c, and MRU size p can be reduced by the current size >> arc_shrink_shift. For the default value of 7, this reduces the target by approximately 0.8%.

+

A value of 0 represents the default setting of arc_shrink_shift = 7. +However, once changed, dynamically setting arc_shrink_shift to 0 will +not return to the default.
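For example, during a memory shortfall the rate of ARC shrinkage can be increased by lowering the shift at runtime; the value 5 below is illustrative only, and assumes the parameter is exposed under /sys/module/zfs/parameters as on typical Linux systems:

echo 5 > /sys/module/zfs/parameters/zfs_arc_shrink_shift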

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_shrink_shift

Notes

Tags

ARC, memory

When to change

During memory shortfall, reducing +zfs_arc_shrink_shift increases the rate +of ARC shrinkage

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

0 (arc_shrink_shift = 7)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_arc_pc_percent

+

zfs_arc_pc_percent allows ZFS arc to play more nicely with the +kernel’s LRU pagecache. It can guarantee that the arc size won’t +collapse under scanning pressure on the pagecache, yet still allows arc +to be reclaimed down to zfs_arc_min if necessary. This value is +specified as percent of pagecache size (as measured by +NR_FILE_PAGES) where that percent may exceed 100. This only operates +during memory pressure/reclaim.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_pc_percent

Notes

Tags

ARC, memory

When to change

When using file systems under memory +shortfall, if the page scanner causes the ARC +to shrink too fast, then adjusting +zfs_arc_pc_percent can reduce the shrink +rate

Data Type

int

Units

percent

Range

0 to 100

Default

0 (disabled)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_arc_sys_free

+

zfs_arc_sys_free is the target number of bytes the ARC should leave +as free memory on the system. Defaults to the larger of 1/64 of physical +memory or 512K. Setting this option to a non-zero value will override +the default.

+

A value of 0 represents the default setting of larger of 1/64 of +physical memory or 512 KiB. However, once changed, dynamically setting +zfs_arc_sys_free to 0 will not return to the default.
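For example, to ask the ARC to leave roughly 2 GiB of free memory (an illustrative margin) on a running system:

echo $(( 2 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_arc_sys_free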

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_sys_free

Notes

Tags

ARC, memory

When to change

Change if more free memory is desired as a +margin against memory demand by applications

Data Type

ulong

Units

bytes

Range

0 to ULONG_MAX

Default

0 (default to larger of 1/64 of physical memory +or 512 KiB)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_autoimport_disable

+

Disable reading zpool.cache file (see +spa_config_path) when loading the zfs module.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_autoimport_disable

Notes

Tags

import

When to change

Leave as default so that zfs behaves as +other Linux kernel modules

Data Type

boolean

Range

0=read zpool.cache at module load, +1=do not read zpool.cache at module +load

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_commit_timeout_pct

+

zfs_commit_timeout_pct controls the amount of time that a log (ZIL) +write block (lwb) remains “open” when it isn’t “full” and it has a +thread waiting to commit to stable storage. The timeout is scaled based +on a percentage of the last lwb latency to avoid significantly impacting +the latency of each individual intent log transaction (itx).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_commit_timeout_pct

Notes

Tags

ZIL

When to change

TBD

Data Type

int

Units

percent

Range

1 to 100

Default

5

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_dbgmsg_enable

+
+
Internally ZFS keeps a small log to facilitate debugging. The contents +of the log are in the /proc/spl/kstat/zfs/dbgmsg file.
+
Writing 0 to /proc/spl/kstat/zfs/dbgmsg file clears the log.
+
+

See also zfs_dbgmsg_maxsize
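A brief example of enabling, viewing, and clearing the debug log using the interfaces described above (run as root):

echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable logging
cat /proc/spl/kstat/zfs/dbgmsg                          # view the log
echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the log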

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dbgmsg_enable

Notes

Tags

debug

When to change

To view ZFS internal debug log

Data Type

boolean

Range

0=do not log debug messages, 1=log debug messages

Default

0 (1 for debug builds)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_dbgmsg_maxsize

+

The /proc/spl/kstat/zfs/dbgmsg file size limit is set by +zfs_dbgmsg_maxsize.

+

See also zfs_dbgmsg_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dbgmsg_maxsize

Notes

Tags

debug

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

4 MiB

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_dbuf_state_index

+

The zfs_dbuf_state_index feature is currently unused. It is normally +used for controlling values in the /proc/spl/kstat/zfs/dbufs file.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dbuf_state_index

Notes

Tags

debug

When to change

Do not change

Data Type

int

Units

TBD

Range

TBD

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_deadman_enabled

+

When a pool sync operation takes longer than zfs_deadman_synctime_ms +milliseconds, a “slow spa_sync” message is logged to the debug log (see +zfs_dbgmsg_enable). If zfs_deadman_enabled +is set to 1, then all pending IO operations are also checked and if any +haven’t completed within zfs_deadman_synctime_ms milliseconds, a “SLOW +IO” message is logged to the debug log and a “deadman” system event (see +zpool events command) with the details of the hung IO is posted.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_enabled

Notes

Tags

debug

When to change

To disable logging of slow I/O

Data Type

boolean

Range

0=do not log slow I/O, 1=log slow I/O

Default

1

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_deadman_checktime_ms

+

Once a pool sync operation has taken longer than +zfs_deadman_synctime_ms milliseconds, +continue to check for slow operations every +zfs_deadman_checktime_ms milliseconds.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_checktime_ms

Notes

Tags

debug

When to change

When debugging slow I/O

Data Type

ulong

Units

milliseconds

Range

1 to ULONG_MAX

Default

60,000 (1 minute)

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_deadman_ziotime_ms

+

When an individual I/O takes longer than zfs_deadman_ziotime_ms +milliseconds, then the operation is considered to be “hung”. If +zfs_deadman_enabled is set then the deadman +behaviour is invoked as described by the +zfs_deadman_failmode option.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_ziotime_ms

Notes

Tags

debug

When to change

Testing ABD features

Data Type

ulong

Units

milliseconds

Range

1 to ULONG_MAX

Default

300,000 (5 minutes)

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_deadman_synctime_ms

+

The I/O deadman timer expiration time has two meanings

+
    +
  1. determines when the spa_deadman() logic should fire, indicating that the txg sync has not completed in a timely manner
  2. determines if an individual I/O is considered “hung”
+

In version v0.8.0, any I/O that has not completed in +zfs_deadman_synctime_ms is considered “hung” resulting in one of +three behaviors controlled by the +zfs_deadman_failmode parameter.

+

zfs_deadman_synctime_ms takes effect if +zfs_deadman_enabled = 1.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_synctime_ms

Notes

Tags

debug

When to change

When debugging slow I/O

Data Type

ulong

Units

milliseconds

Range

1 to ULONG_MAX

Default

600,000 (10 minutes)

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_deadman_failmode

+

zfs_deadman_failmode controls the behavior of the I/O deadman timer when +it detects a “hung” I/O. Valid values are:

+
    +
  • wait - Wait for the “hung” I/O (default)
  • continue - Attempt to recover from a “hung” I/O
  • panic - Panic the system
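For example, the failmode can be changed at runtime via the module parameter interface; the choice of continue below is purely illustrative:

echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
cat /sys/module/zfs/parameters/zfs_deadman_failmode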
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_deadman_failmode

Notes

Tags

debug

When to change

In some cluster cases, panic can be appropriate

Data Type

string

Range

wait, continue, or panic

Default

wait

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_dedup_prefetch

+

ZFS can prefetch deduplication table (DDT) entries. +zfs_dedup_prefetch allows DDT prefetches to be enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dedup_prefetch

Notes

Tags

prefetch, memory

When to change

For systems with limited RAM using the dedup +feature, disabling deduplication table +prefetch can reduce memory pressure

Data Type

boolean

Range

0=do not prefetch, 1=prefetch dedup table +entries

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_delete_blocks

+

zfs_delete_blocks defines a large file for the purposes of delete. Files containing more than zfs_delete_blocks blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value reduces the time spent in an unlink(2) system call at the expense of a longer delay before the freed space is available.

+

The zfs_delete_blocks value is specified in blocks, not bytes. The +size of blocks can vary and is ultimately limited by the filesystem’s +recordsize property.
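As a sketch, lowering the threshold (here to an illustrative 4,096 blocks) makes more deletions asynchronous, shortening unlink(2) time at the cost of delaying when the freed space becomes available:

echo 4096 > /sys/module/zfs/parameters/zfs_delete_blocks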

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delete_blocks

Notes

Tags

filesystem, +delete

When to change

If applications delete large files and blocking +on unlink(2) is not desired

Data Type

ulong

Units

blocks

Range

1 to ULONG_MAX

Default

20,480

Change

Dynamic

Versions Affected

all

+
+
+

zfs_delay_min_dirty_percent

+

The ZFS write throttle begins to delay each transaction when the amount +of dirty data reaches the threshold zfs_delay_min_dirty_percent of +zfs_dirty_data_max. This value should be >= +zfs_vdev_async_write_active_max_dirty_percent.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delay_min_dirty_percent

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

int

Units

percent

Range

0 to 100

Default

60

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_delay_scale

+

zfs_delay_scale controls how quickly the ZFS write throttle +transaction delay approaches infinity. Larger values cause longer delays +for a given amount of dirty data.

+

For the smoothest delay, this value should be about 1 billion divided by +the maximum number of write operations per second the pool can sustain. +The throttle will smoothly handle between 10x and 1/10th +zfs_delay_scale.

+

Note: zfs_delay_scale * +zfs_dirty_data_max must be < 2^64.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delay_scale

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

ulong

Units

scalar (nanoseconds)

Range

0 to ULONG_MAX

Default

500,000

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max

+

zfs_dirty_data_max is the ZFS write throttle dirty space limit. Once +this limit is exceeded, new writes are delayed until space is freed by +writes being committed to the pool.

+

zfs_dirty_data_max takes precedence over +zfs_dirty_data_max_percent.
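For example, to cap dirty data at 4 GiB (an illustrative value) dynamically, or persistently via a conventional modprobe options file so it applies the next time the zfs module is loaded:

echo $(( 4 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_dirty_data_max
echo "options zfs zfs_dirty_data_max=4294967296" >> /etc/modprobe.d/zfs.conf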

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

ulong

Units

bytes

Range

1 to zfs_dirty_data_max_max

Default

10% of physical RAM

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max_percent

+

zfs_dirty_data_max_percent is an alternative method of specifying +zfs_dirty_data_max, the ZFS write throttle +dirty space limit. Once this limit is exceeded, new writes are delayed +until space is freed by writes being committed to the pool.

+

zfs_dirty_data_max takes precedence over +zfs_dirty_data_max_percent.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max_percent

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

int

Units

percent

Range

1 to 100

Default

10% of physical RAM

Change

Prior to zfs module load or a memory +hot plug event

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max_max

+

zfs_dirty_data_max_max is the maximum allowable value of +zfs_dirty_data_max.

+

zfs_dirty_data_max_max takes precedence over +zfs_dirty_data_max_max_percent.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max_max

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

ulong

Units

bytes

Range

1 to physical RAM size

Default

physical_ram/4

+

since v0.7: min(physical_ram/4, 4GiB)

+

since v2.0 for 32-bit systems: min(physical_ram/4, 1GiB)

+

Change

Prior to zfs module load

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_max_max_percent

+

zfs_dirty_data_max_max_percent is an alternative to zfs_dirty_data_max_max for setting the maximum allowable value of zfs_dirty_data_max

+

zfs_dirty_data_max_max takes precedence +over zfs_dirty_data_max_max_percent

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_max_max_percent

Notes

Tags

write_throttle

When to change

See section “ZFS TRANSACTION DELAY”

Data Type

int

Units

percent

Range

1 to 100

Default

25% of physical RAM

Change

Prior to zfs module load

Versions Affected

v0.6.4 and later

+
+
+

zfs_dirty_data_sync

+

When there is at least zfs_dirty_data_sync dirty data, a transaction +group sync is started. This allows a transaction group sync to occur +more frequently than the transaction group timeout interval (see +zfs_txg_timeout) when there is dirty data to be +written.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_sync

Notes

Tags

write_throttle, +ZIO_scheduler

When to change

TBD

Data Type

ulong

Units

bytes

Range

1 to ULONG_MAX

Default

67,108,864 (64 MiB)

Change

Dynamic

Versions Affected

v0.6.4 through v0.8.x, deprecation planned +for v2

+
+
+

zfs_dirty_data_sync_percent

+

When there is at least zfs_dirty_data_sync_percent of +zfs_dirty_data_max dirty data, a transaction +group sync is started. This allows a transaction group sync to occur +more frequently than the transaction group timeout interval (see +zfs_txg_timeout) when there is dirty data to be +written.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dirty_data_sync_percent

Notes

Tags

write_throttle, +ZIO_scheduler

When to change

TBD

Data Type

int

Units

percent

Range

1 to zfs_vdev_async_write_active_min_dirty_percent

Default

20

Change

Dynamic

Versions Affected

planned for v2, deprecates zfs_dirty_data_sync

+
+
+

zfs_fletcher_4_impl

+

Fletcher-4 is the default checksum algorithm for metadata and data. When +the zfs kernel module is loaded, a set of microbenchmarks are run to +determine the fastest algorithm for the current hardware. The +zfs_fletcher_4_impl parameter allows a specific implementation to be +specified other than the default (fastest). Selectors other than +fastest and scalar require instruction set extensions to be +available and will only appear if ZFS detects their presence. The +scalar implementation works on all processors.

+

The results of the microbenchmark are visible in the /proc/spl/kstat/zfs/fletcher_4_bench file. Larger numbers indicate better performance. Since ZFS is processor endian-independent, the microbenchmark is run against both big-endian and little-endian transformations.
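For example, the benchmark results and current selection can be inspected, and a specific implementation forced; the sse2 choice below assumes the CPU actually offers that instruction set:

cat /proc/spl/kstat/zfs/fletcher_4_bench             # microbenchmark results
cat /sys/module/zfs/parameters/zfs_fletcher_4_impl   # available and current selections
echo sse2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl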

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_fletcher_4_impl

Notes

Tags

CPU, checksum

When to change

Testing Fletcher-4 algorithms

Data Type

string

Range

fastest, scalar, superscalar, +superscalar4, sse2, ssse3, avx2, +avx512f, or aarch64_neon depending on +hardware support

Default

fastest

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_free_bpobj_enabled

+

The processing of the free_bpobj object can be enabled by +zfs_free_bpobj_enabled

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_bpobj_enabled

Notes

Tags

delete

When to change

If there’s a problem with processing +free_bpobj (e.g. i/o error or bug)

Data Type

boolean

Range

0=do not process free_bpobj objects, +1=process free_bpobj objects

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_free_max_blocks

+

zfs_free_max_blocks sets the maximum number of blocks to be freed in +a single transaction group (txg). For workloads that delete (free) large +numbers of blocks in a short period of time, the processing of the frees +can negatively impact other operations, including txg commits. +zfs_free_max_blocks acts as a limit to reduce the impact.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_max_blocks

Notes

Tags

filesystem, +delete

When to change

For workloads that delete large files, +zfs_free_max_blocks can be adjusted to +meet performance requirements while reducing +the impacts of deletion

Data Type

ulong

Units

blocks

Range

1 to ULONG_MAX

Default

100,000

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_async_read_max_active

+

Maximum asynchronous read I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_read_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

3

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_read_min_active

+

Minimum asynchronous read I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_read_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to (zfs_vdev_async_read_max_active - 1)

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_active_max_dirty_percent

+

When the amount of dirty data exceeds the threshold +zfs_vdev_async_write_active_max_dirty_percent of +zfs_dirty_data_max dirty data, then +zfs_vdev_async_write_max_active +is used to limit active async writes. If the dirty data is between +zfs_vdev_async_write_active_min_dirty_percent +and zfs_vdev_async_write_active_max_dirty_percent, the active I/O +limit is linearly interpolated between +zfs_vdev_async_write_min_active +and +zfs_vdev_async_write_max_active

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_active_max_dirty_percent

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

int

Units

percent of zfs_dirty_data_max

Range

0 to 100

Default

60

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_active_min_dirty_percent

+

If the amount of dirty data is between +zfs_vdev_async_write_active_min_dirty_percent and +zfs_vdev_async_write_active_max_dirty_percent +of zfs_dirty_data_max, the active I/O limit is +linearly interpolated between +zfs_vdev_async_write_min_active +and +zfs_vdev_async_write_max_active

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_active_min_dirty_percent

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

int

Units

percent of zfs_dirty_data_max

Range

0 to (zfs_vdev_async_write_active_max_dirty_percent - 1)

Default

30

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_max_active

+

zfs_vdev_async_write_max_active sets the maximum asynchronous write +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_max_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_async_write_min_active

+

zfs_vdev_async_write_min_active sets the minimum asynchronous write +I/Os active to each device.

+

Lower values are associated with better latency on rotational media but +poorer resilver performance. The default value of 2 was chosen as a +compromise. A value of 3 has been shown to improve resilver performance +further at a cost of further increasing latency.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_async_write_min_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_async_write_max_active

Default

1 for v0.6.x, 2 for v0.7.0 and +later

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_max_active

+

The maximum number of I/Os active to each device. Ideally, +zfs_vdev_max_active >= the sum of each queue’s max_active.

+

Once queued to the device, the ZFS I/O scheduler is no longer able to prioritize I/O operations. The underlying device drivers have their own scheduler and queue depth limits. Values larger than the device’s maximum queue depth can have the effect of increased latency as the I/Os are queued in the intervening device driver layers.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

sum of each queue’s min_active to UINT32_MAX

Default

1,000

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_scrub_max_active

+

zfs_vdev_scrub_max_active sets the maximum scrub or scan read I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_scrub_max_active

Notes

Tags

vdev, +ZIO_scheduler, +scrub, +resilver

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

2

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_scrub_min_active

+

zfs_vdev_scrub_min_active sets the minimum scrub or scan read I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_scrub_min_active

Notes

Tags

vdev, +ZIO_scheduler, +scrub, +resilver

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_scrub_max_active

Default

1

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_read_max_active

+

Maximum synchronous read I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_read_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_read_min_active

+

zfs_vdev_sync_read_min_active sets the minimum synchronous read I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_read_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to +zfs_vdev_sync_read_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_write_max_active

+

zfs_vdev_sync_write_max_active sets the maximum synchronous write +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_write_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_sync_write_min_active

+

zfs_vdev_sync_write_min_active sets the minimum synchronous write +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_sync_write_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to +zfs_vdev_sync_write_max_active

Default

10

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_vdev_queue_depth_pct

+

Maximum number of queued allocations per top-level vdev expressed as a percentage of zfs_vdev_async_write_max_active. This allows the system to detect devices that are more capable of handling allocations and to allocate more blocks to those devices. It also allows for dynamic allocation distribution when devices are imbalanced, as fuller devices tend to be slower than empty devices. Once the queue depth reaches (zfs_vdev_queue_depth_pct * zfs_vdev_async_write_max_active / 100), the allocator stops allocating blocks on that top-level device and switches to the next.

+

See also zio_dva_throttle_enabled

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_queue_depth_pct

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to UINT32_MAX

Default

1,000

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_disable_dup_eviction

+

Disable duplicate buffer eviction from ARC.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_disable_dup_eviction

Notes

Tags

ARC, dedup

When to change

TBD

Data Type

boolean

Range

0=duplicate buffers can be evicted, 1=do +not evict duplicate buffers

Default

0

Change

Dynamic

Versions Affected

v0.6.5, deprecated in v0.7.0

+
+
+

zfs_expire_snapshot

+

Snapshots of filesystems are normally automounted under the filesystem’s +.zfs/snapshot subdirectory. When not in use, snapshots are unmounted +after zfs_expire_snapshot seconds.
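For example, automatic unmounting of idle snapshots can be disabled entirely (0) or the timeout extended; both values below are illustrative:

echo 0 > /sys/module/zfs/parameters/zfs_expire_snapshot     # never unmount automatically
echo 900 > /sys/module/zfs/parameters/zfs_expire_snapshot   # unmount after 15 minutes idle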

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_expire_snapshot

Notes

Tags

filesystem, +snapshot

When to change

TBD

Data Type

int

Units

seconds

Range

0 disables automatic unmounting, maximum time +is INT_MAX

Default

300

Change

Dynamic

Versions Affected

v0.6.1 and later

+
+
+

zfs_admin_snapshot

+

Allow the creation, removal, or renaming of entries in the +.zfs/snapshot subdirectory to cause the creation, destruction, or +renaming of snapshots. When enabled this functionality works both +locally and over NFS exports which have the “no_root_squash” option set.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_admin_snapshot

Notes

Tags

filesystem, +snapshot

When to change

TBD

Data Type

boolean

Range

0=do not allow snapshot manipulation via the +filesystem, 1=allow snapshot manipulation via +the filesystem

Default

1

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_flags

+

Set additional debugging flags (see +zfs_dbgmsg_enable)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

flag value

symbolic name

description

0x1

ZFS_DEBUG_DPRINTF

Enable dprintf entries in +the debug log

0x2

ZFS_DEBUG_DBUF_VERIFY

Enable extra dnode +verifications

0x4

ZFS_DEBUG_DNODE_VERIFY

Enable extra dnode +verifications

0x8

ZFS_DEBUG_SNAPNAMES

Enable snapshot name +verification

0x10

ZFS_DEBUG_MODIFY

Check for illegally +modified ARC buffers

0x20

ZFS_DEBUG_SPA

Enable spa_dbgmsg entries +in the debug log

0x40

ZFS_DEBUG_ZIO_FREE

Enable verification of +block frees

0x80

ZFS_DEBUG_HISTOGRAM_VERIFY

Enable extra spacemap +histogram verifications

0x100

ZFS_DEBUG_METASLAB_VERIFY

Verify space accounting +on disk matches in-core +range_trees

0x200

ZFS_DEBUG_SET_ERROR

Enable SET_ERROR and +dprintf entries in the +debug log
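Flag values are bitwise OR-ed together. A minimal sketch that combines ZFS_DEBUG_DPRINTF (0x1) with ZFS_DEBUG_SET_ERROR (0x200) from the table above:

echo $(( 0x1 | 0x200 )) > /sys/module/zfs/parameters/zfs_flags   # writes 513
cat /sys/module/zfs/parameters/zfs_flags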

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_flags

Notes

Tags

debug

When to change

When debugging ZFS

Data Type

int

Default

0 no debug flags set, for debug builds: all +except ZFS_DEBUG_DPRINTF and ZFS_DEBUG_SPA

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_free_leak_on_eio

+

If destroy encounters an I/O error (EIO) while reading metadata (eg +indirect blocks), space referenced by the missing metadata cannot be +freed. Normally, this causes the background destroy to become “stalled”, +as the destroy is unable to make forward progress. While in this stalled +state, all remaining space to free from the error-encountering +filesystem is temporarily leaked. Set zfs_free_leak_on_eio = 1 to +ignore the EIO, permanently leak the space from indirect blocks that can +not be read, and continue to free everything else that it can.

+

The default, stalling behavior is useful if the storage partially fails +(eg some but not all I/Os fail), and then later recovers. In this case, +we will be able to continue pool operations while it is partially +failed, and when it recovers, we can continue to free the space, with no +leaks. However, note that this case is rare.

+

Typically pools either:

+
    +
  1. fail completely, but perhaps temporarily (eg a top-level vdev going offline), or
  2. have localized, permanent errors (eg a disk returns the wrong data due to a bit flip or firmware bug)
+

In case (1), the zfs_free_leak_on_eio setting does not matter because the pool will be suspended and the sync thread will not be able to make forward progress. In case (2), because the error is permanent, the best we can do is leak the minimum amount of space. Therefore, it is reasonable for zfs_free_leak_on_eio to be set, but by default the more conservative approach is taken, so that there is no possibility of leaking space in the “partial temporary” failure case.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_leak_on_eio

Notes

Tags

debug

When to change

When debugging I/O errors during destroy

Data Type

boolean

Range

0=normal behavior, 1=ignore error and +permanently leak space

Default

0

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zfs_free_min_time_ms

+

During a zfs destroy operation using feature@async_destroy a +minimum of zfs_free_min_time_ms time will be spent working on +freeing blocks per txg commit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_free_min_time_ms

Notes

Tags

delete

When to change

TBD

Data Type

int

Units

milliseconds

Range

1 to (zfs_txg_timeout * 1000)

Default

1,000

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_immediate_write_sz

+

If a pool does not have a log device, data blocks equal to or larger +than zfs_immediate_write_sz are treated as if the dataset being +written to had the property setting logbias=throughput

+

Terminology note: logbias=throughput writes the blocks in “indirect +mode” to the ZIL where the data is written to the pool and a pointer to +the data is written to the ZIL.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_immediate_write_sz

Notes

Tags

ZIL

When to change

TBD

Data Type

long

Units

bytes

Range

512 to 16,777,216 (valid block sizes)

Default

32,768 (32 KiB)

Change

Dynamic

Verification

Data blocks that exceed +zfs_immediate_write_sz or are written +as logbias=throughput increment the +zil_itx_indirect_count entry in +/proc/spl/kstat/zfs/zil

Versions Affected

all

+
+
+

zfs_max_recordsize

+

ZFS supports logical record (block) sizes from 512 bytes to 16 MiB. The +benefits of larger blocks, and thus larger average I/O sizes, can be +weighed against the cost of copy-on-write of large block to modify one +byte. Additionally, very large blocks can have a negative impact on both +I/O latency at the device level and the memory allocator. The +zfs_max_recordsize parameter limits the upper bound of the dataset +volblocksize and recordsize properties.

+

Larger blocks can be created by enabling the zpool large_blocks feature and raising zfs_max_recordsize. Pools with larger blocks can always be imported and used, regardless of the value of zfs_max_recordsize.

+

For 32-bit systems, zfs_max_recordsize also limits the size of +kernel virtual memory caches used in the ZFS I/O pipeline (zio_buf_* +and zio_data_buf_*).

+

See also the zpool large_blocks feature.
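A sketch of allowing 4 MiB records, assuming the pool’s large_blocks feature is already enabled; pool_name/dataset_name is a placeholder:

echo $(( 4 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_max_recordsize
zfs set recordsize=4M pool_name/dataset_name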

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_max_recordsize

Notes

Tags

filesystem, +memory, volume

When to change

To create datasets with larger volblocksize or +recordsize

Data Type

int

Units

bytes

Range

512 to 16,777,216 (valid block sizes)

Default

1,048,576

Change

Dynamic, set prior to creating volumes or +changing filesystem recordsize

Versions Affected

v0.6.5 and later

+
+
+

zfs_mdcomp_disable

+

zfs_mdcomp_disable allows metadata compression to be disabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_mdcomp_disable

Notes

Tags

CPU, metadata

When to change

When CPU cycles cost less than I/O

Data Type

boolean

Range

0=compress metadata, 1=do not compress metadata

Default

0

Change

Dynamic

Versions Affected

from v0.6.0 to v0.8.0

+
+
+

zfs_metaslab_fragmentation_threshold

+

Allow metaslabs to keep their active state as long as their +fragmentation percentage is less than or equal to this value. When +writing, an active metaslab whose fragmentation percentage exceeds +zfs_metaslab_fragmentation_threshold is avoided allowing metaslabs +with less fragmentation to be preferred.

+

Metaslab fragmentation is used to calculate the overall pool +fragmentation property value. However, individual metaslab +fragmentation levels are observable using the zdb with the -mm +option.

+

zfs_metaslab_fragmentation_threshold works at the metaslab level and +each top-level vdev has approximately +metaslabs_per_vdev metaslabs. See also +zfs_mg_fragmentation_threshold

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_metaslab_fragmentation_threshold

Notes

Tags

allocation, fragmentation, vdev

When to change

Testing metaslab allocation

Data Type

int

Units

percent

Range

1 to 100

Default

70

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_mg_fragmentation_threshold

+

Metaslab groups (top-level vdevs) are considered eligible for +allocations if their fragmentation percentage metric is less than or +equal to zfs_mg_fragmentation_threshold. If a metaslab group exceeds +this threshold then it will be skipped unless all metaslab groups within +the metaslab class have also crossed the +zfs_mg_fragmentation_threshold threshold.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_mg_fragmentation_threshold

Notes

Tags

allocation, fragmentation, vdev

When to change

Testing metaslab allocation

Data Type

int

Units

percent

Range

1 to 100

Default

85

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_mg_noalloc_threshold

+

Metaslab groups (top-level vdevs) with free space percentage greater +than zfs_mg_noalloc_threshold are eligible for new allocations. If a +metaslab group’s free space is less than or equal to the threshold, the +allocator avoids allocating to that group unless all groups in the pool +have reached the threshold. Once all metaslab groups have reached the +threshold, all metaslab groups are allowed to accept allocations. The +default value of 0 disables the feature and causes all metaslab groups +to be eligible for allocations.

+

This parameter allows one to deal with pools having heavily imbalanced +vdevs such as would be the case when a new vdev has been added. Setting +the threshold to a non-zero percentage will stop allocations from being +made to vdevs that aren’t filled to the specified percentage and allow +lesser filled vdevs to acquire more allocations than they otherwise +would under the older zfs_mg_alloc_failures facility.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_mg_noalloc_threshold

Notes

Tags

allocation, +fragmentation, +vdev

When to change

To force rebalancing as top-level vdevs +are added or expanded

Data Type

int

Units

percent

Range

0 to 100

Default

0 (disabled)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_history

+

The pool multihost multimodifier protection (MMP) subsystem can +record historical updates in the +/proc/spl/kstat/zfs/POOL_NAME/multihost file for debugging purposes. +The number of lines of history is determined by zfs_multihost_history.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_history

Notes

Tags

MMP, import

When to change

When testing multihost feature

Data Type

int

Units

lines

Range

0 to INT_MAX

Default

0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_interval

+

zfs_multihost_interval controls the frequency of multihost writes +performed by the pool multihost multimodifier protection (MMP) +subsystem. The multihost write period is (zfs_multihost_interval / +number of leaf-vdevs) milliseconds. Thus on average a multihost write +will be issued for each leaf vdev every zfs_multihost_interval +milliseconds. In practice, the observed period can vary with the I/O +load and this observed value is the delay which is stored in the +uberblock.

+

On import the multihost activity check waits a minimum amount of time +determined by (zfs_multihost_interval * +zfs_multihost_import_intervals) +with a lower bound of 1 second. The activity check time may be further +extended if the value of mmp delay found in the best uberblock indicates +actual multihost updates happened at longer intervals than +zfs_multihost_interval

+

Note: the multihost protection feature applies to storage devices that +can be shared between multiple systems.
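As a worked example under the defaults noted here and below, with zfs_multihost_interval = 1000 ms and zfs_multihost_import_intervals = 20 (since v0.8), the activity check on import waits at least 1000 * 20 = 20,000 ms, about 20 seconds. The interval can be inspected or changed at runtime; the 2000 ms value is illustrative:

cat /sys/module/zfs/parameters/zfs_multihost_interval
echo 2000 > /sys/module/zfs/parameters/zfs_multihost_interval   # halve the MMP write rate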

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_interval

Notes

Tags

MMP, import, +vdev

When to change

To optimize pool import time against +possibility of simultaneous import by +another system

Data Type

ulong

Units

milliseconds

Range

100 to ULONG_MAX

Default

1000

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_import_intervals

+

zfs_multihost_import_intervals controls the duration of the activity test on pool import for the multihost multimodifier protection (MMP) subsystem. The activity test can be expected to take a minimum time of (zfs_multihost_import_intervals * zfs_multihost_interval * random(25%)) milliseconds. The random period of up to 25% improves simultaneous import detection. For example, if two hosts are rebooted at the same time and automatically attempt to import the pool, then it is highly probable that one host will win.

+

Smaller values of zfs_multihost_import_intervals reduce the import time but increase the risk of failing to detect an active pool. The total activity check time is never allowed to drop below one second.

+

Note: the multihost protection feature applies to storage devices that +can be shared between multiple systems.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_import_intervals

Notes

Tags

MMP, import

When to change

TBD

Data Type

uint

Units

intervals

Range

1 to UINT_MAX

Default

20 since v0.8, previously 10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_multihost_fail_intervals

+

zfs_multihost_fail_intervals controls the behavior of the pool when +write failures are detected in the multihost multimodifier protection +(MMP) subsystem.

+

If zfs_multihost_fail_intervals = 0 then multihost write failures +are ignored. The write failures are reported to the ZFS event daemon +(zed) which can take action such as suspending the pool or offlining +a device.

+
+
If zfs_multihost_fail_intervals > 0 then sequential multihost +write failures will cause the pool to be suspended. This occurs when +(zfs_multihost_fail_intervals * +zfs_multihost_interval) milliseconds +have passed since the last successful multihost write.
+
This guarantees the activity test will see multihost writes if the +pool is attempted to be imported by another system.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_multihost_fail_intervals

Notes

Tags

MMP, import

When to change

TBD

Data Type

uint

Units

intervals

Range

0 to UINT_MAX

Default

10 since v0.8, previously 5

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_delays_per_second

+

The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. zfs_delays_per_second limits the +rate of delay events reported to zed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_delays_per_second

Notes

Tags

zed, delay

When to change

If processing delay events at a higher rate +is desired

Data Type

uint

Units

events per second

Range

0 to UINT_MAX

Default

20

Change

Dynamic

Versions Affected

v0.7.7 and later

+
+
+

zfs_checksums_per_second

+

The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. zfs_checksums_per_second limits +the rate of checksum events reported to zed.

+

Note: do not set this value lower than the SERD limit for checksum +in zed. By default, checksum_N = 10 and checksum_T = 10 minutes, +resulting in a practical lower limit of 1.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_checksums_per_second

Notes

Tags

zed, checksum

When to change

If processing checksum error events at a +higher rate is desired

Data Type

uint

Units

events per second

Range

0 to UINT_MAX

Default

20

Change

Dynamic

Versions Affected

v0.7.7 and later

+
+
+

zfs_no_scrub_io

+

When zfs_no_scrub_io = 1 scrubs do not actually scrub data and +simply doing a metadata crawl of the pool instead.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_no_scrub_io

Notes

Tags

scrub

When to change

Testing scrub feature

Data Type

boolean

Range

0=perform scrub I/O, 1=do not perform scrub I/O

Default

0

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_no_scrub_prefetch

+

When zfs_no_scrub_prefetch = 1, prefetch is disabled for scrub I/Os.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_no_scrub_prefetch

Notes

Tags

prefetch, scrub

When to change

Testing scrub feature

Data Type

boolean

Range

0=prefetch scrub I/Os, 1=do not prefetch scrub I/Os

Default

0

Change

Dynamic

Versions Affected

v0.6.4 and later

+
+
+

zfs_nocacheflush

+

ZFS uses barriers (volatile cache flush commands) to ensure data is +committed to permanent media by devices. This ensures consistent +on-media state for devices where caches are volatile (eg HDDs).

+

For devices with nonvolatile caches, the cache flush operation can be a +no-op. However, in some RAID arrays, cache flushes can cause the entire +cache to be flushed to the backing devices.

+

To ensure on-media consistency, keep cache flush enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_nocacheflush

Notes

Tags

disks

When to change

If the storage device has nonvolatile cache, +then disabling cache flush can save the cost of +occasional cache flush commands

Data Type

boolean

Range

0=send cache flush commands, 1=do not send +cache flush commands

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_nopwrite_enabled

+

The NOP-write feature is enabled by default when a cryptographically secure checksum algorithm is in use by the dataset. zfs_nopwrite_enabled allows the NOP-write feature to be completely disabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_nopwrite_enabled

Notes

Tags

checksum, debug

When to change

TBD

Data Type

boolean

Range

0=disable NOP-write feature, 1=enable +NOP-write feature

Default

1

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_dmu_offset_next_sync

+

zfs_dmu_offset_next_sync enables forcing txg sync to find holes. This causes ZFS to act like older versions when the SEEK_HOLE or SEEK_DATA flags are used: a dirty dnode causes txgs to be synced so that the previously written data can be found.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_dmu_offset_next_sync

Notes

Tags

DMU

When to change

to exchange strict hole reporting for +performance

Data Type

boolean

Range

0=do not force txg sync to find holes, +1=force txg sync to find holes

Default

1 since v2.1.5, previously 0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_pd_bytes_max

+

zfs_pd_bytes_max limits the number of bytes prefetched during a pool +traversal (eg zfs send or other data crawling operations). These +prefetches are referred to as “prescient prefetches” and are always 100% +hit rate. The traversal operations do not use the default data or +metadata prefetcher.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_pd_bytes_max

Notes

Tags

prefetch, send

When to change

TBD

Data Type

int32

Units

bytes

Range

0 to INT32_MAX

Default

52,428,800 (50 MiB)

Change

Dynamic

Versions Affected

TBD

+
+
+

zfs_per_txg_dirty_frees_percent

+

zfs_per_txg_dirty_frees_percent, as a percentage of zfs_dirty_data_max, controls the percentage of dirtied blocks from frees in one txg. After the threshold is crossed, additional dirty blocks from frees wait until the next txg. Thus, when deleting large files, the deletes/frees do not fill consecutive txgs and throttle other, perhaps more important, writes.

+

A side effect of this throttle can impact zfs receive workloads that +contain a large number of frees and the +ignore_hole_birth optimization is disabled. The +symptom is that the receive workload causes an increase in the frequency +of txg commits. The frequency of txg commits is observable via the +otime column of /proc/spl/kstat/zfs/POOLNAME/txgs. Since txg +commits also flush data from volatile caches in HDDs to media, HDD +performance can be negatively impacted. Also, since the frees do not +consume much bandwidth over the pipe, the pipe can appear to stall. Thus +the overall progress of receives is slower than expected.

+

A value of zero will disable this throttle.
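For a zfs receive workload dominated by frees, the throttle can be relaxed or disabled at runtime while watching txg commit frequency; both steps below are illustrative, and POOLNAME is a placeholder:

echo 0 > /sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent   # disable the throttle
cat /proc/spl/kstat/zfs/POOLNAME/txgs                                 # watch the otime column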

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_per_txg_dirty_frees_percent

Notes

Tags

delete

When to change

For zfs receive workloads, consider increasing or disabling. See section ZFS I/O Scheduler

Data Type

ulong

Units

percent

Range

0 to 100

Default

30

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_prefetch_disable

+

zfs_prefetch_disable controls the predictive prefetcher.

+

Note that it leaves “prescient” prefetch (eg prefetch for zfs send) +intact (see zfs_pd_bytes_max)
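For example, to disable predictive prefetch for a purely random-read workload and then check prefetch efficacy in arcstats:

echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
grep prefetch /proc/spl/kstat/zfs/arcstats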

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_prefetch_disable

Notes

Tags

prefetch

When to change

In some case where the workload is +completely random reads, overall performance +can be better if prefetch is disabled

Data Type

boolean

Range

0=prefetch enabled, 1=prefetch disabled

Default

0

Change

Dynamic

Verification

prefetch efficacy is observed by +arcstat, arc_summary, and the +relevant entries in +/proc/spl/kstat/zfs/arcstats

Versions Affected

all

+
+
+

zfs_read_chunk_size

+

zfs_read_chunk_size is the limit for ZFS filesystem reads. If an +application issues a read() larger than zfs_read_chunk_size, +then the read() is divided into multiple operations no larger than +zfs_read_chunk_size

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_read_chunk_size

Notes

Tags

filesystem

When to change

TBD

Data Type

ulong

Units

bytes

Range

512 to ULONG_MAX

Default

1,048,576

Change

Dynamic

Versions Affected

all

+
+
+

zfs_read_history

+

Historical statistics for the last zfs_read_history reads are +available in /proc/spl/kstat/zfs/POOL_NAME/reads
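For example, to record the most recent 100 reads (an illustrative depth) for a pool; POOL_NAME is a placeholder:

echo 100 > /sys/module/zfs/parameters/zfs_read_history
cat /proc/spl/kstat/zfs/POOL_NAME/reads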

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_read_history

Notes

Tags

debug

When to change

To observe read operation details

Data Type

int

Units

lines

Range

0 to INT_MAX

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_read_history_hits

+

When zfs_read_history > 0, zfs_read_history_hits controls whether ARC hits are displayed in the read history file, /proc/spl/kstat/zfs/POOL_NAME/reads

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_read_history_hits

Notes

Tags

debug

When to change

To observe read operation details with ARC +hits

Data Type

boolean

Range

0=do not include data for ARC hits, +1=include ARC hit data

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_recover

+

zfs_recover can be set to true (1) to attempt to recover from +otherwise-fatal errors, typically caused by on-disk corruption. When +set, calls to zfs_panic_recover() will turn into warning messages +rather than calling panic()
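A last-resort sketch only: enable recovery mode and then attempt a read-only import so that nothing further is written, checking the kernel log for warnings as noted below; pool_name is a placeholder:

echo 1 > /sys/module/zfs/parameters/zfs_recover
zpool import -o readonly=on pool_name
dmesg | tail          # look for zfs_panic_recover warnings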

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_recover

Notes

Tags

import

When to change

zfs_recover should only be used as a last +resort, as it typically results in leaked +space, or worse

Data Type

boolean

Range

0=normal operation, 1=attempt recovery zpool +import

Default

0

Change

Dynamic

Verification

check output of dmesg and other logs for +details

Versions Affected

v0.6.4 or later

+
+
+

zfs_resilver_min_time_ms

+

Resilvers are processed by the sync thread in syncing context. While +resilvering, ZFS spends at least zfs_resilver_min_time_ms time +working on a resilver between txg commits.

+

The zfs_txg_timeout tunable sets a nominal +timeout value for the txg commits. By default, this timeout is 5 seconds +and the zfs_resilver_min_time_ms is 3 seconds. However, many +variables contribute to changing the actual txg times. The measured txg +interval is observed as the otime column (in nanoseconds) in the +/proc/spl/kstat/zfs/POOL_NAME/txgs file.

+

See also zfs_txg_timeout and +zfs_scan_min_time_ms

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_resilver_min_time_ms

Notes

Tags

resilver

When to change

In some resilvering cases, increasing +zfs_resilver_min_time_ms can result +in faster completion

Data Type

int

Units

milliseconds

Range

1 to +zfs_txg_timeout +converted to milliseconds

Default

3,000

Change

Dynamic

Versions Affected

all

+
+
+

zfs_scan_min_time_ms

+

Scrubs are processed by the sync thread in syncing context. While +scrubbing, ZFS spends at least zfs_scan_min_time_ms time working on +a scrub between txg commits.

+

See also zfs_txg_timeout and +zfs_resilver_min_time_ms

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_min_time_ms

Notes

Tags

scrub

When to change

In some scrub cases, increasing +zfs_scan_min_time_ms can result in +faster completion

Data Type

int

Units

milliseconds

Range

1 to zfs_txg_timeout +converted to milliseconds

Default

1,000

Change

Dynamic

Versions Affected

all

+
+
+

zfs_scan_checkpoint_intval

+

To preserve progress across reboots, the sequential scan algorithm periodically needs to stop metadata scanning and issue all the verification I/Os to disk; this occurs every zfs_scan_checkpoint_intval seconds.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_checkpoint_intval

Notes

Tags

resilver, scrub

When to change

TBD

Data Type

int

Units

seconds

Range

1 to INT_MAX

Default

7,200 (2 hours)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_fill_weight

+

This tunable affects how scrub and resilver I/O segments are ordered. A +higher number indicates that we care more about how filled in a segment +is, while a lower number indicates we care more about the size of the +extent without considering the gaps within a segment.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_fill_weight

Notes

Tags

resilver, scrub

When to change

Testing sequential scrub and resilver

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

3

Change

Prior to zfs module load

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_issue_strategy

+

zfs_scan_issue_strategy controls the order of data verification +while scrubbing or resilvering.

+ + + + + + + + + + + + + + + + + +

value

description

0

zfs will use strategy 1 during normal verification and strategy 2 while taking a checkpoint

1

data is verified as sequentially as possible, given the +amount of memory reserved for scrubbing (see +zfs_scan_mem_lim_fact). This +can improve scrub performance if the pool’s data is heavily +fragmented.

2

the largest mostly-contiguous chunk of found data is +verified first. By deferring scrubbing of small segments, +we may later find adjacent data to coalesce and increase +the segment size.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_issue_strategy

Notes

Tags

resilver, scrub

When to change

TBD

Data Type

enum

Range

0 to 2

Default

0

Change

Dynamic

Versions Affected

TBD

+
+
+

zfs_scan_legacy

+

Setting zfs_scan_legacy = 1 enables the legacy scan and scrub +behavior instead of the newer sequential behavior.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_legacy

Notes

Tags

resilver, scrub

When to change

In some cases, the new scan mode can consume more memory as it collects and sorts I/Os; using the legacy algorithm can be more memory efficient at the expense of HDD read efficiency

Data Type

boolean

Range

0=use the new method: scrubs and resilvers gather metadata in memory before issuing sequential I/O, 1=use the legacy algorithm, where I/O is initiated as soon as it is discovered

Default

0

Change

Dynamic, however changing to 0 does not affect +in-progress scrubs or resilvers

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_max_ext_gap

+

zfs_scan_max_ext_gap limits the largest gap in bytes between scrub +and resilver I/Os that will still be considered sequential for sorting +purposes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_max_ext_gap

Notes

Tags

resilver, scrub

When to change

TBD

Data Type

ulong

Units

bytes

Range

512 to ULONG_MAX

Default

2,097,152 (2 MiB)

Change

Dynamic, however changing to 0 does not +affect in-progress scrubs or resilvers

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_mem_lim_fact

+

zfs_scan_mem_lim_fact limits the maximum fraction of RAM used for +I/O sorting by sequential scan algorithm. When the limit is reached +scanning metadata is stopped and data verification I/O is started. Data +verification I/O continues until the memory used by the sorting +algorithm drops by +zfs_scan_mem_lim_soft_fact

+

Memory used by the sequential scan algorithm can be observed as the kmem +sio_cache. This is visible from procfs as +grep sio_cache /proc/slabinfo and can be monitored using +slab-monitoring tools such as slabtop

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_mem_lim_fact

Notes

Tags

memory, +resilver, +scrub

When to change

TBD

Data Type

int

Units

divisor of physical RAM

Range

TBD

Default

20 (physical RAM / 20 or 5%)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_mem_lim_soft_fact

+

zfs_scan_mem_lim_soft_fact sets the fraction of the hard limit, zfs_scan_mem_lim_fact, used to determine the RAM soft limit for I/O sorting by the sequential scan algorithm. After zfs_scan_mem_lim_fact has been reached, metadata scanning is stopped until the RAM usage drops by zfs_scan_mem_lim_soft_fact

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_mem_lim_soft_fact

Notes

Tags

resilver, +scrub

When to change

TBD

Data Type

int

Units

divisor of (physical RAM / zfs_scan_mem_lim_fact)

Range

1 to INT_MAX

Default

20 (for the default zfs_scan_mem_lim_fact, 0.25% of physical RAM)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_scan_vdev_limit

+

zfs_scan_vdev_limit is the maximum amount of data that can be +concurrently issued at once for scrubs and resilvers per leaf vdev. +zfs_scan_vdev_limit attempts to strike a balance between keeping the +leaf vdev queues full of I/Os while not overflowing the queues causing +high latency resulting in long txg sync times. While +zfs_scan_vdev_limit represents a bandwidth limit, the existing I/O +limit of zfs_vdev_scrub_max_active +remains in effect, too.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_vdev_limit

Notes

Tags

resilver, scrub, +vdev

When to change

TBD

Data Type

ulong

Units

bytes

Range

512 to ULONG_MAX

Default

4,194,304 (4 MiB)

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_send_corrupt_data

+

zfs_send_corrupt_data enables zfs send to send corrupt data by ignoring read and checksum errors. The corrupted or unreadable blocks are replaced with the value 0x2f5baddb10c (ZFS bad block)
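A sketch of the recovery workflow this is intended for, using hypothetical pool, dataset, and snapshot names, and re-disabling the parameter once the send completes:
echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data
zfs send pool_name/dataset_name@snap > /backup/dataset.zstream
echo 0 > /sys/module/zfs/parameters/zfs_send_corrupt_data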

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_send_corrupt_data

Notes

Tags

send

When to change

When data corruption exists and an attempt +to recover at least some data via +zfs send is needed

Data Type

boolean

Range

0=do not send corrupt data, 1=replace +corrupt data with cookie

Default

0

Change

Dynamic

Versions Affected

v0.6.0 and later

+
+
+

zfs_sync_pass_deferred_free

+

The SPA sync process is performed in multiple passes. Once the pass number reaches zfs_sync_pass_deferred_free, frees are no longer processed and must wait for the next SPA sync.

+

The zfs_sync_pass_deferred_free value is expected to be removed as a +tunable once the optimal value is determined during field testing.

+

The zfs_sync_pass_deferred_free pass must be greater than 1 to +ensure that regular blocks are not deferred.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_pass_deferred_free

Notes

Tags

SPA

When to change

Testing SPA sync process

Data Type

int

Units

SPA sync passes

Range

1 to INT_MAX

Default

2

Change

Dynamic

Versions Affected

all

+
+
+

zfs_sync_pass_dont_compress

+

The SPA sync process is performed in multiple passes. Once the pass +number reaches zfs_sync_pass_dont_compress, data block compression +is no longer processed and must wait for the next SPA sync.

+

The zfs_sync_pass_dont_compress value is expected to be removed as a +tunable once the optimal value is determined during field testing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_pass_dont_compress

Notes

Tags

SPA

When to change

Testing SPA sync process

Data Type

int

Units

SPA sync passes

Range

1 to INT_MAX

Default

5

Change

Dynamic

Versions Affected

all

+
+
+

zfs_sync_pass_rewrite

+

The SPA sync process is performed in multiple passes. Once the pass +number reaches zfs_sync_pass_rewrite, blocks can be split into gang +blocks.

+

The zfs_sync_pass_rewrite value is expected to be removed as a +tunable once the optimal value is determined during field testing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_pass_rewrite

Notes

Tags

SPA

When to change

Testing SPA sync process

Data Type

int

Units

SPA sync passes

Range

1 to INT_MAX

Default

2

Change

Dynamic

Versions Affected

all

+
+
+

zfs_sync_taskq_batch_pct

+

zfs_sync_taskq_batch_pct controls the number of threads used by the +DSL pool sync taskq, dp_sync_taskq

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_sync_taskq_batch_pct

Notes

Tags

SPA

When to change

to adjust the number of +dp_sync_taskq threads

Data Type

int

Units

percent of number of online CPUs

Range

1 to 100

Default

75

Change

Prior to zfs module load

Versions Affected

v0.7.0 and later

+
+
+

zfs_txg_history

+

Historical statistics for the last zfs_txg_history txg commits are +available in /proc/spl/kstat/zfs/POOL_NAME/txgs

+

The work required to measure the txg commit (SPA statistics) is low. +However, for debugging purposes, it can be useful to observe the SPA +statistics.
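For example, to retain the last 100 txg commits and then inspect them (POOL_NAME is a placeholder for the pool of interest):
echo 100 > /sys/module/zfs/parameters/zfs_txg_history
cat /proc/spl/kstat/zfs/POOL_NAME/txgs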

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_txg_history

Notes

Tags

debug

When to change

To observe details of SPA sync behavior.

Data Type

int

Units

lines

Range

0 to INT_MAX

Default

0 for version v0.6.0 to v0.7.6, 100 for version v0.8.0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_txg_timeout

+

The open txg is committed to the pool periodically (SPA sync) and +zfs_txg_timeout represents the default target upper limit.

+

txg commits can occur more frequently and a rapid rate of txg commits +often indicates a busy write workload, quota limits reached, or the free +space is critically low.

+

Many variables contribute to changing the actual txg times. txg commits +can also take longer than zfs_txg_timeout if the ZFS write throttle +is not properly tuned or the time to sync is otherwise delayed (eg slow +device). Shorter txg commit intervals can occur due to +zfs_dirty_data_sync for write-intensive +workloads. The measured txg interval is observed as the otime column +(in nanoseconds) in the /proc/spl/kstat/zfs/POOL_NAME/txgs file.
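For example, to lengthen the target commit interval to 10 seconds (a value chosen purely for illustration) and then confirm the measured interval in the otime column of the txgs kstat shown above:
echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout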

+

See also zfs_dirty_data_sync and +zfs_txg_history

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_txg_timeout

Notes

Tags

SPA, +ZIO_scheduler

When to change

To optimize the work done by txg commit +relative to the pool requirements. See also +section ZFS I/O +Scheduler

Data Type

int

Units

seconds

Range

1 to INT_MAX

Default

5

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_aggregation_limit

+

To reduce IOPs, small, adjacent I/Os can be aggregated (coalesced) into +a large I/O. For reads, aggregations occur across small adjacency gaps. +For writes, aggregation can occur at the ZFS or disk level. +zfs_vdev_aggregation_limit is the upper bound on the size of the +larger, aggregated I/O.

+

Setting zfs_vdev_aggregation_limit = 0 effectively disables +aggregation by ZFS. However, the block device scheduler can still merge +(aggregate) I/Os. Also, many devices, such as modern HDDs, contain +schedulers that can aggregate I/Os.

+

In general, I/O aggregation can improve performance for devices, such as +HDDs, where ordering I/O operations for contiguous LBAs is a benefit. +For random access devices, such as SSDs, aggregation might not improve +performance relative to the CPU cycles needed to aggregate. For devices +that represent themselves as having no rotation, the +zfs_vdev_aggregation_limit_non_rotating +parameter is used instead of zfs_vdev_aggregation_limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_aggregation_limit

Notes

Tags

vdev, +ZIO_scheduler

When to change

If the workload does not benefit from +aggregation, the +zfs_vdev_aggregation_limit can be +reduced to avoid aggregation attempts

Data Type

int

Units

bytes

Range

0 to 1,048,576 (default) or 16,777,216 +(if zpool large_blocks feature +is enabled)

Default

1,048,576, or 131,072 for <v0.8

Change

Dynamic

Verification

ZFS aggregation is observed with +zpool iostat -r and the block +scheduler merging is observed with +iostat -x

Versions Affected

all

+
+
+

zfs_vdev_cache_size

+

Note: with the current ZFS code, the vdev cache is not helpful and in some cases actually harmful. Thus it is disabled by setting zfs_vdev_cache_size = 0

+

zfs_vdev_cache_size is the size of the vdev cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_cache_size

Notes

Tags

vdev, +vdev_cache

When to change

Do not change

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

0 (vdev cache is disabled)

Change

Dynamic

Verification

vdev cache statistics are available in the +/proc/spl/kstat/zfs/vdev_cache_stats file

Versions Affected

all

+
+
+

zfs_vdev_cache_bshift

+

Note: with the current ZFS code, the vdev cache is not helpful and in +some cases actually harmful. Thus it is disabled by setting the +zfs_vdev_cache_size to zero. This related +tunable is, by default, inoperative.

+

All read I/Os smaller than zfs_vdev_cache_max +are turned into (1 << zfs_vdev_cache_bshift) byte reads by the vdev +cache. At most zfs_vdev_cache_size bytes will +be kept in each vdev’s cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_cache_bshift

Notes

Tags

vdev, vdev_cache

When to change

Do not change

Data Type

int

Units

shift

Range

1 to INT_MAX

Default

16 (65,536 bytes)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_cache_max

+

Note: with the current ZFS code, the vdev cache is not helpful and in +some cases actually harmful. Thus it is disabled by setting the +zfs_vdev_cache_size to zero. This related +tunable is, by default, inoperative.

+

All read I/Os smaller than zfs_vdev_cache_max will be turned into (1 << zfs_vdev_cache_bshift) byte reads by the vdev cache. At most zfs_vdev_cache_size bytes will be kept in each vdev’s cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_cache_max

Notes

Tags

vdev, vdev_cache

When to change

Do not change

Data Type

int

Units

bytes

Range

512 to INT_MAX

Default

16,384 (16 KiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_mirror_rotating_inc

+

The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +zfs_vdev_mirror_rotating_inc for rotating media and +zfs_vdev_mirror_non_rotating_inc +for nonrotating media.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational
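For example (hypothetical device name; 1 indicates rotating media, 0 indicates nonrotating media):
cat /sys/block/sda/queue/rotational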

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_rotating_inc

Notes

Tags

vdev, +mirror, HDD

When to change

Increasing for mirrors with both +rotating and nonrotating media more +strongly favors the nonrotating +media

Data Type

int

Units

scalar

Range

0 to MAX_INT

Default

0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_non_rotating_inc

+

The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +zfs_vdev_mirror_rotating_inc for +rotating media and zfs_vdev_mirror_non_rotating_inc for nonrotating +media.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_non_rotating_inc

Notes

Tags

vdev, +mirror, +SSD

When to change

TBD

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

0

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_rotating_seek_inc

+

For rotating media in a mirror, if the next I/O offset is within +zfs_vdev_mirror_rotating_seek_offset +then the weighting factor is incremented by +(zfs_vdev_mirror_rotating_seek_inc / 2). Otherwise the weighting +factor is increased by zfs_vdev_mirror_rotating_seek_inc. This +algorithm prefers rotating media with lower seek distance.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_rotating_seek_inc

Notes

Tags

vdev, +mirror, +HDD

When to change

TBD

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

5

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_rotating_seek_offset

+

For rotating media in a mirror, if the next I/O offset is within +zfs_vdev_mirror_rotating_seek_offset then the weighting factor is +incremented by +(zfs_vdev_mirror_rotating_seek_inc/ 2). +Otherwise the weighting factor is increased by +zfs_vdev_mirror_rotating_seek_inc. This algorithm prefers rotating +media with lower seek distance.

+

Verify the rotational setting described by a block device in sysfs by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_rotating_seek_offset

Notes

Tags

vdev, +mirror, +HDD

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

1,048,576 (1 MiB)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_mirror_non_rotating_seek_inc

+

For nonrotating media in a mirror, a seek penalty is applied as +sequential I/O’s can be aggregated into fewer operations, avoiding +unnecessary per-command overhead, often boosting performance.

+

Verify the rotational setting described by a block device in SysFS by +observing /sys/block/DISK_NAME/queue/rotational

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_mirror_non_rotating_seek_inc

Notes

Tags

vdev, +mirror, +SSD

When to change

TBD

Data Type

int

Units

scalar

Range

0 to INT_MAX

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_vdev_read_gap_limit

+

To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into a large I/O. For reads, aggregations occur across small adjacency gaps where the gap is less than zfs_vdev_read_gap_limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_read_gap_limit

Notes

Tags

vdev, +ZIO_scheduler

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_write_gap_limit

+

To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into a large I/O. For writes, aggregations occur across small adjacency gaps where the gap is less than zfs_vdev_write_gap_limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_write_gap_limit

Notes

Tags

vdev, +ZIO_scheduler

When to change

TBD

Data Type

int

Units

bytes

Range

0 to INT_MAX

Default

4,096 (4 KiB)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_vdev_scheduler

+

Prior to version 0.8.3, when the pool is imported, for whole disk vdevs, +the block device I/O scheduler is set to zfs_vdev_scheduler. +The most common schedulers are: noop, cfq, bfq, and deadline. +In some cases, the scheduler is not changeable using this method. +Known schedulers that cannot be changed are: scsi_mq and none. +In these cases, the scheduler is unchanged and an error message can be +reported to logs.

+

The parameter was disabled in v0.8.3 but left in place to avoid breaking +loading of the zfs module if the parameter is specified in modprobe +configuration on existing installations. It is recommended that users +leave the default scheduler “unless you’re encountering a specific +problem, or have clearly measured a performance improvement for your +workload,” +and if so, to change it via the /sys/block/<device>/queue/scheduler +interface and/or udev rule.
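A sketch of that recommended alternative, using a hypothetical device name; the available schedulers vary by kernel, and the udev rule file name is only an example:
echo none > /sys/block/sda/queue/scheduler
# or persistently, e.g. in /etc/udev/rules.d/66-zfs-scheduler.rules:
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/scheduler}="none"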

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_scheduler

Notes

Tags

vdev, +ZIO_scheduler

When to change

since ZFS has its own I/O scheduler, using a +simple scheduler can result in more consistent +performance

Data Type

string

Range

expected: noop, cfq, bfq, and deadline

Default

noop

Change

Dynamic, but takes effect upon pool creation +or import

Versions Affected

all, but no effect since v0.8.3

+
+
+

zfs_vdev_raidz_impl

+

zfs_vdev_raidz_impl overrides the raidz parity algorithm. By +default, the algorithm is selected at zfs module load time by the +results of a microbenchmark of algorithms based on the current hardware.

+

Once the module is loaded, the content of +/sys/module/zfs/parameters/zfs_vdev_raidz_impl shows available +options with the currently selected enclosed in []. Details of the +results of the microbenchmark are observable in the +/proc/spl/kstat/zfs/vdev_raidz_bench file.
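For example, to list the available implementations (the current selection is shown in brackets), review the benchmark results, and force the scalar implementation:
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
cat /proc/spl/kstat/zfs/vdev_raidz_bench
echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl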

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

algorithm

architecture

description

fastest

all

fastest implementation +selected by +microbenchmark

original

all

original raidz +implementation

scalar

all

scalar raidz +implementation

sse2

64-bit x86

uses SSE2 instruction +set

ssse3

64-bit x86

uses SSSE3 instruction +set

avx2

64-bit x86

uses AVX2 instruction +set

avx512f

64-bit x86

uses AVX512F +instruction set

avx512bw

64-bit x86

uses AVX512F & AVX512BW +instruction sets

aarch64_neon

aarch64/64 bit ARMv8

uses NEON

aarch64_neonx2

aarch64/64 bit ARMv8

uses NEON with more +unrolling

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_raidz_impl

Notes

Tags

CPU, raidz, vdev

When to change

testing raidz algorithms

Data Type

string

Range

see table above

Default

fastest

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_zevent_cols

+

zfs_zevent_cols is a soft wrap limit in columns (characters) for ZFS +events logged to the console.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zevent_cols

Notes

Tags

debug

When to change

if 80 columns isn’t enough

Data Type

int

Units

characters

Range

1 to INT_MAX

Default

80

Change

Dynamic

Versions Affected

all

+
+
+

zfs_zevent_console

+

If zfs_zevent_console is true (1), then ZFS events are logged to the +console.

+

More logging and log filtering capabilities are provided by zed

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zevent_console

Notes

Tags

debug

When to change

to log ZFS events to the console

Data Type

boolean

Range

0=do not log to console, 1=log to console

Default

0

Change

Dynamic

Versions Affected

all

+
+
+

zfs_zevent_len_max

+

zfs_zevent_len_max is the maximum ZFS event queue length. A value of +0 results in a calculated value (16 * number of CPUs) with a minimum of +64. Events in the queue can be viewed with the zpool events command.
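For example, to enlarge the queue (the value 512 is illustrative) and then review queued events, optionally with verbose detail:
echo 512 > /sys/module/zfs/parameters/zfs_zevent_len_max
zpool events
zpool events -v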

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zevent_len_max

Notes

Tags

debug

When to change

increase to see more ZFS events

Data Type

int

Units

events

Range

0 to INT_MAX

Default

0 (calculate as described above)

Change

Dynamic

Versions Affected

all

+
+
+

zfs_zil_clean_taskq_maxalloc

+

During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(dp_zil_clean_taskq). +zfs_zil_clean_taskq_minalloc is the +minimum and zfs_zil_clean_taskq_maxalloc is the maximum number of +cached taskq entries for dp_zil_clean_taskq. The actual number of +taskq entries dynamically varies between these values.

+

When zfs_zil_clean_taskq_maxalloc is exceeded transaction records +(itxs) are cleaned synchronously with possible negative impact to the +performance of SPA sync.

+

Ideally taskq entries are pre-allocated prior to being needed by +zil_clean(), thus avoiding dynamic allocation of new taskq entries.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zil_clean_taskq_maxalloc

Notes

Tags

ZIL

When to change

If more dp_zil_clean_taskq +entries are needed to prevent the +itxs from being synchronously +cleaned

Data Type

int

Units

dp_zil_clean_taskq taskq entries

Range

zfs_zil_clean_taskq_minalloc to INT_MAX

Default

1,048,576

Change

Dynamic, takes effect per-pool when +the pool is imported

Versions Affected

v0.8.0

+
+
+

zfs_zil_clean_taskq_minalloc

+

During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(dp_zil_clean_taskq). zfs_zil_clean_taskq_minalloc is the +minimum and +zfs_zil_clean_taskq_maxalloc is the +maximum number of cached taskq entries for dp_zil_clean_taskq. The +actual number of taskq entries dynamically varies between these values.

+

zfs_zil_clean_taskq_minalloc is the minimum number of ZIL +transaction records (itxs).

+

Ideally taskq entries are pre-allocated prior to being needed by +zil_clean(), thus avoiding dynamic allocation of new taskq entries.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zil_clean_taskq_minalloc

Notes

Tags

ZIL

When to change

TBD

Data Type

int

Units

dp_zil_clean_taskq taskq entries

Range

1 to zfs_zil_clean_taskq_maxalloc

Default

1,024

Change

Dynamic, takes effect per-pool when +the pool is imported

Versions Affected

v0.8.0

+
+
+

zfs_zil_clean_taskq_nthr_pct

+

zfs_zil_clean_taskq_nthr_pct controls the number of threads used by +the DSL pool ZIL clean taskq (dp_zil_clean_taskq). The default value +of 100% will create a maximum of one thread per cpu.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_zil_clean_taskq_nthr_pct

Notes

Tags

taskq, ZIL

When to change

Testing ZIL clean and SPA sync +performance

Data Type

int

Units

percent of number of CPUs

Range

1 to 100

Default

100

Change

Dynamic, takes effect per-pool when +the pool is imported

Versions Affected

v0.8.0

+
+
+

zil_replay_disable

+

If zil_replay_disable = 1, then when a volume or filesystem is +brought online, no attempt to replay the ZIL is made and any existing +ZIL is destroyed. This can result in loss of data without notice.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zil_replay_disable

Notes

Tags

debug, ZIL

When to change

Do not change

Data Type

boolean

Range

0=replay ZIL, 1=destroy ZIL

Default

0

Change

Dynamic

Versions Affected

v0.6.5

+
+
+

zil_slog_bulk

+

zil_slog_bulk is the log device write size limit per commit executed with synchronous priority. Writes below zil_slog_bulk are executed with synchronous priority. Writes above zil_slog_bulk are executed with lower (asynchronous) priority to reduce potential log device abuse by a single active ZIL writer.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zil_slog_bulk

Notes

Tags

ZIL

When to change

See ZFS I/O +Scheduler

Data Type

ulong

Units

bytes

Range

0 to ULONG_MAX

Default

786,432

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zio_delay_max

+

If a ZFS I/O operation takes more than zio_delay_max milliseconds to +complete, then an event is logged. Note that this is only a logging +facility, not a timeout on operations. See also zpool events

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_delay_max

Notes

Tags

debug

When to change

when debugging slow I/O

Data Type

int

Units

milliseconds

Range

1 to INT_MAX

Default

30,000 (30 seconds)

Change

Dynamic

Versions Affected

all

+
+
+

zio_dva_throttle_enabled

+

zio_dva_throttle_enabled controls throttling of block allocations in +the ZFS I/O (ZIO) pipeline. When enabled, the maximum number of pending +allocations per top-level vdev is limited by +zfs_vdev_queue_depth_pct

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_dva_throttle_enabled

Notes

Tags

vdev, +ZIO_scheduler

When to change

Testing ZIO block allocation algorithms

Data Type

boolean

Range

0=do not throttle ZIO block allocations, +1=throttle ZIO block allocations

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zio_requeue_io_start_cut_in_line

+

zio_requeue_io_start_cut_in_line controls prioritization of a +re-queued ZFS I/O (ZIO) in the ZIO pipeline by the ZIO taskq.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_requeue_io_start_cut_in_line

Notes

Tags

ZIO_scheduler

When to change

Do not change

Data Type

boolean

Range

0=don’t prioritize re-queued +I/Os, 1=prioritize re-queued +I/Os

Default

1

Change

Dynamic

Versions Affected

all

+
+
+

zio_taskq_batch_pct

+

zio_taskq_batch_pct sets the number of I/O worker threads as a +percentage of online CPUs. These workers threads are responsible for IO +work such as compression and checksum calculations.

+

Each block is handled by one worker thread, so maximum overall worker thread throughput is a function of the number of concurrent blocks being processed, the number of worker threads, and the algorithms used. The default value of 75% is chosen to avoid using all CPUs, which can result in latency issues and inconsistent application performance, especially when high compression is enabled.

+

The taskq batch processes are:

+ + + + + + + + + + + + + +

taskq

process name

Notes

Write issue

z_wr_iss[_#]

Can be CPU intensive, runs at lower +priority than other taskqs

+

Other taskqs exist, but most have fixed numbers of instances and +therefore require recompiling the kernel module to adjust.
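As a rough verification sketch, assuming the thread naming pattern shown above, the write-issue workers can be counted from the process list:
ps -eLo comm | grep -c z_wr_iss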

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_taskq_batch_pct

Notes

Tags

taskq, +ZIO_scheduler

When to change

To tune parallelism in multiprocessor systems

Data Type

int

Units

percent of number of CPUs

Range

1 to 100, fractional number of CPUs are +rounded down

Default

75

Change

Prior to zfs module load

Verification

The number of taskqs for each batch group can +be observed using ps and counting the +threads

Versions Affected

TBD

+
+
+

zvol_inhibit_dev

+

zvol_inhibit_dev controls the creation of volume device nodes upon +pool import.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_inhibit_dev

Notes

Tags

import, volume

When to change

Inhibiting can slightly improve startup time on +systems with a very large number of volumes

Data Type

boolean

Range

0=create volume device nodes, 1=do not create +volume device nodes

Default

0

Change

Dynamic, takes effect per-pool when the pool is +imported

Versions Affected

v0.6.0 and later

+
+
+

zvol_major

+

zvol_major is the default major number for volume devices.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_major

Notes

Tags

volume

When to change

Do not change

Data Type

uint

Default

230

Change

Dynamic, takes effect per-pool when the pool is +imported or volumes are created

Versions Affected

all

+
+
+

zvol_max_discard_blocks

+

Discard (aka ATA TRIM or SCSI UNMAP) operations done on volumes are done in batches of zvol_max_discard_blocks blocks. The block size is determined by the volblocksize property of a volume.

+

Some applications, such as mkfs, discard the whole volume at once +using the maximum possible discard size. As a result, many gigabytes of +discard requests are not uncommon. Unfortunately, if a large amount of +data is already allocated in the volume, ZFS can be quite slow to +process discard requests. This is especially true if the volblocksize is +small (eg default=8KB). As a result, very large discard requests can +take a very long time (perhaps minutes under heavy load) to complete. +This can cause a number of problems, most notably if the volume is +accessed remotely (eg via iSCSI), in which case the client has a high +probability of timing out on the request.

+

Lowering zvol_max_discard_blocks reduces the discard_max_bytes and discard_max_hw_bytes advertised for the volume’s block device in sysfs, and therefore the size of individual discard requests. This value is readable by volume device consumers.
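For example, with the default volblocksize of 8 KiB and the default of 16,384 blocks, the advertised limit works out to 16,384 * 8 KiB = 128 MiB, which can be confirmed in sysfs (zd0 is a hypothetical volume instance name):
cat /sys/block/zd0/queue/discard_max_bytes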

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_max_discard_blocks

Notes

Tags

discard, +volume

When to change

if volume discard activity severely +impacts other workloads

Data Type

ulong

Units

number of blocks of size volblocksize

Range

0 to ULONG_MAX

Default

16,384

Change

Dynamic, takes effect per-pool when the +pool is imported or volumes are created

Verification

Observe value of +/sys/block/ +VOLUME_INSTANCE/queue/discard_max_bytes

Versions Affected

v0.6.0 and later

+
+
+

zvol_prefetch_bytes

+

When importing a pool with volumes or adding a volume to a pool, zvol_prefetch_bytes are prefetched from the start and end of the volume. Prefetching these regions of the volume is desirable because they are likely to be accessed immediately by blkid(8) or by the kernel scanning for a partition table.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_prefetch_bytes

Notes

Tags

prefetch, volume

When to change

TBD

Data Type

uint

Units

bytes

Range

0 to UINT_MAX

Default

131,072

Change

Dynamic

Versions Affected

v0.6.5 and later

+
+
+

zvol_request_sync

+

When zvol_request_sync = 1, I/O requests for a volume are submitted synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0, requests are handled asynchronously by the “zvol” thread pool.

+

See also zvol_threads

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_request_sync

Notes

Tags

volume

When to change

Testing concurrent volume requests

Data Type

boolean

Range

0=do concurrent (async) volume requests, 1=do +sync volume requests

Default

0

Change

Dynamic

Versions Affected

v0.7.2 and later

+
+
+

zvol_threads

+

zvol_threads controls the maximum number of threads handling concurrent +volume I/O requests.

+

The default of 32 threads behaves similarly to a disk with a 32-entry +command queue. The actual number of threads required can vary widely by +workload and available CPUs. If lock analysis shows high contention in +the zvol taskq threads, then reducing the number of zvol_threads or +workload queue depth can improve overall throughput.

+

See also zvol_request_sync

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_threads

Notes

Tags

volume

When to change

Matching the number of concurrent volume +requests with workload requirements can improve +concurrency

Data Type

uint

Units

threads

Range

1 to UINT_MAX

Default

32

Change

Dynamic, takes effect per-volume when the pool +is imported or volumes are created

Verification

iostat using avgqu-sz or aqu-sz +results

Versions Affected

v0.7.0 and later

+
+
+

zvol_volmode

+

zvol_volmode defines the behaviour of volume block devices when the volmode property is set to default

+

Note: to maintain compatibility with ZFS on BSD, “geom” is synonymous +with “full”

+ + + + + + + + + + + + + + + + + + + + + +

value

volmode

Description

1

full

legacy fully functional behaviour (default)

2

dev

hide partitions on volume block devices

3

none

not exposing volumes outside ZFS

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zvol_volmode

Notes

Tags

volume

When to change

TBD

Data Type

enum

Range

1, 2, or 3

Default

1

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_qat_disable

+

zfs_qat_disable controls the Intel QuickAssist Technology (QAT) +driver providing hardware acceleration for gzip compression. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_disable

Notes

Tags

compression, QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, 1=do not +use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.7, renamed to zfs_qat_compress_disable in v0.8

+
+
+

zfs_qat_checksum_disable

+

zfs_qat_checksum_disable controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for checksums. When the QAT +hardware is present and qat driver available, the default behaviour is +to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_checksum_disable

Notes

Tags

checksum, QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, +1=do not use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_qat_compress_disable

+

zfs_qat_compress_disable controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for gzip compression. When +the QAT hardware is present and qat driver available, the default +behaviour is to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_compress_disable

Notes

Tags

compression, +QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, +1=do not use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_qat_encrypt_disable

+

zfs_qat_encrypt_disable controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for encryption. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_qat_encrypt_disable

Notes

Tags

encryption, +QAT

When to change

Testing QAT functionality

Data Type

boolean

Range

0=use QAT acceleration if available, 1=do +not use QAT acceleration

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

dbuf_cache_hiwater_pct

+

The dbuf_cache_hiwater_pct and dbuf_cache_lowater_pct define the operating range for the dbuf cache evict thread. The hiwater and lowater are percentages of the dbuf_cache_max_bytes value. When the dbuf cache grows above ((100% + dbuf_cache_hiwater_pct) * dbuf_cache_max_bytes) then the dbuf cache thread begins evicting. When the dbuf cache falls below ((100% - dbuf_cache_lowater_pct) * dbuf_cache_max_bytes) then the dbuf cache thread stops evicting.
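As a worked example, with dbuf_cache_max_bytes at its default of 100 MiB and both percentages at their default of 10, the evict thread starts evicting once the dbuf cache grows above 110 MiB and stops once it falls below 90 MiB.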

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_hiwater_pct

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

uint

Units

percent

Range

0 to UINT_MAX

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dbuf_cache_lowater_pct

+

The dbuf_cache_hiwater_pct and dbuf_cache_lowater_pct define the operating range for the dbuf cache evict thread. The hiwater and lowater are percentages of the dbuf_cache_max_bytes value. When the dbuf cache grows above ((100% + dbuf_cache_hiwater_pct) * dbuf_cache_max_bytes) then the dbuf cache thread begins evicting. When the dbuf cache falls below ((100% - dbuf_cache_lowater_pct) * dbuf_cache_max_bytes) then the dbuf cache thread stops evicting.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_lowater_pct

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

uint

Units

percent

Range

0 to UINT_MAX

Default

10

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dbuf_cache_max_bytes

+

The dbuf cache maintains a list of dbufs that are not currently held but +have been recently released. These dbufs are not eligible for ARC +eviction until they are aged out of the dbuf cache. Dbufs are added to +the dbuf cache once the last hold is released. If a dbuf is later +accessed and still exists in the dbuf cache, then it will be removed +from the cache and later re-added to the head of the cache. Dbufs that +are aged out of the cache will be immediately destroyed and become +eligible for ARC eviction.

+

The size of the dbuf cache is set by dbuf_cache_max_bytes. The +actual size is dynamically adjusted to the minimum of current ARC target +size (c) >> dbuf_cache_max_shift and the +default dbuf_cache_max_bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_max_bytes

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

ulong

Units

bytes

Range

16,777,216 to ULONG_MAX

Default

104,857,600 (100 MiB)

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dbuf_cache_max_shift

+

The dbuf_cache_max_bytes minimum is the +lesser of dbuf_cache_max_bytes and the +current ARC target size (c) >> dbuf_cache_max_shift

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_max_shift

Notes

Tags

dbuf_cache

When to change

Testing dbuf cache algorithms

Data Type

int

Units

shift

Range

1 to 63

Default

5

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

dmu_object_alloc_chunk_shift

+

Each of the concurrent object allocators grabs +2^dmu_object_alloc_chunk_shift dnode slots at a time. The default is +to grab 128 slots, or 4 blocks worth. This default value was +experimentally determined to be the lowest value that eliminates the +measurable effect of lock contention in the DMU object allocation code +path.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dmu_object_alloc_chunk_shift

Notes

Tags

allocation, +DMU

When to change

If the workload creates many files +concurrently on a system with many +CPUs, then increasing +dmu_object_alloc_chunk_shift can +decrease lock contention

Data Type

int

Units

shift

Range

7 to 9

Default

7

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

send_holes_without_birth_time

+

Alias for ignore_hole_birth

+
+
+

zfs_abd_scatter_enabled

+

zfs_abd_scatter_enabled controls the ARC Buffer Data (ABD) +scatter/gather feature.

+

When disabled, the legacy behaviour is selected using linear buffers. +For linear buffers, all the data in the ABD is stored in one contiguous +buffer in memory (from a zio_[data_]buf_* kmem cache).

+

When enabled (default), the data in the ABD is split into equal-sized +chunks (from the abd_chunk_cache kmem_cache), with pointers to the +chunks recorded in an array at the end of the ABD structure. This allows +more efficient memory allocation for buffers, especially when large +recordsizes are used.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_abd_scatter_enabled

Notes

Tags

ABD, memory

When to change

Testing ABD

Data Type

boolean

Range

0=use linear allocation only, 1=allow +scatter/gather

Default

1

Change

Dynamic

Verification

ABD statistics are observable in +/proc/spl/kstat/zfs/abdstats. Slab +allocations are observable in +/proc/slabinfo

Versions Affected

v0.7.0 and later

+
+
+

zfs_abd_scatter_max_order

+

zfs_abd_scatter_max_order sets the maximum order for physical page +allocation when ABD is enabled (see +zfs_abd_scatter_enabled)

+

See also Buddy Memory Allocation in the Linux kernel documentation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_abd_scatter_max_order

Notes

Tags

ABD, memory

When to change

Testing ABD features

Data Type

int

Units

orders

Range

1 to 10 (upper limit is +hardware-dependent)

Default

10

Change

Dynamic

Verification

ABD statistics are observable in +/proc/spl/kstat/zfs/abdstats

Versions Affected

v0.7.0 and later

+
+
+

zfs_compressed_arc_enabled

+

When compression is enabled for a dataset, later reads of the data can store the blocks in ARC in their on-disk, compressed state. This can increase the effective size of the ARC, as counted in blocks, and thus improve the ARC hit ratio.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_compressed_arc_enabled

Notes

Tags

ABD, +compression

When to change

Testing ARC compression feature

Data Type

boolean

Range

0=compressed ARC disabled (legacy +behaviour), 1=compress ARC data

Default

1

Change

Dynamic

Verification

raw ARC statistics are observable in +/proc/spl/kstat/zfs/arcstats and +ARC hit ratios can be observed using +arcstat

Versions Affected

v0.7.0 and later

+
+
+

zfs_key_max_salt_uses

+

For encrypted datasets, the salt is regenerated every +zfs_key_max_salt_uses blocks. This automatic regeneration reduces +the probability of collisions due to the Birthday problem. When set to +the default (400,000,000) the probability of collision is approximately +1 in 1 trillion.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_key_max_salt_uses

Notes

Tags

encryption

When to change

Testing encryption features

Data Type

ulong

Units

blocks encrypted

Range

1 to ULONG_MAX

Default

400,000,000

Change

Dynamic

Versions Affected

v0.8.0 and later

+
+
+

zfs_object_mutex_size

+

zfs_object_mutex_size facilitates resizing the per-dataset znode mutex array for testing deadlocks therein.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_object_mutex_size

Notes

Tags

debug

When to change

Testing znode mutex array deadlocks

Data Type

uint

Units

orders

Range

1 to UINT_MAX

Default

64

Change

Dynamic

Versions Affected

v0.7.0 and later

+
+
+

zfs_scan_strict_mem_lim

+

When scrubbing or resilvering, by default, ZFS checks to ensure it is +not over the hard memory limit before each txg commit. If finer-grained +control of this is needed zfs_scan_strict_mem_lim can be set to 1 to +enable checking before scanning each block.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_strict_mem_lim

Notes

Tags

memory, +resilver, +scrub

When to change

Do not change

Data Type

boolean

Range

0=normal scan behaviour, 1=check hard +memory limit strictly during scan

Default

0

Change

Dynamic

Versions Affected

v0.8.0

+
+
+

zfs_send_queue_length

+

zfs_send_queue_length is the maximum number of bytes allowed in the +zfs send queue.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_send_queue_length

Notes

Tags

send

When to change

When using the largest recordsize or +volblocksize (16 MiB), increasing can +improve send efficiency

Data Type

int

Units

bytes

Range

Must be at least twice the maximum +recordsize or volblocksize in use

Default

16,777,216 bytes (16 MiB)

Change

Dynamic

Versions Affected

v0.8.1

+
+
+

zfs_recv_queue_length

+

zfs_recv_queue_length is the maximum number of bytes allowed in the +zfs receive queue.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_recv_queue_length

Notes

Tags

receive

When to change

When using the largest recordsize or +volblocksize (16 MiB), increasing can +improve receive efficiency

Data Type

int

Units

bytes

Range

Must be at least twice the maximum +recordsize or volblocksize in use

Default

16,777,216 bytes (16 MiB)

Change

Dynamic

Versions Affected

v0.8.1

+
+
+

zfs_arc_min_prefetch_lifespan

+

arc_min_prefetch_lifespan is the minimum time for a prefetched block +to remain in ARC before it is eligible for eviction.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_arc_min_prefetch_lifespan

Notes

Tags

ARC

When to change

TBD

Data Type

int

Units

clock ticks

Range

0 = use default value

Default

1 second (as expressed in clock ticks)

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

zfs_scan_ignore_errors

+

zfs_scan_ignore_errors allows errors discovered during scrub or +resilver to be ignored. This can be tuned as a workaround to remove the +dirty time list (DTL) when completing a pool scan. It is intended to be +used during pool repair or recovery to prevent resilvering when the pool +is imported.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_ignore_errors

Notes

Tags

resilver

When to change

See description above

Data Type

boolean

Range

0 = do not ignore errors, 1 = ignore +errors during pool scrub or resilver

Default

0

Change

Dynamic

Versions Affected

v0.8.1

+
+
+

zfs_top_maxinflight

+

zfs_top_maxinflight is used to limit the maximum number of I/Os queued to top-level vdevs during scrub or resilver operations. The actual top-level vdev limit is calculated by multiplying the number of child vdevs by zfs_top_maxinflight. This limit is an additional cap over and above the scan limits.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_top_maxinflight

Notes

Tags

resilver, scrub, +ZIO_scheduler

When to change

for modern ZFS versions, the ZIO scheduler +limits usually take precedence

Data Type

int

Units

I/O operations

Range

1 to MAX_INT

Default

32

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

zfs_resilver_delay

+

zfs_resilver_delay sets a time-based delay for resilver I/Os. This +delay is in addition to the ZIO scheduler’s treatment of scrub +workloads. See also zfs_scan_idle

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_resilver_delay

Notes

Tags

resilver, +ZIO_scheduler

When to change

increasing can reduce impact of resilver +workload on dynamic workloads

Data Type

int

Units

clock ticks

Range

0 to MAX_INT

Default

2

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

zfs_scrub_delay

+

zfs_scrub_delay sets a time-based delay for scrub I/Os. This delay +is in addition to the ZIO scheduler’s treatment of scrub workloads. See +also zfs_scan_idle

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scrub_delay

Notes

Tags

scrub, +ZIO_scheduler

When to change

increasing can reduce impact of scrub workload +on dynamic workloads

Data Type

int

Units

clock ticks

Range

0 to MAX_INT

Default

4

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

zfs_scan_idle

+

When a non-scan I/O has occurred in the past zfs_scan_idle clock +ticks, then zfs_resilver_delay or +zfs_scrub_delay are enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_idle

Notes

Tags

resilver, scrub, +ZIO_scheduler

When to change

as part of a resilver/scrub tuning effort

Data Type

int

Units

clock ticks

Range

0 to MAX_INT

Default

50

Change

Dynamic

Versions Affected

v0.6.0

+
+
+

icp_aes_impl

+

By default, ZFS will choose the highest performance, hardware-optimized +implementation of the AES encryption algorithm. The icp_aes_impl +tunable overrides this automatic choice.

+

Note: icp_aes_impl is set in the icp kernel module, not the +zfs kernel module.

+

To observe the available options +cat /sys/module/icp/parameters/icp_aes_impl The default option is +shown in brackets ‘[]’
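For example, to list the available implementations and force one of them (the generic implementation name is assumed here; the options actually offered depend on the hardware):
cat /sys/module/icp/parameters/icp_aes_impl
echo generic > /sys/module/icp/parameters/icp_aes_impl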

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

icp_aes_impl

Notes

Tags

encryption

Kernel module

icp

When to change

debugging ZFS encryption on hardware

Data Type

string

Range

varies by hardware

Default

automatic, depends on the hardware

Change

dynamic

Versions Affected

planned for v2

+
+
+

icp_gcm_impl

+

By default, ZFS will choose the highest performance, hardware-optimized +implementation of the GCM encryption algorithm. The icp_gcm_impl +tunable overrides this automatic choice.

+

Note: icp_gcm_impl is set in the icp kernel module, not the +zfs kernel module.

+

To observe the available options +cat /sys/module/icp/parameters/icp_gcm_impl The default option is +shown in brackets ‘[]’

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

icp_gcm_impl

Notes

Tags

encryption

Kernel module

icp

When to change

debugging ZFS encryption on hardware

Data Type

string

Range

varies by hardware

Default

automatic, depends on the hardware

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_abd_scatter_min_size

+

zfs_abd_scatter_min_size changes the ARC buffer data (ABD) +allocator’s threshold for using linear or page-based scatter buffers. +Allocations smaller than zfs_abd_scatter_min_size use linear ABDs.

+

Scatter ABDs use at least one page each, so sub-page allocations waste some space when allocated as scatter allocations. For example, a 2KB scatter allocation wastes half of each page. Using linear ABDs for small allocations results in slabs containing many allocations. This can improve memory efficiency, at the expense of more work for ARC evictions attempting to free pages, because all the buffers on one slab need to be freed in order to free the slab and its underlying pages.

+

Typically, 512B and 1KB kmem caches have 16 buffers per slab, so it’s +possible for them to actually waste more memory than scatter +allocations:

+
  • one page per buf = wasting 3/4 or 7/8
  • one buf per slab = wasting 15/16

Spill blocks are typically 512B and are heavily used on systems running +selinux with the default dnode size and the xattr=sa property set.

+

By default, linear allocations are used for 512B and 1KB requests, and scatter allocations for larger (>= 1.5KB) requests.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_abd_scatter_min_size

Notes

Tags

ARC

When to change

debugging memory allocation, especially +for large pages

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

1536 (512B and 1KB allocations will be +linear)

Change

Dynamic

Versions Affected

planned for v2

+
+ +
+

spa_load_verify_shift

+

spa_load_verify_shift sets the fraction of ARC that can be used by +inflight I/Os when verifying the pool during import. This value is a +“shift” representing the fraction of ARC target size +(grep -w c /proc/spl/kstat/zfs/arcstats). The ARC target size is +shifted to the right. Thus a value of ‘2’ results in the fraction = 1/4, +while a value of ‘4’ results in the fraction = 1/8.

+

For large memory machines, pool import can consume large amounts of ARC: +much larger than the value of maxinflight. This can result in +spa_load_verify_maxinflight having a +value of 0 causing the system to hang. Setting spa_load_verify_shift +can reduce this limit and allow importing without hanging.
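As a worked example, with an ARC target size of 16 GiB the default shift of 4 allows 16 GiB >> 4 = 1 GiB of in-flight verification I/O during import, while a shift of 6 lowers that to 256 MiB. A sketch of applying the larger shift before importing (hypothetical pool name; requires a ZFS version that provides this parameter):
echo 6 > /sys/module/zfs/parameters/spa_load_verify_shift
zpool import POOL_NAME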

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_verify_shift

Notes

Tags

import, ARC, +SPA

When to change

troubleshooting pool import on large memory +machines

Data Type

int

Units

shift

Range

1 to MAX_INT

Default

4

Change

prior to importing a pool

Versions Affected

planned for v2

+
+
+

spa_load_print_vdev_tree

+

spa_load_print_vdev_tree enables printing of the attempted pool import’s vdev tree as kernel messages to the ZFS debug message log /proc/spl/kstat/zfs/dbgmsg. Both the provided vdev tree and the MOS vdev tree are printed, which can be useful for debugging problems with the zpool cachefile

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spa_load_print_vdev_tree

Notes

Tags

import, SPA

When to change

troubleshooting pool import failures

Data Type

boolean

Range

0 = do not print pool configuration in +logs, 1 = print pool configuration in +logs

Default

0

Change

prior to pool import

Versions Affected

planned for v2

+
+
+

zfs_max_missing_tvds

+

When importing a pool in readonly mode +(zpool import -o readonly=on ...) then up to +zfs_max_missing_tvds top-level vdevs can be missing, but the import +can attempt to progress.

+

Note: This is strictly intended for advanced pool recovery cases since +missing data is almost inevitable. Pools with missing devices can only +be imported read-only for safety reasons, and the pool’s failmode +property is automatically set to continue

+

The expected use case is to recover pool data immediately after +accidentally adding a non-protected vdev to a protected pool.

+
  • With 1 missing top-level vdev, ZFS should be able to import the pool and mount all datasets. User data that was not modified after the missing device has been added should be recoverable. Thus snapshots created prior to the addition of that device should be completely intact.
  • With 2 missing top-level vdevs, some datasets may fail to mount since there are dataset statistics that are stored as regular metadata. Some data might be recoverable if those vdevs were added recently.
  • With 3 or more missing top-level vdevs, the pool is severely damaged and MOS entries may be missing entirely. Chances of data recovery are very low. Note that there are also risks of performing an inadvertent rewind as we might be missing all the vdevs with the latest uberblocks.
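A sketch of the recovery flow described above, allowing one missing top-level vdev for a read-only import (hypothetical pool name):
echo 1 > /sys/module/zfs/parameters/zfs_max_missing_tvds
zpool import -o readonly=on POOL_NAME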
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_max_missing_tvds

Notes

Tags

import

When to change

troubleshooting pools with missing devices

Data Type

int

Units

missing top-level vdevs

Range

0 to MAX_INT

Default

0

Change

prior to pool import

Versions Affected

planned for v2

+
+
+

dbuf_metadata_cache_shift

+

dbuf_metadata_cache_shift sets the size of the dbuf metadata cache +as a fraction of ARC target size. This is an alternate method for +setting dbuf metadata cache size than +dbuf_metadata_cache_max_bytes.

+

dbuf_metadata_cache_max_bytes +overrides dbuf_metadata_cache_shift

+

This value is a “shift” representing the fraction of ARC target size +(grep -w c /proc/spl/kstat/zfs/arcstats). The ARC target size is +shifted to the right. Thus a value of ‘2’ results in the fraction = 1/4, +while a value of ‘6’ results in the fraction = 1/64.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_metadata_cache_shift

Notes

Tags

ARC, +dbuf_cache

When to change

Data Type

int

Units

shift

Range

practical range is (dbuf_cache_shift + 1) to MAX_INT

Default

6

Change

Dynamic

Versions Affected

planned for v2

+
+
+

dbuf_metadata_cache_max_bytes

+

dbuf_metadata_cache_max_bytes sets the size of the dbuf metadata +cache as a number of bytes. This is an alternate method for setting dbuf +metadata cache size than +dbuf_metadata_cache_shift

+

dbuf_metadata_cache_max_bytes +overrides dbuf_metadata_cache_shift

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_metadata_cache_max_bytes

Notes

Tags

dbuf_cache

When to change

Data Type

int

Units

bytes

Range

0 = use dbuf_metadata_cache_shift, up to ARC c_max

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

dbuf_cache_shift

+

dbuf_cache_shift sets the size of the dbuf cache as a fraction of +ARC target size. This is an alternate method for setting dbuf cache size +than dbuf_cache_max_bytes.

+

dbuf_cache_max_bytes overrides +dbuf_cache_shift

+

This value is a “shift” representing the fraction of ARC target size +(grep -w c /proc/spl/kstat/zfs/arcstats). The ARC target size is +shifted to the right. Thus a value of ‘2’ results in the fraction = 1/4, +while a value of ‘5’ results in the fraction = 1/32.

+

Performance tuning of dbuf cache can be monitored using:

+
  • dbufstat command
  • node_exporter ZFS module for prometheus environments
  • telegraf ZFS plugin for general-purpose metric collection
  • /proc/spl/kstat/zfs/dbufstats kstat
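For example, the raw kstat can be read directly, or summarized with the dbufstat command (the command name and options vary slightly between releases):
cat /proc/spl/kstat/zfs/dbufstats
dbufstat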
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_shift

Notes

Tags

ARC, dbuf_cache

When to change

to improve performance of read-intensive +channel programs

Data Type

int

Units

shift

Range

5 to MAX_INT

Default

5

Change

Dynamic

Versions Affected

planned for v2

+
+
+

dbuf_cache_max_bytes

+

dbuf_cache_max_bytes sets the size of the dbuf cache in bytes. This +is an alternate method for setting dbuf cache size than +dbuf_cache_shift

+

Performance tuning of dbuf cache can be monitored using:

+
  • dbufstat command
  • node_exporter ZFS module for prometheus environments
  • telegraf ZFS plugin for general-purpose metric collection
  • /proc/spl/kstat/zfs/dbufstats kstat
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

dbuf_cache_max_bytes

Notes

Tags

ARC, dbuf_cache

When to change

Data Type

int

Units

bytes

Range

0 = use dbuf_cache_shift, up to ARC c_max

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

metaslab_force_ganging

+

When testing allocation code, metaslab_force_ganging forces blocks +above the specified size to be ganged.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

metaslab_force_ganging

Notes

Tags

allocation

When to change

for development testing purposes only

Data Type

ulong

Units

bytes

Range

SPA_MINBLOCKSIZE to (SPA_MAXBLOCKSIZE + 1)

Default

SPA_MAXBLOCKSIZE + 1 (16,777,217 bytes)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_default_ms_count

+

When adding a top-level vdev, zfs_vdev_default_ms_count is the +target number of metaslabs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_default_ms_count

Notes

Tags

allocation

When to change

for development testing purposes only

Data Type

int

Range

16 to MAX_INT

Default

200

Change

prior to creating a pool or adding a +top-level vdev

Versions Affected

planned for v2

+
+
+

vdev_removal_max_span

+

During top-level vdev removal, chunks of data are copied from the vdev +which may include free space in order to trade bandwidth for IOPS. +vdev_removal_max_span sets the maximum span of free space included +as unnecessary data in a chunk of copied data.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

vdev_removal_max_span

Notes

Tags

vdev_removal

When to change

TBD

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_removal_ignore_errors

+

When removing a device, zfs_removal_ignore_errors controls the +process for handling hard I/O errors. When set, if a device encounters a +hard IO error during the removal process the removal will not be +cancelled. This can result in a normally recoverable block becoming +permanently damaged and is not recommended. This should only be used as +a last resort when the pool cannot be returned to a healthy state prior +to removing the device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_removal_ignore_errors

Notes

Tags

vdev_removal

When to change

See description for caveat

Data Type

boolean

Range

during device removal: 0 = hard errors +are not ignored, 1 = hard errors are +ignored

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_removal_suspend_progress

+

zfs_removal_suspend_progress is used during automated testing of the ZFS code to increase test coverage.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_removal_suspend_progress

Notes

Tags

vdev_removal

When to change

do not change

Data Type

boolean

Range

0 = do not suspend during vdev removal

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_indirect_commit_entry_delay_ms

+

During vdev removal, the vdev indirection layer sleeps for +zfs_condense_indirect_commit_entry_delay_ms milliseconds during +mapping generation. This parameter is used during automated testing of +the ZFS code to improve test coverage.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_indirect_commit_entry_delay_ms

Notes

Tags

vdev_removal

When to change

do not change

Data Type

int

Units

milliseconds

Range

0 to MAX_INT

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_indirect_vdevs_enable

+

During vdev removal, condensing process is an attempt to save memory by +removing obsolete mappings. zfs_condense_indirect_vdevs_enable +enables condensing indirect vdev mappings. When set, ZFS attempts to +condense indirect vdev mappings if the mapping uses more than +zfs_condense_min_mapping_bytes +bytes of memory and if the obsolete space map object uses more than +zfs_condense_max_obsolete_bytes +bytes on disk.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_indirect_vdevs_enable

Notes

Tags

vdev_removal

When to change

TBD

Data Type

boolean

Range

0 = do not save memory, 1 = save +memory by condensing obsolete +mapping after vdev removal

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_max_obsolete_bytes

+

After vdev removal, zfs_condense_max_obsolete_bytes sets the limit +for beginning the condensing process. Condensing begins if the obsolete +space map takes up more than zfs_condense_max_obsolete_bytes of +space on disk (logically). The default of 1 GiB is small enough relative +to a typical pool that the space consumed by the obsolete space map is +minimal.

+

See also +zfs_condense_indirect_vdevs_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_max_obsolete_bytes

Notes

Tags

vdev_removal

When to change

do not change

Data Type

ulong

Units

bytes

Range

0 to MAX_ULONG

Default

1,073,741,824 (1 GiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_condense_min_mapping_bytes

+

After vdev removal, zfs_condense_min_mapping_bytes is the lower +limit for determining when to condense the in-memory obsolete space map. +The condensing process will not continue unless a minimum of +zfs_condense_min_mapping_bytes of memory can be freed.

+

See also +zfs_condense_indirect_vdevs_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_condense_min_mapping_bytes

Notes

Tags

vdev_removal

When to change

do not change

Data Type

ulong

Units

bytes

Range

0 to MAX_ULONG

Default

128 KiB

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_initializing_max_active

+

zfs_vdev_initializing_max_active sets the maximum initializing I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_initializing_max_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_initializing_min_active

+

zfs_vdev_initializing_min_active sets the minimum initializing I/Os +active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_initializing_min_active

Notes

Tags

vdev, ZIO_scheduler

When to change

See ZFS I/O Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_initializing_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_removal_max_active

+

zfs_vdev_removal_max_active sets the maximum top-level vdev removal +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_removal_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

2

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_removal_min_active

+

zfs_vdev_removal_min_active sets the minimum top-level vdev removal +I/Os active to each device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_removal_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_removal_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_trim_max_active

+

zfs_vdev_trim_max_active sets the maximum trim I/Os active to each +device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_trim_max_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_max_active

Default

2

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_trim_min_active

+

zfs_vdev_trim_min_active sets the minimum trim I/Os active to each +device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_trim_min_active

Notes

Tags

vdev, +ZIO_scheduler

When to change

See ZFS I/O +Scheduler

Data Type

uint32

Units

I/O operations

Range

1 to zfs_vdev_trim_max_active

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_initialize_value

+

When initializing a vdev, ZFS writes patterns of +zfs_initialize_value bytes to the device.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_initialize_value

Notes

Tags

vdev_initialize

When to change

when debugging initialization code

Data Type

uint32 or uint64

Default

0xdeadbeef for 32-bit systems, +0xdeadbeefdeadbeee for 64-bit systems

Change

prior to running zpool initialize

Versions Affected

planned for v2

+
+
+
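As a sketch, assuming a hypothetical pool named tank and a Linux system that exposes the parameter under /sys/module/zfs/parameters, the pattern can be changed before starting initialization:

# write zeroes instead of the default 0xdeadbeef pattern (debugging only)
echo 0 > /sys/module/zfs/parameters/zfs_initialize_value
zpool initialize tank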

zfs_lua_max_instrlimit

+

zfs_lua_max_instrlimit limits the maximum time for a ZFS channel +program to run.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_lua_max_instrlimit

Notes

Tags

channel_programs

When to change

to enforce a CPU usage limit on ZFS +channel programs

Data Type

ulong

Units

LUA instructions

Range

0 to MAX_ULONG

Default

100,000,000

Change

Dynamic

Versions Affected

planned for v2

+
+
+
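As an illustration (the pool name tank and the Lua script path are hypothetical), the global cap can be lowered via the module parameter, and zfs program also accepts a per-invocation instruction limit with -t:

# cap channel programs at 10 million LUA instructions globally
echo 10000000 > /sys/module/zfs/parameters/zfs_lua_max_instrlimit
# or request a smaller limit for a single run
zfs program -t 10000000 tank /root/snapcleanup.lua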

zfs_lua_max_memlimit

+

zfs_lua_max_memlimit is the maximum memory limit for a ZFS channel program.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_lua_max_memlimit

Notes

Tags

channel_programs

When to change

Data Type

ulong

Units

bytes

Range

0 to MAX_ULONG

Default

104,857,600 (100 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_max_dataset_nesting

+

zfs_max_dataset_nesting limits the depth of nested datasets. Deeply +nested datasets can overflow the stack. The maximum stack depth depends +on kernel compilation options, so it is impractical to predict the +possible limits. For kernels compiled with small stack sizes, +zfs_max_dataset_nesting may require changes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_max_dataset_nesting

Notes

Tags

dataset

When to change

can be tuned temporarily to fix existing +datasets that exceed the predefined limit

Data Type

int

Units

datasets

Range

0 to MAX_INT

Default

50

Change

Dynamic, though once on-disk the value +for the pool is set

Versions Affected

planned for v2

+
+
+

zfs_ddt_data_is_special

+

zfs_ddt_data_is_special enables the deduplication table (DDT) to +reside on a special top-level vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_ddt_data_is_special

Notes

Tags

dedup, +special_vdev

When to change

when using a special top-level vdev and +no dedup top-level vdev and it is desired +to store the DDT in the main pool +top-level vdevs

Data Type

boolean

Range

0=do not use special vdevs to store DDT, +1=store DDT in special vdevs

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_user_indirect_is_special

+

If special vdevs are in use, zfs_user_indirect_is_special enables +user data indirect blocks (a form of metadata) to be written to the +special vdevs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_user_indirect_is_special

Notes

Tags

special_vdev

When to change

to force user data indirect blocks +to remain in the main pool top-level +vdevs

Data Type

boolean

Range

0=do not write user indirect blocks +to a special vdev, 1=write user +indirect blocks to a special vdev

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_reconstruct_indirect_combinations_max

+

After device removal, if an indirect split block contains more than +zfs_reconstruct_indirect_combinations_max many possible unique +combinations when being reconstructed, it can be considered too +computationally expensive to check them all. Instead, at most +zfs_reconstruct_indirect_combinations_max randomly-selected +combinations are attempted each time the block is accessed. This allows +all segment copies to participate fairly in the reconstruction when all +combinations cannot be checked and prevents repeated use of one bad +copy.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_reconstruct_indirect_combinations_max

Notes

Tags

vdev_removal

When to change

TBD

Data Type

int

Units

attempts

Range

0=do not limit attempts, 1 to +MAX_INT = limit for attempts

Default

4096

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_send_unmodified_spill_blocks

+

zfs_send_unmodified_spill_blocks enables sending of unmodified spill +blocks in the send stream. Under certain circumstances, previous +versions of ZFS could incorrectly remove the spill block from an +existing object. Including unmodified copies of the spill blocks creates +a backwards compatible stream which will recreate a spill block if it +was incorrectly removed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_send_unmodified_spill_blocks

Notes

Tags

send

When to change

TBD

Data Type

boolean

Range

0=do not send unmodified spill +blocks, 1=send unmodified spill +blocks

Default

1

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_spa_discard_memory_limit

+

zfs_spa_discard_memory_limit sets the limit for maximum memory used +for prefetching a pool’s checkpoint space map on each vdev while +discarding a pool checkpoint.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_spa_discard_memory_limit

Notes

Tags

checkpoint

When to change

TBD

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

16,777,216 (16 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_special_class_metadata_reserve_pct

+

zfs_special_class_metadata_reserve_pct sets a threshold for space in +special vdevs to be reserved exclusively for metadata. This prevents +small data blocks from completely consuming a special vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_special_class_metadata_reserve_pct

Notes

Tags

special_vdev

When to change

TBD

Data Type

int

Units

percent

Range

0 to 100

Default

25

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_extent_bytes_max

+

zfs_trim_extent_bytes_max sets the maximum size of a trim (aka discard, scsi unmap) command. Ranges larger than zfs_trim_extent_bytes_max are split into chunks no larger than zfs_trim_extent_bytes_max bytes prior to being issued to the device. Use zpool iostat -w to observe the latency of trim commands.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_extent_bytes_max

Notes

Tags

trim

When to change

if the device can efficiently handle +larger trim requests

Data Type

uint

Units

bytes

Range

zfs_trim_extent_bytes_min to MAX_UINT

Default

134,217,728 (128 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+
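For example (hypothetical pool name tank), trim latency can be observed and the extent limit raised if the devices handle large trim requests efficiently; the 256 MiB value is illustrative only:

zpool trim tank                 # start a manual trim
zpool iostat -w tank 5          # per-vdev latency histograms, including trim
echo 268435456 > /sys/module/zfs/parameters/zfs_trim_extent_bytes_max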

zfs_trim_extent_bytes_min

+

zfs_trim_extent_bytes_min sets the minimum size of trim (aka discard, scsi unmap) commands. Trim ranges smaller than zfs_trim_extent_bytes_min are skipped unless they’re part of a larger range which was broken into chunks. Some devices have performance degradation during trim operations, so using a larger zfs_trim_extent_bytes_min can reduce the total amount of space trimmed. Use zpool iostat -w to observe the latency of trim commands.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_extent_bytes_min

Notes

Tags

trim

When to change

when trim is in use and device +performance suffers from trimming small +allocations

Data Type

uint

Units

bytes

Range

0=trim all unallocated space, otherwise minimum physical block size to MAX_UINT

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_metaslab_skip

+
+
zfs_trim_metaslab_skip enables uninitialized metaslabs to be +skipped during the trim (aka discard, scsi unmap) process. +zfs_trim_metaslab_skip can be useful for pools constructed from +large thinly-provisioned devices where trim operations perform slowly.
+
As a pool ages an increasing fraction of the pool’s metaslabs are +initialized, progressively degrading the usefulness of this option. +This setting is stored when starting a manual trim and persists for +the duration of the requested trim. Use zpool iostat -w to observe +the latency of trim commands.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_metaslab_skip

Notes

Tags

trim

When to change

Data Type

boolean

Range

0=do not skip uninitialized metaslabs +during trim, 1=skip uninitialized +metaslabs during trim

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_queue_limit

+

zfs_trim_queue_limit sets the maximum queue depth for leaf vdevs. See also zfs_vdev_trim_max_active and zfs_trim_extent_bytes_max. Use zpool iostat -q to observe trim queue depth.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_queue_limit

Notes

Tags

trim

When to change

to restrict the number of trim commands in the queue

Data Type

uint

Units

I/O operations

Range

1 to MAX_UINT

Default

10

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_trim_txg_batch

+

zfs_trim_txg_batch sets the number of transaction groups worth of +frees which should be aggregated before trim (aka discard, scsi unmap) +commands are issued to a device. This setting represents a trade-off +between issuing larger, more efficient trim commands and the delay +before the recently trimmed space is available for use by the device.

+

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger trim operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default value of 32 was empirically determined to be a reasonable compromise.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_trim_txg_batch

Notes

Tags

trim

When to change

TBD

Data Type

uint

Units

transaction groups

Range

1 to MAX_UINT

Default

32

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_aggregate_trim

+

zfs_vdev_aggregate_trim allows trim I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will already have been aggregated by the metaslab.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_aggregate_trim

Notes

Tags

trim, vdev, +ZIO_scheduler

When to change

when debugging trim code or trim +performance issues

Data Type

boolean

Range

0=do not attempt to aggregate trim +commands, 1=attempt to aggregate trim +commands

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_aggregation_limit_non_rotating

+

zfs_vdev_aggregation_limit_non_rotating is the equivalent of +zfs_vdev_aggregation_limit for devices +which represent themselves as non-rotating to the Linux blkdev +interfaces. Such devices have a value of 0 in +/sys/block/DEVICE/queue/rotational and are expected to be SSDs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_aggregation_limit_non_rotating

Notes

Tags

vdev, ZIO_scheduler

When to change

see +zfs_vdev_aggregation_limit

Data Type

int

Units

bytes

Range

0 to MAX_INT

Default

131,072 bytes (128 KiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zil_nocacheflush

+

ZFS uses barriers (volatile cache flush commands) to ensure data is +committed to permanent media by devices. This ensures consistent +on-media state for devices where caches are volatile (eg HDDs).

+

zil_nocacheflush disables the cache flush commands that are normally +sent to devices by the ZIL after a log write has completed.

+

The difference between zil_nocacheflush and +zfs_nocacheflush is zil_nocacheflush applies +to ZIL writes while zfs_nocacheflush disables +barrier writes to the pool devices at the end of transaction group syncs.

+

WARNING: setting this can cause ZIL corruption on power loss if the +device has a volatile write cache.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zil_nocacheflush

Notes

Tags

disks, ZIL

When to change

If the storage device has nonvolatile cache, +then disabling cache flush can save the cost of +occasional cache flush commands

Data Type

boolean

Range

0=send cache flush commands, 1=do not send +cache flush commands

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zio_deadman_log_all

+

zio_deadman_log_all enables debugging messages for all ZFS I/Os, +rather than only for leaf ZFS I/Os for a vdev. This is meant to be used +by developers to gain diagnostic information for hang conditions which +don’t involve a mutex or other locking primitive. Typically these are +conditions where a thread in the zio pipeline is looping indefinitely.

+

See also zfs_dbgmsg_enable

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_deadman_log_all

Notes

Tags

debug

When to change

when debugging ZFS I/O pipeline

Data Type

boolean

Range

0=do not log all deadman events, 1=log all +deadman events

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zio_decompress_fail_fraction

+

If non-zero, zio_decompress_fail_fraction represents the denominator +of the probability that ZFS should induce a decompression failure. For +instance, for a 5% decompression failure rate, this value should be set +to 20.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_decompress_fail_fraction

Notes

Tags

debug

When to change

when debugging ZFS internal +compressed buffer code

Data Type

ulong

Units

probability of induced decompression +failure is +1/zio_decompress_fail_fraction

Range

0 = do not induce failures, or 1 to +MAX_ULONG

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zio_slow_io_ms

+

An I/O operation taking more than zio_slow_io_ms milliseconds to +complete is marked as a slow I/O. Slow I/O counters can be observed with +zpool status -s. Each slow I/O causes a delay zevent, observable +using zpool events. See also zfs-events(5).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zio_slow_io_ms

Notes

Tags

vdev, zed

When to change

when debugging slow devices and the default +value is inappropriate

Data Type

int

Units

milliseconds

Range

0 to MAX_INT

Default

30,000 (30 seconds)

Change

Dynamic

Versions Affected

planned for v2

+
+
+
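A minimal example of observing slow I/Os and tightening the threshold (the pool name tank is hypothetical; the 10-second value is only an illustration):

zpool status -s tank            # per-vdev slow I/O counters
zpool events                    # includes delay zevents
echo 10000 > /sys/module/zfs/parameters/zio_slow_io_ms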

vdev_validate_skip

+

vdev_validate_skip disables label validation steps during pool +import. Changing is not recommended unless you know what you are doing +and are recovering a damaged label.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

vdev_validate_skip

Notes

Tags

vdev

When to change

do not change

Data Type

boolean

Range

0=validate labels during pool import, 1=do not +validate vdev labels during pool import

Default

0

Change

prior to pool import

Versions Affected

planned for v2

+
+
+

zfs_async_block_max_blocks

+

zfs_async_block_max_blocks limits the number of blocks freed in a +single transaction group commit. During deletes of large objects, such +as snapshots, the number of freed blocks can cause the DMU to extend txg +sync times well beyond zfs_txg_timeout. +zfs_async_block_max_blocks is used to limit these effects.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_async_block_max_blocks

Notes

Tags

delete, DMU

When to change

TBD

Data Type

ulong

Units

blocks

Range

1 to MAX_ULONG

Default

MAX_ULONG (do not limit)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_checksum_events_per_second

+

zfs_checksum_events_per_second is a rate limit for checksum events. +Note that this should not be set below the zed thresholds (currently +10 checksums over 10 sec) or else zed may not trigger any action.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_checksum_events_per_second

Notes

Tags

vdev

When to change

TBD

Data Type

uint

Units

checksum events

Range

zed threshold to MAX_UINT

Default

20

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_disable_ivset_guid_check

+

zfs_disable_ivset_guid_check disables requirement for IVset guids to +be present and match when doing a raw receive of encrypted datasets. +Intended for users whose pools were created with ZFS on Linux +pre-release versions and now have compatibility issues.

+

For a ZFS raw receive, from a send stream created by zfs send --raw, +the crypt_keydata nvlist includes a to_ivset_guid to be set on the new +snapshot. This value will override the value generated by the snapshot +code. However, this value may not be present, because older +implementations of the raw send code did not include this value. When +zfs_disable_ivset_guid_check is enabled, the receive proceeds and a +newly-generated value is used.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_disable_ivset_guid_check

Notes

Tags

receive

When to change

debugging pre-release ZFS raw sends

Data Type

boolean

Range

0=check IVset guid, 1=do not check +IVset guid

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_obsolete_min_time_ms

+

zfs_obsolete_min_time_ms is similar to +zfs_free_min_time_ms and used for cleanup of +old indirection records for vdevs removed using the zpool remove +command.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_obsolete_min_time_ms

Notes

Tags

delete, remove

When to change

TBD

Data Type

int

Units

milliseconds

Range

0 to MAX_INT

Default

500

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_override_estimate_recordsize

+

zfs_override_estimate_recordsize overrides the default logic for +estimating block sizes when doing a zfs send. The default heuristic is +that the average block size will be the current recordsize.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_override_estimate_recordsize

Notes

Tags

send

When to change

if most data in your dataset is +not of the current recordsize +and you require accurate zfs +send size estimates

Data Type

ulong

Units

bytes

Range

0=do not override, 1 to +MAX_ULONG

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+
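As a sketch (dataset and snapshot names are hypothetical), stream size estimates can be checked with a dry-run send before deciding whether to override the estimator:

# print the estimated stream size without sending anything
zfs send -nv tank/data@snap1
# assume an 8 KiB average block size for estimates instead of the recordsize
echo 8192 > /sys/module/zfs/parameters/zfs_override_estimate_recordsize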

zfs_remove_max_segment

+

zfs_remove_max_segment sets the largest contiguous segment that ZFS +attempts to allocate when removing a vdev. This can be no larger than +16MB. If there is a performance problem with attempting to allocate +large blocks, consider decreasing this. The value is rounded up to a +power-of-2.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_remove_max_segment

Notes

Tags

remove

When to change

after removing a top-level vdev, consider +decreasing if there is a performance +degradation when attempting to allocate +large blocks

Data Type

int

Units

bytes

Range

maximum of the physical block size of all +vdevs in the pool to 16,777,216 bytes (16 +MiB)

Default

16,777,216 bytes (16 MiB)

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_resilver_disable_defer

+

zfs_resilver_disable_defer disables the resilver_defer pool +feature. The resilver_defer feature allows ZFS to postpone new +resilvers if an existing resilver is in progress.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_resilver_disable_defer

Notes

Tags

resilver

When to change

if resilver postponement is not +desired due to overall resilver time +constraints

Data Type

boolean

Range

0=allow resilver_defer to postpone +new resilver operations, 1=immediately +restart resilver when needed

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_scan_suspend_progress

+

zfs_scan_suspend_progress causes a scrub or resilver scan to freeze +without actually pausing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scan_suspend_progress

Notes

Tags

resilver, scrub

When to change

testing or debugging scan code

Data Type

boolean

Range

0=do not freeze scans, 1=freeze scans

Default

0

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_scrub_min_time_ms

+

Scrubs are processed by the sync thread. While scrubbing, at least zfs_scrub_min_time_ms milliseconds are spent working on a scrub between txg syncs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_scrub_min_time_ms

Notes

Tags

scrub

When to change

Data Type

int

Units

milliseconds

Range

1 to (zfs_txg_timeout - 1)

Default

1,000

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_slow_io_events_per_second

+

zfs_slow_io_events_per_second is a rate limit for slow I/O events. Note that this should not be set below the zed thresholds (currently 10 events over 10 seconds) or else zed may not trigger any action.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_slow_io_events_per_second

Notes

Tags

vdev

When to change

TBD

Data Type

uint

Units

slow I/O events

Range

zed threshold to MAX_UINT

Default

20

Change

Dynamic

Versions Affected

planned for v2

+
+
+

zfs_vdev_min_ms_count

+

zfs_vdev_min_ms_count is the minimum number of metaslabs to create +in a top-level vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_min_ms_count

Notes

Tags

metaslab, vdev

When to change

TBD

Data Type

int

Units

metaslabs

Range

16 to zfs_vdev_ms_count_limit

Default

16

Change

prior to creating a pool or adding a +top-level vdev

Versions Affected

planned for v2

+
+
+

zfs_vdev_ms_count_limit

+

zfs_vdev_ms_count_limit is the practical upper limit for the number +of metaslabs per top-level vdev.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

zfs_vdev_ms_count_limit

Notes

Tags

metaslab, +vdev

When to change

TBD

Data Type

int

Units

metaslabs

Range

zfs_vdev_min_ms_count to 131,072

Default

131,072

Change

prior to creating a pool or adding a +top-level vdev

Versions Affected

planned for v2

+
+
+

spl_hostid

+
+
spl_hostid is a unique system id number. It originated in Sun’s +products where most systems had a unique id assigned at the factory. +This assignment does not exist in modern hardware.
+
In ZFS, the hostid is stored in the vdev label and can be used to determine if another system had imported the pool. When set, spl_hostid can be used to uniquely identify a system. By default this value is set to zero, which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in the file shown in spl_hostid_path.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_hostid

Notes

Tags

hostid, MMP

Kernel module

spl

When to change

to uniquely identify a system when vdevs can be +shared across multiple systems

Data Type

ulong

Range

0=ignore hostid, 1 to 4,294,967,295 (32-bits or +0xffffffff)

Default

0

Change

prior to importing pool

Versions Affected

v0.6.1

+
+
+
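A minimal sketch of enabling a hostid on Linux; the hostid value below is a placeholder, and zgenhostid is assumed to be available from the OpenZFS userland tools:

# generate /etc/hostid (the default spl_hostid_path) if it does not exist
zgenhostid
# or set the module parameter explicitly at module load time
echo "options spl spl_hostid=0x00c0ffee" >> /etc/modprobe.d/spl.conf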

spl_hostid_path

+

spl_hostid_path is the path name for a file that can contain a +unique hostid. For testing purposes, spl_hostid_path can be +overridden by the ZFS_HOSTID environment variable.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_hostid_path

Notes

Tags

hostid, MMP

Kernel module

spl

When to change

when creating a new ZFS distribution where the +default value is inappropriate

Data Type

string

Default

“/etc/hostid”

Change

read-only, can only be changed prior to spl +module load

Versions Affected

v0.6.1

+
+
+

spl_kmem_alloc_max

+

Large kmem_alloc() allocations fail if they exceed KMALLOC_MAX_SIZE, as determined by the kernel source. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_alloc_max

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

bytes

Range

TBD

Default

KMALLOC_MAX_SIZE / 4

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_alloc_warn

+

As a general rule, kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate-limited warning is printed to the console for any kmem_alloc() which exceeds the threshold spl_kmem_alloc_warn.

+

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_alloc_warn

Notes

Tags

memory

Kernel module

spl

When to change

developers are encouraged lower when testing +so any new, large allocations are quickly +caught

Data Type

uint

Units

bytes

Range

0=disable the warnings, otherwise 1 to MAX_UINT

Default

32,768 (32 KiB)

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_expire

+

Cache expiration is part of default illumos cache behavior. The idea is that objects in magazines which have not been recently accessed should be returned to the slabs periodically. This is known as cache aging and, when enabled, objects will typically be returned after 15 seconds.

+

On the other hand Linux slabs are designed to never move objects back to +the slabs unless there is memory pressure. This is possible because +under Linux the cache will be notified when memory is low and objects +can be released.

+

By default only the Linux method is enabled. It has been shown to improve responsiveness on low-memory systems and not negatively impact the performance of systems with more memory. This policy may be changed by setting the spl_kmem_cache_expire bit mask as follows; both policies may be enabled concurrently.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_expire

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

bitmask

Range

0x01 - Aging (illumos), 0x02 - Low memory (Linux)

Default

0x02

Change

Dynamic

Versions Affected

v0.6.1 to v0.8.x

+
+
+

spl_kmem_cache_kmem_limit

+

Depending on the size of a memory cache object it may be backed by +kmalloc() or vmalloc() memory. This is because the size of the +required allocation greatly impacts the best way to allocate the memory.

+

When objects are small and only a small number of memory pages need to +be allocated, ideally just one, then kmalloc() is very efficient. +However, allocating multiple pages with kmalloc() gets increasingly +expensive because the pages must be physically contiguous.

+

For this reason we shift to vmalloc() for slabs of large objects, which removes the need for contiguous pages. vmalloc() cannot be used in all cases because there is significant locking overhead involved: it takes a single global lock over the entire virtual address range, which serializes all allocations. Using slightly different allocation functions for small and large objects allows us to handle a wide range of object sizes.

+

The spl_kmem_cache_kmem_limit value is used to determine this cutoff +size. One quarter of the kernel’s compiled PAGE_SIZE is used as the +default value because +spl_kmem_cache_obj_per_slab defaults +to 16. With these default values, at most four contiguous pages are +allocated.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_kmem_limit

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

pages

Range

TBD

Default

PAGE_SIZE / 4

Change

Dynamic

Versions Affected

v0.7.0 to v0.8.x

+
+
+

spl_kmem_cache_max_size

+

spl_kmem_cache_max_size is the maximum size of a kmem cache slab in MiB. This effectively limits the maximum cache object size to spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab. Kmem caches may not be created with objects sized larger than this limit.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_max_size

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

MiB

Range

TBD

Default

4 for 32-bit kernel, 32 for 64-bit kernel

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_obj_per_slab

+

spl_kmem_cache_obj_per_slab is the preferred number of objects per slab in the kmem cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_obj_per_slab

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

kmem cache objects

Range

TBD

Default

8

Change

Dynamic

Versions Affected

v0.7.0 to v0.8.x

+
+
+

spl_kmem_cache_obj_per_slab_min

+

spl_kmem_cache_obj_per_slab_min is the minimum number of objects +allowed per slab. Normally slabs will contain +spl_kmem_cache_obj_per_slab objects +but for caches that contain very large objects it’s desirable to only +have a few, or even just one, object per slab.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_obj_per_slab_min

Notes

Tags

memory

Kernel module

spl

When to change

debugging kmem cache operations

Data Type

uint

Units

kmem cache objects

Range

TBD

Default

1

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_reclaim

+

spl_kmem_cache_reclaim prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it’s preferable that Linux reclaim memory from some other subsystem first. Setting spl_kmem_cache_reclaim increases the likelihood of out-of-memory events on a memory-constrained system.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_reclaim

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

boolean

Range

0=enable rapid memory reclaim from kmem +caches, 1=disable rapid memory reclaim +from kmem caches

Default

0

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_slab_limit

+

For small objects the Linux slab allocator should be used to make the +most efficient use of the memory. However, large objects are not +supported by the Linux slab allocator and therefore the SPL +implementation is preferred. spl_kmem_cache_slab_limit is used to +determine the cutoff between a small and large object.

+

Objects of spl_kmem_cache_slab_limit or smaller will be allocated +using the Linux slab allocator, large objects use the SPL allocator. A +cutoff of 16 KiB was determined to be optimal for architectures using 4 +KiB pages.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_slab_limit

Notes

Tags

memory

Kernel module

spl

When to change

TBD

Data Type

uint

Units

bytes

Range

TBD

Default

16,384 (16 KiB) when kernel PAGE_SIZE = +4KiB, 0 for other PAGE_SIZE values

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_max_show_tasks

+

spl_max_show_tasks is the limit of tasks per pending list in each taskq shown in /proc/spl/taskq and /proc/spl/taskq-all. Reading the ProcFS files walks the lists with the lock held, which could cause a lockup if a list grows too large. If a list is larger than the limit, the string "(truncated)" is printed.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_max_show_tasks

Notes

Tags

taskq

Kernel module

spl

When to change

TBD

Data Type

uint

Units

tasks reported

Range

0 disables the limit, 1 to MAX_UINT

Default

512

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_panic_halt

+

spl_panic_halt enables kernel panic upon assertion failures. When +not enabled, the asserting thread is halted to facilitate further +debugging.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_panic_halt

Notes

Tags

debug, panic

Kernel module

spl

When to change

when debugging assertions and kernel core dumps +are desired

Data Type

boolean

Range

0=halt thread upon assertion, 1=panic kernel +upon assertion

Default

0

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_taskq_kick

+

Upon writing a non-zero value to spl_taskq_kick, all taskqs are +scanned. If any taskq has a pending task more than 5 seconds old, the +taskq spawns more threads. This can be useful in rare deadlock +situations caused by one or more taskqs not spawning a thread when it +should.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_kick

Notes

Tags

taskq

Kernel module

spl

When to change

See description above

Data Type

uint

Units

N/A

Default

0

Change

Dynamic

Versions Affected

v0.7.0

+
+
+
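For example, if taskqs appear wedged, a kick can be requested at runtime (assuming the parameter is exposed under /sys/module/spl/parameters as on typical Linux builds):

# scan all taskqs and spawn extra threads for tasks pending more than 5 seconds
echo 1 > /sys/module/spl/parameters/spl_taskq_kick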

spl_taskq_thread_bind

+

spl_taskq_thread_bind enables binding taskq threads to specific +CPUs, distributed evenly over the available CPUs. By default, this +behavior is disabled to allow the Linux scheduler the maximum +flexibility to determine where a thread should run.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_bind

Notes

Tags

CPU, taskq

Kernel module

spl

When to change

when debugging CPU scheduling options

Data Type

boolean

Range

0=taskqs are not bound to specific CPUs, +1=taskqs are bound to CPUs

Default

0

Change

prior to loading spl kernel module

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_dynamic

+

spl_taskq_thread_dynamic enables dynamic taskqs. Taskqs created with the TASKQ_DYNAMIC flag will by default create only a single thread. New threads will be created on demand up to a maximum allowed number to facilitate the completion of outstanding tasks. Threads which are no longer needed are promptly destroyed. By default this behavior is enabled, but it can be disabled.

+

See also +zfs_zil_clean_taskq_nthr_pct, +zio_taskq_batch_pct

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_dynamic

Notes

Tags

taskq

Kernel module

spl

When to change

disable for performance analysis or +troubleshooting

Data Type

boolean

Range

0=taskq threads are not dynamic, 1=taskq +threads are dynamically created and +destroyed

Default

1

Change

prior to loading spl kernel module

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_priority

+
+
spl_taskq_thread_priority allows newly created taskq threads to +set a non-default scheduler priority. When enabled the priority +specified when a taskq is created will be applied to all threads +created by that taskq.
+
When disabled all threads will use the default Linux kernel thread +priority.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_priority

Notes

Tags

CPU, taskq

Kernel module

spl

When to change

when troubleshooting CPU +scheduling-related performance issues

Data Type

boolean

Range

0=taskq threads use the default Linux kernel thread priority, 1=taskq threads use the priority specified when the taskq was created

Default

1

Change

prior to loading spl kernel module

Versions Affected

v0.7.0

+
+
+

spl_taskq_thread_sequential

+

spl_taskq_thread_sequential is the number of items a taskq worker +thread must handle without interruption before requesting a new worker +thread be spawned. spl_taskq_thread_sequential controls how quickly +taskqs ramp up the number of threads processing the queue. Because Linux +thread creation and destruction are relatively inexpensive a small +default value has been selected. Thus threads are created aggressively, +which is typically desirable. Increasing this value results in a slower +thread creation rate which may be preferable for some configurations.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_taskq_thread_sequential

Notes

Tags

CPU, taskq

Kernel module

spl

When to change

TBD

Data Type

int

Units

taskq items

Range

1 to MAX_INT

Default

4

Change

Dynamic

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_kmem_threads

+

spl_kmem_cache_kmem_threads shows the current number of +spl_kmem_cache threads. This task queue is responsible for +allocating new slabs for use by the kmem caches. For the majority of +systems and workloads only a small number of threads are required.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_kmem_threads

Notes

Tags

CPU, memory

Kernel module

spl

When to change

read-only

Data Type

int

Range

1 to MAX_INT

Units

threads

Default

4

Change

read-only, can only be changed prior +to spl module load

Versions Affected

v0.7.0

+
+
+

spl_kmem_cache_magazine_size

+

spl_kmem_cache_magazine_size controls the maximum size of the per-CPU kmem cache magazines. Cache magazines are an optimization designed to minimize the cost of allocating memory. They do this by keeping a per-cpu cache of recently freed objects, which can then be reallocated without taking a lock. This can improve performance on highly contended caches. However, because objects in magazines prevent otherwise empty slabs from being immediately released, this may not be ideal for low-memory machines.

+

For this reason spl_kmem_cache_magazine_size can be used to set a +maximum magazine size. When this value is set to 0 the magazine size +will be automatically determined based on the object size. Otherwise +magazines will be limited to 2-256 objects per magazine (eg per CPU). +Magazines cannot be disabled entirely in this implementation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

spl_kmem_cache_magazine_size

Notes

Tags

CPU, memory

Kernel module

spl

When to change

Data Type

int

Units

objects per magazine

Range

0=automatically scale magazine size, +otherwise 2 to 256

Default

0

Change

read-only, can only be changed prior +to spl module load

Versions Affected

v0.7.0

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/Workload Tuning.html b/Performance and Tuning/Workload Tuning.html new file mode 100644 index 000000000..a4f3c8785 --- /dev/null +++ b/Performance and Tuning/Workload Tuning.html @@ -0,0 +1,937 @@ + + + + + + + Workload Tuning — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Workload Tuning

+

Below are tips for various workloads.

+ +
+

Basic concepts

+

Descriptions of ZFS internals that have an effect on application +performance follow.

+
+

Adaptive Replacement Cache

+

For decades, operating systems have used RAM as a cache to avoid the necessity of waiting on disk IO, which is extremely slow. This concept is called page replacement. Until ZFS, virtually all filesystems used the Least Recently Used (LRU) page replacement algorithm in which the least recently used pages are the first to be replaced. Unfortunately, the LRU algorithm is vulnerable to cache flushes, where a brief, occasional change in workload removes all frequently used data from cache. The Adaptive Replacement Cache (ARC) algorithm was implemented in ZFS to replace LRU. It solves this problem by maintaining four lists:

+
    +
  1. A list for recently cached entries.

  2. +
  3. A list for recently cached entries that have been accessed more than +once.

  4. +
  5. A list for entries evicted from #1.

  6. +
  7. A list of entries evicted from #2.

  8. +
+

Data is evicted from the first list while an effort is made to keep data +in the second list. In this way, ARC is able to outperform LRU by +providing a superior hit rate.

+

In addition, a dedicated cache device (typically an SSD) can be added to the pool, with zpool add POOLNAME cache DEVICENAME. The cache device is managed by the L2ARC, which scans entries that are next to be evicted and writes them to the cache device. The data stored in ARC and L2ARC can be controlled via the primarycache and secondarycache zfs properties respectively, which can be set on both zvols and datasets. Possible settings are all, none and metadata. It is possible to improve performance when a zvol or dataset hosts an application that does its own caching by caching only metadata. One example would be a virtual machine using ZFS. Another would be a database system which manages its own cache (Oracle for instance). PostgreSQL, by contrast, depends on the OS-level file cache for the majority of cache.
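For example (pool, device and dataset names are hypothetical):

# add an SSD as an L2ARC cache device
zpool add tank cache nvme0n1
# cache only metadata for a dataset whose application manages its own data cache
zfs set primarycache=metadata tank/dbfiles
zfs set secondarycache=metadata tank/dbfiles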

+
+
+

Alignment Shift (ashift)

+

Top-level vdevs contain an internal property called ashift, which stands +for alignment shift. It is set at vdev creation and it is immutable. It +can be read using the zdb command. It is calculated as the maximum +base 2 logarithm of the physical sector size of any child vdev and it +alters the disk format such that writes are always done according to it. +This makes 2^ashift the smallest possible IO on a vdev. Configuring +ashift correctly is important because partial sector writes incur a +penalty where the sector must be read into a buffer before it can be +written. ZFS makes the implicit assumption that the sector size reported +by drives is correct and calculates ashift based on that.

+

In an ideal world, physical sector size is always reported correctly and +therefore, this requires no attention. Unfortunately, this is not the +case. The sector size on all storage devices was 512-bytes prior to the +creation of flash-based solid state drives. Some operating systems, such +as Windows XP, were written under this assumption and will not function +when drives report a different sector size.

+

Flash-based solid state drives came to market around 2007. These devices +report 512-byte sectors, but the actual flash pages, which roughly +correspond to sectors, are never 512-bytes. The early models used +4096-byte pages while the newer models have moved to an 8192-byte page. +In addition, “Advanced Format” hard drives have been created which also +use a 4096-byte sector size. Partial page writes suffer from similar +performance degradation as partial sector writes. In some cases, the +design of NAND-flash makes the performance degradation even worse, but +that is beyond the scope of this description.

+

Reporting the correct sector size is the responsibility of the block device layer. Unfortunately, this has made proper handling of devices that misreport their sector size differ across platforms. The respective methods are as follows:

+ +

-o ashift= is convenient, but it is flawed in that the creation of pools containing top level vdevs that have multiple optimal sector sizes requires the use of multiple commands. A newer syntax that will rely on the actual sector sizes has been discussed as a cross platform replacement and will likely be implemented in the future.
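On platforms that accept the -o ashift= syntax, a pool for 4096-byte-sector drives might be created and checked roughly as follows (pool and device names are hypothetical; as noted above, zdb is one way to read the value back):

# force 4 KiB alignment (2^12) on the top-level vdev created by this command
zpool create -o ashift=12 tank mirror sda sdb
# read back the ashift recorded in the pool configuration
zdb -C tank | grep ashift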

+

In addition, there is a database of +drives known to misreport sector +sizes +to the ZFS on Linux project. It is used to automatically adjust ashift +without the assistance of the system administrator. This approach is +unable to fully compensate for misreported sector sizes whenever drive +identifiers are used ambiguously (e.g. virtual machines, iSCSI LUNs, +some rare SSDs), but it does a great amount of good. The format is +roughly compatible with illumos’ sd.conf and it is expected that other +implementations will integrate the database in future releases. Strictly +speaking, this database does not belong in ZFS, but the difficulty of +patching the Linux kernel (especially older ones) necessitated that this +be implemented in ZFS itself for Linux. The same is true for MacZFS. +However, FreeBSD and illumos are both able to implement this in the +correct layer.

+
+
+

Compression

+

Internally, ZFS allocates data using multiples of the device’s sector +size, typically either 512 bytes or 4KB (see above). When compression is +enabled, a smaller number of sectors can be allocated for each block. +The uncompressed block size is set by the recordsize (defaults to +128KB) or volblocksize (defaults to 16KB since v2.2) property (for filesystems +vs volumes).

+

The following compression algorithms are available:

+
    +
  • LZ4

    +
      +
    • New algorithm added after feature flags were created. It is +significantly superior to LZJB in all metrics tested. It is new +default compression algorithm +(compression=on) in OpenZFS. +It is available on all platforms as of 2020.

    • +
    +
  • +
  • LZJB

    +
      +
    • Original default compression algorithm (compression=on) for ZFS. +It was created to satisfy the desire for a compression algorithm +suitable for use in filesystems. Specifically, that it provides +fair compression, has a high compression speed, has a high +decompression speed and detects incompressible data +quickly.

    • +
    +
  • +
  • GZIP (1 through 9)

    +
      +
    • Classic Lempel-Ziv implementation. It provides high compression, +but it often makes IO CPU-bound.

    • +
    +
  • +
  • ZLE (Zero Length Encoding)

    +
      +
    • A very simple algorithm that only compresses zeroes.

    • +
    +
  • +
  • ZSTD (Zstandard)

    +
      +
    • Zstandard is a modern, high performance, general compression +algorithm which provides similar or better compression levels to +GZIP, but with much better performance. Zstandard offers a very +wide range of performance/compression trade-off, and is backed by +an extremely fast decoder. +It is available from OpenZFS 2.0 version.

    • +
    +
  • +
+

If you want to use compression and are uncertain which to use, use LZ4. +It averages a 2.1:1 compression ratio while gzip-1 averages 2.7:1, but +gzip is much slower. Both figures are obtained from testing by the LZ4 +project on the Silesia corpus. The +greater compression ratio of gzip is usually only worthwhile for rarely +accessed data.
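For example (dataset names are hypothetical):

zfs set compression=lz4 tank/data       # good default for mixed workloads
zfs set compression=zstd tank/archive   # better ratio, available since OpenZFS 2.0
zfs get compressratio tank/data         # check the achieved ratio later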

+
+
+

RAID-Z stripe width

+

Choose a RAID-Z stripe width based on your IOPS needs and the amount of +space you are willing to devote to parity information. If you need more +IOPS, use fewer disks per stripe. If you need more usable space, use +more disks per stripe. Trying to optimize your RAID-Z stripe width based +on exact numbers is irrelevant in nearly all cases. See this blog +post +for more details.

+
+
+

Dataset recordsize

+

ZFS datasets use an internal recordsize of 128KB by default. The dataset +recordsize is the basic unit of data used for internal copy-on-write on +files. Partial record writes require that data be read from either ARC +(cheap) or disk (expensive). recordsize can be set to any power of 2 +from 512 bytes to 1 megabyte. Software that writes in fixed record +sizes (e.g. databases) will benefit from the use of a matching +recordsize.

+

Changing the recordsize on a dataset will only take effect for new +files. If you change the recordsize because your application should +perform better with a different one, you will need to recreate its +files. A cp followed by a mv on each file is sufficient. Alternatively, +send/recv should recreate the files with the correct recordsize when a +full receive is done.
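A sketch for a database dataset; the 16 KiB value should match the application's actual I/O size, and the names are hypothetical:

zfs set recordsize=16K tank/db
# existing files keep their old record size until they are rewritten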

+
+

Larger record sizes

+

Record sizes of up to 16M are supported with the large_blocks pool +feature, which is enabled by default on new pools on systems that +support it.

+

Record sizes larger than 1M were disabled by default before OpenZFS v2.2, unless the zfs_max_recordsize kernel module parameter was set to allow sizes higher than 1M.

+

`zfs send` operations must specify -L +to ensure that larger than 128KB blocks are sent and the receiving pools +must support the large_blocks feature.

+
+
+
+

zvol volblocksize

+

Zvols have a volblocksize property that is analogous to recordsize. The current default (16KB since v2.2) balances metadata overhead, compression opportunities and decent space efficiency on the majority of pool configurations due to 4KB disk physical block rounding (especially on RAIDZ and DRAID), while incurring some write amplification on guest FSes that run with smaller block sizes [7].
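volblocksize can only be chosen at zvol creation time, for example (names and sizes are hypothetical):

# create a 100 GiB zvol with a larger block size for a mostly sequential guest workload
zfs create -V 100G -o volblocksize=64K tank/vmdisk0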

+

Users are advised to test their scenarios and see whether the volblocksize +needs to be changed to favor one or the other:

+
    +
  • sector alignment of guest FS is crucial

  • +
  • most of guest FSes use default block size of 4-8KB, so:

    +
      +
    • Larger volblocksize can help with mostly sequential workloads and +will gain a compression efficiency

    • +
    • Smaller volblocksize can help with random workloads and minimize +IO amplification, but will use more metadata +(e.g. more small IOs will be generated by ZFS) and may have worse +space efficiency (especially on RAIDZ and DRAID)

    • +
    • It’s meaningless to set volblocksize less than guest FS’s block size +or ashift

    • +
    • See Dataset recordsize +for additional information

    • +
    +
  • +
+
+
+

Deduplication

+

Deduplication uses an on-disk hash table, using extensible +hashing as +implemented in the ZAP (ZFS Attribute Processor). Each cached entry uses +slightly more than 320 bytes of memory. The DDT code relies on ARC for +caching the DDT entries, such that there is no double caching or +internal fragmentation from the kernel memory allocator. Each pool has a +global deduplication table shared across all datasets and zvols on which +deduplication is enabled. Each entry in the hash table is a record of a +unique block in the pool. (Where the block size is set by the +recordsize or volblocksize properties.)

+

The hash table (also known as the DDT or DeDup Table) must be accessed +for every dedup-able block that is written or freed (regardless of +whether it has multiple references). If there is insufficient memory for +the DDT to be cached in memory, each cache miss will require reading a +random block from disk, resulting in poor performance. For example, if +operating on a single 7200RPM drive that can do 100 io/s, uncached DDT +reads would limit overall write throughput to 100 blocks per second, or +400KB/s with 4KB blocks.

+

The consequence is that sufficient memory to store deduplication data is +required for good performance. The deduplication data is considered +metadata and therefore can be cached if the primarycache or +secondarycache properties are set to metadata. In addition, the +deduplication table will compete with other metadata for metadata +storage, which can have a negative effect on performance. Simulation of +the number of deduplication table entries needed for a given pool can be +done using the -D option to zdb. Then a simple multiplication by +320-bytes can be done to get the approximate memory requirements. +Alternatively, you can estimate an upper bound on the number of unique +blocks by dividing the amount of storage you plan to use on each dataset +(taking into account that partial records each count as a full +recordsize for the purposes of deduplication) by the recordsize and each +zvol by the volblocksize, summing and then multiplying by 320-bytes.
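For example (pool name hypothetical), the existing DDT can be summarized, and zdb can also simulate the table for a pool that does not yet use dedup (the -S simulation option is assumed to be available in your zdb build):

zdb -DD tank    # histogram plus entry counts for the existing DDT
zdb -S tank     # simulate dedup: estimate unique blocks before enabling dedup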

+
+
+

Metaslab Allocator

+

ZFS top level vdevs are divided into metaslabs from which blocks can be independently allocated, to allow concurrent IOs to perform allocations without blocking one another. At present, there is a regression on the Linux and Mac OS X ports that causes serialization to occur.

+

By default, the selection of a metaslab is biased toward lower LBAs to improve performance of spinning disks, but this does not make sense on solid state media. This behavior can be adjusted globally by setting the ZFS module’s global metaslab_lba_weighting_enabled tunable to 0. Changing this tunable is only advisable on systems that use solid state media exclusively for their pools.
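A hedged sketch on Linux (the sysfs path and modprobe syntax below assume the standard ZFS kernel module packaging and may differ on other platforms or versions):

echo 0 > /sys/module/zfs/parameters/metaslab_lba_weighting_enabled

To make the change persistent across reboots, it can be added to /etc/modprobe.d/zfs.conf:

options zfs metaslab_lba_weighting_enabled=0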

+

The metaslab allocator will allocate blocks on a first-fit basis when a metaslab has at least 4 percent free space, and on a best-fit basis when a metaslab has less than 4 percent free space. The former is much faster than the latter, but it is not possible to tell when this behavior occurs from the pool’s free space alone. However, the command zdb -mmm $POOLNAME will provide this information.

+
+
+

Pool Geometry

+

If small random IOPS are of primary importance, mirrored vdevs will +outperform raidz vdevs. Read IOPS on mirrors will scale with the number +of drives in each mirror while raidz vdevs will each be limited to the +IOPS of the slowest drive.

+

If sequential writes are of primary importance, raidz will outperform +mirrored vdevs. Sequential write throughput increases linearly with the +number of data disks in raidz while writes are limited to the slowest +drive in mirrored vdevs. Sequential read performance should be roughly +the same on each.

+

Aggregate pool IOPS and throughput will increase by the respective sums of the IOPS and throughput of each top level vdev, regardless of whether they are raidz or mirrors.

+
+
+

Whole Disks versus Partitions

+

ZFS will behave differently on different platforms when given a whole +disk.

+

On illumos, ZFS attempts to enable the write cache on a whole disk. The +illumos UFS driver cannot ensure integrity with the write cache enabled, +so by default Sun/Solaris systems using UFS file system for boot were +shipped with drive write cache disabled (long ago, when Sun was still an +independent company). For safety on illumos, if ZFS is not given the +whole disk, it could be shared with UFS and thus it is not appropriate +for ZFS to enable write cache. In this case, the write cache setting is +not changed and will remain as-is. Today, most vendors ship drives with +write cache enabled by default.

+

On Linux, the Linux IO elevator is largely redundant given that ZFS has +its own IO elevator.

+

ZFS will also create a GPT partition table and its own partitions when given a whole disk under illumos on x86/amd64 and on Linux. This is mainly to make booting through UEFI possible because UEFI requires a small FAT partition to be able to boot the system. The ZFS driver will be able to tell the difference between whether the pool had been given the entire disk or not via the whole_disk field in the label.

+

This is not done on FreeBSD. Pools created by FreeBSD will always have the whole_disk field set to true, such that a pool created on FreeBSD and imported on another platform will always be treated as if the whole disks were given to ZFS.

+
+
+
+

OS/distro-specific recommendations

+
+

Linux

+
+

init_on_alloc

+

Some Linux distributions (at least Debian and Ubuntu) enable the init_on_alloc option by default as a security precaution. This option can help to [6]:

+
+

prevent possible information leaks and +make control-flow bugs that depend on uninitialized values more +deterministic.

+
+

Unfortunately, it can lower ARC throughput considerably +(see bug).

+

If you’re ready to cope with these security risks [6], +you may disable it +by setting init_on_alloc=0 in the GRUB kernel boot parameters.
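A hedged sketch for GRUB-based distributions (the file path and update command below are the usual Debian/Ubuntu ones and may differ on your system); append init_on_alloc=0 to the existing kernel command line options:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet init_on_alloc=0"

# regenerate the GRUB configuration, then reboot
update-grub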

+
+
+
+
+

General recommendations

+
+

Alignment shift

+

Make sure that you create your pools such that the vdevs have the correct alignment shift for your storage device’s sector size. If dealing with flash media, this is going to be either 12 (4K sectors) or 13 (8K sectors). For SSD ephemeral storage on Amazon EC2, the proper setting is 12.
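For example, ashift is set per vdev at pool creation time (the device names below are placeholders):

zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK-A /dev/disk/by-id/ata-DISK-B

The same -o ashift=12 option can be passed to zpool add when attaching new vdevs to an existing pool.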

+
+
+

Atime Updates

+

Set either relatime=on or atime=off to minimize IOs used to update access time stamps. For backward compatibility with the small percentage of software that relies on access times, relatime is preferred when available and should be set on your entire pool. atime=off should be used more selectively.
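For example (pool and dataset names are placeholders):

zfs set relatime=on tank
zfs set atime=off tank/scratch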

+
+
+

Free Space

+

Keep pool free space above 10% to prevent many metaslabs from reaching the 4% free space threshold at which they switch from first-fit to best-fit allocation strategies. When the threshold is hit, the Metaslab Allocator becomes very CPU intensive in an attempt to protect itself from fragmentation. This reduces IOPS, especially as more metaslabs reach the 4% threshold.

+

The recommendation is 10% rather than 5% because metaslab selection considers both location and free space unless the global metaslab_lba_weighting_enabled tunable is set to 0. When that tunable is 0, ZFS will consider only free space, so the expense of the best-fit allocator can be avoided by keeping free space above 5%. That setting should only be used on systems with pools that consist of solid state drives because it will reduce sequential IO performance on mechanical disks.

+
+
+

LZ4 compression

+

Set compression=lz4 on your pools’ root datasets so that all datasets inherit it, unless you have a reason not to enable it. Userland tests of LZ4 compression of incompressible data in a single thread have shown that it can process 10GB/sec, so it is unlikely to be a bottleneck even on incompressible data. Furthermore, incompressible data will be stored without compression, so reads of incompressible data with compression enabled will not be subject to decompression. Writes are fast enough that incompressible data is unlikely to see a performance penalty from the use of LZ4 compression. The reduction in IO from LZ4 will typically be a performance win.
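For example, setting it once on a pool’s root dataset (pool name is a placeholder) lets every child dataset inherit it:

zfs set compression=lz4 tank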

+

Note that larger record sizes will increase compression ratios on +compressible data by allowing compression algorithms to process more +data at a time.

+
+
+

NVMe low level formatting

+

See NVMe low level formatting.

+
+
+

Pool Geometry

+

Do not put more than ~16 disks in raidz. The rebuild times on mechanical +disks will be excessive when the pool is full.

+
+
+

Synchronous I/O

+

If your workload involves fsync or O_SYNC and your pool is backed by +mechanical storage, consider adding one or more SLOG devices. Pools that +have multiple SLOG devices will distribute ZIL operations across them. +The best choice for SLOG device(s) are likely Optane / 3D XPoint SSDs. +See Optane / 3D XPoint SSDs +for a description of them. If an Optane / 3D XPoint SSD is an option, +the rest of this section on synchronous I/O need not be read. If Optane +/ 3D XPoint SSDs is not an option, see +NAND Flash SSDs for suggestions +for NAND flash SSDs and also read the information below.

+

To ensure maximum ZIL performance on NAND flash SSD-based SLOG devices, you should also overprovision spare area to increase IOPS [1]. Only about 4GB is needed, so the rest can be left as overprovisioned storage. The choice of 4GB is somewhat arbitrary. Most systems do not write anything close to 4GB to ZIL between transaction group commits, so overprovisioning all storage beyond the 4GB partition should be alright. If a workload needs more, then make it no more than the maximum ARC size. Even under extreme workloads, ZFS will not benefit from more SLOG storage than the maximum ARC size. That is half of system memory on Linux and 3/4 of system memory on illumos.

+
+

Overprovisioning by secure erase and partition table trick

+

You can do this with a mix of a secure erase and a partition table +trick, such as the following:

+
    +
  1. Run a secure erase on the NAND-flash SSD.

  2. Create a partition table on the NAND-flash SSD.

  3. Create a 4GB partition.

  4. Give the partition to ZFS to use as a log device.
+

If using the secure erase and partition table trick, do not use the +unpartitioned space for other things, even temporarily. That will reduce +or eliminate the overprovisioning by marking pages as dirty.

+

Alternatively, some devices allow you to change the sizes that they report. This would also work, although a secure erase should be done prior to changing the reported size to ensure that the SSD recognizes the additional spare area. Changing the reported size can be done on drives that support it with `hdparm -N` on systems that have laptop-mode-tools.

+
+
+

NVMe overprovisioning

+

On NVMe, you can use namespaces to achieve overprovisioning:

+
    +
  1. Do a sanitize command as a precaution to ensure the device is completely clean.

  2. Delete the default namespace.

  3. Create a new namespace of size 4GB.

  4. Give the namespace to ZFS to use as a log device, e.g. zpool add tank log /dev/nvme1n1
+
+
+
+

Whole disks

+

Whole disks should be given to ZFS rather than partitions. If you must +use a partition, make certain that the partition is properly aligned to +avoid read-modify-write overhead. See the section on +Alignment Shift (ashift) +for a description of proper alignment. Also, see the section on +Whole Disks versus Partitions +for a description of changes in ZFS behavior when operating on a +partition.

+

Single disk RAID 0 arrays from RAID controllers are not equivalent to +whole disks. The Hardware RAID controllers page +explains in detail.

+
+
+
+

Bit Torrent

+

Bit torrent performs 16KB random reads/writes. The 16KB writes cause +read-modify-write overhead. The read-modify-write overhead can reduce +performance by a factor of 16 with 128KB record sizes when the amount of +data written exceeds system memory. This can be avoided by using a +dedicated dataset for bit torrent downloads with recordsize=16KB.
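A minimal sketch of such a dedicated download dataset (pool and dataset names are placeholders):

zfs create -o recordsize=16K tank/torrents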

+

When the files are read sequentially through a HTTP server, the random +nature in which the files were generated creates fragmentation that has +been observed to reduce sequential read performance by a factor of two +on 7200RPM hard disks. If performance is a problem, fragmentation can be +eliminated by rewriting the files sequentially in either of two ways:

+

The first method is to configure your client to download the files to a +temporary directory and then copy them into their final location when +the downloads are finished, provided that your client supports this.

+

The second method is to use send/recv to recreate a dataset +sequentially.

+

In practice, defragmenting files obtained through bit torrent should +only improve performance when the files are stored on magnetic storage +and are subject to significant sequential read workloads after creation.

+
+
+

Database workloads

+

Setting redundant_metadata=most can increase IOPS by at least a few +percentage points by eliminating redundant metadata at the lowest level +of the indirect block tree. This comes with the caveat that data loss +will occur if a metadata block pointing to data blocks is corrupted and +there are no duplicate copies, but this is generally not a problem in +production on mirrored or raidz vdevs.
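For example, on a hypothetical dataset holding database files:

zfs set redundant_metadata=most tank/db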

+
+

MySQL

+
+

InnoDB

+

Make separate datasets for InnoDB’s data files and log files. Set +recordsize=16K on InnoDB’s data files to avoid expensive partial record +writes and leave recordsize=128K on the log files. Set +primarycache=metadata on both to prefer InnoDB’s +caching [2]. +Set logbias=throughput on the data to stop ZIL from writing twice.
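A hedged sketch of such a layout (pool and dataset names are placeholders; the properties follow the recommendations above):

zfs create tank/mysql
zfs create -o recordsize=16K -o primarycache=metadata -o logbias=throughput tank/mysql/data
zfs create -o recordsize=128K -o primarycache=metadata tank/mysql/log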

+

Set skip-innodb_doublewrite in my.cnf to prevent innodb from writing twice. The double writes are a data integrity feature meant to protect against corruption from partially-written records, but those are not possible on ZFS. It should be noted that Percona’s blog had advocated using an ext4 configuration where double writes were turned off for a performance gain, but later recanted it because it caused data corruption. Following a well-timed power failure, an in-place filesystem such as ext4 can have half of an 8KB record be old while the other half is new. That is the corruption that caused Percona to recant its advice. However, ZFS’ copy on write design would cause it to return the old correct data following a power failure (no matter what the timing is). That prevents the corruption that the double write feature is intended to prevent from ever happening. The double write feature is therefore unnecessary on ZFS and can be safely turned off for better performance.

+

On Linux, the driver’s AIO implementation is a compatibility shim that +just barely passes the POSIX standard. InnoDB performance suffers when +using its default AIO codepath. Set innodb_use_native_aio=0 and +innodb_use_atomic_writes=0 in my.cnf to disable AIO. Both of these +settings must be disabled to disable AIO.
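Putting the InnoDB-related settings together, a hedged my.cnf fragment might look like the following (the data and log directories assume the hypothetical datasets above mounted at their default mountpoints; adjust for your installation):

[mysqld]
innodb_data_home_dir = /tank/mysql/data
innodb_log_group_home_dir = /tank/mysql/log
skip-innodb_doublewrite
innodb_use_native_aio = 0
innodb_use_atomic_writes = 0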

+
+
+
+

PostgreSQL

+

Make separate datasets for PostgreSQL’s data and WAL. Set compression=lz4 and recordsize=32K (64K also works well, as does the 128K default) on both. Configure full_page_writes = off for PostgreSQL, as ZFS will never commit a partial write. For a database with large updates, experiment with logbias=throughput on PostgreSQL’s data to avoid writing twice, but be aware that with this setting smaller updates can cause severe fragmentation.
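A hedged sketch of such a layout (pool and dataset names are placeholders):

zfs create tank/pg
zfs create -o recordsize=32K -o compression=lz4 tank/pg/data
zfs create -o recordsize=32K -o compression=lz4 tank/pg/wal

and in postgresql.conf:

full_page_writes = off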

+
+
+

SQLite

+

Make a separate dataset for the database. Set the recordsize to 64K. Set +the SQLite page size to 65536 +bytes [3].
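For example (dataset name is a placeholder; the PRAGMA should be issued before the database is populated, or be followed by VACUUM to rebuild an existing database):

zfs create -o recordsize=64K tank/sqlite

-- inside the sqlite3 shell
PRAGMA page_size = 65536;
VACUUM;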

+

Note that SQLite databases typically are not exercised enough to merit +special tuning, but this will provide it. Note the side effect on cache +size mentioned at +SQLite.org [4].

+
+
+
+

File servers

+

Create a dedicated dataset for files being served.

+

See +Sequential workloads +for configuration recommendations.

+
+

Samba

+

Windows/DOS clients don’t support case sensitive file names. If your main workload won’t need case sensitivity for other supported clients, create the dataset with zfs create -o casesensitivity=insensitive so Samba can search filenames faster in the future [5].

+

See case sensitive option in +smb.conf(5).

+
+
+
+

Sequential workloads

+

Set recordsize=1M on datasets that are subject to sequential workloads. +Read +Larger record sizes +for documentation on things that should be known before setting 1M +record sizes.

+

Set compression=lz4 as per the general recommendation for LZ4 +compression.
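Both recommendations can be combined when creating the dataset (names are placeholders):

zfs create -o recordsize=1M -o compression=lz4 tank/files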

+
+
+

Video games directories

+

Create a dedicated dataset, use chown to make it user accessible (or +create a directory under it and use chown on that) and then configure +the game download application to place games there. Specific information +on how to configure various ones is below.

+

See +Sequential workloads +for configuration recommendations before installing games.

+

Note that the performance gains from this tuning are likely to be small +and limited to load times. However, the combination of 1M records and +LZ4 will allow more games to be stored, which is why this tuning is +documented despite the performance gains being limited. A steam library +of 300 games (mostly from humble bundle) that had these tweaks applied +to it saw 20% space savings. Both faster load times and significant +space savings are possible on compressible games when this tuning has +been done. Games whose assets are already compressed will see little to +no benefit.

+
+

Lutris

+

Open the context menu by left clicking on the triple bar icon in the +upper right. Go to “Preferences” and then the “System options” tab. +Change the default installation directory and click save.

+
+
+

Steam

+

Go to “Settings” -> “Downloads” -> “Steam Library Folders” and use “Add +Library Folder” to set the directory for steam to use to store games. +Make sure to set it to the default by right clicking on it and clicking +“Make Default Folder” before closing the dialogue.

+

If you’ll use Proton to run non-native games, create the dataset with zfs create -o casesensitivity=insensitive so Wine can search filenames faster in the future [5].

+
+
+
+

Wine

+

Windows file systems’ standard behavior is to be case-insensitive. Create the dataset with zfs create -o casesensitivity=insensitive so Wine can search filenames faster in the future [5].

+
+
+

Virtual machines

+

Virtual machine images on ZFS should be stored using either zvols or raw +files to avoid unnecessary overhead. The recordsize/volblocksize and +guest filesystem may be configured to match to avoid overhead from +partial record modification, see zvol volblocksize. +If raw files are used, a separate dataset should be used to make it easy to configure +recordsize independently of other things stored on ZFS.

+
+

QEMU / KVM / Xen

+

AIO should be used to maximize IOPS when using files for guest storage.
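As a hedged illustration for libvirt-managed QEMU/KVM guests, native AIO can be requested in the domain XML disk definition (the image path is a placeholder; io='native' is normally paired with cache='none'):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/tank/vm/guest0.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>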

+

Footnotes

+ +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/ZFS Transaction Delay.html b/Performance and Tuning/ZFS Transaction Delay.html new file mode 100644 index 000000000..561043a68 --- /dev/null +++ b/Performance and Tuning/ZFS Transaction Delay.html @@ -0,0 +1,222 @@ + + + + + + + ZFS Transaction Delay — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ZFS Transaction Delay

+

ZFS write operations are delayed when the backend storage isn’t able to +accommodate the rate of incoming writes. This delay process is known as +the ZFS write throttle.

+

If there is already a write transaction waiting, the delay is relative +to when that transaction will finish waiting. Thus the calculated delay +time is independent of the number of threads concurrently executing +transactions.

+

If there is only one waiter, the delay is relative to when the +transaction started, rather than the current time. This credits the +transaction for “time already served.” For example, if a write +transaction requires reading indirect blocks first, then the delay is +counted at the start of the transaction, just prior to the indirect +block reads.

+

The minimum time for a transaction to take is calculated as:

+
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
+min_time is then capped at 100 milliseconds
+
+
+

The delay has two degrees of freedom that can be adjusted via tunables:

+
    +
  1. The percentage of dirty data at which we start to delay is defined by zfs_delay_min_dirty_percent. This is typically at or above zfs_vdev_async_write_active_max_dirty_percent, so delays occur after writing at full speed has failed to keep up with the incoming write rate.

  2. The scale of the curve is defined by zfs_delay_scale. Roughly speaking, this variable determines the amount of delay at the midpoint of the curve.
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+
+

Note that since the delay is added to the outstanding time remaining on +the most recent transaction, the delay is effectively the inverse of +IOPS. Here the midpoint of 500 microseconds translates to 2000 IOPS. The +shape of the curve was chosen such that small changes in the amount of +accumulated dirty data in the first 3/4 of the curve yield relatively +small differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay is +represented on a log scale:

+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+
+

Note here that only as the amount of dirty data approaches its limit +does the delay start to increase rapidly. The goal of a properly tuned +system should be to keep the amount of dirty data out of that range by +first ensuring that the appropriate limits are set for the I/O scheduler +to reach optimal throughput on the backend storage, and then by changing +the value of zfs_delay_scale to increase the steepness of the curve.
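On Linux these knobs are exposed as ZFS module parameters; a hedged sketch for inspecting the current values (paths assume the standard module parameter location):

cat /sys/module/zfs/parameters/zfs_dirty_data_max
cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
cat /sys/module/zfs/parameters/zfs_delay_scale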

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/ZIO Scheduler.html b/Performance and Tuning/ZIO Scheduler.html new file mode 100644 index 000000000..d978f401b --- /dev/null +++ b/Performance and Tuning/ZIO Scheduler.html @@ -0,0 +1,244 @@ + + + + + + + ZFS I/O (ZIO) Scheduler — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ZFS I/O (ZIO) Scheduler

+

ZFS issues I/O operations to leaf vdevs (usually devices) to satisfy and +complete I/Os. The ZIO scheduler determines when and in what order those +operations are issued. Operations are divided into five I/O classes +prioritized in the following order:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Priority   I/O Class     Description
highest    sync read     most reads
           sync write    as defined by application or via ‘zfs’ ‘sync’ property
           async read    prefetch reads
           async write   most writes
lowest     scrub read    scan read: includes both scrub and resilver

+

Each queue defines the minimum and maximum number of concurrent +operations issued to the device. In addition, the device has an +aggregate maximum, zfs_vdev_max_active. Note that the sum of the +per-queue minimums must not exceed the aggregate maximum. If the sum of +the per-queue maximums exceeds the aggregate maximum, then the number of +active I/Os may reach zfs_vdev_max_active, in which case no further I/Os +are issued regardless of whether all per-queue minimums have been met.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

I/O Class     Min Active Parameter               Max Active Parameter
sync read     zfs_vdev_sync_read_min_active      zfs_vdev_sync_read_max_active
sync write    zfs_vdev_sync_write_min_active     zfs_vdev_sync_write_max_active
async read    zfs_vdev_async_read_min_active     zfs_vdev_async_read_max_active
async write   zfs_vdev_async_write_min_active    zfs_vdev_async_write_max_active
scrub read    zfs_vdev_scrub_min_active          zfs_vdev_scrub_max_active

+

For many physical devices, throughput increases with the number of +concurrent operations, but latency typically suffers. Further, physical +devices typically have a limit at which more concurrent operations have +no effect on throughput or can cause the disk performance to +decrease.

+

The ZIO scheduler selects the next operation to issue by first looking +for an I/O class whose minimum has not been satisfied. Once all are +satisfied and the aggregate maximum has not been hit, the scheduler +looks for classes whose maximum has not been satisfied. Iteration +through the I/O classes is done in the order specified above. No further +operations are issued if the aggregate maximum number of concurrent +operations has been hit or if there are no operations queued for an I/O +class that has not hit its maximum. Every time an I/O is queued or an +operation completes, the I/O scheduler looks for new operations to +issue.

+

In general, smaller max_active values will lead to lower latency of synchronous operations. Larger max_active values may lead to higher overall throughput, depending on underlying storage and the I/O mix.

+

The ratio of the queues’ max_active values determines the balance of performance between reads, writes, and scrubs. For example, when there is contention, increasing zfs_vdev_scrub_max_active will cause the scrub or resilver to complete more quickly, but will cause reads and writes to have higher latency and lower throughput.
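These limits are exposed as ZFS module parameters on Linux; a hedged sketch (the value is illustrative, not a recommendation):

# let scrubs issue more concurrent I/Os per vdev
echo 3 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active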

+

All I/O classes have a fixed maximum number of outstanding operations except for the async write class. Asynchronous writes represent the data that is committed to stable storage during the syncing stage for transaction groups (txgs). Transaction groups enter the syncing state periodically, so the number of queued async writes quickly bursts up and then reduces back down to zero. The zfs_txg_timeout tunable (default=5 seconds) sets the target interval for txg sync. Thus a burst of async writes every 5 seconds is a normal ZFS I/O pattern.

+

Rather than servicing I/Os as quickly as possible, the ZIO scheduler changes the maximum number of active async write I/Os according to the amount of dirty data in the pool. Since both throughput and latency typically increase with the number of concurrent operations issued to physical devices, reducing the burstiness in the number of concurrent operations also stabilizes the response time of operations from other queues. This is particularly important for the sync read and write queues, where the periodic async write bursts of the txg sync can lead to device-level contention. In broad strokes, the ZIO scheduler issues more concurrent operations from the async write queue as there’s more dirty data in the pool.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Performance and Tuning/index.html b/Performance and Tuning/index.html new file mode 100644 index 000000000..523969c10 --- /dev/null +++ b/Performance and Tuning/index.html @@ -0,0 +1,169 @@ + + + + + + + Performance and Tuning — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/Project and Community/Admin Documentation.html b/Project and Community/Admin Documentation.html new file mode 100644 index 000000000..b09b72386 --- /dev/null +++ b/Project and Community/Admin Documentation.html @@ -0,0 +1,138 @@ + + + + + + + Admin Documentation — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/FAQ hole birth.html b/Project and Community/FAQ hole birth.html new file mode 100644 index 000000000..023cda446 --- /dev/null +++ b/Project and Community/FAQ hole birth.html @@ -0,0 +1,168 @@ + + + + + + + FAQ Hole birth — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

FAQ Hole birth

+
+

Short explanation

+

The hole_birth feature has/had bugs, the result of which is that, if you +do a zfs send -i (or -R, since it uses -i) from an affected +dataset, the receiver will not see any checksum or other errors, but the +resulting destination snapshot will not match the source.

+

ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue on the sender side.

+
+
+

FAQ

+
+

I have a pool with hole_birth enabled, how do I know if I am affected?

+

It is technically possible to calculate whether you have any affected files, but it requires scraping zdb output for each file in each snapshot in each dataset, which is a combinatoric nightmare. (If you really want it, there is a proof of concept here.)

+
+
+

Is there any less painful way to fix this if we have already received an affected snapshot?

+

No, the data you need was simply not present in the send stream, +unfortunately, and cannot feasibly be rewritten in place.

+
+
+
+

Long explanation

+

hole_birth is a feature to speed up ZFS send -i - in particular, ZFS +used to not store metadata on when “holes” (sparse regions) in files +were created, so every zfs send -i needed to include every hole.

+

hole_birth, as the name implies, added tracking for the txg (transaction +group) when a hole was created, so that zfs send -i could only send +holes that had a birth_time between (starting snapshot txg) and (ending +snapshot txg), and life was wonderful.

+

Unfortunately, hole_birth had a number of edge cases where it could +“forget” to set the birth_time of holes in some cases, causing it to +record the birth_time as 0 (the value used prior to hole_birth, and +essentially equivalent to “since file creation”).

+

This meant that, when you did a zfs send -i, since zfs send does not +have any knowledge of the surrounding snapshots when sending a given +snapshot, it would see the creation txg as 0, conclude “oh, it is 0, I +must have already sent this before”, and not include it.

+

This means that, on the receiving side, it does not know those holes +should exist, and does not create them. This leads to differences +between the source and the destination.

+

ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this +metadata and always sending holes with birth_time 0, configurable using +the tunable known as ignore_hole_birth or +send_holes_without_birth_time. The latter is what OpenZFS +standardized on. ZoL version 0.6.5.8 only has the former, but for any +ZoL version with send_holes_without_birth_time, they point to the +same value, so changing either will work.
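On versions that carry the newer tunable, a hedged way to inspect or set it on Linux (the sysfs path assumes the standard module parameter location):

cat /sys/module/zfs/parameters/send_holes_without_birth_time
echo 1 > /sys/module/zfs/parameters/send_holes_without_birth_time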

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/FAQ.html b/Project and Community/FAQ.html new file mode 100644 index 000000000..2144aaaed --- /dev/null +++ b/Project and Community/FAQ.html @@ -0,0 +1,845 @@ + + + + + + + FAQ — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

FAQ

+ +
+

What is OpenZFS

+

OpenZFS is an outstanding storage platform that +encompasses the functionality of traditional filesystems, volume +managers, and more, with consistent reliability, functionality and +performance across all distributions. Additional information about +OpenZFS can be found in the OpenZFS wikipedia +article.

+
+
+

Hardware Requirements

+

Because ZFS was originally designed for Sun Solaris it was long +considered a filesystem for large servers and for companies that could +afford the best and most powerful hardware available. But since the +porting of ZFS to numerous OpenSource platforms (The BSDs, Illumos and +Linux - under the umbrella organization “OpenZFS”), these requirements +have been lowered.

+

The suggested hardware requirements are:

+
    +
  • ECC memory. This isn’t really a requirement, but it’s highly +recommended.

  • +
  • 8GB+ of memory for the best performance. It’s perfectly possible to +run with 2GB or less (and people do), but you’ll need more if using +deduplication.

  • +
+
+
+

Do I have to use ECC memory for ZFS?

+

Using ECC memory for OpenZFS is strongly recommended for enterprise +environments where the strongest data integrity guarantees are required. +Without ECC memory rare random bit flips caused by cosmic rays or by +faulty memory can go undetected. If this were to occur OpenZFS (or any +other filesystem) will write the damaged data to disk and be unable to +automatically detect the corruption.

+

Unfortunately, ECC memory is not always supported by consumer grade +hardware. And even when it is, ECC memory will be more expensive. For +home users the additional safety brought by ECC memory might not justify +the cost. It’s up to you to determine what level of protection your data +requires.

+
+
+

Installation

+

OpenZFS is available for FreeBSD and all major Linux distributions. Refer to +the getting started section of the wiki for +links to installations instructions. If your distribution/OS isn’t +listed you can always build OpenZFS from the latest official +tarball.

+
+
+

Supported Architectures

+

OpenZFS is regularly compiled for the following architectures: +aarch64, arm, ppc, ppc64, x86, x86_64.

+
+
+

Supported Linux Kernels

+

The notes for a given +OpenZFS release will include a range of supported kernels. Point +releases will be tagged as needed in order to support the stable +kernel available from kernel.org. The +oldest supported kernel is 2.6.32 due to its prominence in Enterprise +Linux distributions.

+
+
+

32-bit vs 64-bit Systems

+

You are strongly encouraged to use a 64-bit kernel. OpenZFS +will build for 32-bit systems but you may encounter stability problems.

+

ZFS was originally developed for the Solaris kernel which differs from +some OpenZFS platforms in several significant ways. Perhaps most importantly +for ZFS it is common practice in the Solaris kernel to make heavy use of +the virtual address space. However, use of the virtual address space is +strongly discouraged in the Linux kernel. This is particularly true on +32-bit architectures where the virtual address space is limited to 100M +by default. Using the virtual address space on 64-bit Linux kernels is +also discouraged but the address space is so much larger than physical +memory that it is less of an issue.

+

If you are bumping up against the virtual memory limit on a 32-bit +system you will see the following message in your system logs. You can +increase the virtual address size with the boot option vmalloc=512M.

+
vmap allocation for size 4198400 failed: use vmalloc=<size> to increase size.
+
+
+

However, even after making this change your system will likely not be +entirely stable. Proper support for 32-bit systems is contingent upon +the OpenZFS code being weaned off its dependence on virtual memory. This +will take some time to do correctly but it is planned for OpenZFS. This +change is also expected to improve how efficiently OpenZFS manages the +ARC cache and allow for tighter integration with the standard Linux page +cache.

+
+
+

Booting from ZFS

+

Booting from ZFS on Linux is possible and many people do it. There are +excellent walk throughs available for +Debian, +Ubuntu, and +Gentoo.

+

On FreeBSD 13+ booting from ZFS is supported out of the box.

+
+
+

Selecting /dev/ names when creating a pool (Linux)

+

There are different /dev/ names that can be used when creating a ZFS +pool. Each option has advantages and drawbacks, the right choice for +your ZFS pool really depends on your requirements. For development and +testing using /dev/sdX naming is quick and easy. A typical home server +might prefer /dev/disk/by-id/ naming for simplicity and readability. +While very large configurations with multiple controllers, enclosures, +and switches will likely prefer /dev/disk/by-vdev naming for maximum +control. But in the end, how you choose to identify your disks is up to +you.

+
    +
  • /dev/sdX, /dev/hdX: Best for development/test pools

    +
      +
    • Summary: The top level /dev/ names are the default for consistency +with other ZFS implementations. They are available under all Linux +distributions and are commonly used. However, because they are not +persistent they should only be used with ZFS for development/test +pools.

    • +
    • Benefits: This method is easy for a quick test, the names are +short, and they will be available on all Linux distributions.

    • +
    • Drawbacks: The names are not persistent and will change depending +on what order the disks are detected in. Adding or removing +hardware for your system can easily cause the names to change. You +would then need to remove the zpool.cache file and re-import the +pool using the new names.

    • +
    • Example: zpool create tank sda sdb

    • +
    +
  • +
  • /dev/disk/by-id/: Best for small pools (less than 10 disks)

    +
      +
    • Summary: This directory contains disk identifiers with more human +readable names. The disk identifier usually consists of the +interface type, vendor name, model number, device serial number, +and partition number. This approach is more user friendly because +it simplifies identifying a specific disk.

    • +
    • Benefits: Nice for small systems with a single disk controller. +Because the names are persistent and guaranteed not to change, it +doesn’t matter how the disks are attached to the system. You can +take them all out, randomly mix them up on the desk, put them +back anywhere in the system and your pool will still be +automatically imported correctly.

    • +
    • Drawbacks: Configuring redundancy groups based on physical +location becomes difficult and error prone. Unreliable on many +personal virtual machine setups because the software does not +generate persistent unique names by default.

    • +
    • Example: +zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP

    • +
    +
  • +
  • /dev/disk/by-path/: Good for large pools (greater than 10 disks)

    +
      +
    • Summary: This approach is to use device names which include the +physical cable layout in the system, which means that a particular +disk is tied to a specific location. The name describes the PCI +bus number, as well as enclosure names and port numbers. This +allows the most control when configuring a large pool.

    • +
    • Benefits: Encoding the storage topology in the name is not only +helpful for locating a disk in large installations. But it also +allows you to explicitly layout your redundancy groups over +multiple adapters or enclosures.

    • +
    • Drawbacks: These names are long, cumbersome, and difficult for a +human to manage.

    • +
    • Example: +zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0

    • +
    +
  • +
  • /dev/disk/by-vdev/: Best for large pools (greater than 10 disks)

    +
      +
    • Summary: This approach provides administrative control over device +naming using the configuration file /etc/zfs/vdev_id.conf. Names +for disks in JBODs can be generated automatically to reflect their +physical location by enclosure IDs and slot numbers. The names can +also be manually assigned based on existing udev device links, +including those in /dev/disk/by-path or /dev/disk/by-id. This +allows you to pick your own unique meaningful names for the disks. +These names will be displayed by all the zfs utilities so it can +be used to clarify the administration of a large complex pool. See +the vdev_id and vdev_id.conf man pages for further details.

    • +
    • Benefits: The main benefit of this approach is that it allows you +to choose meaningful human-readable names. Beyond that, the +benefits depend on the naming method employed. If the names are +derived from the physical path the benefits of /dev/disk/by-path +are realized. On the other hand, aliasing the names based on drive +identifiers or WWNs has the same benefits as using +/dev/disk/by-id.

    • +
    • Drawbacks: This method relies on having a /etc/zfs/vdev_id.conf +file properly configured for your system. To configure this file +please refer to section Setting up the /etc/zfs/vdev_id.conf +file. As with +benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path +may apply depending on the naming method employed.

    • +
    • Example: zpool create tank mirror A1 B1 mirror A2 B2

    • +
    +
  • +
  • /dev/disk/by-uuid/: Not a great option

  • +
+
+
    +
  • Summary: One might think from the use of “UUID” that this would +be an ideal option - however, in practice, this ends up listing +one device per pool ID, which is not very useful for importing +pools with multiple disks.

  • +
+
+
    +
  • /dev/disk/by-partuuid/ and /dev/disk/by-partlabel/: Work only for existing partitions

  • +
+
+
    +
  • Summary: the partition UUID is generated at partition creation time, so its usage is limited

  • +
  • Drawbacks: you can’t refer to a partition unique ID on an unpartitioned disk for zpool replace/add/attach, and you can’t find a failed disk easily without a mapping written down ahead of time.

  • +
+
+
+
+

Setting up the /etc/zfs/vdev_id.conf file

+

In order to use /dev/disk/by-vdev/ naming the /etc/zfs/vdev_id.conf +must be configured. The format of this file is described in the +vdev_id.conf man page. Several examples follow.

+

A non-multipath configuration with direct-attached SAS enclosures and an +arbitrary slot re-mapping.

+
multipath     no
+topology      sas_direct
+phys_per_port 4
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+
+#    Linux      Mapped
+#    Slot       Slot
+slot 0          2
+slot 1          6
+slot 2          0
+slot 3          3
+slot 4          5
+slot 5          7
+slot 6          4
+slot 7          1
+
+
+

A SAS-switch topology. Note that the channel keyword takes only two +arguments in this example.

+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+
+

A multipath configuration. Note that channel names have multiple +definitions - one per physical path.

+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+
+

A configuration using device link aliases.

+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+

After defining the new disk names run udevadm trigger to prompt udev +to parse the configuration file. This will result in a new +/dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX +names. Following the first example above, you could then create the new +pool of mirrors with the following command:

+
$ zpool create tank \
+    mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
+    mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
+
+$ zpool status
+  pool: tank
+ state: ONLINE
+ scan: none requested
+config:
+
+    NAME        STATE     READ WRITE CKSUM
+    tank        ONLINE       0     0     0
+      mirror-0  ONLINE       0     0     0
+        A0      ONLINE       0     0     0
+        B0      ONLINE       0     0     0
+      mirror-1  ONLINE       0     0     0
+        A1      ONLINE       0     0     0
+        B1      ONLINE       0     0     0
+      mirror-2  ONLINE       0     0     0
+        A2      ONLINE       0     0     0
+        B2      ONLINE       0     0     0
+      mirror-3  ONLINE       0     0     0
+        A3      ONLINE       0     0     0
+        B3      ONLINE       0     0     0
+      mirror-4  ONLINE       0     0     0
+        A4      ONLINE       0     0     0
+        B4      ONLINE       0     0     0
+      mirror-5  ONLINE       0     0     0
+        A5      ONLINE       0     0     0
+        B5      ONLINE       0     0     0
+      mirror-6  ONLINE       0     0     0
+        A6      ONLINE       0     0     0
+        B6      ONLINE       0     0     0
+      mirror-7  ONLINE       0     0     0
+        A7      ONLINE       0     0     0
+        B7      ONLINE       0     0     0
+
+errors: No known data errors
+
+
+
+
+

Changing /dev/ names on an existing pool

+

Changing the /dev/ names on an existing pool can be done by simply +exporting the pool and re-importing it with the -d option to specify +which new names should be used. For example, to use the custom names in +/dev/disk/by-vdev:

+
$ zpool export tank
+$ zpool import -d /dev/disk/by-vdev tank
+
+
+
+
+

The /etc/zfs/zpool.cache file

+

Whenever a pool is imported on the system it will be added to the +/etc/zfs/zpool.cache file. This file stores pool configuration +information, such as the device names and pool state. If this file +exists when running the zpool import command then it will be used to +determine the list of pools available for import. When a pool is not +listed in the cache file it will need to be detected and imported using +the zpool import -d /dev/disk/by-id command.

+
+
+

Generating a new /etc/zfs/zpool.cache file

+

The /etc/zfs/zpool.cache file will be automatically updated when +your pool configuration is changed. However, if for some reason it +becomes stale you can force the generation of a new +/etc/zfs/zpool.cache file by setting the cachefile property on the +pool.

+
$ zpool set cachefile=/etc/zfs/zpool.cache tank
+
+
+

Conversely the cache file can be disabled by setting cachefile=none. +This is useful for failover configurations where the pool should always +be explicitly imported by the failover software.

+
$ zpool set cachefile=none tank
+
+
+
+
+

Sending and Receiving Streams

+
+

hole_birth Bugs

+

The hole_birth feature has/had bugs, the result of which is that, if you +do a zfs send -i (or -R, since it uses -i) from an affected +dataset, the receiver will not see any checksum or other errors, but +will not match the source.

+

ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue on the sender side.

+

For more details, see the hole_birth FAQ.

+
+
+

Sending Large Blocks

+

When sending incremental streams which contain large blocks (>128K) the +--large-block flag must be specified. Inconsistent use of the flag +between incremental sends can result in files being incorrectly zeroed +when they are received. Raw encrypted send/recvs automatically imply the +--large-block flag and are therefore unaffected.

+

For more details, see issue +6224.

+
+
+
+

CEPH/ZFS

+

There is a lot of tuning that can be done that’s dependent on the workload that is being put on CEPH/ZFS, as well as some general guidelines. Some are as follows:

+
+

ZFS Configuration

+

The CEPH filestore back-end heavily relies on xattrs, for optimal +performance all CEPH workloads will benefit from the following ZFS +dataset parameters

+
    +
  • xattr=sa

  • +
  • dnodesize=auto

  • +
+

Beyond that, typically rbd/cephfs focused workloads benefit from a small recordsize (16K-128K), while objectstore/s3/rados focused workloads benefit from a large recordsize (128K-1M).

+
+
+

CEPH Configuration (ceph.conf)

+

Additionally CEPH sets various values internally for handling xattrs +based on the underlying filesystem. As CEPH only officially +supports/detects XFS and BTRFS, for all other filesystems it falls back +to rather limited “safe” +values. +On newer releases, the need for larger xattrs will prevent OSD’s from even +starting.

+

The officially recommended workaround (see +here) +has some severe downsides, and more specifically is geared toward +filesystems with “limited” xattr support such as ext4.

+

ZFS does not have an internal limit on xattr length, so we can treat it similarly to how CEPH treats XFS. We can set overrides for 3 internal values to the same as those used with XFS (see here and here) and allow it to be used without the severe limitations of the “official” workaround.

+
[osd]
+filestore_max_inline_xattrs = 10
+filestore_max_inline_xattr_size = 65536
+filestore_max_xattr_value_size = 65536
+
+
+
+
+

Other General Guidelines

+
    +
  • Use a separate journal device. Do not collocate CEPH journal on +ZFS dataset if at all possible, this will quickly lead to terrible +fragmentation, not to mention terrible performance upfront even +before fragmentation (CEPH journal does a dsync for every write).

  • +
  • Use a SLOG device, even with a separate CEPH journal device. For some +workloads, skipping SLOG and setting logbias=throughput may be +acceptable.

  • +
  • Use a high-quality SLOG/CEPH journal device. A consumer based SSD, or +even NVMe WILL NOT DO (Samsung 830, 840, 850, etc) for a variety of +reasons. CEPH will kill them quickly, on-top of the performance being +quite low in this use. Generally recommended devices are [Intel DC S3610, +S3700, S3710, P3600, P3700], or [Samsung SM853, SM863], or better.

  • +
  • If using a high quality SSD or NVMe device (as mentioned above), you +CAN share SLOG and CEPH Journal to good results on single device. A +ratio of 4 HDDs to 1 SSD (Intel DC S3710 200GB), with each SSD +partitioned (remember to align!) to 4x10GB (for ZIL/SLOG) + 4x20GB +(for CEPH journal) has been reported to work well.

  • +
+

Again - CEPH + ZFS will KILL a consumer based SSD VERY quickly. Even +ignoring the lack of power-loss protection, and endurance ratings, you +will be very disappointed with performance of consumer based SSD under +such a workload.

+
+
+
+

Performance Considerations

+

To achieve good performance with your pool there are some easy best +practices you should follow.

+
    +
  • Evenly balance your disks across controllers: Often the limiting +factor for performance is not the disks but the controller. By +balancing your disks evenly across controllers you can often improve +throughput.

  • +
  • Create your pool using whole disks: When running zpool create use +whole disk names. This will allow ZFS to automatically partition the +disk to ensure correct alignment. It will also improve +interoperability with other OpenZFS implementations which honor the +wholedisk property.

  • +
  • Have enough memory: A minimum of 2GB of memory is recommended for +ZFS. Additional memory is strongly recommended when the compression +and deduplication features are enabled.

  • +
  • Improve performance by setting ashift=12: You may be able to +improve performance for some workloads by setting ashift=12. This +tuning can only be set when block devices are first added to a pool, +such as when the pool is first created or when a new vdev is added to +the pool. This tuning parameter can result in a decrease of capacity +for RAIDZ configurations.

  • +
+
+
+

Advanced Format Disks

+

Advanced Format (AF) is a new disk format which natively uses a 4,096 +byte, instead of 512 byte, sector size. To maintain compatibility with +legacy systems many AF disks emulate a sector size of 512 bytes. By +default, ZFS will automatically detect the sector size of the drive. +This combination can result in poorly aligned disk accesses which will +greatly degrade the pool performance.

+

Therefore, the ability to set the ashift property has been added to the +zpool command. This allows users to explicitly assign the sector size +when devices are first added to a pool (typically at pool creation time +or adding a vdev to the pool). The ashift values range from 9 to 16 with +the default value 0 meaning that zfs should auto-detect the sector size. +This value is actually a bit shift value, so an ashift value for 512 +bytes is 9 (2^9 = 512) while the ashift value for 4,096 bytes is 12 +(2^12 = 4,096).

+

To force the pool to use 4,096 byte sectors at pool creation time, you +may run:

+
$ zpool create -o ashift=12 tank mirror sda sdb
+
+
+

To force the pool to use 4,096 byte sectors when adding a vdev to a +pool, you may run:

+
$ zpool add -o ashift=12 tank mirror sdc sdd
+
+
+
+
+

ZVOL used space larger than expected

+
+
Depending on the filesystem used on the zvol (e.g. ext4) and the usage +(e.g. deletion and creation of many files) the used and +referenced properties reported by the zvol may be larger than the +“actual” space that is being used as reported by the consumer.
+
This can happen due to the way some filesystems work, in which they +prefer to allocate files in new untouched blocks rather than the +fragmented used blocks marked as free. This forces zfs to reference +all blocks that the underlying filesystem has ever touched.
+
This is in itself not much of a problem, as when the used property +reaches the configured volsize the underlying filesystem will +start reusing blocks. But the problem arises if it is desired to +snapshot the zvol, as the space referenced by the snapshots will +contain the unused blocks.
+
+
+
This issue can be prevented by issuing a trim (for example, the fstrim command on Linux) to allow the kernel to tell ZFS which blocks are unused.
+
Issuing a trim before a snapshot is taken will ensure +a minimum snapshot size.
+
For Linux adding the discard option for the mounted ZVOL in /etc/fstab +effectively enables the kernel to issue the trim commands +continuously, without the need to execute fstrim on-demand.
+
+
+
+

Using a zvol for a swap device on Linux

+

You may use a zvol as a swap device but you’ll need to configure it +appropriately.

+

CAUTION: for now swap on zvol may lead to deadlock, in this case +please send your logs +here.

+
    +
  • Set the volume block size to match your systems page size. This +tuning prevents ZFS from having to perform read-modify-write options +on a larger block while the system is already low on memory.

  • +
  • Set the logbias=throughput and sync=always properties. Data +written to the volume will be flushed immediately to disk freeing up +memory as quickly as possible.

  • +
  • Set primarycache=metadata to avoid keeping swap data in RAM via +the ARC.

  • +
  • Disable automatic snapshots of the swap device.

  • +
+
$ zfs create -V 4G -b $(getconf PAGESIZE) \
+    -o logbias=throughput -o sync=always \
+    -o primarycache=metadata \
+    -o com.sun:auto-snapshot=false rpool/swap
+
+
+
+
+

Using ZFS on Xen Hypervisor or Xen Dom0 (Linux)

+

It is usually recommended to keep virtual machine storage and hypervisor pools quite separate, although a few people have managed to successfully deploy and run OpenZFS using the same machine configured as Dom0. There are a few caveats:

+
    +
  • Set a fair amount of memory in grub.conf, dedicated to Dom0.

    +
      +
    • dom0_mem=16384M,max:16384M

    • +
    +
  • +
  • Allocate no more than 30-40% of Dom0’s memory to ZFS in /etc/modprobe.d/zfs.conf.

    +
      +
    • options zfs zfs_arc_max=6442450944

    • +
    +
  • +
  • Disable Xen’s auto-ballooning in /etc/xen/xl.conf

  • +
  • Watch out for any Xen bugs, such as this +one related to +ballooning

  • +
+
+
+

udisks2 creating /dev/mapper/ entries for zvol (Linux)

+

To prevent udisks2 from creating /dev/mapper entries that must be +manually removed or maintained during zvol remove / rename, create a +udev rule such as /etc/udev/rules.d/80-udisks2-ignore-zfs.rules with +the following contents:

+
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_FS_TYPE}=="zfs_member", ENV{ID_PART_ENTRY_TYPE}=="6a898cc3-1dd2-11b2-99a6-080020736631", ENV{UDISKS_IGNORE}="1"
+
+
+
+
+

Licensing

+

License information can be found here.

+
+
+

Reporting a problem

+

You can open a new issue and search existing issues using the public +issue tracker. The issue +tracker is used to organize outstanding bug reports, feature requests, +and other development tasks. Anyone may post comments after signing up +for a github account.

+

Please make sure that what you’re actually seeing is a bug and not a +support issue. If in doubt, please ask on the mailing list first, and if +you’re then asked to file an issue, do so.

+

When opening a new issue, include this information at the top of the issue:

+
    +
  • What distribution you’re using and the version.

  • +
  • What spl/zfs packages you’re using and the version.

  • +
  • Describe the problem you’re observing.

  • +
  • Describe how to reproduce the problem.

  • +
  • Include any warnings, errors, or backtraces from the system logs.

  • +
+

When a new issue is opened, it's not uncommon for a developer to request additional information about the problem. In general, the more detail you share about a problem, the quicker a developer can resolve it. For example, providing a simple test case is always exceptionally helpful. Be prepared to work with the developer looking into your bug in order to get it resolved. They may ask for information like the following (example commands for gathering it appear after this list):

+
    +
  • Your pool configuration as reported by zdb or zpool status.

  • +
  • Your hardware configuration, such as

    +
      +
    • Number of CPUs.

    • +
    • Amount of memory.

    • +
    • Whether your system has ECC memory.

    • +
    • Whether it is running under a VMM/Hypervisor.

    • +
    • Kernel version.

    • +
    • Values of the spl/zfs module parameters.

    • +
    +
  • +
  • Stack traces which may be logged to dmesg.

  • +
+
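On Linux, much of this information can be gathered with commands such as the following; pool names and paths may differ on your system:

$ zpool status -v                        # pool configuration and error counters
$ zdb -C                                 # cached pool configuration
$ grep . /sys/module/zfs/parameters/* /sys/module/spl/parameters/*   # module parameter values
$ dmesg | grep -iE 'zfs|spl'             # kernel messages and stack traces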
+
+

Does OpenZFS have a Code of Conduct?

+

Yes, the OpenZFS community has a code of conduct. See the Code of +Conduct for details.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/Mailing Lists.html b/Project and Community/Mailing Lists.html new file mode 100644 index 000000000..2b35948ca --- /dev/null +++ b/Project and Community/Mailing Lists.html @@ -0,0 +1,171 @@ + + + + + + + Mailing Lists — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Mailing Lists

+ + + + + + + + + + + + + + + + + + + + + + + + + +

List

Description

List Archive

zfs-announce@list.zfsonlinux.org

A low-traffic list +for announcements +such as new releases

archive

zfs-discuss@list.zfsonlinux.org

A user discussion +list for issues +related to +functionality and +usability

archive

zfs-devel@list.zfsonlinux.org

A development list +for developers to +discuss technical +issues

archive

developer@open-zfs.org

A +platform-independent +mailing list for ZFS +developers to review +ZFS code and +architecture changes +from all platforms

archive

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/Signing Keys.html b/Project and Community/Signing Keys.html new file mode 100644 index 000000000..ec52c76af --- /dev/null +++ b/Project and Community/Signing Keys.html @@ -0,0 +1,198 @@ + + + + + + + Signing Keys — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Signing Keys

+

All tagged ZFS on Linux releases are signed by the official maintainer for that branch. These signatures are automatically verified by GitHub and can be checked locally by downloading the maintainer's public key.

+
+

Maintainers

+
+

Release branch (spl/zfs-*-release)

+
+
Maintainer: Ned Bass
+
Download: +pgp.mit.edu
+
Key ID: C77B9667
+
Fingerprint: 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667
+
+
+
Maintainer: Tony Hutter
+
Download: +pgp.mit.edu
+
Key ID: D4598027
+
Fingerprint: 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027
+
+
+
+

Master branch (master)

+
+
Maintainer: Brian Behlendorf
+
Download: +pgp.mit.edu
+
Key ID: C6AF658B
+
Fingerprint: C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B
+
+
+
+
+

Checking the Signature of a Git Tag

+

First, import the public key listed above into your keyring.

+
$ gpg --keyserver pgp.mit.edu --recv C6AF658B
+gpg: requesting key C6AF658B from hkp server pgp.mit.edu
+gpg: key C6AF658B: "Brian Behlendorf <behlendorf1@llnl.gov>" not changed
+gpg: Total number processed: 1
+gpg:              unchanged: 1
+
+
+

After the public key is imported, the signature of a git tag can be verified as shown below.

+
$ git tag --verify zfs-0.6.5
+object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe
+type commit
+tag zfs-0.6.5
+tagger Brian Behlendorf <behlendorf1@llnl.gov> 1441996302 -0700
+
+ZFS Version 0.6.5
+gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B
+gpg: Good signature from "Brian Behlendorf <behlendorf1@llnl.gov>"
+gpg:                 aka "Brian Behlendorf (LLNL) <behlendorf1@llnl.gov>"
+
+
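You can additionally confirm that the key you imported matches the fingerprint listed above:

$ gpg --fingerprint C6AF658B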
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/Project and Community/index.html b/Project and Community/index.html new file mode 100644 index 000000000..122dfa91f --- /dev/null +++ b/Project and Community/index.html @@ -0,0 +1,185 @@ + + + + + + + Project and Community — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Project and Community

+

OpenZFS is storage software which combines the functionality of traditional filesystems, a volume manager, and more. OpenZFS includes protection against data corruption, support for high storage capacities, efficient data compression, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, remote replication with ZFS send and receive, and RAID-Z.

+

OpenZFS brings together developers from the illumos, Linux, FreeBSD and +OS X platforms, and a wide range of companies – both online and at the +annual OpenZFS Developer Summit. High-level goals of the project include +raising awareness of the quality, utility and availability of +open-source implementations of ZFS, encouraging open communication about +ongoing efforts toward improving open-source variants of ZFS, and +ensuring consistent reliability, functionality and performance of all +distributions of ZFS.

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_TableOfContents.html b/_TableOfContents.html new file mode 100644 index 000000000..0302c9bfb --- /dev/null +++ b/_TableOfContents.html @@ -0,0 +1,199 @@ + + + + + + + <no title> — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/_images/draid-resilver-hours.png b/_images/draid-resilver-hours.png new file mode 100644 index 000000000..41899d28f Binary files /dev/null and b/_images/draid-resilver-hours.png differ diff --git a/_images/raidz_draid.png b/_images/raidz_draid.png new file mode 100644 index 000000000..b5617cd14 Binary files /dev/null and b/_images/raidz_draid.png differ diff --git a/_images/zof-logo.png b/_images/zof-logo.png new file mode 100644 index 000000000..0612f6056 Binary files /dev/null and b/_images/zof-logo.png differ diff --git a/_rediraffe_redirected.json b/_rediraffe_redirected.json new file mode 100644 index 000000000..24ae9d2cf --- /dev/null +++ b/_rediraffe_redirected.json @@ -0,0 +1 @@ +{"man/1/cstyle.1.rst": "man/master/1/cstyle.1.rst", "man/1/ztest.1.rst": "man/master/1/ztest.1.rst", "man/1/test-runner.1.rst": "man/master/1/test-runner.1.rst", "man/1/zhack.1.rst": "man/master/1/zhack.1.rst", "man/1/raidz_test.1.rst": "man/master/1/raidz_test.1.rst", "man/1/zvol_wait.1.rst": "man/master/1/zvol_wait.1.rst", "man/1/arcstat.1.rst": "man/master/1/arcstat.1.rst", "man/1/index.rst": "man/master/1/index.rst", "man/4/zfs.4.rst": "man/master/4/zfs.4.rst", "man/4/spl.4.rst": "man/master/4/spl.4.rst", "man/4/index.rst": "man/master/4/index.rst", "man/5/vdev_id.conf.5.rst": "man/master/5/vdev_id.conf.5.rst", "man/5/index.rst": "man/master/5/index.rst", "man/7/zpoolprops.7.rst": "man/master/7/zpoolprops.7.rst", "man/7/zpool-features.7.rst": "man/master/7/zpool-features.7.rst", "man/7/vdevprops.7.rst": "man/master/7/vdevprops.7.rst", "man/7/zfsprops.7.rst": "man/master/7/zfsprops.7.rst", "man/7/dracut.zfs.7.rst": "man/master/7/dracut.zfs.7.rst", "man/7/zfsconcepts.7.rst": "man/master/7/zfsconcepts.7.rst", "man/7/zpoolconcepts.7.rst": "man/master/7/zpoolconcepts.7.rst", "man/7/index.rst": "man/master/7/index.rst", "man/8/zpool-get.8.rst": "man/master/8/zpool-get.8.rst", "man/8/zpool-create.8.rst": "man/master/8/zpool-create.8.rst", "man/8/zpool-attach.8.rst": "man/master/8/zpool-attach.8.rst", "man/8/zfs-zone.8.rst": "man/master/8/zfs-zone.8.rst", "man/8/zfs-upgrade.8.rst": "man/master/8/zfs-upgrade.8.rst", "man/8/zpool-scrub.8.rst": "man/master/8/zpool-scrub.8.rst", "man/8/zpool-checkpoint.8.rst": "man/master/8/zpool-checkpoint.8.rst", "man/8/zpool-add.8.rst": "man/master/8/zpool-add.8.rst", "man/8/zstream.8.rst": "man/master/8/zstream.8.rst", "man/8/zpool-destroy.8.rst": "man/master/8/zpool-destroy.8.rst", "man/8/zfs-projectspace.8.rst": "man/master/8/zfs-projectspace.8.rst", "man/8/zfs-clone.8.rst": "man/master/8/zfs-clone.8.rst", "man/8/zdb.8.rst": "man/master/8/zdb.8.rst", "man/8/zfs-unmount.8.rst": "man/master/8/zfs-unmount.8.rst", "man/8/zfs-hold.8.rst": "man/master/8/zfs-hold.8.rst", "man/8/zfs_prepare_disk.8.rst": "man/master/8/zfs_prepare_disk.8.rst", "man/8/zfs-share.8.rst": "man/master/8/zfs-share.8.rst", "man/8/zfs-unjail.8.rst": "man/master/8/zfs-unjail.8.rst", "man/8/zfs_ids_to_path.8.rst": "man/master/8/zfs_ids_to_path.8.rst", "man/8/zfs-wait.8.rst": "man/master/8/zfs-wait.8.rst", "man/8/zfs-unzone.8.rst": "man/master/8/zfs-unzone.8.rst", "man/8/zpool-detach.8.rst": "man/master/8/zpool-detach.8.rst", "man/8/zfs-mount.8.rst": "man/master/8/zfs-mount.8.rst", "man/8/zpool-set.8.rst": "man/master/8/zpool-set.8.rst", "man/8/zfs-inherit.8.rst": "man/master/8/zfs-inherit.8.rst", "man/8/zpool-history.8.rst": "man/master/8/zpool-history.8.rst", "man/8/vdev_id.8.rst": "man/master/8/vdev_id.8.rst", "man/8/zfs-release.8.rst": 
"man/master/8/zfs-release.8.rst", "man/8/mount.zfs.8.rst": "man/master/8/mount.zfs.8.rst", "man/8/zpool-upgrade.8.rst": "man/master/8/zpool-upgrade.8.rst", "man/8/zstreamdump.8.rst": "man/master/8/zstreamdump.8.rst", "man/8/zfs-create.8.rst": "man/master/8/zfs-create.8.rst", "man/8/zpool-reguid.8.rst": "man/master/8/zpool-reguid.8.rst", "man/8/zfs-change-key.8.rst": "man/master/8/zfs-change-key.8.rst", "man/8/zfs-unload-key.8.rst": "man/master/8/zfs-unload-key.8.rst", "man/8/zfs-recv.8.rst": "man/master/8/zfs-recv.8.rst", "man/8/zfs-mount-generator.8.rst": "man/master/8/zfs-mount-generator.8.rst", "man/8/zpool-list.8.rst": "man/master/8/zpool-list.8.rst", "man/8/zpool-split.8.rst": "man/master/8/zpool-split.8.rst", "man/8/zfs-project.8.rst": "man/master/8/zfs-project.8.rst", "man/8/zinject.8.rst": "man/master/8/zinject.8.rst", "man/8/zfs-allow.8.rst": "man/master/8/zfs-allow.8.rst", "man/8/zfs-groupspace.8.rst": "man/master/8/zfs-groupspace.8.rst", "man/8/zfs-get.8.rst": "man/master/8/zfs-get.8.rst", "man/8/zfs-promote.8.rst": "man/master/8/zfs-promote.8.rst", "man/8/zgenhostid.8.rst": "man/master/8/zgenhostid.8.rst", "man/8/zpool-initialize.8.rst": "man/master/8/zpool-initialize.8.rst", "man/8/zfs-receive.8.rst": "man/master/8/zfs-receive.8.rst", "man/8/zpool-wait.8.rst": "man/master/8/zpool-wait.8.rst", "man/8/zpool-events.8.rst": "man/master/8/zpool-events.8.rst", "man/8/zpool-trim.8.rst": "man/master/8/zpool-trim.8.rst", "man/8/zfs.8.rst": "man/master/8/zfs.8.rst", "man/8/zpool.8.rst": "man/master/8/zpool.8.rst", "man/8/zfs-load-key.8.rst": "man/master/8/zfs-load-key.8.rst", "man/8/zpool-replace.8.rst": "man/master/8/zpool-replace.8.rst", "man/8/zpool-iostat.8.rst": "man/master/8/zpool-iostat.8.rst", "man/8/zed.8.rst": "man/master/8/zed.8.rst", "man/8/zfs-diff.8.rst": "man/master/8/zfs-diff.8.rst", "man/8/zpool-online.8.rst": "man/master/8/zpool-online.8.rst", "man/8/zpool-import.8.rst": "man/master/8/zpool-import.8.rst", "man/8/zpool-resilver.8.rst": "man/master/8/zpool-resilver.8.rst", "man/8/zpool-clear.8.rst": "man/master/8/zpool-clear.8.rst", "man/8/zfs-send.8.rst": "man/master/8/zfs-send.8.rst", "man/8/zfs-program.8.rst": "man/master/8/zfs-program.8.rst", "man/8/zfs-bookmark.8.rst": "man/master/8/zfs-bookmark.8.rst", "man/8/zfs-rename.8.rst": "man/master/8/zfs-rename.8.rst", "man/8/zfs-list.8.rst": "man/master/8/zfs-list.8.rst", "man/8/fsck.zfs.8.rst": "man/master/8/fsck.zfs.8.rst", "man/8/zpool-reopen.8.rst": "man/master/8/zpool-reopen.8.rst", "man/8/zfs-redact.8.rst": "man/master/8/zfs-redact.8.rst", "man/8/zfs-snapshot.8.rst": "man/master/8/zfs-snapshot.8.rst", "man/8/zfs-userspace.8.rst": "man/master/8/zfs-userspace.8.rst", "man/8/zfs-destroy.8.rst": "man/master/8/zfs-destroy.8.rst", "man/8/zpool-status.8.rst": "man/master/8/zpool-status.8.rst", "man/8/zpool-remove.8.rst": "man/master/8/zpool-remove.8.rst", "man/8/zfs-rollback.8.rst": "man/master/8/zfs-rollback.8.rst", "man/8/zpool-labelclear.8.rst": "man/master/8/zpool-labelclear.8.rst", "man/8/zpool-sync.8.rst": "man/master/8/zpool-sync.8.rst", "man/8/zpool-offline.8.rst": "man/master/8/zpool-offline.8.rst", "man/8/zfs-jail.8.rst": "man/master/8/zfs-jail.8.rst", "man/8/zpool_influxdb.8.rst": "man/master/8/zpool_influxdb.8.rst", "man/8/zfs-set.8.rst": "man/master/8/zfs-set.8.rst", "man/8/zfs-unallow.8.rst": "man/master/8/zfs-unallow.8.rst", "man/8/index.rst": "man/master/8/index.rst", "man/8/zpool-export.8.rst": "man/master/8/zpool-export.8.rst"} \ No newline at end of file diff --git a/_sources/404.rst.txt 
b/_sources/404.rst.txt new file mode 100644 index 000000000..7379594cf --- /dev/null +++ b/_sources/404.rst.txt @@ -0,0 +1,6 @@ +:orphan: + +404 Page not found. +=================== + +Please use left menu or search to find interested page. diff --git a/_sources/Basic Concepts/Checksums.rst.txt b/_sources/Basic Concepts/Checksums.rst.txt new file mode 100644 index 000000000..76ebfde4c --- /dev/null +++ b/_sources/Basic Concepts/Checksums.rst.txt @@ -0,0 +1,142 @@ +Checksums and Their Use in ZFS +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +End-to-end checksums are a key feature of ZFS and an important +differentiator for ZFS over other RAID implementations and filesystems. +Advantages of end-to-end checksums include: + +- detects data corruption upon reading from media +- blocks that are detected as corrupt are automatically repaired if + possible, by using the RAID protection in suitably configured pools, + or redundant copies (see the zfs ``copies`` property) +- periodic scrubs can check data to detect and repair latent media + degradation (bit rot) and corruption from other sources +- checksums on ZFS replication streams, ``zfs send`` and + ``zfs receive``, ensure the data received is not corrupted by + intervening storage or transport mechanisms + +Checksum Algorithms +^^^^^^^^^^^^^^^^^^^ + +The checksum algorithms in ZFS can be changed for datasets (filesystems +or volumes). The checksum algorithm used for each block is stored in the +block pointer (metadata). The block checksum is calculated when the +block is written, so changing the algorithm only affects writes +occurring after the change. + +The checksum algorithm for a dataset can be changed by setting the +``checksum`` property: + +.. code:: bash + + zfs set checksum=sha256 pool_name/dataset_name + ++-----------+--------------+------------------------+-------------------------+ +| Checksum | Ok for dedup | Compatible with | Notes | +| | and nopwrite?| other ZFS | | +| | | implementations? 
| | ++===========+==============+========================+=========================+ +| on | see notes | yes | ``on`` is a | +| | | | short hand for | +| | | | ``fletcher4`` | +| | | | for non-deduped | +| | | | datasets and | +| | | | ``sha256`` for | +| | | | deduped | +| | | | datasets | ++-----------+--------------+------------------------+-------------------------+ +| off | no | yes | Do not do use | +| | | | ``off`` | ++-----------+--------------+------------------------+-------------------------+ +| fletcher2 | no | yes | Deprecated | +| | | | implementation | +| | | | of Fletcher | +| | | | checksum, use | +| | | | ``fletcher4`` | +| | | | instead | ++-----------+--------------+------------------------+-------------------------+ +| fletcher4 | no | yes | Fletcher | +| | | | algorithm, also | +| | | | used for | +| | | | ``zfs send`` | +| | | | streams | ++-----------+--------------+------------------------+-------------------------+ +| sha256 | yes | yes | Default for | +| | | | deduped | +| | | | datasets | ++-----------+--------------+------------------------+-------------------------+ +| noparity | no | yes | Do not use | +| | | | ``noparity`` | ++-----------+--------------+------------------------+-------------------------+ +| sha512 | yes | requires pool | salted | +| | | feature | ``sha512`` | +| | | ``org.illumos:sha512`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | ++-----------+--------------+------------------------+-------------------------+ +| skein | yes | requires pool | salted | +| | | feature | ``skein`` | +| | | ``org.illumos:skein`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | ++-----------+--------------+------------------------+-------------------------+ +| edonr | see notes | requires pool | salted | +| | | feature | ``edonr`` | +| | | ``org.illumos:edonr`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | +| | | | | +| | | | In an abundance of | +| | | | caution, Edon-R requires| +| | | | verification when used | +| | | | with dedup, so it will | +| | | | automatically use | +| | | | ``verify``. | +| | | | | ++-----------+--------------+------------------------+-------------------------+ +| blake3 | yes | requires pool | salted | +| | | feature | ``blake3`` | +| | | ``org.openzfs:blake3`` | currently not | +| | | | supported for | +| | | | any filesystem | +| | | | on the boot | +| | | | pools | ++-----------+--------------+------------------------+-------------------------+ + +Checksum Accelerators +^^^^^^^^^^^^^^^^^^^^^ + +ZFS has the ability to offload checksum operations to the Intel +QuickAssist Technology (QAT) adapters. + +Checksum Microbenchmarks +^^^^^^^^^^^^^^^^^^^^^^^^ + +Some ZFS features use microbenchmarks when the ``zfs.ko`` kernel module +is loaded to determine the optimal algorithm for checksums. The results +of the microbenchmarks are observable in the ``/proc/spl/kstat/zfs`` +directory. The winning algorithm is reported as the "fastest" and +becomes the default. The default can be overridden by setting zfs module +parameters. 
+ +========= ==================================== ======================== +Checksum Results Filename ``zfs`` module parameter +========= ==================================== ======================== +Fletcher4 /proc/spl/kstat/zfs/fletcher_4_bench zfs_fletcher_4_impl +all-other /proc/spl/kstat/zfs/chksum_bench zfs_blake3_impl, + zfs_sha256_impl, + zfs_sha512_impl +========= ==================================== ======================== + +Disabling Checksums +^^^^^^^^^^^^^^^^^^^ + +While it may be tempting to disable checksums to improve CPU +performance, it is widely considered by the ZFS community to be an +extrodinarily bad idea. Don't disable checksums. diff --git a/_sources/Basic Concepts/Feature Flags.rst.txt b/_sources/Basic Concepts/Feature Flags.rst.txt new file mode 100644 index 000000000..e9b3a2835 --- /dev/null +++ b/_sources/Basic Concepts/Feature Flags.rst.txt @@ -0,0 +1,53 @@ +Feature Flags +============= + +ZFS on-disk formats were originally versioned with a single number, +which increased whenever the format changed. The numbered approach was +suitable when development of ZFS was driven by a single organisation. + +For distributed development of OpenZFS, version numbering was +unsuitable. Any change to the number would have required agreement, +across all implementations, of each change to the on-disk format. + +OpenZFS feature flags – an alternative to traditional version numbering +– allow **a uniquely named pool property for each change to the on-disk +format**. This approach supports: + +- format changes that are independent +- format changes that depend on each other. + +Compatibility +------------- + +Where all *features* that are used by a pool are supported by multiple +implementations of OpenZFS, the on-disk format is portable across those +implementations. + +Features that are exclusive when enabled should be periodically ported +to all distributions. + +Reference materials +------------------- + +`ZFS Feature Flags `_ +(Christopher Siden, 2012-01, in the Internet +Archive Wayback Machine) in particular: "… Legacy version numbers still +exist for pool versions 1-28 …". + +`zpool-features(7) man page <../man/7/zpool-features.7.html>`_ - OpenZFS + +`zpool-features `__ (5) – illumos + +Feature flags implementation per OS +----------------------------------- + +.. raw:: html + +
+ +.. raw:: html + :file: ../_build/zfs_feature_matrix.html + +.. raw:: html + +
diff --git a/_sources/Basic Concepts/RAIDZ.rst.txt b/_sources/Basic Concepts/RAIDZ.rst.txt new file mode 100644 index 000000000..4675690e2 --- /dev/null +++ b/_sources/Basic Concepts/RAIDZ.rst.txt @@ -0,0 +1,91 @@ +RAIDZ +===== + +tl;dr: RAIDZ is effective for large block sizes and sequential workloads. + +Introduction +~~~~~~~~~~~~ + +RAIDZ is a variation on RAID-5 that allows for better distribution of parity +and eliminates the RAID-5 “write hole” (in which data and parity become +inconsistent after a power loss). +Data and parity is striped across all disks within a raidz group. + +A raidz group can have single, double, or triple parity, meaning that the raidz +group can sustain one, two, or three failures, respectively, without losing any +data. The ``raidz1`` vdev type specifies a single-parity raidz group; the ``raidz2`` +vdev type specifies a double-parity raidz group; and the ``raidz3`` vdev type +specifies a triple-parity raidz group. The ``raidz`` vdev type is an alias for +raidz1. + +A raidz group of N disks of size X with P parity disks can hold +approximately (N-P)*X bytes and can withstand P devices failing without +losing data. The minimum number of devices in a raidz group is one more +than the number of parity disks. The recommended number is between 3 and 9 +to help increase performance. + + +Space efficiency +~~~~~~~~~~~~~~~~ + +Actual used space for a block in RAIDZ is based on several points: + +- minimal write size is disk sector size (can be set via `ashift` vdev parameter) + +- stripe width in RAIDZ is dynamic, and starts with at least one data block part, or up to + ``disks count`` minus ``parity number`` parts of data block + +- one block of data with size of ``recordsize`` is + splitted equally via ``sector size`` parts + and written on each stripe on RAIDZ vdev +- each stripe of data will have a part of block + +- in addition to data one, two or three blocks of parity should be written, + one per disk; so, for raidz2 of 5 disks there will be 3 blocks of data and + 2 blocks of parity + +Due to these inputs, if ``recordsize`` is less or equal to sector size, +then RAIDZ's parity size will be effictively equal to mirror with same redundancy. +For example, for raidz1 of 3 disks with ``ashift=12`` and ``recordsize=4K`` +we will allocate on disk: + +- one 4K block of data + +- one 4K parity block + +and usable space ratio will be 50%, same as with double mirror. + + +Another example for ``ashift=12`` and ``recordsize=128K`` for raidz1 of 3 disks: + +- total stripe width is 3 + +- one stripe can have up to 2 data parts of 4K size because of 1 parity blocks + +- we will have 128K/8k = 16 stripes with 8K of data and 4K of parity each + +- 16 stripes each with 12k, means we write 192k to store 128k + +so usable space ratio in this case will be 66%. + + +The more disks RAIDZ has, the wider the stripe, the greater the space +efficiency. + +You can find actual parity cost per RAIDZ size here: + +.. raw:: html + + + +(`source `__) + + +Performance considerations +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Write +^^^^^ + +Because of full stripe width, one block write will write stripe part on each disk. +One RAIDZ vdev has a write IOPS of one slowest disk because of that in worst case. diff --git a/_sources/Basic Concepts/Troubleshooting.rst.txt b/_sources/Basic Concepts/Troubleshooting.rst.txt new file mode 100644 index 000000000..65d906008 --- /dev/null +++ b/_sources/Basic Concepts/Troubleshooting.rst.txt @@ -0,0 +1,105 @@ +Troubleshooting +=============== + +.. 
todo:: + This page is a draft. + +This page contains tips for troubleshooting ZFS on Linux and what info +developers might want for bug triage. + +- `About Log Files <#about-log-files>`__ + + - `Generic Kernel Log <#generic-kernel-log>`__ + - `ZFS Kernel Module Debug + Messages <#zfs-kernel-module-debug-messages>`__ + +- `Unkillable Process <#unkillable-process>`__ +- `ZFS Events <#zfs-events>`__ + +-------------- + +About Log Files +--------------- + +Log files can be very useful for troubleshooting. In some cases, +interesting information is stored in multiple log files that are +correlated to system events. + +Pro tip: logging infrastructure tools like *elasticsearch*, *fluentd*, +*influxdb*, or *splunk* can simplify log analysis and event correlation. + +Generic Kernel Log +~~~~~~~~~~~~~~~~~~ + +Typically, Linux kernel log messages are available from ``dmesg -T``, +``/var/log/syslog``, or where kernel log messages are sent (eg by +``rsyslogd``). + +ZFS Kernel Module Debug Messages +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFS kernel modules use an internal log buffer for detailed logging +information. This log information is available in the pseudo file +``/proc/spl/kstat/zfs/dbgmsg`` for ZFS builds where ZFS module parameter +`zfs_dbgmsg_enable = +1 `__ + +-------------- + +Unkillable Process +------------------ + +Symptom: ``zfs`` or ``zpool`` command appear hung, does not return, and +is not killable + +Likely cause: kernel thread hung or panic + +Log files of interest: `Generic Kernel Log <#generic-kernel-log>`__, +`ZFS Kernel Module Debug Messages <#zfs-kernel-module-debug-messages>`__ + +Important information: if a kernel thread is stuck, then a backtrace of +the stuck thread can be in the logs. In some cases, the stuck thread is +not logged until the deadman timer expires. See also `debug +tunables `__ + +-------------- + +ZFS Events +---------- + +ZFS uses an event-based messaging interface for communication of +important events to other consumers running on the system. The ZFS Event +Daemon (zed) is a userland daemon that listens for these events and +processes them. zed is extensible so you can write shell scripts or +other programs that subscribe to events and take action. For example, +the script usually installed at ``/etc/zfs/zed.d/all-syslog.sh`` writes +a formatted event message to ``syslog``. See the man page for ``zed(8)`` +for more information. + +A history of events is also available via the ``zpool events`` command. +This history begins at ZFS kernel module load and includes events from +any pool. These events are stored in RAM and limited in count to a value +determined by the kernel tunable +`zfs_event_len_max `__. +``zed`` has an internal throttling mechanism to prevent overconsumption +of system resources processing ZFS events. + +More detailed information about events is observable using +``zpool events -v`` The contents of the verbose events is subject to +change, based on the event and information available at the time of the +event. + +Each event has a class identifier used for filtering event types. +Commonly seen events are those related to pool management with class +``sysevent.fs.zfs.*`` including import, export, configuration updates, +and ``zpool history`` updates. + +Events related to errors are reported as class ``ereport.*`` These can +be invaluable for troubleshooting. Some faults can cause multiple +ereports as various layers of the software deal with the fault. 
For +example, on a simple pool without parity protection, a faulty disk could +cause an ``ereport.io`` during a read from the disk that results in an +``erport.fs.zfs.checksum`` at the pool level. These events are also +reflected by the error counters observed in ``zpool status`` If you see +checksum or read/write errors in ``zpool status`` then there should be +one or more corresponding ereports in the ``zpool events`` output. diff --git a/_sources/Basic Concepts/dRAID Howto.rst.txt b/_sources/Basic Concepts/dRAID Howto.rst.txt new file mode 100644 index 000000000..79d16d294 --- /dev/null +++ b/_sources/Basic Concepts/dRAID Howto.rst.txt @@ -0,0 +1,248 @@ +dRAID +===== + +.. note:: + This page describes functionality which has been added for the + OpenZFS 2.1.0 release, it is not in the OpenZFS 2.0.0 release. + +Introduction +~~~~~~~~~~~~ + +`dRAID`_ is a variant of raidz that provides integrated distributed hot +spares which allows for faster resilvering while retaining the benefits +of raidz. A dRAID vdev is constructed from multiple internal raidz +groups, each with D data devices and P parity devices. These groups +are distributed over all of the children in order to fully utilize the +available disk performance. This is known as parity declustering and +it has been an active area of research. The image below is simplified, +but it helps illustrate this key difference between dRAID and raidz. + +|draid1| + +Additionally, a dRAID vdev must shuffle its child vdevs in such a way +that regardless of which drive has failed, the rebuild IO (both read +and write) will distribute evenly among all surviving drives. This +is accomplished by using carefully chosen precomputed permutation +maps. This has the advantage of both keeping pool creation fast and +making it impossible for the mapping to be damaged or lost. + +Another way dRAID differs from raidz is that it uses a fixed stripe +width (padding as necessary with zeros). This allows a dRAID vdev to +be sequentially resilvered, however the fixed stripe width significantly +effects both usable capacity and IOPS. For example, with the default +D=8 and 4k disk sectors the minimum allocation size is 32k. If using +compression, this relatively large allocation size can reduce the +effective compression ratio. When using ZFS volumes and dRAID the +default volblocksize property is increased to account for the allocation +size. If a dRAID pool will hold a significant amount of small blocks, +it is recommended to also add a mirrored special vdev to store those +blocks. + +In regards to IO/s, performance is similar to raidz since for any +read all D data disks must be accessed. Delivered random IOPS can be +reasonably approximated as floor((N-S)/(D+P))*. + +In summary dRAID can provide the same level of redundancy and +performance as raidz, while also providing a fast integrated distributed +spare. + +Create a dRAID vdev +~~~~~~~~~~~~~~~~~~~ + +A dRAID vdev is created like any other by using the ``zpool create`` +command and enumerating the disks which should be used. + +:: + + # zpool create draid[1,2,3] + +Like raidz, the parity level is specified immediately after the ``draid`` +vdev type. However, unlike raidz additional colon separated options can be +specified. The most important of which is the ``:s`` option which +controls the number of distributed hot spares to create. By default, no +spares are created. The ``:d`` option can be specified to set the +number of data devices to use in each RAID stripe (D+P). 
When unspecified +reasonable defaults are chosen. + +:: + + # zpool create draid[][:d][:c][:s] + +- **parity** - The parity level (1-3). Defaults to one. + +- **data** - The number of data devices per redundancy group. In general + a smaller value of D will increase IOPS, improve the compression ratio, + and speed up resilvering at the expense of total usable capacity. + Defaults to 8, unless N-P-S is less than 8. + +- **children** - The expected number of children. Useful as a cross-check + when listing a large number of devices. An error is returned when the + provided number of children differs. + +- **spares** - The number of distributed hot spares. Defaults to zero. + +For example, to create an 11 disk dRAID pool with 4+1 redundancy and a +single distributed spare the command would be: + +:: + + # zpool create tank draid:4d:1s:11c /dev/sd[a-k] + # zpool status tank + + pool: tank + state: ONLINE + config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + draid1:4d:11c:1s-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 + sdc ONLINE 0 0 0 + sdd ONLINE 0 0 0 + sde ONLINE 0 0 0 + sdf ONLINE 0 0 0 + sdg ONLINE 0 0 0 + sdh ONLINE 0 0 0 + sdi ONLINE 0 0 0 + sdj ONLINE 0 0 0 + sdk ONLINE 0 0 0 + spares + draid1-0-0 AVAIL + +Note that the dRAID vdev name, ``draid1:4d:11c:1s``, fully describes the +configuration and all of disks which are part of the dRAID are listed. +Furthermore, the logical distributed hot spare is shown as an available +spare disk. + +Rebuilding to a Distributed Spare +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +One of the major advantages of dRAID is that it supports both sequential +and traditional healing resilvers. When performing a sequential resilver +to a distributed hot spare the performance scales with the number of disks +divided by the stripe width (D+P). This can greatly reduce resilver times +and restore full redundancy in a fraction of the usual time. For example, +the following graph shows the observed sequential resilver time in hours +for a 90 HDD based dRAID filled to 90% capacity. + +|draid-resilver| + +When using dRAID and a distributed spare, the process for handling a +failed disk is almost identical to raidz with a traditional hot spare. +When a disk failure is detected the ZFS Event Daemon (ZED) will start +rebuilding to a spare if one is available. The only difference is that +for dRAID a sequential resilver is started, while a healing resilver must +be used for raidz. + +:: + + # echo offline >/sys/block/sdg/device/state + # zpool replace -s tank sdg draid1-0-0 + # zpool status + + pool: tank + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. 
+ scan: resilver (draid1:4d:11c:1s-0) in progress since Tue Nov 24 14:34:25 2020 + 3.51T scanned at 13.4G/s, 1.59T issued 6.07G/s, 6.13T total + 326G resilvered, 57.17% done, 00:03:21 to go + config: + + NAME STATE READ WRITE CKSUM + tank DEGRADED 0 0 0 + draid1:4d:11c:1s-0 DEGRADED 0 0 0 + sda ONLINE 0 0 0 (resilvering) + sdb ONLINE 0 0 0 (resilvering) + sdc ONLINE 0 0 0 (resilvering) + sdd ONLINE 0 0 0 (resilvering) + sde ONLINE 0 0 0 (resilvering) + sdf ONLINE 0 0 0 (resilvering) + spare-6 DEGRADED 0 0 0 + sdg UNAVAIL 0 0 0 + draid1-0-0 ONLINE 0 0 0 (resilvering) + sdh ONLINE 0 0 0 (resilvering) + sdi ONLINE 0 0 0 (resilvering) + sdj ONLINE 0 0 0 (resilvering) + sdk ONLINE 0 0 0 (resilvering) + spares + draid1-0-0 INUSE currently in use + +While both types of resilvering achieve the same goal it's worth taking +a moment to summarize the key differences. + +- A traditional healing resilver scans the entire block tree. This + means the checksum for each block is available while it's being + repaired and can be immediately verified. The downside is this + creates a random read workload which is not ideal for performance. + +- A sequential resilver instead scans the space maps in order to + determine what space is allocated and what must be repaired. + This rebuild process is not limited to block boundaries and can + sequentially reads from the disks and make repairs using larger + I/Os. The price to pay for this performance improvement is that + the block checksums cannot be verified while resilvering. Therefore, + a scrub is started to verify the checksums after the sequential + resilver completes. + +For a more in depth explanation of the differences between sequential +and healing resilvering check out these `sequential resilver`_ slides +which were presented at the OpenZFS Developer Summit. + +Rebalancing +~~~~~~~~~~~ + +Distributed spare space can be made available again by simply replacing +any failed drive with a new drive. This process is called rebalancing +and is essentially a resilver. When performing rebalancing a healing +resilver is recommended since the pool is no longer degraded. This +ensures all checksums are verified when rebuilding to the new disk +and eliminates the need to perform a subsequent scrub of the pool. + +:: + + # zpool replace tank sdg sdl + # zpool status + + pool: tank + state: DEGRADED + status: One or more devices is currently being resilvered. The pool will + continue to function, possibly in a degraded state. + action: Wait for the resilver to complete. + scan: resilver in progress since Tue Nov 24 14:45:16 2020 + 6.13T scanned at 7.82G/s, 6.10T issued at 7.78G/s, 6.13T total + 565G resilvered, 99.44% done, 00:00:04 to go + config: + + NAME STATE READ WRITE CKSUM + tank DEGRADED 0 0 0 + draid1:4d:11c:1s-0 DEGRADED 0 0 0 + sda ONLINE 0 0 0 (resilvering) + sdb ONLINE 0 0 0 (resilvering) + sdc ONLINE 0 0 0 (resilvering) + sdd ONLINE 0 0 0 (resilvering) + sde ONLINE 0 0 0 (resilvering) + sdf ONLINE 0 0 0 (resilvering) + spare-6 DEGRADED 0 0 0 + replacing-0 DEGRADED 0 0 0 + sdg UNAVAIL 0 0 0 + sdl ONLINE 0 0 0 (resilvering) + draid1-0-0 ONLINE 0 0 0 (resilvering) + sdh ONLINE 0 0 0 (resilvering) + sdi ONLINE 0 0 0 (resilvering) + sdj ONLINE 0 0 0 (resilvering) + sdk ONLINE 0 0 0 (resilvering) + spares + draid1-0-0 INUSE currently in use + +After the resilvering completes the distributed hot spare is once again +available for use and the pool has been restored to its normal healthy +state. + +.. |draid1| image:: /_static/img/raidz_draid.png +.. 
|draid-resilver| image:: /_static/img/draid-resilver-hours.png +.. _dRAID: https://docs.google.com/presentation/d/1uo0nBfY84HIhEqGWEx-Tbm8fPbJKtIP3ICo4toOPcJo/edit +.. _sequential resilver: https://docs.google.com/presentation/d/1vLsgQ1MaHlifw40C9R2sPsSiHiQpxglxMbK2SMthu0Q/edit#slide=id.g995720a6cf_1_39 +.. _custom packages: https://openzfs.github.io/openzfs-docs/Developer%20Resources/Custom%20Packages.html# diff --git a/_sources/Basic Concepts/index.rst.txt b/_sources/Basic Concepts/index.rst.txt new file mode 100644 index 000000000..e7329870a --- /dev/null +++ b/_sources/Basic Concepts/index.rst.txt @@ -0,0 +1,9 @@ +Basic Concepts +============== + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + * diff --git a/_sources/Developer Resources/Buildbot Options.rst.txt b/_sources/Developer Resources/Buildbot Options.rst.txt new file mode 100644 index 000000000..9727bd54d --- /dev/null +++ b/_sources/Developer Resources/Buildbot Options.rst.txt @@ -0,0 +1,248 @@ +Buildbot Options +================ + +There are a number of ways to control the ZFS Buildbot at a commit +level. This page provides a summary of various options that the ZFS +Buildbot supports and how it impacts testing. More detailed information +regarding its implementation can be found at the `ZFS Buildbot Github +page `__. + +Choosing Builders +----------------- + +By default, all commits in your ZFS pull request are compiled by the +BUILD builders. Additionally, the top commit of your ZFS pull request is +tested by TEST builders. However, there is the option to override which +types of builder should be used on a per commit basis. In this case, you +can add +``Requires-builders: `` +to your commit message. A comma separated list of options can be +provided. Supported options are: + +- ``all``: This commit should be built by all available builders +- ``none``: This commit should not be built by any builders +- ``style``: This commit should be built by STYLE builders +- ``build``: This commit should be built by all BUILD builders +- ``arch``: This commit should be built by BUILD builders tagged as + 'Architectures' +- ``distro``: This commit should be built by BUILD builders tagged as + 'Distributions' +- ``test``: This commit should be built and tested by the TEST builders + (excluding the Coverage TEST builders) +- ``perf``: This commit should be built and tested by the PERF builders +- ``coverage`` : This commit should be built and tested by the Coverage + TEST builders +- ``unstable`` : This commit should be built and tested by the Unstable + TEST builders (currently only the Fedora Rawhide TEST builder) + +A couple of examples on how to use ``Requires-builders:`` in commit +messages can be found below. + +.. _preventing-a-commit-from-being-built-and-tested: + +Preventing a commit from being built and tested. +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-builders: none + +.. _submitting-a-commit-to-style-and-test-builders-only: + +Submitting a commit to STYLE and TEST builders only. +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-builders: style test + +Requiring SPL Versions +---------------------- + +Currently, the ZFS Buildbot attempts to choose the correct SPL branch to +build based on a pull request's base branch. 
In the cases where a +specific SPL version needs to be built, the ZFS buildbot supports +specifying an SPL version for pull request testing. By opening a pull +request against ZFS and adding ``Requires-spl:`` in a commit message, +you can instruct the buildbot to use a specific SPL version. Below are +examples of a commit messages that specify the SPL version. + +Build SPL from a specific pull request +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-spl: refs/pull/123/head + +Build SPL branch ``spl-branch-name`` from ``zfsonlinux/spl`` repository +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-spl: spl-branch-name + +Requiring Kernel Version +------------------------ + +Currently, Kernel.org builders will clone and build the master branch of +Linux. In cases where a specific version of the Linux kernel needs to be +built, the ZFS buildbot supports specifying the Linux kernel to be built +via commit message. By opening a pull request against ZFS and adding +``Requires-kernel:`` in a commit message, you can instruct the buildbot +to use a specific Linux kernel. Below is an example commit message that +specifies a specific Linux kernel tag. + +.. _build-linux-kernel-version-414: + +Build Linux Kernel Version 4.14 +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Requires-kernel: v4.14 + +Build Steps Overrides +--------------------- + +Each builder will execute or skip build steps based on its default +preferences. In some scenarios, it might be possible to skip various +build steps. The ZFS buildbot supports overriding the defaults of all +builders in a commit message. The list of available overrides are: + +- ``Build-linux: ``: All builders should build Linux for this + commit +- ``Build-lustre: ``: All builders should build Lustre for this + commit +- ``Build-spl: ``: All builders should build the SPL for this + commit +- ``Build-zfs: ``: All builders should build ZFS for this + commit +- ``Built-in: ``: All Linux builds should build in SPL and ZFS +- ``Check-lint: ``: All builders should perform lint checks for + this commit +- ``Configure-lustre: ``: Provide ```` as configure + flags when building Lustre +- ``Configure-spl: ``: Provide ```` as configure + flags when building the SPL +- ``Configure-zfs: ``: Provide ```` as configure + flags when building ZFS + +A couple of examples on how to use overrides in commit messages can be +found below. + +Skip building the SPL and build Lustre without ldiskfs +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Build-lustre: Yes + Configure-lustre: --disable-ldiskfs + Build-spl: No + +Build ZFS Only +~~~~~~~~~~~~~~ + +:: + + This is a commit message + + This text is part of the commit message body. + + Signed-off-by: Contributor + Build-lustre: No + Build-spl: No + +Configuring Tests with the TEST File +------------------------------------ + +At the top level of the ZFS source tree, there is the `TEST +file `__ which +contains variables that control if and how a specific test should run. 
+Below is a list of each variable and a brief description of what each +variable controls. + +- ``TEST_PREPARE_WATCHDOG`` - Enables the Linux kernel watchdog +- ``TEST_PREPARE_SHARES`` - Start NFS and Samba servers +- ``TEST_SPLAT_SKIP`` - Determines if ``splat`` testing is skipped +- ``TEST_SPLAT_OPTIONS`` - Command line options to provide to ``splat`` +- ``TEST_ZTEST_SKIP`` - Determines if ``ztest`` testing is skipped +- ``TEST_ZTEST_TIMEOUT`` - The length of time ``ztest`` should run +- ``TEST_ZTEST_DIR`` - Directory where ``ztest`` will create vdevs +- ``TEST_ZTEST_OPTIONS`` - Options to pass to ``ztest`` +- ``TEST_ZTEST_CORE_DIR`` - Directory for ``ztest`` to store core dumps +- ``TEST_ZIMPORT_SKIP`` - Determines if ``zimport`` testing is skipped +- ``TEST_ZIMPORT_DIR`` - Directory used during ``zimport`` +- ``TEST_ZIMPORT_VERSIONS`` - Source versions to test +- ``TEST_ZIMPORT_POOLS`` - Names of the pools for ``zimport`` to use + for testing +- ``TEST_ZIMPORT_OPTIONS`` - Command line options to provide to + ``zimport`` +- ``TEST_XFSTESTS_SKIP`` - Determines if ``xfstest`` testing is skipped +- ``TEST_XFSTESTS_URL`` - URL to download ``xfstest`` from +- ``TEST_XFSTESTS_VER`` - Name of the tarball to download from + ``TEST_XFSTESTS_URL`` +- ``TEST_XFSTESTS_POOL`` - Name of pool to create and used by + ``xfstest`` +- ``TEST_XFSTESTS_FS`` - Name of dataset for use by ``xfstest`` +- ``TEST_XFSTESTS_VDEV`` - Name of the vdev used by ``xfstest`` +- ``TEST_XFSTESTS_OPTIONS`` - Command line options to provide to + ``xfstest`` +- ``TEST_ZFSTESTS_SKIP`` - Determines if ``zfs-tests`` testing is + skipped +- ``TEST_ZFSTESTS_DIR`` - Directory to store files and loopback devices +- ``TEST_ZFSTESTS_DISKS`` - Space delimited list of disks that + ``zfs-tests`` is allowed to use +- ``TEST_ZFSTESTS_DISKSIZE`` - File size of file based vdevs used by + ``zfs-tests`` +- ``TEST_ZFSTESTS_ITERS`` - Number of times ``test-runner`` should + execute its set of tests +- ``TEST_ZFSTESTS_OPTIONS`` - Options to provide ``zfs-tests`` +- ``TEST_ZFSTESTS_RUNFILE`` - The runfile to use when running + ``zfs-tests`` +- ``TEST_ZFSTESTS_TAGS`` - List of tags to provide to ``test-runner`` +- ``TEST_ZFSSTRESS_SKIP`` - Determines if ``zfsstress`` testing is + skipped +- ``TEST_ZFSSTRESS_URL`` - URL to download ``zfsstress`` from +- ``TEST_ZFSSTRESS_VER`` - Name of the tarball to download from + ``TEST_ZFSSTRESS_URL`` +- ``TEST_ZFSSTRESS_RUNTIME`` - Duration to run ``runstress.sh`` +- ``TEST_ZFSSTRESS_POOL`` - Name of pool to create and use for + ``zfsstress`` testing +- ``TEST_ZFSSTRESS_FS`` - Name of dataset for use during ``zfsstress`` + tests +- ``TEST_ZFSSTRESS_FSOPT`` - File system options to provide to + ``zfsstress`` +- ``TEST_ZFSSTRESS_VDEV`` - Directory to store vdevs for use during + ``zfsstress`` tests +- ``TEST_ZFSSTRESS_OPTIONS`` - Command line options to provide to + ``runstress.sh`` diff --git a/_sources/Developer Resources/Building ZFS.rst.txt b/_sources/Developer Resources/Building ZFS.rst.txt new file mode 100644 index 000000000..97f9ff634 --- /dev/null +++ b/_sources/Developer Resources/Building ZFS.rst.txt @@ -0,0 +1,255 @@ +Building ZFS +============ + +GitHub Repositories +~~~~~~~~~~~~~~~~~~~ + +The official source for OpenZFS is maintained at GitHub by the +`openzfs `__ organization. The primary +git repository for the project is the `zfs +`__ repository. 
+ +There are two main components in this repository: + +- **ZFS**: The ZFS repository contains a copy of the upstream OpenZFS + code which has been adapted and extended for Linux and FreeBSD. The + vast majority of the core OpenZFS code is self-contained and can be + used without modification. + +- **SPL**: The SPL is a thin shim layer which is responsible for + implementing the fundamental interfaces required by OpenZFS. It's + this layer which allows OpenZFS to be used across multiple + platforms. SPL used to be maintained in a separate repository, but + was merged into the `zfs `__ + repository in the ``0.8`` major release. + +Installing Dependencies +~~~~~~~~~~~~~~~~~~~~~~~ + +The first thing you'll need to do is prepare your environment by +installing a full development tool chain. In addition, development +headers for both the kernel and the following packages must be +available. It is important to note that if the development kernel +headers for the currently running kernel aren't installed, the modules +won't compile properly. + +The following dependencies should be installed to build the latest ZFS +2.1 release. + +- **RHEL/CentOS 7**: + +.. code:: sh + + sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel git ncompress libcurl-devel + sudo yum install --enablerepo=epel python-packaging dkms + +- **RHEL/CentOS 8, Fedora**: + +.. code:: sh + + sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel git ncompress libcurl-devel + sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms + +- **Debian, Ubuntu**: + +.. code:: sh + + sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging git libcurl4-openssl-dev debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx parallel + +- **FreeBSD**: + +.. code:: sh + + pkg install autoconf automake autotools git gmake python devel/py-sysctl sudo + +Build Options +~~~~~~~~~~~~~ + +There are two options for building OpenZFS; the correct one largely +depends on your requirements. + +- **Packages**: Often it can be useful to build custom packages from + git which can be installed on a system. This is the best way to + perform integration testing with systemd, dracut, and udev. The + downside to using packages it is greatly increases the time required + to build, install, and test a change. + +- **In-tree**: Development can be done entirely in the SPL/ZFS source + tree. This speeds up development by allowing developers to rapidly + iterate on a patch. When working in-tree developers can leverage + incremental builds, load/unload kernel modules, execute utilities, + and verify all their changes with the ZFS Test Suite. + +The remainder of this page focuses on the **in-tree** option which is +the recommended method of development for the majority of changes. 
See +the :doc:`custom packages <./Custom Packages>` page for additional +information on building custom packages. + +Developing In-Tree +~~~~~~~~~~~~~~~~~~ + +Clone from GitHub +^^^^^^^^^^^^^^^^^ + +Start by cloning the ZFS repository from GitHub. The repository has a +**master** branch for development and a series of **\*-release** +branches for tagged releases. After checking out the repository your +clone will default to the master branch. Tagged releases may be built +by checking out zfs-x.y.z tags with matching version numbers or +matching release branches. + +:: + + git clone https://github.com/openzfs/zfs + +Configure and Build +^^^^^^^^^^^^^^^^^^^ + +For developers working on a change always create a new topic branch +based off of master. This will make it easy to open a pull request with +your change latter. The master branch is kept stable with extensive +`regression testing `__ of every pull +request before and after it's merged. Every effort is made to catch +defects as early as possible and to keep them out of the tree. +Developers should be comfortable frequently rebasing their work against +the latest master branch. + +In this example we'll use the master branch and walk through a stock +**in-tree** build. Start by checking out the desired branch then build +the ZFS and SPL source in the traditional autotools fashion. + +:: + + cd ./zfs + git checkout master + sh autogen.sh + ./configure + make -s -j$(nproc) + +| **tip:** ``--with-linux=PATH`` and ``--with-linux-obj=PATH`` can be + passed to configure to specify a kernel installed in a non-default + location. +| **tip:** ``--enable-debug`` can be passed to configure to enable all ASSERTs and + additional correctness tests. + +**Optional** Build packages + +:: + + make rpm #Builds RPM packages for CentOS/Fedora + make deb #Builds RPM converted DEB packages for Debian/Ubuntu + make native-deb #Builds native DEB packages for Debian/Ubuntu + +| **tip:** Native Debian packages build with pre-configured paths for + Debian and Ubuntu. It's best not to override the paths during + configure. +| **tip:** For native Debain packages, ``KVERS``, ``KSRC`` and ``KOBJ`` + environment variables can be exported to specify the kernel installed + in non-default location. + +.. note:: + Support for native Debian packaging will be available starting from + openzfs-2.2 release. + +Install +^^^^^^^ + +You can run ``zfs-tests.sh`` without installing ZFS, see below. If you +have reason to install ZFS after building it, pay attention to how your +distribution handles kernel modules. On Ubuntu, for example, the modules +from this repository install in the ``extra`` kernel module path, which +is not in the standard ``depmod`` search path. Therefore, for the +duration of your testing, edit ``/etc/depmod.d/ubuntu.conf`` and add +``extra`` to the beginning of the search path. + +You may then install using +``sudo make install; sudo ldconfig; sudo depmod``. You'd uninstall with +``sudo make uninstall; sudo ldconfig; sudo depmod``. + +.. _running-zloopsh-and-zfs-testssh: + +Running zloop.sh and zfs-tests.sh +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you wish to run the ZFS Test Suite (ZTS), then ``ksh`` and a few +additional utilities must be installed. + +- **RHEL/CentOS 7:** + +.. code:: sh + + sudo yum install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf + sudo yum install --enablerepo=epel dbench + +- **RHEL/CentOS 8, Fedora:** + +.. 
code:: sh + + sudo dnf install --skip-broken ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr nfs-utils samba rng-tools pax perf + sudo dnf install --skip-broken --enablerepo=epel dbench + +- **Debian:** + +.. code:: sh + + sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-perf selinux-utils quota + +- **Ubuntu:** + +.. code:: sh + + sudo apt install ksh bc bzip2 fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota + +- **FreeBSD**: + +.. code:: sh + + pkg install base64 bash checkbashisms fio hs-ShellCheck ksh93 pamtester devel/py-flake8 sudo + + +There are a few helper scripts provided in the top-level scripts +directory designed to aid developers working with in-tree builds. + +- **zfs-helper.sh:** Certain functionality (i.e. /dev/zvol/) depends on + the ZFS provided udev helper scripts being installed on the system. + This script can be used to create symlinks on the system from the + installation location to the in-tree helper. These links must be in + place to successfully run the ZFS Test Suite. The **-i** and **-r** + options can be used to install and remove the symlinks. + +:: + + sudo ./scripts/zfs-helpers.sh -i + +- **zfs.sh:** The freshly built kernel modules can be loaded using + ``zfs.sh``. This script can later be used to unload the kernel + modules with the **-u** option. + +:: + + sudo ./scripts/zfs.sh + +- **zloop.sh:** A wrapper to run ztest repeatedly with randomized + arguments. The ztest command is a user space stress test designed to + detect correctness issues by concurrently running a random set of + test cases. If a crash is encountered, the ztest logs, any associated + vdev files, and core file (if one exists) are collected and moved to + the output directory for analysis. + +:: + + sudo ./scripts/zloop.sh + +- **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test + Suite. Three loopback devices are created on top of sparse files + located in ``/var/tmp/`` and used for the regression test. Detailed + directions for the ZFS Test Suite can be found in the + `README `__ + located in the top-level tests directory. + +:: + + ./scripts/zfs-tests.sh -vx + +**tip:** The **delegate** tests will be skipped unless group read +permission is set on the zfs directory and its parents. diff --git a/_sources/Developer Resources/Custom Packages.rst.txt b/_sources/Developer Resources/Custom Packages.rst.txt new file mode 100644 index 000000000..38d5af132 --- /dev/null +++ b/_sources/Developer Resources/Custom Packages.rst.txt @@ -0,0 +1,250 @@ +Custom Packages +=============== + +The following instructions assume you are building from an official +`release tarball `__ +(version 0.8.0 or newer) or directly from the `git +repository `__. Most users should not +need to do this and should preferentially use the distribution packages. +As a general rule the distribution packages will be more tightly +integrated, widely tested, and better supported. However, if your +distribution of choice doesn't provide packages, or you're a developer +and want to roll your own, here's how to do it. + +The first thing to be aware of is that the build system is capable of +generating several different types of packages. Which type of package +you choose depends on what's supported on your platform and exactly what +your needs are. + +- **DKMS** packages contain only the source code and scripts for + rebuilding the kernel modules. 
When the DKMS package is installed + kernel modules will be built for all available kernels. Additionally, + when the kernel is upgraded new kernel modules will be automatically + built for that kernel. This is particularly convenient for desktop + systems which receive frequent kernel updates. The downside is that + because the DKMS packages build the kernel modules from source a full + development environment is required which may not be appropriate for + large deployments. + +- **kmods** packages are binary kernel modules which are compiled + against a specific version of the kernel. This means that if you + update the kernel you must compile and install a new kmod package. If + you don't frequently update your kernel, or if you're managing a + large number of systems, then kmod packages are a good choice. + +- **kABI-tracking kmod** Packages are similar to standard binary kmods + and may be used with Enterprise Linux distributions like Red Hat and + CentOS. These distributions provide a stable kABI (Kernel Application + Binary Interface) which allows the same binary modules to be used + with new versions of the distribution provided kernel. + +By default the build system will generate user packages and both DKMS +and kmod style kernel packages if possible. The user packages can be +used with either set of kernel packages and do not need to be rebuilt +when the kernel is updated. You can also streamline the build process by +building only the DKMS or kmod packages as shown below. + +Be aware that when building directly from a git repository you must +first run the *autogen.sh* script to create the *configure* script. This +will require installing the GNU autotools packages for your +distribution. To perform any of the builds, you must install all the +necessary development tools and headers for your distribution. + +It is important to note that if the development kernel headers for the +currently running kernel aren't installed, the modules won't compile +properly. + +- `Red Hat, CentOS and Fedora <#red-hat-centos-and-fedora>`__ +- `Debian and Ubuntu <#debian-and-ubuntu>`__ + +RHEL, CentOS and Fedora +----------------------- + +Make sure that the required packages are installed to build the latest +ZFS 2.1 release: + +- **RHEL/CentOS 7**: + +.. code:: sh + + sudo yum install epel-release gcc make autoconf automake libtool rpm-build libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel ncompress + sudo yum install --enablerepo=epel dkms python-packaging + +- **RHEL/CentOS 8, Fedora**: + +.. code:: sh + + sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel ncompress + sudo dnf install --skip-broken --enablerepo=epel --enablerepo=powertools python3-packaging dkms + +- **RHEL/CentOS 9**: + +.. 
code:: sh + + sudo dnf config-manager --set-enabled crb + sudo dnf install --skip-broken epel-release gcc make autoconf automake libtool rpm-build kernel-rpm-macros libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) kernel-abi-stablelists-$(uname -r | sed 's/\.[^.]\+$//') python3 python3-devel python3-setuptools python3-cffi libffi-devel + sudo dnf install --skip-broken --enablerepo=epel python3-packaging dkms + + + +`Get the source code <#get-the-source-code>`__. + +DKMS +~~~~ + +Building rpm-based DKMS and user packages can be done as follows: + +.. code:: sh + + $ cd zfs + $ ./configure + $ make -j1 rpm-utils rpm-dkms + $ sudo yum localinstall *.$(uname -p).rpm *.noarch.rpm + +kmod +~~~~ + +The key thing to know when building a kmod package is that a specific +Linux kernel must be specified. At configure time the build system will +make an educated guess as to which kernel you want to build against. +However, if configure is unable to locate your kernel development +headers, or you want to build against a different kernel, you must +specify the exact path with the *--with-linux* and *--with-linux-obj* +options. + +.. code:: sh + + $ cd zfs + $ ./configure + $ make -j1 rpm-utils rpm-kmod + $ sudo yum localinstall *.$(uname -p).rpm + +kABI-tracking kmod +~~~~~~~~~~~~~~~~~~ + +The process for building kABI-tracking kmods is almost identical to for +building normal kmods. However, it will only produce binaries which can +be used by multiple kernels if the distribution supports a stable kABI. +In order to request kABI-tracking package the *--with-spec=redhat* +option must be passed to configure. + +**NOTE:** This type of package is not available for Fedora. + +.. code:: sh + + $ cd zfs + $ ./configure --with-spec=redhat + $ make -j1 rpm-utils rpm-kmod + $ sudo yum localinstall *.$(uname -p).rpm + +Debian and Ubuntu +----------------- + +Make sure that the required packages are installed: + +.. code:: sh + + sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-generic python3 python3-dev python3-setuptools python3-cffi libffi-dev python3-packaging debhelper-compat dh-python po-debconf python3-all-dev python3-sphinx libpam0g-dev + +`Get the source code <#get-the-source-code>`__. + +.. _kmod-1: + +kmod +~~~~ + +The key thing to know when building a kmod package is that a specific +Linux kernel must be specified. At configure time the build system will +make an educated guess as to which kernel you want to build against. +However, if configure is unable to locate your kernel development +headers, or you want to build against a different kernel, you must +specify the exact path with the *--with-linux* and *--with-linux-obj* +options. + +To build RPM converted Debian packages: + +.. code:: sh + + $ cd zfs + $ ./configure --enable-systemd + $ make -j1 deb-utils deb-kmod + $ sudo apt-get install --fix-missing ./*.deb + +Starting from openzfs-2.2 release, native Debian packages can be built +as follows: + +.. code:: sh + + $ cd zfs + $ ./configure + $ make native-deb-utils native-deb-kmod + $ rm ../openzfs-zfs-dkms_*.deb + $ rm ../openzfs-zfs-dracut_*.deb # deb-based systems usually use initramfs + $ sudo apt-get install --fix-missing ../*.deb + +Native Debian packages build with pre-configured paths for Debian and +Ubuntu. 
It's best not to override the paths during configure.
+The ``KVERS``, ``KSRC`` and ``KOBJ`` environment variables can be exported
+to specify a kernel installed in a non-default location.
+
+.. _dkms-1:
+
+DKMS
+~~~~
+
+Building RPM converted deb-based DKMS and user packages can be done as
+follows:
+
+.. code:: sh
+
+   $ cd zfs
+   $ ./configure --enable-systemd
+   $ make -j1 deb-utils deb-dkms
+   $ sudo apt-get install --fix-missing ./*.deb
+
+Starting from the openzfs-2.2 release, native deb-based DKMS and user
+packages can be built as follows:
+
+.. code:: sh
+
+   $ sudo apt-get install dh-dkms
+   $ cd zfs
+   $ ./configure
+   $ make native-deb-utils
+   $ rm ../openzfs-zfs-dracut_*.deb # deb-based systems usually use initramfs
+   $ sudo apt-get install --fix-missing ../*.deb
+
+Get the Source Code
+-------------------
+
+Released Tarball
+~~~~~~~~~~~~~~~~
+
+The released tarball contains the latest fully tested and released
+version of ZFS. This is the preferred source code location for use in
+production systems. If you want to use the official released tarballs,
+then use the following commands to fetch and prepare the source.
+
+.. code:: sh
+
+   $ wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-x.y.z.tar.gz
+   $ tar -xzf zfs-x.y.z.tar.gz
+
+Git Master Branch
+~~~~~~~~~~~~~~~~~
+
+The Git *master* branch contains the latest version of the software, and
+will probably contain fixes that, for some reason, weren't included in
+the released tarball. This is the preferred source code location for
+developers who intend to modify ZFS. If you would like to use the git
+version, you can clone it from GitHub and prepare the source like this.
+
+.. code:: sh
+
+   $ git clone https://github.com/zfsonlinux/zfs.git
+   $ cd zfs
+   $ ./autogen.sh
+
+Once the source has been prepared you'll need to decide what kind of
+packages you're building and jump to the appropriate section above. Note
+that not all package types are supported for all platforms.
diff --git a/_sources/Developer Resources/Git and GitHub for beginners.rst.txt b/_sources/Developer Resources/Git and GitHub for beginners.rst.txt
new file mode 100644
index 000000000..76e6eb33d
--- /dev/null
+++ b/_sources/Developer Resources/Git and GitHub for beginners.rst.txt
@@ -0,0 +1,210 @@
+Git and GitHub for beginners (ZoL edition)
+==========================================
+
+This is a very basic rundown of how to use Git and GitHub to make
+changes.
+
+Recommended reading: `ZFS on Linux
+CONTRIBUTING.md `__
+
+First time setup
+----------------
+
+If you've never used Git before, you'll need a little setup to start
+things off.
+
+::
+
+   git config --global user.name "My Name"
+   git config --global user.email myemail@noreply.non
+
+Cloning the initial repository
+------------------------------
+
+The easiest way to get started is to click the fork icon at the top of
+the main repository page. From there you need to download a copy of the
+forked repository to your computer:
+
+::
+
+   git clone https://github.com//zfs.git
+
+This sets the "origin" repository to your fork. This will come in handy
+when creating pull requests. To make it easy to pull in changes from the
+"upstream" repository as they are made, it is very useful to establish the
+upstream repository as another remote (man git-remote):
+
+::
+
+   cd zfs
+   git remote add upstream https://github.com/zfsonlinux/zfs.git
+
+Preparing and making changes
+----------------------------
+
+In order to make changes it is recommended to make a branch; this lets
+you work on several unrelated changes at once. 
It is also not +recommended to make changes to the master branch unless you own the +repository. + +:: + + git checkout -b my-new-branch + +From here you can make your changes and move on to the next step. + +Recommended reading: `C Style and Coding Standards for +SunOS `__, +`ZFS on Linux Developer +Resources `__, +`OpenZFS Developer +Resources `__ + +Testing your patches before pushing +----------------------------------- + +Before committing and pushing, you may want to test your patches. There +are several tests you can run against your branch such as style +checking, and functional tests. All pull requests go through these tests +before being pushed to the main repository, however testing locally +takes the load off the build/test servers. This step is optional but +highly recommended, however the test suite should be run on a virtual +machine or a host that currently does not use ZFS. You may need to +install ``shellcheck`` and ``flake8`` to run the ``checkstyle`` +correctly. + +:: + + sh autogen.sh + ./configure + make checkstyle + +Recommended reading: `Building +ZFS `__, `ZFS Test +Suite +README `__ + +Committing your changes to be pushed +------------------------------------ + +When you are done making changes to your branch there are a few more +steps before you can make a pull request. + +:: + + git commit --all --signoff + +This command opens an editor and adds all unstaged files from your +branch. Here you need to describe your change and add a few things: + +:: + + + # Please enter the commit message for your changes. Lines starting + # with '#' will be ignored, and an empty message aborts the commit. + # On branch my-new-branch + # Changes to be committed: + # (use "git reset HEAD ..." to unstage) + # + # modified: hello.c + # + +The first thing we need to add is the commit message. This is what is +displayed on the git log, and should be a short description of the +change. By style guidelines, this has to be less than 72 characters in +length. + +Underneath the commit message you can add a more descriptive text to +your commit. The lines in this section have to be less than 72 +characters. + +When you are done, the commit should look like this: + +:: + + Add hello command + + This is a test commit with a descriptive commit message. + This message can be more than one line as shown here. + + Signed-off-by: My Name + Closes #9998 + Issue #9999 + # Please enter the commit message for your changes. Lines starting + # with '#' will be ignored, and an empty message aborts the commit. + # On branch my-new-branch + # Changes to be committed: + # (use "git reset HEAD ..." to unstage) + # + # modified: hello.c + # + +You can also reference issues and pull requests if you are filing a pull +request for an existing issue as shown above. Save and exit the editor +when you are done. + +Pushing and creating the pull request +------------------------------------- + +Home stretch. You've made your change and made the commit. Now it's time +to push it. + +:: + + git push --set-upstream origin my-new-branch + +This should ask you for your github credentials and upload your changes +to your repository. + +The last step is to either go to your repository or the upstream +repository on GitHub and you should see a button for making a new pull +request for your recently committed branch. 
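+
+If you prefer staying in the terminal, GitHub's optional ``gh`` command
+line tool can open the pull request as well. This is a minimal sketch,
+not part of the required workflow; it assumes ``gh`` is installed and
+authenticated, and the title and body shown are placeholders:
+
+::
+
+   # Optional alternative to the web UI. gh will prompt for the target
+   # repository if it cannot infer it from your configured remotes.
+   gh pr create --base master --title "Add hello command" --body "See commit message."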
+ +Correcting issues with your pull request +---------------------------------------- + +Sometimes things don't always go as planned and you may need to update +your pull request with a correction to either your commit message, or +your changes. This can be accomplished by re-pushing your branch. If you +need to make code changes or ``git add`` a file, you can do those now, +along with the following: + +:: + + git commit --amend + git push --force + +This will return you to the commit editor screen, and push your changes +over top of the old ones. Do note that this will restart the process of +any build/test servers currently running and excessively pushing can +cause delays in processing of all pull requests. + +Maintaining your repository +--------------------------- + +When you wish to make changes in the future you will want to have an +up-to-date copy of the upstream repository to make your changes on. Here +is how you keep updated: + +:: + + git checkout master + git pull upstream master + git push origin master + +This will make sure you are on the master branch of the repository, grab +the changes from upstream, then push them back to your repository. + +Final words +----------- + +This is a very basic introduction to Git and GitHub, but should get you +on your way to contributing to many open source projects. Not all +projects have style requirements and some may have different processes +to getting changes committed so please refer to their documentation to +see if you need to do anything different. One topic we have not touched +on is the ``git rebase`` command which is a little more advanced for +this wiki article. + +Additional resources: `Github Help `__, +`Atlassian Git Tutorials `__ diff --git a/_sources/Developer Resources/OpenZFS Exceptions.rst.txt b/_sources/Developer Resources/OpenZFS Exceptions.rst.txt new file mode 100644 index 000000000..32c97352d --- /dev/null +++ b/_sources/Developer Resources/OpenZFS Exceptions.rst.txt @@ -0,0 +1,652 @@ +OpenZFS Exceptions +================== + +Commit exceptions used to explicitly reference a given Linux commit. +These exceptions are useful for a variety of reasons. + +**This page is used to generate** +`OpenZFS Tracking `__ +**page.** + +Format: +^^^^^^^ + +- ``|-|`` - The OpenZFS commit isn't applicable + to Linux, or the OpenZFS -> ZFS on Linux commit matching is unable to + associate the related commits due to lack of information (denoted by + a -). +- ``||`` - The fix was merged to Linux + prior to their being an OpenZFS issue. +- ``|!|`` - The commit is applicable but not + applied for the reason described in the comment. + ++------------------+-------------------+-----------------------------+ +| OpenZFS issue id | status/ZFS commit | comment | ++==================+===================+=============================+ +| 11453 | ! 
| check_disk() on illumos | +| | | isn't available on ZoL / | +| | | OpenZFS 2.0 | ++------------------+-------------------+-----------------------------+ +| 11276 | da68988 | | ++------------------+-------------------+-----------------------------+ +| 11052 | 2efea7c | | ++------------------+-------------------+-----------------------------+ +| 11051 | 3b61ca3 | | ++------------------+-------------------+-----------------------------+ +| 10853 | 8dc2197 | | ++------------------+-------------------+-----------------------------+ +| 10844 | 61c3391 | | ++------------------+-------------------+-----------------------------+ +| 10842 | d10b2f1 | | ++------------------+-------------------+-----------------------------+ +| 10841 | 944a372 | | ++------------------+-------------------+-----------------------------+ +| 10809 | ee36c70 | | ++------------------+-------------------+-----------------------------+ +| 10808 | 2ef0f8c | | ++------------------+-------------------+-----------------------------+ +| 10701 | 0091d66 | | ++------------------+-------------------+-----------------------------+ +| 10601 | cc99f27 | | ++------------------+-------------------+-----------------------------+ +| 10573 | 48d3eb4 | | ++------------------+-------------------+-----------------------------+ +| 10572 | edc1e71 | | ++------------------+-------------------+-----------------------------+ +| 10566 | ab7615d | | ++------------------+-------------------+-----------------------------+ +| 10554 | bec1067 | | ++------------------+-------------------+-----------------------------+ +| 10500 | 03916905 | | ++------------------+-------------------+-----------------------------+ +| 10449 | 379ca9c | | ++------------------+-------------------+-----------------------------+ +| 10406 | da2feb4 | | ++------------------+-------------------+-----------------------------+ +| 10154 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 10067 | - | The only ZFS change was to | +| | | zfs remap, which was | +| | | removed on Linux. | ++------------------+-------------------+-----------------------------+ +| 9884 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9851 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9691 | d9b4bf0 | | ++------------------+-------------------+-----------------------------+ +| 9683 | - | Not applicable to Linux due | +| | | to devids not being used | ++------------------+-------------------+-----------------------------+ +| 9680 | - | Applied and rolled back in | +| | | OpenZFS, additional changes | +| | | needed. 
| ++------------------+-------------------+-----------------------------+ +| 9672 | 29445fe3 | | ++------------------+-------------------+-----------------------------+ +| 9647 | a448a25 | | ++------------------+-------------------+-----------------------------+ +| 9626 | 59e6e7ca | | ++------------------+-------------------+-----------------------------+ +| 9635 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9623 | 22448f08 | | ++------------------+-------------------+-----------------------------+ +| 9621 | 305bc4b3 | | ++------------------+-------------------+-----------------------------+ +| 9539 | 5228cf01 | | ++------------------+-------------------+-----------------------------+ +| 9512 | b4555c77 | | ++------------------+-------------------+-----------------------------+ +| 9487 | 48fbb9dd | | ++------------------+-------------------+-----------------------------+ +| 9466 | 272b5d73 | | ++------------------+-------------------+-----------------------------+ +| 9440 | f664f1e | Illumos ticket 9440 never | +| | | landed in openzfs/openzfs, | +| | | but in ZoL / OpenZFS 2.0 | ++------------------+-------------------+-----------------------------+ +| 9433 | 0873bb63 | | ++------------------+-------------------+-----------------------------+ +| 9421 | 64c1dcef | | ++------------------+-------------------+-----------------------------+ +| 9237 | - | Introduced by 8567 which | +| | | was never applied to Linux | ++------------------+-------------------+-----------------------------+ +| 9194 | - | Not applicable the '-o | +| | | ashift=value' option is | +| | | provided on Linux | ++------------------+-------------------+-----------------------------+ +| 9077 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 9027 | 4a5d7f82 | | ++------------------+-------------------+-----------------------------+ +| 9018 | 3ec34e55 | | ++------------------+-------------------+-----------------------------+ +| 8984 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 8969 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 8942 | 650258d7 | | ++------------------+-------------------+-----------------------------+ +| 8941 | 390d679a | | ++------------------+-------------------+-----------------------------+ +| 8862 | 3b9edd7 | | ++------------------+-------------------+-----------------------------+ +| 8858 | - | Not applicable to Linux | ++------------------+-------------------+-----------------------------+ +| 8856 | - | Not applicable to Linux due | +| | | to Encryption (b525630) | ++------------------+-------------------+-----------------------------+ +| 8809 | ! | Adding libfakekernel needs | +| | | to be done by refactoring | +| | | existing code. 
| ++------------------+-------------------+-----------------------------+ +| 8727 | b525630 | | ++------------------+-------------------+-----------------------------+ +| 8713 | 871e0732 | | ++------------------+-------------------+-----------------------------+ +| 8661 | 1ce23dca | | ++------------------+-------------------+-----------------------------+ +| 8648 | f763c3d1 | | ++------------------+-------------------+-----------------------------+ +| 8602 | a032ac4 | | ++------------------+-------------------+-----------------------------+ +| 8601 | d99a015 | Equivalent fix included in | +| | | initial commit | ++------------------+-------------------+-----------------------------+ +| 8590 | 935e2c2 | | ++------------------+-------------------+-----------------------------+ +| 8569 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8567 | - | An alternate fix was | +| | | applied for Linux. | ++------------------+-------------------+-----------------------------+ +| 8552 | 935e2c2 | | ++------------------+-------------------+-----------------------------+ +| 8521 | ee6370a7 | | ++------------------+-------------------+-----------------------------+ +| 8502 | ! | Apply when porting OpenZFS | +| | | 7955 | ++------------------+-------------------+-----------------------------+ +| 9485 | 1258bd7 | | ++------------------+-------------------+-----------------------------+ +| 8477 | 92e43c1 | | ++------------------+-------------------+-----------------------------+ +| 8454 | - | An alternate fix was | +| | | applied for Linux. | ++------------------+-------------------+-----------------------------+ +| 8423 | 50c957f | | ++------------------+-------------------+-----------------------------+ +| 8408 | 5f1346c | | ++------------------+-------------------+-----------------------------+ +| 8379 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8376 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8311 | ! | Need to assess | +| | | applicability to Linux. | ++------------------+-------------------+-----------------------------+ +| 8304 | - | This change isn't relevant | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 8300 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 8265 | - | The large_dnode feature has | +| | | been implemented for Linux. | ++------------------+-------------------+-----------------------------+ +| 8168 | 78d95ea | | ++------------------+-------------------+-----------------------------+ +| 8138 | 44f09cd | The spelling fix to the zfs | +| | | man page came in with the | +| | | mdoc conversion. | ++------------------+-------------------+-----------------------------+ +| 8108 | - | An equivalent Linux | +| | | specific fix was made. | ++------------------+-------------------+-----------------------------+ +| 8068 | a1d477c24c | merged with zfs device | +| | | evacuation/removal | ++------------------+-------------------+-----------------------------+ +| 8064 | - | This change isn't relevant | +| | | for Linux. 
| ++------------------+-------------------+-----------------------------+ +| 8022 | e55ebf6 | | ++------------------+-------------------+-----------------------------+ +| 8021 | 7657def | | ++------------------+-------------------+-----------------------------+ +| 8013 | - | The change is illumos | +| | | specific and not applicable | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 7982 | - | The change is illumos | +| | | specific and not applicable | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 7970 | c30e58c | | ++------------------+-------------------+-----------------------------+ +| 7956 | cda0317 | | ++------------------+-------------------+-----------------------------+ +| 7955 | ! | Need to assess | +| | | applicability to Linux. If | +| | | porting, apply 8502. | ++------------------+-------------------+-----------------------------+ +| 7869 | df7eecc | | ++------------------+-------------------+-----------------------------+ +| 7816 | - | The change is illumos | +| | | specific and not applicable | +| | | for Linux. | ++------------------+-------------------+-----------------------------+ +| 7803 | - | This functionality is | +| | | provided by | +| | | ``upda | +| | | te_vdev_config_dev_strs()`` | +| | | on Linux. | ++------------------+-------------------+-----------------------------+ +| 7801 | 0eef1bd | Commit f25efb3 in | +| | | openzfs/master has a small | +| | | change for linting which is | +| | | being ported. | ++------------------+-------------------+-----------------------------+ +| 7779 | - | The change isn't relevant, | +| | | ``zfs_ctldir.c`` was | +| | | rewritten for Linux. | ++------------------+-------------------+-----------------------------+ +| 7740 | 32d41fb | | ++------------------+-------------------+-----------------------------+ +| 7739 | 582cc014 | | ++------------------+-------------------+-----------------------------+ +| 7730 | e24e62a | | ++------------------+-------------------+-----------------------------+ +| 7710 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 7602 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 7591 | 541a090 | | ++------------------+-------------------+-----------------------------+ +| 7586 | c443487 | | ++------------------+-------------------+-----------------------------+ +| 7570 | - | Due to differences in the | +| | | block layer all discards | +| | | are handled asynchronously | +| | | under Linux. This | +| | | functionality could be | +| | | ported but it's unclear to | +| | | what purpose. | ++------------------+-------------------+-----------------------------+ +| 7542 | - | The Linux libshare code | +| | | differs significantly from | +| | | the upstream OpenZFS code. | +| | | Since this change doesn't | +| | | address a Linux specific | +| | | issue it doesn't need to be | +| | | ported. The eventual plan | +| | | is to retire all of the | +| | | existing libshare code and | +| | | use the ZED to more | +| | | flexibly control filesystem | +| | | sharing. | ++------------------+-------------------+-----------------------------+ +| 7512 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 7497 | - | DTrace is isn't readily | +| | | available under Linux. 
| ++------------------+-------------------+-----------------------------+ +| 7446 | ! | Need to assess | +| | | applicability to Linux. | ++------------------+-------------------+-----------------------------+ +| 7430 | 68cbd56 | | ++------------------+-------------------+-----------------------------+ +| 7402 | 690fe64 | | ++------------------+-------------------+-----------------------------+ +| 7345 | 058ac9b | | ++------------------+-------------------+-----------------------------+ +| 7278 | - | Dynamic ARC tuning is | +| | | handled slightly | +| | | differently under Linux and | +| | | this case is covered by | +| | | arc_tuning_update() | ++------------------+-------------------+-----------------------------+ +| 7238 | - | zvol_swap test already | +| | | disabled in ZoL | ++------------------+-------------------+-----------------------------+ +| 7194 | d7958b4 | | ++------------------+-------------------+-----------------------------+ +| 7164 | b1b85c87 | | ++------------------+-------------------+-----------------------------+ +| 7041 | 33c0819 | | ++------------------+-------------------+-----------------------------+ +| 7016 | d3c2ae1 | | ++------------------+-------------------+-----------------------------+ +| 6914 | - | Under Linux the | +| | | arc_meta_limit can be tuned | +| | | with the | +| | | zfs_arc_meta_limit_percent | +| | | module option. | ++------------------+-------------------+-----------------------------+ +| 6875 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6843 | f5f087e | | ++------------------+-------------------+-----------------------------+ +| 6841 | 4254acb | | ++------------------+-------------------+-----------------------------+ +| 6781 | 15313c5 | | ++------------------+-------------------+-----------------------------+ +| 6765 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6764 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6763 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6762 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 6648 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6578 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6577 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6575 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6568 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6528 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6494 | - | The ``vdev_disk.c`` and | +| | | ``vdev_file.c`` files have | +| | | been reworked extensively | +| | | for Linux. The proposed | +| | | changes are not needed. 
| ++------------------+-------------------+-----------------------------+ +| 6468 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6465 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6434 | 472e7c6 | | ++------------------+-------------------+-----------------------------+ +| 6421 | ca0bf58 | | ++------------------+-------------------+-----------------------------+ +| 6418 | 131cc95 | | ++------------------+-------------------+-----------------------------+ +| 6391 | ee06391 | | ++------------------+-------------------+-----------------------------+ +| 6390 | 85802aa | | ++------------------+-------------------+-----------------------------+ +| 6388 | 0de7c55 | | ++------------------+-------------------+-----------------------------+ +| 6386 | 485c581 | | ++------------------+-------------------+-----------------------------+ +| 6385 | f3ad9cd | | ++------------------+-------------------+-----------------------------+ +| 6369 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6368 | 2024041 | | ++------------------+-------------------+-----------------------------+ +| 6346 | 058ac9b | | ++------------------+-------------------+-----------------------------+ +| 6334 | 1a04bab | | ++------------------+-------------------+-----------------------------+ +| 6290 | 017da6 | | ++------------------+-------------------+-----------------------------+ +| 6250 | - | Linux handles crash dumps | +| | | in a fundamentally | +| | | different way than Illumos. | +| | | The proposed changes are | +| | | not needed. | ++------------------+-------------------+-----------------------------+ +| 6249 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6248 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 6220 | - | The b_thawed debug code was | +| | | unused under Linux and | +| | | removed. | ++------------------+-------------------+-----------------------------+ +| 6209 | - | The Linux user space mutex | +| | | implementation is based on | +| | | phtread primitives. | ++------------------+-------------------+-----------------------------+ +| 6095 | f866a4ea | | ++------------------+-------------------+-----------------------------+ +| 6091 | c11f100 | | ++------------------+-------------------+-----------------------------+ +| 6037 | a8bd6dc | | ++------------------+-------------------+-----------------------------+ +| 5984 | 480f626 | | ++------------------+-------------------+-----------------------------+ +| 5966 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 5961 | 22872ff | | ++------------------+-------------------+-----------------------------+ +| 5882 | 83e9986 | | ++------------------+-------------------+-----------------------------+ +| 5815 | - | This patch could be adapted | +| | | if needed use equivalent | +| | | Linux functionality. | ++------------------+-------------------+-----------------------------+ +| 5770 | c3275b5 | | ++------------------+-------------------+-----------------------------+ +| 5769 | dd26aa5 | | ++------------------+-------------------+-----------------------------+ +| 5768 | - | The change isn't relevant, | +| | | ``zfs_ctldir.c`` was | +| | | rewritten for Linux. 
| ++------------------+-------------------+-----------------------------+ +| 5766 | 4dd1893 | | ++------------------+-------------------+-----------------------------+ +| 5693 | 0f7d2a4 | | ++------------------+-------------------+-----------------------------+ +| 5692 | ! | This functionality should | +| | | be ported in such a way | +| | | that it can be integrated | +| | | with ``filefrag(8)``. | ++------------------+-------------------+-----------------------------+ +| 5684 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 5503 | 0f676dc | Proposed patch in 5503 | +| | | never upstreamed, | +| | | alternative fix deployed | +| | | with OpenZFS 7072 | ++------------------+-------------------+-----------------------------+ +| 5502 | f0ed6c7 | Proposed patch in 5502 | +| | | never upstreamed, | +| | | alternative fix deployed | +| | | in ZoL with commit f0ed6c7 | ++------------------+-------------------+-----------------------------+ +| 5410 | 0bf8501 | | ++------------------+-------------------+-----------------------------+ +| 5409 | b23d543 | | ++------------------+-------------------+-----------------------------+ +| 5379 | - | This particular issue never | +| | | impacted Linux due to the | +| | | need for a modified | +| | | zfs_putpage() | +| | | implementation. | ++------------------+-------------------+-----------------------------+ +| 5316 | - | The illumos idmap facility | +| | | isn't available under | +| | | Linux. This patch could | +| | | still be applied to | +| | | minimize code delta or all | +| | | HAVE_IDMAP chunks could be | +| | | removed on Linux for better | +| | | readability. | ++------------------+-------------------+-----------------------------+ +| 5313 | ec8501e | | ++------------------+-------------------+-----------------------------+ +| 5312 | ! | This change should be made | +| | | but the ideal time to do it | +| | | is when the spl repository | +| | | is folded in to the zfs | +| | | repository (planned for | +| | | 0.8). At this time we'll | +| | | want to cleanup many of the | +| | | includes. | ++------------------+-------------------+-----------------------------+ +| 5219 | ef56b07 | | ++------------------+-------------------+-----------------------------+ +| 5179 | 3f4058c | | ++------------------+-------------------+-----------------------------+ +| 5154 | 9a49d3f | Illumos ticket 5154 never | +| | | landed in openzfs/openzfs, | +| | | alternative fix deployed | +| | | in ZoL with commit 9a49d3f | ++------------------+-------------------+-----------------------------+ +| 5149 | - | Equivalent Linux | +| | | functionality is provided | +| | | by the | +| | | ``zvol_max_discard_blocks`` | +| | | module option. | ++------------------+-------------------+-----------------------------+ +| 5148 | - | Discards are handled | +| | | differently under Linux, | +| | | there is no DKIOCFREE | +| | | ioctl. 
| ++------------------+-------------------+-----------------------------+ +| 5136 | e8b96c6 | | ++------------------+-------------------+-----------------------------+ +| 4752 | aa9af22 | | ++------------------+-------------------+-----------------------------+ +| 4745 | 411bf20 | | ++------------------+-------------------+-----------------------------+ +| 4698 | 4fcc437 | | ++------------------+-------------------+-----------------------------+ +| 4620 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 4573 | 10b7549 | | ++------------------+-------------------+-----------------------------+ +| 4571 | 6e1b9d0 | | ++------------------+-------------------+-----------------------------+ +| 4570 | b1d13a6 | | ++------------------+-------------------+-----------------------------+ +| 4391 | 78e2739 | | ++------------------+-------------------+-----------------------------+ +| 4465 | cda0317 | | ++------------------+-------------------+-----------------------------+ +| 4263 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 4242 | - | Neither vnodes or their | +| | | associated events exist | +| | | under Linux. | ++------------------+-------------------+-----------------------------+ +| 4206 | 2820bc4 | | ++------------------+-------------------+-----------------------------+ +| 4188 | 2e7b765 | | ++------------------+-------------------+-----------------------------+ +| 4181 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 4161 | - | The Linux user space | +| | | reader/writer | +| | | implementation is based on | +| | | phtread primitives. | ++------------------+-------------------+-----------------------------+ +| 4128 | ! | The | +| | | ldi_ev_register_callbacks() | +| | | interface doesn't exist | +| | | under Linux. It may be | +| | | possible to receive similar | +| | | notifications via the scsi | +| | | error handlers or possibly | +| | | a different interface. | ++------------------+-------------------+-----------------------------+ +| 4072 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 3998 | 417104bd | Illumos ticket 3998 never | +| | | landed in openzfs/openzfs, | +| | | alternative fix deployed | +| | | in ZoL. | ++------------------+-------------------+-----------------------------+ +| 3947 | 7f9d994 | | ++------------------+-------------------+-----------------------------+ +| 3928 | - | Neither vnodes or their | +| | | associated events exist | +| | | under Linux. | ++------------------+-------------------+-----------------------------+ +| 3871 | d1d7e268 | | ++------------------+-------------------+-----------------------------+ +| 3747 | 090ff09 | | ++------------------+-------------------+-----------------------------+ +| 3705 | - | The Linux implementation | +| | | uses the lz4 workspace kmem | +| | | cache to resolve the stack | +| | | issue. | ++------------------+-------------------+-----------------------------+ +| 3606 | c5b247f | | ++------------------+-------------------+-----------------------------+ +| 3580 | - | Linux provides generic | +| | | ioctl handlers get/set | +| | | block device information. 
| ++------------------+-------------------+-----------------------------+ +| 3543 | 8dca0a9 | | ++------------------+-------------------+-----------------------------+ +| 3512 | 67629d0 | | ++------------------+-------------------+-----------------------------+ +| 3507 | 43a696e | | ++------------------+-------------------+-----------------------------+ +| 3444 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 3371 | 44f09cd | | ++------------------+-------------------+-----------------------------+ +| 3311 | 6bb24f4 | | ++------------------+-------------------+-----------------------------+ +| 3301 | - | The Linux implementation of | +| | | ``vdev_disk.c`` does not | +| | | include this comment. | ++------------------+-------------------+-----------------------------+ +| 3258 | 9d81146 | | ++------------------+-------------------+-----------------------------+ +| 3254 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 3246 | cc92e9d | | ++------------------+-------------------+-----------------------------+ +| 2933 | - | None of the illumos build | +| | | system is used under Linux. | ++------------------+-------------------+-----------------------------+ +| 2897 | fb82700 | | ++------------------+-------------------+-----------------------------+ +| 2665 | 32a9872 | | ++------------------+-------------------+-----------------------------+ +| 2130 | 460a021 | | ++------------------+-------------------+-----------------------------+ +| 1974 | - | This change was entirely | +| | | replaced in the ARC | +| | | restructuring. | ++------------------+-------------------+-----------------------------+ +| 1898 | - | The zfs_putpage() function | +| | | was rewritten to properly | +| | | integrate with the Linux | +| | | VM. | ++------------------+-------------------+-----------------------------+ +| 1700 | - | Not applicable to Linux, | +| | | the discard implementation | +| | | is entirely different. | ++------------------+-------------------+-----------------------------+ +| 1618 | ca67b33 | | ++------------------+-------------------+-----------------------------+ +| 1337 | 2402458 | | ++------------------+-------------------+-----------------------------+ +| 1126 | e43b290 | | ++------------------+-------------------+-----------------------------+ +| 763 | 3cee226 | | ++------------------+-------------------+-----------------------------+ +| 742 | ! | WIP to support NFSv4 ACLs | ++------------------+-------------------+-----------------------------+ +| 701 | 460a021 | | ++------------------+-------------------+-----------------------------+ +| 348 | - | The Linux implementation of | +| | | ``vdev_disk.c`` must have | +| | | this differently. | ++------------------+-------------------+-----------------------------+ +| 243 | - | Manual updates have been | +| | | made separately for Linux. | ++------------------+-------------------+-----------------------------+ +| 184 | - | The zfs_putpage() function | +| | | was rewritten to properly | +| | | integrate with the Linux | +| | | VM. 
| ++------------------+-------------------+-----------------------------+ diff --git a/_sources/Developer Resources/OpenZFS Patches.rst.txt b/_sources/Developer Resources/OpenZFS Patches.rst.txt new file mode 100644 index 000000000..fa622bd7c --- /dev/null +++ b/_sources/Developer Resources/OpenZFS Patches.rst.txt @@ -0,0 +1,318 @@ +OpenZFS Patches +=============== + +The ZFS on Linux project is an adaptation of the upstream `OpenZFS +repository `__ designed to work in +a Linux environment. This upstream repository acts as a location where +new features, bug fixes, and performance improvements from all the +OpenZFS platforms can be integrated. Each platform is responsible for +tracking the OpenZFS repository and merging the relevant improvements +back in to their release. + +For the ZFS on Linux project this tracking is managed through an +`OpenZFS tracking `__ +page. The page is updated regularly and shows a list of OpenZFS commits +and their status in regard to the ZFS on Linux master branch. + +This page describes the process of applying outstanding OpenZFS commits +to ZFS on Linux and submitting those changes for inclusion. As a +developer this is a great way to familiarize yourself with ZFS on Linux +and to begin quickly making a valuable contribution to the project. The +following guide assumes you have a `github +account `__, +are familiar with git, and are used to developing in a Linux +environment. + +Porting OpenZFS changes to ZFS on Linux +--------------------------------------- + +Setup the Environment +~~~~~~~~~~~~~~~~~~~~~ + +**Clone the source.** Start by making a local clone of the +`spl `__ and +`zfs `__ repositories. + +:: + + $ git clone -o zfsonlinux https://github.com/zfsonlinux/spl.git + $ git clone -o zfsonlinux https://github.com/zfsonlinux/zfs.git + +**Add remote repositories.** Using the GitHub web interface +`fork `__ the +`zfs `__ repository in to your +personal GitHub account. Add your new zfs fork and the +`openzfs `__ repository as remotes +and then fetch both repositories. The OpenZFS repository is large and +the initial fetch may take some time over a slow connection. + +:: + + $ cd zfs + $ git remote add git@github.com:/zfs.git + $ git remote add openzfs https://github.com/openzfs/openzfs.git + $ git fetch --all + +**Build the source.** Compile the spl and zfs master branches. These +branches are always kept stable and this is a useful verification that +you have a full build environment installed and all the required +dependencies are available. This may also speed up the compile time +latter for small patches where incremental builds are an option. + +:: + + $ cd ../spl + $ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc) + $ + $ cd ../zfs + $ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc) + +Pick a patch +~~~~~~~~~~~~ + +Consult the `OpenZFS +tracking `__ page and +select a patch which has not yet been applied. For your first patch you +will want to select a small patch to familiarize yourself with the +process. + +Porting a Patch +~~~~~~~~~~~~~~~ + +There are 2 methods: + +- `cherry-pick (easier) <#cherry-pick>`__ +- `manual merge <#manual-merge>`__ + +Please read about `manual merge <#manual-merge>`__ first to learn the +whole process. + +Cherry-pick +^^^^^^^^^^^ + +You can start to +`cherry-pick `__ by your own, +but we have made a special +`script `__, +which tries to +`cherry-pick `__ the patch +automatically and generates the description. 
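+
+As a quick orientation, a typical run looks like the sketch below. The
+clone path, commit hash, and issue number are hypothetical; the options
+and the generated ``autoport-ozXXXX`` branch are described in the
+numbered steps that follow.
+
+::
+
+   # Fetch, cherry-pick, build, and cstyle-check OpenZFS commit 1a2b3c4
+   # into the clone at ~/src/zfs (both values are examples only).
+   ./openzfs-merge.sh -d ~/src/zfs -c 1a2b3c4
+   git -C ~/src/zfs checkout autoport-oz1234   # review the result before pushing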
+ +0) Prepare environment: + +Mandatory git settings (add to ``~/.gitconfig``): + +:: + + [merge] + renameLimit = 999999 + [user] + email = mail@yourmail.com + name = Your Name + +Download the script: + +:: + + wget https://raw.githubusercontent.com/zfsonlinux/zfs-buildbot/master/scripts/openzfs-merge.sh + +1) Run: + +:: + + ./openzfs-merge.sh -d path_to_zfs_folder -c openzfs_commit_hash + +This command will fetch all repositories, create a new branch +``autoport-ozXXXX`` (XXXX - OpenZFS issue number), try to cherry-pick, +compile and check cstyle on success. + +If it succeeds without any merge conflicts - go to ``autoport-ozXXXX`` +branch, it will have ready to pull commit. Congratulations, you can go +to step 7! + +Otherwise you should go to step 2. + +2) Resolve all merge conflicts manually. Easy method - install + `Meld `__ or any other diff tool and run + ``git mergetool``. + +3) Check all compile and cstyle errors (See `Testing a + patch <#testing-a-patch>`__). + +4) Commit your changes with any description. + +5) Update commit description (last commit will be changed): + +:: + + ./openzfs-merge.sh -d path_to_zfs_folder -g openzfs_commit_hash + +6) Add any porting notes (if you have modified something): + ``git commit --amend`` + +7) Push your commit to github: + ``git push autoport-ozXXXX`` + +8) Create a pull request to ZoL master branch. + +9) Go to `Testing a patch <#testing-a-patch>`__ section. + +Manual merge +^^^^^^^^^^^^ + +**Create a new branch.** It is important to create a new branch for +every commit you port to ZFS on Linux. This will allow you to easily +submit your work as a GitHub pull request and it makes it possible to +work on multiple OpenZFS changes concurrently. All development branches +need to be based off of the ZFS master branch and it's helpful to name +the branches after the issue number you're working on. + +:: + + $ git checkout -b openzfs- master + +**Generate a patch.** One of the first things you'll notice about the +ZFS on Linux repository is that it is laid out differently than the +OpenZFS repository. Organizationally it is much flatter, this is +possible because it only contains the code for OpenZFS not an entire OS. +That means that in order to apply a patch from OpenZFS the path names in +the patch must be changed. A script called zfs2zol-patch.sed has been +provided to perform this translation. Use the ``git format-patch`` +command and this script to generate a patch. + +:: + + $ git format-patch --stdout ^.. | \ + ./scripts/zfs2zol-patch.sed >openzfs-.diff + +**Apply the patch.** In many cases the generated patch will apply +cleanly to the repository. However, it's important to keep in mind the +zfs2zol-patch.sed script only translates the paths. There are often +additional reasons why a patch might not apply. In some cases hunks of +the patch may not be applicable to Linux and should be dropped. In other +cases a patch may depend on other changes which must be applied first. +The changes may also conflict with Linux specific modifications. In all +of these cases the patch will need to be manually modified to apply +cleanly while preserving the its original intent. + +:: + + $ git am ./openzfs-.diff + +**Update the commit message.** By using ``git format-patch`` to generate +the patch and then ``git am`` to apply it the original comment and +authorship will be preserved. However, due to the formatting of the +OpenZFS commit you will likely find that the entire commit comment has +been squashed in to the subject line. 
Use ``git commit --amend`` to +cleanup the comment and be careful to follow `these standard +guidelines `__. + +The summary line of an OpenZFS commit is often very long and you should +truncate it to 50 characters. This is useful because it preserves the +correct formatting of ``git log --pretty=oneline`` command. Make sure to +leave a blank line between the summary and body of the commit. Then +include the full OpenZFS commit message wrapping any lines which exceed +72 characters. Finally, add a ``Ported-by`` tag with your contact +information and both a ``OpenZFS-issue`` and ``OpenZFS-commit`` tag with +appropriate links. You'll want to verify your commit contains all of the +following information: + +- The subject line from the original OpenZFS patch in the form: + "OpenZFS - short description". +- The original patch authorship should be preserved. +- The OpenZFS commit message. +- The following tags: + + - **Authored by:** Original patch author + - **Reviewed by:** All OpenZFS reviewers from the original patch. + - **Approved by:** All OpenZFS reviewers from the original patch. + - **Ported-by:** Your name and email address. + - **OpenZFS-issue:** https ://www.illumos.org/issues/issue + - **OpenZFS-commit:** https + ://github.com/openzfs/openzfs/commit/hash + +- **Porting Notes:** An optional section describing any changes + required when porting. + +For example, OpenZFS issue 6873 was `applied to +Linux `__ from this +upstream `OpenZFS +commit `__. + +:: + + OpenZFS 6873 - zfs_destroy_snaps_nvl leaks errlist + + Authored by: Chris Williamson + Reviewed by: Matthew Ahrens + Reviewed by: Paul Dagnelie + Ported-by: Denys Rtveliashvili + + lzc_destroy_snaps() returns an nvlist in errlist. + zfs_destroy_snaps_nvl() should nvlist_free() it before returning. + + OpenZFS-issue: https://www.illumos.org/issues/6873 + OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ee06391 + +Testing a Patch +~~~~~~~~~~~~~~~ + +**Build the source.** Verify the patched source compiles without errors +and all warnings are resolved. + +:: + + $ make -s -j$(nproc) + +**Run the style checker.** Verify the patched source passes the style +checker, the command should return without printing any output. + +:: + + $ make cstyle + +**Open a Pull Request.** When your patch builds cleanly and passes the +style checks `open a new pull +request `__. +The pull request will be queued for `automated +testing `__. As part of the +testing the change is built for a wide range of Linux distributions and +a battery of functional and stress tests are run to detect regressions. + +:: + + $ git push openzfs- + +**Fix any issues.** Testing takes approximately 2 hours to fully +complete and the results are posted in the GitHub `pull +request `__. All the tests +are expected to pass and you should investigate and resolve any test +failures. The `test +scripts `__ +are all available and designed to run locally in order reproduce an +issue. Once you've resolved the issue force update the pull request to +trigger a new round of testing. Iterate until all the tests are passing. + +:: + + # Fix issue, amend commit, force update branch. + $ git commit --amend + $ git push --force openzfs- + +Merging the Patch +~~~~~~~~~~~~~~~~~ + +**Review.** Lastly one of the ZFS on Linux maintainers will make a final +review of the patch and may request additional changes. 
Once the +maintainer is happy with the final version of the patch they will add +their signed-off-by, merge it to the master branch, mark it complete on +the tracking page, and thank you for your contribution to the project! + +Porting ZFS on Linux changes to OpenZFS +--------------------------------------- + +Often an issue will be first fixed in ZFS on Linux or a new feature +developed. Changes which are not Linux specific should be submitted +upstream to the OpenZFS GitHub repository for review. The process for +this is described in the `OpenZFS +README `__. diff --git a/_sources/Developer Resources/index.rst.txt b/_sources/Developer Resources/index.rst.txt new file mode 100644 index 000000000..3b5d62b74 --- /dev/null +++ b/_sources/Developer Resources/index.rst.txt @@ -0,0 +1,18 @@ +Developer Resources +=================== + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + Custom Packages + Building ZFS + Buildbot Status + Buildbot Issue Tracking + Buildbot Options + OpenZFS Tracking + OpenZFS Patches + OpenZFS Exceptions + OpenZFS Documentation + Git and GitHub for beginners diff --git a/_sources/Getting Started/Alpine Linux/Root on ZFS.rst.txt b/_sources/Getting Started/Alpine Linux/Root on ZFS.rst.txt new file mode 100644 index 000000000..b7972b01e --- /dev/null +++ b/_sources/Getting Started/Alpine Linux/Root on ZFS.rst.txt @@ -0,0 +1,372 @@ +.. highlight:: sh + +Alpine Linux Root on ZFS +======================== + +.. ifconfig:: zfs_root_test + + :: + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +**ZFSBootMenu** + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +**UEFI support only** + +Only UEFI is supported by this guide. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + curl --head --fail https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-extended-3.19.0-x86_64.iso + curl --head --fail https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-extended-3.19.0-x86_64.iso.asc + +#. Login as root user. There is no password. +#. Configure Internet + + .. 
code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + + Configurations set here will be copied verbatim to the installed system. + +#. Set root password or ``/root/.ssh/authorized_keys``. + + Choose a strong root password, as it will be copied to the + installed system. However, ``authorized_keys`` is not copied. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + It can be removed after reboot with ``setup-devd mdev && apk del eudev``. + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep alpine | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + # for maintenance guide test + DISK="$(losetup -a| grep maintenance | cut -f1 -d: | xargs -t -I '{}' printf '{} ') ${DISK}" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install bootloader programs and partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. 
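
   If a target disk does not support discard (for example, a spinning
   disk), one possible approach is to wipe the old partition-table and
   filesystem signatures with ``wipefs`` from ``util-linux``, which was
   installed in the previous step. A minimal sketch, assuming the
   ``DISK`` variable declared earlier::

      # Remove old partition tables and filesystem signatures from
      # every disk listed in DISK before repartitioning.
      for i in ${DISK}; do
         wipefs --all "${i}"
      done
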
+ + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 1MiB 4GiB \ + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + set 1 esp on \ + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + + +#. Setup temporary encrypted swap for this installation only. This is + useful if the available memory is small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3 + mkswap /dev/mapper/"${i##*/}"-part3 + swapon /dev/mapper/"${i##*/}"-part3 + done + + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create root pool + + - Unencrypted:: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=none \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + +#. Create root system container: + + :: + + zfs create -o canmount=noauto -o mountpoint=legacy rpool/root + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o mountpoint=legacy rpool/home + mount -o X-mount.mkdir -t zfs rpool/root "${MNT}" + mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home + +#. Format and mount ESP. Only one of them is used as /boot, you need to set up mirroring afterwards + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + done + + for i in ${DISK}; do + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot + break + done + + +System Configuration +--------------------------- + +#. Install system to disk + + .. code-block:: sh + + BOOTLOADER=none setup-disk -k lts -v "${MNT}" + + The error message about ZFS kernel module can be ignored. + + .. ifconfig:: zfs_root_test + + # lts kernel will pull in tons of firmware + BOOTLOADER=none setup-disk -k virt -v "${MNT}" + +#. Install rEFInd boot loader:: + + # from http://www.rodsbooks.com/refind/getting.html + # use Binary Zip File option + apk add curl + curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip + unzip refind + + mkdir -p "${MNT}"/boot/EFI/BOOT + find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \ + | xargs -0I{} mv {} "${MNT}"/boot/EFI/BOOT/BOOTX64.EFI + rm -rf refind.zip refind-bin-0.14.0.2 + +#. Add boot entry:: + + tee -a "${MNT}"/boot/refind-linux.conf <`__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. 
+ +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +**UEFI support only** + +Only UEFI is supported by this guide. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Because the kernel of latest Live CD might be incompatible with + ZFS, we will use Alpine Linux Extended, which ships with ZFS by + default. + + Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + +#. Login as root user. There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + +#. Set root password or ``/root/.ssh/authorized_keys``. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep archlinux | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. 
ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 1MiB 4GiB \ + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + set 1 esp on \ + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + + +#. Setup temporary encrypted swap for this installation only. This is + useful if the available memory is small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3 + mkswap /dev/mapper/"${i##*/}"-part3 + swapon /dev/mapper/"${i##*/}"-part3 + done + + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create root pool + + - Unencrypted:: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=none \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + +#. Create root system container: + + :: + + zfs create -o canmount=noauto -o mountpoint=legacy rpool/root + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o mountpoint=legacy rpool/home + mount -o X-mount.mkdir -t zfs rpool/root "${MNT}" + mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home + +#. Format and mount ESP. Only one of them is used as /boot, you need to set up mirroring afterwards + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + done + + for i in ${DISK}; do + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot + break + done + +System Configuration +--------------------------- + +#. Download and extract minimal Arch Linux root filesystem:: + + apk add curl + + curl --fail-early --fail -L \ + https://america.archive.pkgbuild.com/iso/2024.01.01/archlinux-bootstrap-x86_64.tar.gz \ + -o rootfs.tar.gz + curl --fail-early --fail -L \ + https://america.archive.pkgbuild.com/iso/2024.01.01/archlinux-bootstrap-x86_64.tar.gz.sig \ + -o rootfs.tar.gz.sig + + apk add gnupg + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify rootfs.tar.gz.sig + + ln -s "${MNT}" "${MNT}"/root.x86_64 + tar x -C "${MNT}" -af rootfs.tar.gz root.x86_64 + +#. 
Enable community repo + + .. code-block:: sh + + sed -i '/edge/d' /etc/apk/repositories + sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories + +#. Generate fstab:: + + apk add arch-install-scripts + genfstab -t PARTUUID "${MNT}" \ + | grep -v swap \ + | sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \ + > "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash + + .. ifconfig:: zfs_root_test + + :: + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Add archzfs repo to pacman config + + :: + + pacman-key --init + pacman-key --refresh-keys + pacman-key --populate + + curl --fail-early --fail -L https://archzfs.com/archzfs.gpg \ + | pacman-key -a - --gpgdir /etc/pacman.d/gnupg + + pacman-key \ + --lsign-key \ + --gpgdir /etc/pacman.d/gnupg \ + DDF7DB817396A49B2A2723F7403BD972F75D9D76 + + tee -a /etc/pacman.d/mirrorlist-archzfs <<- 'EOF' + ## See https://github.com/archzfs/archzfs/wiki + ## France + #,Server = https://archzfs.com/$repo/$arch + + ## Germany + #,Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch + #,Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch + + ## India + #,Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch + + ## United States + #,Server = https://zxcvfdsa.com/archzfs/$repo/$arch + EOF + + tee -a /etc/pacman.conf <<- 'EOF' + + #[archzfs-testing] + #Include = /etc/pacman.d/mirrorlist-archzfs + + #,[archzfs] + #,Include = /etc/pacman.d/mirrorlist-archzfs + EOF + + # this #, prefix is a workaround for ci/cd tests + # remove them + sed -i 's|#,||' /etc/pacman.d/mirrorlist-archzfs + sed -i 's|#,||' /etc/pacman.conf + sed -i 's|^#||' /etc/pacman.d/mirrorlist + +#. Install base packages:: + + pacman -Sy + pacman -S --noconfirm mg mandoc efibootmgr mkinitcpio + + kernel_compatible_with_zfs="$(pacman -Si zfs-linux \ + | grep 'Depends On' \ + | sed "s|.*linux=||" \ + | awk '{ print $1 }')" + pacman -U --noconfirm https://america.archive.pkgbuild.com/packages/l/linux/linux-"${kernel_compatible_with_zfs}"-x86_64.pkg.tar.zst + +#. Install zfs packages:: + + pacman -S --noconfirm zfs-linux zfs-utils + +#. Configure mkinitcpio:: + + sed -i 's|filesystems|zfs filesystems|' /etc/mkinitcpio.conf + mkinitcpio -P + +#. For physical machine, install firmware + + .. code-block:: sh + + pacman -S linux-firmware intel-ucode amd-ucode + +#. Enable internet time synchronisation:: + + systemctl enable systemd-timesyncd + +#. Generate host id:: + + zgenhostid -f -o /etc/hostid + +#. Generate locales:: + + echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen + locale-gen + +#. Set locale, keymap, timezone, hostname + + :: + + rm -f /etc/localtime + systemd-firstboot \ + --force \ + --locale=en_US.UTF-8 \ + --timezone=Etc/UTC \ + --hostname=testhost \ + --keymap=us + +#. Set root passwd + :: + + printf 'root:yourpassword' | chpasswd + +Bootloader +--------------------------- + +#. 
Install rEFInd boot loader:: + + # from http://www.rodsbooks.com/refind/getting.html + # use Binary Zip File option + pacman -S --noconfirm unzip + curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip + + unzip refind.zip + mkdir -p /boot/EFI/BOOT + find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \ + | xargs -0I{} mv {} /boot/EFI/BOOT/BOOTX64.EFI + rm -rf refind.zip refind-bin-0.14.0.2 + +#. Add boot entry:: + + tee -a /boot/refind-linux.conf <`__ on `Libera Chat +`__. + +If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @ne9z +`__. + +Overview +-------- +Due to license incompatibility, +ZFS is not available in Arch Linux official repo. + +ZFS support is provided by third-party `archzfs repo `__. + +Installation +------------ + +See `Archlinux Wiki `__. + +Root on ZFS +----------- +ZFS can be used as root file system for Arch Linux. +An installation guide is available. + +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Contribute +---------- +#. Fork and clone `this repo `__. + +#. Install the tools:: + + sudo pacman -S --needed python-pip make + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your "${PATH}", e.g. by adding this to ~/.bashrc: + [ -d "${HOME}"/.local/bin ] && export PATH="${HOME}"/.local/bin:"${PATH}" + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @ne9z. diff --git a/_sources/Getting Started/Debian/Debian Bookworm Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Bookworm Root on ZFS.rst.txt new file mode 100644 index 000000000..542b3bf2a --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Bookworm Root on ZFS.rst.txt @@ -0,0 +1,1185 @@ +.. highlight:: sh + +Debian Bookworm Root on ZFS +=========================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Bookworm Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. 
Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username + ``user`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo vi /etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian bookworm main contrib non-free-firmware + + :: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo apt install --yes openssh-server + + sudo systemctl restart ssh + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio. Also when using /dev/vda, the partitions used later will be named + differently. Otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. 
+ - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + If the disk was previously used with zfs:: + + wipefs -a $DISK + + For flash-based storage, if the disk was previously used, you may wish to + do a full-disk discard (TRIM/UNMAP), which can improve performance:: + + blkdiscard -f $DISK + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -o compatibility=grub2 \ + -o cachefile=/etc/zfs/zpool.cache \ + -O devices=off \ + -O acltype=posixacl -O xattr=sa \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + *Note:* GRUB does not support all zpool features (see + ``spa_feature_names`` in + `grub-core/fs/zfs/zfs.c `_). + We create a separate zpool for ``/boot`` here, specifying the + ``-o compatibility=grub2`` property which restricts the pool to only those + features that GRUB supports, allowing the root pool to use any/all features. + + See the section on ``Compatibility feature sets`` in the ``zpool-features`` + man page for more information. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + +#. 
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O encryption=on -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + apt install --yes cryptsetup + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Make sure to include the ``-part4`` portion of the drive path. 
If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality was implemented in Ubuntu with the + ``zsys`` tool, though its dataset layout is more complicated, and ``zsys`` + `is on life support + `__. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian + + zfs create -o mountpoint=/boot bpool/BOOT/debian + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. 
+ + If you wish to separate these to exclude them from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will have a GUI:: + + zfs create rpool/var/lib/AccountsService + zfs create rpool/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If you use /var/www on this system:: + + zfs create rpool/var/www + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + +#. Install the minimal system:: + + debootstrap bookworm /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/network/interfaces.d/NAME + + .. code-block:: text + + auto NAME + iface NAME inet dhcp + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. 
code-block:: sourceslist + + deb http://deb.debian.org/debian bookworm main contrib non-free-firmware + deb-src http://deb.debian.org/debian bookworm main contrib non-free-firmware + + deb http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware + deb-src http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware + + deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware + deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + apt update + + apt install --yes console-setup locales + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + +#. Install ZFS in the chroot environment for the new system:: + + apt install --yes dpkg-dev linux-headers-generic linux-image-generic + + apt install --yes zfs-initramfs + + echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup does + not support ZFS + `__. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup cryptsetup-initramfs + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Install an NTP service to synchronize time. + This step is specific to Bookworm which does not install the package during + bootstrap. + Although this step is not necessary for ZFS, it is useful for internet + browsing where local clock drift can cause login failures:: + + apt install systemd-timesyncd + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + + + - Install GRUB for UEFI booting:: + + apt install dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from `update-grub`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. 
Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + + **Note:** For some disk configurations (NVMe?), this service `may fail + `__ with an error + indicating that the ``bpool`` cannot be found. If this happens, add + ``-d DISK-part3`` (replace ``DISK`` with the correct device path) to the + ``zpool import`` command. + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote + unlocking:: + + apt install --yes --no-install-recommends dropbear-initramfs + mkdir -p /etc/dropbear/initramfs + + # Optional: Convert OpenSSH server keys for Dropbear + for type in ecdsa ed25519 rsa ; do + cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key + ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key + dropbearconvert openssh dropbear \ + /tmp/openssh.key \ + /etc/dropbear/initramfs/dropbear_${type}_host_key + done + rm /tmp/openssh.key + + # Add user keys in the same format as ~/.ssh/authorized_keys + vi /etc/dropbear/initramfs/authorized_keys + + # If using a static IP, set it for the initramfs environment: + vi /etc/initramfs-tools/initramfs.conf + # The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC + # For example: + # IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3 + # HOSTNAME and NIC are optional. + + # Rebuild the initramfs (required when changing any of the above): + update-initramfs -u -k all + + **Notes:** + + - Converting the server keys makes Dropbear use the same keys as OpenSSH, + avoiding host key mismatch warnings. Currently, `dropbearconvert doesn't + understand the new OpenSSH private key format + `__, so the + keys need to be converted to the old PEM format first using + ``ssh-keygen``. The downside of using the same keys for both OpenSSH and + Dropbear is that the OpenSSH keys are then available on-disk, unencrypted + in the initramfs. + - Later, to use this functionality, SSH to the system (as root) while it is + prompting for the passphrase during the boot process. For ZFS native + encryption, run ``zfsunlock``. For LUKS, run ``cryptroot-unlock``. + - You can optionally add ``command="/usr/bin/zfsunlock"`` or + ``command="/bin/cryptroot-unlock"`` in front of the ``authorized_keys`` + line to force the unlock command. This way, the unlock command runs + automatically and is all that can be run. + +#. 
Optional (but kindly requested): Install popcon + + The ``popularity-contest`` package reports the list of packages install + on your system. Showing that ZFS is popular may be helpful in terms of + long-term attention from the distro. + + :: + + apt install --yes popularity-contest + + Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/debian + zfs set canmount=noauto rpool/ROOT/debian + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 6: First Boot +------------------ + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. 
Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. If this fails for rpool, mounting it on boot will fail and you will need to + ``zpool import -f rpool``, then ``exit`` in the initramfs prompt. + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + zfs create rpool/home/$username + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + mount /boot/efi + +Step 7: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software:: + + tasksel --new-install + + **Note:** This will check "Debian desktop environment" and "print server" + by default. If you want a server installation, unselect those. + +#. 
Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + apt install --yes cryptsetup + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. 
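
For example, assuming the module is named ``arcsas`` as described above::

    # Make the blob driver available in the initramfs.
    echo arcsas >> /etc/initramfs-tools/modules
    update-initramfs -c -k all

This ensures the controller's driver is loaded early enough that its
disks are visible when the initramfs imports the pool.
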
+ +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Debian/Debian Bullseye Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Bullseye Root on ZFS.rst.txt new file mode 100644 index 000000000..5b23e4a99 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Bullseye Root on ZFS.rst.txt @@ -0,0 +1,1234 @@ +.. highlight:: sh + +Debian Bullseye Root on ZFS +=========================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Debian Bookworm Root on ZFS <./Debian Bookworm Root on ZFS>` for + new installs. This guide is no longer receiving most updates. It continues + to exist for reference for existing installs that followed it. + + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Bullseye Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. 
If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username + ``user`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo vi /etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian bullseye main contrib + + :: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo apt install --yes openssh-server + + sudo systemctl restart ssh + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. 
+ - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + If the disk was previously used with zfs:: + + wipefs -a $DISK + + For flash-based storage, if the disk was previously used, you may wish to + do a full-disk discard (TRIM/UNMAP), which can improve performance:: + + blkdiscard -f $DISK + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on -d \ + -o cachefile=/etc/zfs/zpool.cache \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@livelist=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O devices=off \ + -O acltype=posixacl -O xattr=sa \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. 
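+
+   If you would like to confirm which features ended up enabled on the new
+   boot pool, one quick (optional) check is::
+
+      zpool get all bpool | grep feature@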
+ + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``device_rebuild`` feature should be safe to use (except on raidz, + which it is incompatible with), but the boot pool is small, so this does + not matter in practice. + - The ``log_spacemap`` and ``spacemap_v2`` features have been tested and + are safe to use. The boot pool is small, so these do not matter in + practice. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O encryption=on -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + apt install --yes cryptsetup + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. 
+ Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality was implemented in Ubuntu with the + ``zsys`` tool, though its dataset layout is more complicated, and ``zsys`` + `is on life support + `__. 
Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian + + zfs create -o mountpoint=/boot bpool/BOOT/debian + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. + + If you wish to separate these to exclude them from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will have a GUI:: + + zfs create rpool/var/lib/AccountsService + zfs create rpool/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If you use /var/www on this system:: + + zfs create rpool/var/www + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + +#. Install the minimal system:: + + debootstrap bullseye /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. 
Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/network/interfaces.d/NAME + + .. code-block:: text + + auto NAME + iface NAME inet dhcp + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian bullseye main contrib + deb-src http://deb.debian.org/debian bullseye main contrib + + deb http://deb.debian.org/debian-security bullseye-security main contrib + deb-src http://deb.debian.org/debian-security bullseye-security main contrib + + deb http://deb.debian.org/debian bullseye-updates main contrib + deb-src http://deb.debian.org/debian bullseye-updates main contrib + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + apt update + + apt install --yes console-setup locales + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + +#. Install ZFS in the chroot environment for the new system:: + + apt install --yes dpkg-dev linux-headers-generic linux-image-generic + + apt install --yes zfs-initramfs + + echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup does + not support ZFS + `__. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup cryptsetup-initramfs + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Install an NTP service to synchronize time. + This step is specific to Bullseye which does not install the package during + bootstrap. + Although this step is not necessary for ZFS, it is useful for internet + browsing where local clock drift can cause login failures:: + + apt install systemd-timesyncd + timedatectl + + You should now see "NTP service: active" in the above ``timedatectl`` + output. + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + + Select (using the space bar) all of the disks (not partitions) in your + pool. 
+ + - Install GRUB for UEFI booting:: + + apt install dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from `update-grub`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + + **Note:** For some disk configurations (NVMe?), this service `may fail + `__ with an error + indicating that the ``bpool`` cannot be found. If this happens, add + ``-d DISK-part3`` (replace ``DISK`` with the correct device path) to the + ``zpool import`` command. + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: For ZFS native encryption or LUKS, configure Dropbear for remote + unlocking:: + + apt install --yes --no-install-recommends dropbear-initramfs + mkdir -p /etc/dropbear-initramfs + + # Optional: Convert OpenSSH server keys for Dropbear + for type in ecdsa ed25519 rsa ; do + cp /etc/ssh/ssh_host_${type}_key /tmp/openssh.key + ssh-keygen -p -N "" -m PEM -f /tmp/openssh.key + dropbearconvert openssh dropbear \ + /tmp/openssh.key \ + /etc/dropbear-initramfs/dropbear_${type}_host_key + done + rm /tmp/openssh.key + + # Add user keys in the same format as ~/.ssh/authorized_keys + vi /etc/dropbear-initramfs/authorized_keys + + # If using a static IP, set it for the initramfs environment: + vi /etc/initramfs-tools/initramfs.conf + # The syntax is: IP=ADDRESS::GATEWAY:MASK:HOSTNAME:NIC + # For example: + # IP=192.168.1.100::192.168.1.1:255.255.255.0:myhostname:ens3 + # HOSTNAME and NIC are optional. 
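+
+      # (If the initramfs will get its address via DHCP, you can normally
+      # leave initramfs.conf unchanged; the IP= setting is only needed for
+      # static addressing.)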
+ + # Rebuild the initramfs (required when changing any of the above): + update-initramfs -u -k all + + **Notes:** + + - Converting the server keys makes Dropbear use the same keys as OpenSSH, + avoiding host key mismatch warnings. Currently, `dropbearconvert doesn't + understand the new OpenSSH private key format + `__, so the + keys need to be converted to the old PEM format first using + ``ssh-keygen``. The downside of using the same keys for both OpenSSH and + Dropbear is that the OpenSSH keys are then available on-disk, unencrypted + in the initramfs. + - Later, to use this functionality, SSH to the system (as root) while it is + prompting for the passphrase during the boot process. For ZFS native + encryption, run ``zfsunlock``. For LUKS, run ``cryptroot-unlock``. + - You can optionally add ``command="/usr/bin/zfsunlock"`` or + ``command="/bin/cryptroot-unlock"`` in front of the ``authorized_keys`` + line to force the unlock command. This way, the unlock command runs + automatically and is all that can be run. + +#. Optional (but kindly requested): Install popcon + + The ``popularity-contest`` package reports the list of packages install + on your system. Showing that ZFS is popular may be helpful in terms of + long-term attention from the distro. + + :: + + apt install --yes popularity-contest + + Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. 
+ + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/debian + zfs set canmount=noauto rpool/ROOT/debian + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 6: First Boot +------------------ + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. If this fails for rpool, mounting it on boot will fail and you will need to + ``zpool import -f rpool``, then ``exit`` in the initramfs prompt. + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + zfs create rpool/home/$username + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + mount /boot/efi + +Step 7: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. 
Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software:: + + tasksel --new-install + + **Note:** This will check "Debian desktop environment" and "print server" + by default. If you want a server installation, unselect those. + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. 
+ +For LUKS, first unlock the disk(s):: + + apt install --yes cryptsetup + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Debian/Debian Buster Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Buster Root on ZFS.rst.txt new file mode 100644 index 000000000..56a95e839 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Buster Root on ZFS.rst.txt @@ -0,0 +1,1171 @@ +.. highlight:: sh + +Debian Buster Root on ZFS +========================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Debian Bullseye Root on ZFS <./Debian Bullseye Root on ZFS>` for + new installs. This guide is no longer receiving most updates. 
It continues + to exist for reference for existing installs that followed it. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username + ``user`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo vi /etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian buster main contrib + deb http://deb.debian.org/debian buster-backports main contrib + + :: + + sudo apt update + +#. 
Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo apt install --yes openssh-server + + sudo systemctl restart ssh + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64 + + apt install --yes -t buster-backports --no-install-recommends zfs-dkms + + modprobe zfs + apt install --yes -t buster-backports zfsutils-linux + + - The dkms dependency is installed manually just so it comes from buster + and not buster-backports. This is not critical. + - We need to get the module built and loaded before installing + zfsutils-linux or `zfs-mount.service will fail to start + `__. + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. 
Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + apt install --yes cryptsetup + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. 
If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality was implemented in Ubuntu with the + ``zsys`` tool, though its dataset layout is more complicated, and ``zsys`` + `is on life support + `__. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian + + zfs create -o mountpoint=/boot bpool/BOOT/debian + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. 
+ + If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /opt on this system:: + + zfs create rpool/opt + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If you use /var/www on this system:: + + zfs create rpool/var/www + + If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + +#. Install the minimal system:: + + debootstrap buster /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/network/interfaces.d/NAME + + .. code-block:: text + + auto NAME + iface NAME inet dhcp + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://deb.debian.org/debian buster main contrib + deb-src http://deb.debian.org/debian buster main contrib + + deb http://security.debian.org/debian-security buster/updates main contrib + deb-src http://security.debian.org/debian-security buster/updates main contrib + + deb http://deb.debian.org/debian buster-updates main contrib + deb-src http://deb.debian.org/debian buster-updates main contrib + + :: + + vi /mnt/etc/apt/sources.list.d/buster-backports.list + + .. 
code-block:: sourceslist + + deb http://deb.debian.org/debian buster-backports main contrib + deb-src http://deb.debian.org/debian buster-backports main contrib + + :: + + vi /mnt/etc/apt/preferences.d/90_zfs + + .. code-block:: control + + Package: src:zfs-linux + Pin: release n=buster-backports + Pin-Priority: 990 + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + apt update + + apt install --yes console-setup locales + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + +#. Install ZFS in the chroot environment for the new system:: + + apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 + + apt install --yes zfs-initramfs + + echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup does + not support ZFS + `__. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + + Select (using the space bar) all of the disks (not partitions) in your + pool. + + - Install GRUB for UEFI booting:: + + apt install dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from `update-grub`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. 
code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional (but kindly requested): Install popcon + + The ``popularity-contest`` package reports the list of packages install + on your system. Showing that ZFS is popular may be helpful in terms of + long-term attention from the distro. + + :: + + apt install --yes popularity-contest + + Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. 
+ + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/debian + zfs set canmount=noauto rpool/ROOT/debian + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 6: First Boot +------------------ + +#. Optional: Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + zfs create rpool/home/$username + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video $username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + mount /boot/efi + +Step 7: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. 
+ + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software:: + + tasksel --new-install + + **Note:** This will check "Debian desktop environment" and "print server" + by default. If you want a server installation, unselect those. + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + apt install --yes cryptsetup + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. 
+ +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Debian/Debian GNU Linux initrd documentation.rst.txt b/_sources/Getting Started/Debian/Debian GNU Linux initrd documentation.rst.txt new file mode 100644 index 000000000..b0dd3871d --- /dev/null +++ b/_sources/Getting Started/Debian/Debian GNU Linux initrd documentation.rst.txt @@ -0,0 +1,125 @@ +Debian GNU Linux initrd documentation +===================================== + +Supported boot parameters +************************* + +- rollback= Do a rollback of specified snapshot. +- zfs_debug= Debug the initrd script +- zfs_force= Force importing the pool. Should not be + necessary. +- zfs= Don't try to import ANY pool, mount ANY filesystem or + even load the module. +- rpool= Use this pool for root pool. +- bootfs=/ Use this dataset for root filesystem. +- root=/ Use this dataset for root filesystem. +- root=ZFS=/ Use this dataset for root filesystem. 
+- root=zfs:/ Use this dataset for root filesystem.
+- root=zfs:AUTO Try to detect both pool and rootfs
+
+In all these cases, the dataset can also be given with a snapshot
+suffix, as in dataset@snapshot.
+
+The reason there are so many supported boot options for the root
+filesystem is that there are a lot of different ways to boot ZFS out
+there, and I wanted to make sure I supported them all.
+
+Pool imports
+************
+
+Import using /dev/disk/by-\*
+----------------------------
+
+The initrd will, if the variable USE_DISK_BY_ID is set in the file
+/etc/default/zfs, try to import using the /dev/disk/by-\* links. It will
+try to import in this order:
+
+1. /dev/disk/by-vdev
+2. /dev/disk/by-\*
+3. /dev
+
+Import using cache file
+-----------------------
+
+If all of these imports fail (or if USE_DISK_BY_ID is unset), it will
+then try to import using the cache file.
+
+Last ditch attempt at importing
+-------------------------------
+
+If that ALSO fails, it will try one more time, without any -d or -c
+options.
+
+Booting
+*******
+
+Booting from snapshot:
+----------------------
+
+Enter the snapshot for the root= parameter like in this example:
+
+::
+
+   linux /BOOT/debian@/boot/vmlinuz-5.10.0-9-amd64 root=ZFS=rpool/ROOT/debian@some_snapshot ro
+
+This will clone the snapshot rpool/ROOT/debian@some_snapshot into the
+filesystem rpool/ROOT/debian_some_snapshot and use that as the root
+filesystem. The original filesystem and snapshot are left alone in this
+case.
+
+**BEWARE** that it will first destroy, blindly, the
+rpool/ROOT/debian_some_snapshot filesystem before trying to clone the
+snapshot into it again. So if you've booted from the same snapshot
+previously and made some changes in that root filesystem, they will be
+undone by the destruction of the filesystem.
+
+Snapshot rollback
+-----------------
+
+From version 0.6.4-1-3 it is also possible to specify rollback=1 to
+do a rollback of the snapshot instead of cloning it. **BEWARE** that
+this will destroy *all* snapshots taken after the specified snapshot!
+
+Select snapshot dynamically
+---------------------------
+
+From version 0.6.4-1-3 it is also possible to specify an empty
+snapshot name (such as root=rpool/ROOT/debian@). If so, the initrd
+script will discover all snapshots below that filesystem (sans the at)
+and output a list of snapshots for the user to choose from.
+
+Booting from native encrypted filesystem
+----------------------------------------
+
+Although there is currently no support for native encryption in ZFS On
+Linux, there is a patch floating around 'out there', and the initrd
+supports loading the key and unlocking such an encrypted filesystem.
+
+Separated filesystems
+---------------------
+
+Descended filesystems
+~~~~~~~~~~~~~~~~~~~~~
+
+If there are separate filesystems (for example, a separate dataset for
+/usr), the snapshot boot code will try to find the snapshot under each
+filesystem and clone (or roll back) them.
+
+Example:
+
+::
+
+   rpool/ROOT/debian@some_snapshot
+   rpool/ROOT/debian/usr@some_snapshot
+
+These will create the following filesystems respectively (if not doing a
+rollback):
+
+::
+
+   rpool/ROOT/debian_some_snapshot
+   rpool/ROOT/debian/usr_some_snapshot
+
+The initrd code will use the mountpoint option (if any) in the original
+(without the snapshot part) dataset to find *where* it should mount the
+dataset. Otherwise, it will use the name of the dataset below the root
+filesystem (rpool/ROOT/debian in this example) as the mount point.
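+
+The following simplified sketch illustrates that lookup. It is not the
+actual initrd code; ROOTFS and DATASET are example variables::
+
+   # ROOTFS is the root filesystem dataset, e.g. rpool/ROOT/debian
+   # DATASET is a descended dataset, e.g. rpool/ROOT/debian/usr
+   mountpoint=$(zfs get -H -o value mountpoint "${DATASET}")
+   if [ "${mountpoint}" = "legacy" ] || [ "${mountpoint}" = "none" ]; then
+       # Fall back to the dataset's path below the root filesystem
+       mountpoint="/${DATASET#${ROOTFS}/}"
+   fi
+   echo "Would mount ${DATASET} at ${mountpoint}"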
diff --git a/_sources/Getting Started/Debian/Debian Stretch Root on ZFS.rst.txt b/_sources/Getting Started/Debian/Debian Stretch Root on ZFS.rst.txt new file mode 100644 index 000000000..0c56a8075 --- /dev/null +++ b/_sources/Getting Started/Debian/Debian Stretch Root on ZFS.rst.txt @@ -0,0 +1,1079 @@ +Debian Stretch Root on ZFS +========================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Debian Buster Root on ZFS <./Debian Buster Root on ZFS>` for new + installs. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit Debian GNU/Linux Stretch Live + CD `__ +- `A 64-bit kernel is strongly + encouraged. `__ +- Installing on a drive which presents 4KiB logical sectors (a “4Kn” + drive) only works with UEFI booting. This not unique to ZFS. `GRUB + does not and will not work on 4Kn with legacy (BIOS) + booting. `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need `massive amounts of +RAM `__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). ZFS native encryption has not yet been +released. With either option, all ZFS features are fully available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +LUKS encrypts almost everything: the OS, swap, home directories, and +anything else. The only unencrypted data is the bootloader, kernel, and +initrd. The system cannot boot without the passphrase being entered at +the console. Performance is good, but LUKS sits underneath ZFS, so if +multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the +username ``user`` and password ``live``. Connect your system to the +Internet as appropriate (e.g. join your WiFi network). + +1.2 Optional: Install and start the OpenSSH server in the Live CD +environment: + +If you have a second system, using SSH to access the target system can +be convenient. 
+ +:: + + $ sudo apt update + $ sudo apt install --yes openssh-server + $ sudo systemctl restart ssh + +**Hint:** You can find your IP address with +``ip addr show scope global | grep inet``. Then, from your main machine, +connect with ``ssh user@IP``. + +1.3 Become root: + +:: + + $ sudo -i + +1.4 Setup and update the repositories: + +:: + + # echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list + # echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list + # apt update + +1.5 Install ZFS in the Live CD environment: + +:: + + # apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-amd64 + # apt install --yes -t stretch-backports zfs-dkms + # modprobe zfs + +- The dkms dependency is installed manually just so it comes from + stretch and not stretch-backports. This is not critical. + +Step 2: Disk Formatting +----------------------- + +2.1 If you are re-using a disk, clear it as necessary: + +:: + + If the disk was previously used in an MD array, zero the superblock: + # apt install --yes mdadm + # mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1 + + Clear the partition table: + # sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 + +2.2 Partition your disk(s): + +:: + + Run this if you need legacy (BIOS) booting: + # sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1 + + Run this for UEFI booting (for use now or in the future): + # sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1 + + Run this for the boot pool: + # sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1 + +Choose one of the following options: + +2.2a Unencrypted: + +:: + + # sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1 + +2.2b LUKS: + +:: + + # sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 + +Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the +``/dev/sd*`` device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool. + +**Hints:** + +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. +- If you are creating a mirror or raidz topology, repeat the + partitioning commands for all the disks which will be part of the + pool. + +2.3 Create the boot pool: + +:: + + # zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@userobj_accounting=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + bpool /dev/disk/by-id/scsi-SATA_disk1-part3 + +You should not need to customize any of the options for the boot pool. + +GRUB does not support all of the zpool features. See +``spa_feature_names`` in +`grub-core/fs/zfs/zfs.c `__. +This step creates a separate boot pool for ``/boot`` with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. 
Note that GRUB opens the pool read-only, so all +read-only compatible features are "supported" by GRUB. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + +2.4 Create the root pool: + +Choose one of the following options: + +2.4a Unencrypted: + +:: + + # zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + rpool /dev/disk/by-id/scsi-SATA_disk1-part4 + +2.4b LUKS: + +:: + + # apt install --yes cryptsetup + # cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \ + /dev/disk/by-id/scsi-SATA_disk1-part4 + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + +- The use of ``ashift=12`` is recommended here because many drives + today have 4KiB (or larger) physical sectors, even though they + present 512B logical sectors. Also, a future replacement drive may + have 4KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat's + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ```xattr=sa`` is + Linux-specific. `__ + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. 
+- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +3.1 Create filesystem datasets to act as containers: + +:: + + # zfs create -o canmount=off -o mountpoint=none rpool/ROOT + # zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through ``pkg image-update`` or +``beadm``. Similar functionality for APT is possible but currently +unimplemented. Even without such a tool, it can still be used for +manually created clones. + +3.2 Create filesystem datasets for the root and boot filesystems: + +:: + + # zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + # zfs mount rpool/ROOT/debian + + # zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian + # zfs mount bpool/BOOT/debian + +With ZFS, it is not normally necessary to use a mount command (either +``mount`` or ``zfs mount``). This situation is an exception because of +``canmount=noauto``. 
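+
+If you want to verify the mounts before continuing, ``zfs mount`` with
+no arguments lists the currently mounted ZFS filesystems:
+
+::
+
+   # zfs mount
+   # zfs get canmount,mountpoint rpool/ROOT/debian bpool/BOOT/debian
+
+Both datasets should appear, mounted under /mnt.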
+ +3.3 Create datasets: + +:: + + # zfs create rpool/home + # zfs create -o mountpoint=/root rpool/home/root + # zfs create -o canmount=off rpool/var + # zfs create -o canmount=off rpool/var/lib + # zfs create rpool/var/log + # zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices: + + If you wish to exclude these from snapshots: + # zfs create -o com.sun:auto-snapshot=false rpool/var/cache + # zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + # chmod 1777 /mnt/var/tmp + + If you use /opt on this system: + # zfs create rpool/opt + + If you use /srv on this system: + # zfs create rpool/srv + + If you use /usr/local on this system: + # zfs create -o canmount=off rpool/usr + # zfs create rpool/usr/local + + If this system will have games installed: + # zfs create rpool/var/games + + If this system will store local email in /var/mail: + # zfs create rpool/var/mail + + If this system will use Snap packages: + # zfs create rpool/var/snap + + If you use /var/www on this system: + # zfs create rpool/var/www + + If this system will use GNOME: + # zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & snapshots): + # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking): + # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + A tmpfs is recommended later, but if you want a separate dataset for /tmp: + # zfs create -o com.sun:auto-snapshot=false rpool/tmp + # chmod 1777 /mnt/tmp + +The primary goal of this dataset layout is to separate the OS from user +data. This allows the root filesystem to be rolled back without rolling +back user data such as logs (in ``/var/log``). This will be especially +important if/when a ``beadm`` or similar utility is integrated. The +``com.sun.auto-snapshot`` setting is used by some ZFS snapshot utilities +to exclude transient data. + +If you do nothing extra, ``/tmp`` will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +``/tmp``, as shown above. This keeps the ``/tmp`` data out of snapshots +of your root filesystem. It also allows you to set a quota on +``rpool/tmp``, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later. + +3.4 Install the minimal system: + +:: + + # debootstrap stretch /mnt + # zfs set devices=off rpool + +The ``debootstrap`` command leaves the new system in an unconfigured +state. An alternative to using ``debootstrap`` is to copy the entirety +of a working system into the new ZFS root. + +Step 4: System Configuration +---------------------------- + +4.1 Configure the hostname (change ``HOSTNAME`` to the desired +hostname). + +:: + + # echo HOSTNAME > /mnt/etc/hostname + + # vi /mnt/etc/hosts + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + +**Hint:** Use ``nano`` if you find ``vi`` confusing. + +4.2 Configure the network interface: + +:: + + Find the interface name: + # ip addr show + + # vi /mnt/etc/network/interfaces.d/NAME + auto NAME + iface NAME inet dhcp + +Customize this file if the system is not a DHCP client. 
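+
+For example, a static configuration might look like the following (the
+addresses are placeholders; adjust them for your network):
+
+::
+
+   auto NAME
+   iface NAME inet static
+       address 192.168.1.10
+       netmask 255.255.255.0
+       gateway 192.168.1.1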
+ +4.3 Configure the package sources: + +:: + + # vi /mnt/etc/apt/sources.list + deb http://deb.debian.org/debian stretch main contrib + deb-src http://deb.debian.org/debian stretch main contrib + deb http://security.debian.org/debian-security stretch/updates main contrib + deb-src http://security.debian.org/debian-security stretch/updates main contrib + deb http://deb.debian.org/debian stretch-updates main contrib + deb-src http://deb.debian.org/debian stretch-updates main contrib + + # vi /mnt/etc/apt/sources.list.d/stretch-backports.list + deb http://deb.debian.org/debian stretch-backports main contrib + deb-src http://deb.debian.org/debian stretch-backports main contrib + + # vi /mnt/etc/apt/preferences.d/90_zfs + Package: src:zfs-linux + Pin: release n=stretch-backports + Pin-Priority: 990 + +4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and ``chroot`` into it: + +:: + + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login + +**Note:** This is using ``--rbind``, not ``--bind``. + +4.5 Configure a basic system environment: + +:: + + # ln -s /proc/self/mounts /etc/mtab + # apt update + + # apt install --yes locales + # dpkg-reconfigure locales + +Even if you prefer a non-English system language, always ensure that +``en_US.UTF-8`` is available. + +:: + + # dpkg-reconfigure tzdata + +4.6 Install ZFS in the chroot environment for the new system: + +:: + + # apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 + # apt install --yes zfs-initramfs + +4.7 For LUKS installs only, setup crypttab: + +:: + + # apt install --yes cryptsetup + + # echo luks1 UUID=$(blkid -s UUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part4) none \ + luks,discard,initramfs > /etc/crypttab + +- The use of ``initramfs`` is a work-around for `cryptsetup does not + support + ZFS `__. + +**Hint:** If you are creating a mirror or raidz topology, repeat the +``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +4.8 Install GRUB + +Choose one of the following options: + +4.8a Install GRUB for legacy (BIOS) booting + +:: + + # apt install --yes grub-pc + +Install GRUB to the disk(s), not the partition(s). + +4.8b Install GRUB for UEFI booting + +:: + + # apt install dosfstools + # mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2 + # mkdir /boot/efi + # echo PARTUUID=$(blkid -s PARTUUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + # mount /boot/efi + # apt install --yes grub-efi-amd64 shim + +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum + cluster size (given the partition size of 512 MiB) for FAT32. It also + works fine on drives which present 512 B sectors. + +**Note:** If you are creating a mirror or raidz topology, this step only +installs GRUB on the first disk. The other disk(s) will be handled +later. + +4.9 Set a root password + +:: + + # passwd + +4.10 Enable importing bpool + +This ensures that ``bpool`` is always imported, regardless of whether +``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, +or whether ``zfs-import-scan.service`` is enabled. 
+ +:: + + # vi /etc/systemd/system/zfs-import-bpool.service + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + + [Install] + WantedBy=zfs-import.target + + # systemctl enable zfs-import-bpool.service + +4.11 Optional (but recommended): Mount a tmpfs to /tmp + +If you chose to create a ``/tmp`` dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a +tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + +:: + + # cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + # systemctl enable tmp.mount + +4.12 Optional (but kindly requested): Install popcon + +The ``popularity-contest`` package reports the list of packages install +on your system. Showing that ZFS is popular may be helpful in terms of +long-term attention from the distro. + +:: + + # apt install --yes popularity-contest + +Choose Yes at the prompt. + +Step 5: GRUB Installation +------------------------- + +5.1 Verify that the ZFS boot filesystem is recognized: + +:: + + # grub-probe /boot + zfs + +5.2 Refresh the initrd files: + +:: + + # update-initramfs -u -k all + update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64 + +**Note:** When using LUKS, this will print "WARNING could not determine +root device from /etc/fstab". This is because `cryptsetup does not +support +ZFS `__. + +5.3 Workaround GRUB's missing zpool-features support: + +:: + + # vi /etc/default/grub + Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + +5.4 Optional (but highly recommended): Make debugging GRUB easier: + +:: + + # vi /etc/default/grub + Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + Uncomment: GRUB_TERMINAL=console + Save and quit. + +Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired. + +5.5 Update the boot configuration: + +:: + + # update-grub + Generating grub configuration file ... + Found linux image: /boot/vmlinuz-4.9.0-8-amd64 + Found initrd image: /boot/initrd.img-4.9.0-8-amd64 + done + +**Note:** Ignore errors from ``osprober``, if present. + +5.6 Install the boot loader + +5.6a For legacy (BIOS) booting, install GRUB to the MBR: + +:: + + # grub-install /dev/disk/by-id/scsi-SATA_disk1 + Installing for i386-pc platform. + Installation finished. No error reported. + +Do not reboot the computer until you get exactly that result message. +Note that you are installing GRUB to the whole disk, not a partition. + +If you are creating a mirror or raidz topology, repeat the +``grub-install`` command for each disk in the pool. + +5.6b For UEFI booting, install GRUB: + +:: + + # grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy + +5.7 Verify that the ZFS module is installed: + +:: + + # ls /boot/grub/*/zfs.mod + +5.8 Fix filesystem mount ordering + +`Until ZFS gains a systemd mount +generator `__, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +`#5754 `__) seem to be +with certain filesystems in ``/var``, specifically ``/var/log`` and +``/var/tmp``. Setting these to use ``legacy`` mounting, and listing them +in ``/etc/fstab`` makes systemd aware that these are separate +mountpoints. 
In turn, ``rsyslog.service`` depends on ``var-log.mount`` +by way of ``local-fs.target`` and services using the ``PrivateTmp`` +feature of systemd automatically use ``After=var-tmp.mount``. + +Until there is support for mounting ``/boot`` in the initramfs, we also +need to mount that, because it was marked ``canmount=noauto``. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +``/boot/efi``. + +``rpool`` is guaranteed to be imported by the initramfs, so there is no +point in adding ``x-systemd.requires=zfs-import.target`` to those +filesystems. + +:: + + For UEFI booting, unmount /boot/efi first: + # umount /boot/efi + + Everything else applies to both BIOS and UEFI booting: + + # zfs set mountpoint=legacy bpool/BOOT/debian + # echo bpool/BOOT/debian /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab + + # zfs set mountpoint=legacy rpool/var/log + # echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab + + # zfs set mountpoint=legacy rpool/var/spool + # echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab + + If you created a /var/tmp dataset: + # zfs set mountpoint=legacy rpool/var/tmp + # echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab + + If you created a /tmp dataset: + # zfs set mountpoint=legacy rpool/tmp + # echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab + +Step 6: First Boot +------------------ + +6.1 Snapshot the initial installation: + +:: + + # zfs snapshot bpool/BOOT/debian@install + # zfs snapshot rpool/ROOT/debian@install + +In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space. + +6.2 Exit from the ``chroot`` environment back to the LiveCD environment: + +:: + + # exit + +6.3 Run these commands in the LiveCD environment to unmount all +filesystems: + +:: + + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export -a + +6.4 Reboot: + +:: + + # reboot + +6.5 Wait for the newly installed system to boot normally. Login as root. + +6.6 Create a user account: + +:: + + # zfs create rpool/home/YOURUSERNAME + # adduser YOURUSERNAME + # cp -a /etc/skel/.[!.]* /home/YOURUSERNAME + # chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME + +6.7 Add your user account to the default set of groups for an +administrator: + +:: + + # usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME + +6.8 Mirror GRUB + +If you installed to multiple disks, install GRUB on the additional +disks: + +6.8a For legacy (BIOS) booting: + +:: + + # dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + +6.8b UEFI + +:: + + # umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.): + # dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + # efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + + # mount /boot/efi + +Step 7: (Optional) Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. 
This issue is currently being investigated in: +`https://github.com/zfsonlinux/zfs/issues/7734 `__ + +7.1 Create a volume dataset (zvol) for use as a swap device: + +:: + + # zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + +You can adjust the size (the ``4G`` part) to your needs. + +The compression algorithm is set to ``zle`` because it is the cheapest +available algorithm. As this guide recommends ``ashift=12`` (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior. + +7.2 Configure the swap device: + +**Caution**: Always use long ``/dev/zvol`` aliases in configuration +files. Never use a short ``/dev/zdX`` device name. + +:: + + # mkswap -f /dev/zvol/rpool/swap + # echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + # echo RESUME=none > /etc/initramfs-tools/conf.d/resume + +The ``RESUME=none`` is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear. + +7.3 Enable the swap device: + +:: + + # swapon -av + +Step 8: Full Software Installation +---------------------------------- + +8.1 Upgrade the minimal system: + +:: + + # apt dist-upgrade --yes + +8.2 Install a regular set of software: + +:: + + # tasksel + +**Note:** This will check "Debian desktop environment" and "print server" +by default. If you want a server installation, unselect those. + +8.3 Optional: Disable log compression: + +As ``/var/log`` is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of ``/var/log``, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in ``/etc/logrotate.d`` +by hand to comment out ``compress``, or use this loop (copy-and-paste +highly recommended): + +:: + + # for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +8.4 Reboot: + +:: + + # reboot + +Step 9: Final Cleanup +~~~~~~~~~~~~~~~~~~~~~ + +9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally. + +9.2 Optional: Delete the snapshots of the initial installation: + +:: + + $ sudo zfs destroy bpool/BOOT/debian@install + $ sudo zfs destroy rpool/ROOT/debian@install + +9.3 Optional: Disable the root password + +:: + + $ sudo usermod -p '*' root + +9.4 Optional: Re-enable the graphical boot process: + +If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer. + +:: + + $ sudo vi /etc/default/grub + Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + Comment out GRUB_TERMINAL=console + Save and quit. + + $ sudo update-grub + +**Note:** Ignore errors from ``osprober``, if present. 
+ +9.5 Optional: For LUKS installs only, backup the LUKS header: + +:: + + $ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + +Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption. + +**Hint:** If you created a mirror or raidz topology, repeat this for +each LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install +Environment <#step-1-prepare-the-install-environment>`__. + +This will automatically import your pool. Export it and re-import it to +get the mounts right: + +:: + + For LUKS, first unlock the disk(s): + # apt install --yes cryptsetup + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + Repeat for additional disks, if this is a mirror or raidz topology. + + # zpool export -a + # zpool import -N -R /mnt rpool + # zpool import -N -R /mnt bpool + # zfs mount rpool/ROOT/debian + # zfs mount -a + +If needed, you can chroot into your installed environment: + +:: + + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login + # mount /boot/efi + # mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup: + +:: + + # exit + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export -a + # reboot + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +`https://github.com/zfsonlinux/zfs/issues/330 `__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in +/etc/default/zfs. The system will wait X seconds for all drives to +appear before importing the pool. + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run +``update-initramfs -u -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit +this error message. + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). 
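+
+With libvirt, the disk serial can also be set via a ``<serial>`` element
+in the domain XML. A minimal sketch (the file path and serial number are
+arbitrary examples):
+
+::
+
+   <disk type='file' device='disk'>
+     <driver name='qemu' type='qcow2'/>
+     <source file='/var/lib/libvirt/images/disk1.qcow2'/>
+     <target dev='vda' bus='virtio'/>
+     <serial>1234567890</serial>
+   </disk>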
+ +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host: + +:: + + $ sudo apt install ovmf + $ sudo vi /etc/libvirt/qemu.conf + Uncomment these lines: + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] + $ sudo service libvirt-bin restart diff --git a/_sources/Getting Started/Debian/index.rst.txt b/_sources/Getting Started/Debian/index.rst.txt new file mode 100644 index 000000000..d62053af0 --- /dev/null +++ b/_sources/Getting Started/Debian/index.rst.txt @@ -0,0 +1,62 @@ +.. highlight:: sh + +Debian +====== + +.. contents:: Table of Contents + :local: + +Installation +------------ + +If you want to use ZFS as your root filesystem, see the `Root on ZFS`_ +links below instead. + +ZFS packages are included in the `contrib repository +`__. The +`backports repository `__ +often provides newer releases of ZFS. You can use it as follows. + +Add the backports repository:: + + vi /etc/apt/sources.list.d/bookworm-backports.list + +.. code-block:: sourceslist + + deb http://deb.debian.org/debian bookworm-backports main contrib + deb-src http://deb.debian.org/debian bookworm-backports main contrib + +:: + + vi /etc/apt/preferences.d/90_zfs + +.. code-block:: control + + Package: src:zfs-linux + Pin: release n=bookworm-backports + Pin-Priority: 990 + +Install the packages:: + + apt update + apt install dpkg-dev linux-headers-generic linux-image-generic + apt install zfs-dkms zfsutils-linux + +**Caution**: If you are in a poorly configured environment (e.g. certain VM or container consoles), when apt attempts to pop up a message on first install, it may fail to notice a real console is unavailable, and instead appear to hang indefinitely. To circumvent this, you can prefix the `apt install` commands with ``DEBIAN_FRONTEND=noninteractive``, like this:: + + DEBIAN_FRONTEND=noninteractive apt install zfs-dkms zfsutils-linux + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + *Root on ZFS + +Related topics +-------------- +.. toctree:: + :maxdepth: 1 + + Debian GNU Linux initrd documentation diff --git a/_sources/Getting Started/Fedora.rst.txt b/_sources/Getting Started/Fedora.rst.txt new file mode 100644 index 000000000..1c341d6ee --- /dev/null +++ b/_sources/Getting Started/Fedora.rst.txt @@ -0,0 +1,7 @@ +:orphan: + +Fedora +======================= + +This page has been moved to `here `__. + diff --git a/_sources/Getting Started/Fedora/Root on ZFS.rst.txt b/_sources/Getting Started/Fedora/Root on ZFS.rst.txt new file mode 100644 index 000000000..2c27470b0 --- /dev/null +++ b/_sources/Getting Started/Fedora/Root on ZFS.rst.txt @@ -0,0 +1,608 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + :: + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. 
code-block:: sh`` are commands intended only for users + +Fedora Root on ZFS +======================================= + +Notes +~~~~~ + +- As an alternative to the below method of installing Fedora Linux on a ZFS root filesystem, you can use the unofficial script `fedora-on-zfs `__, which is more automated and can generate a Fedora Linux installation that is closer to an official Fedora Linux configuration. The fedora-on-zfs script is different from the below method in that it uses one of Fedora's official kickstarts (`fedora-disk-minimal.ks`, `fedora-disk-workstation.ks`, `fedora-disk-kde.ks`, etc.) to guide the installation, but with a few overrides to add the ZFS functionality. Bug reports should be submitted to Greg's fedora-on-zfs GitHub repo. + +**ZFSBootMenu** + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +**UEFI support only** + +Only UEFI is supported by this guide. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Because the kernel of latest Live CD might be incompatible with + ZFS, we will use Alpine Linux Extended, which ships with ZFS by + default. + + Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + +#. Login as root user. There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + + + +#. Set root password or ``/root/.ssh/authorized_keys``. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. 
so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep fedora | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. Install partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 1MiB 4GiB \ + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + set 1 esp on \ + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + + +#. Setup temporary encrypted swap for this installation only. This is + useful if the available memory is small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3 + mkswap /dev/mapper/"${i##*/}"-part3 + swapon /dev/mapper/"${i##*/}"-part3 + done + + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create root pool + + - Unencrypted:: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=none \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + +#. Create root system container: + + :: + + # dracut demands system root dataset to have non-legacy mountpoint + zfs create -o canmount=noauto -o mountpoint=/ rpool/root + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o mountpoint=legacy rpool/home + zfs mount rpool/root + mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home + +#. 
Format and mount ESP. Only one of them is used as /boot, you need to set up mirroring afterwards + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + done + + for i in ${DISK}; do + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot + break + done + +System Configuration +--------------------------- + +#. Download and extract minimal Fedora root filesystem:: + + apk add curl + curl --fail-early --fail -L \ + https://dl.fedoraproject.org/pub/fedora/linux/releases/39/Container/x86_64/images/Fedora-Container-Base-39-1.5.x86_64.tar.xz \ + -o rootfs.tar.gz + curl --fail-early --fail -L \ + https://dl.fedoraproject.org/pub/fedora/linux/releases/39/Container/x86_64/images/Fedora-Container-39-1.5-x86_64-CHECKSUM \ + -o checksum + + # BusyBox sha256sum treats all lines in the checksum file + # as checksums and requires two spaces " " + # between filename and checksum + + grep 'Container-Base' checksum \ + | grep '^SHA256' \ + | sed -E 's|.*= ([a-z0-9]*)$|\1 rootfs.tar.gz|' > ./sha256checksum + + sha256sum -c ./sha256checksum + + rootfs_tar=$(tar t -af rootfs.tar.gz | grep layer.tar) + rootfs_tar_dir=$(dirname "${rootfs_tar}") + tar x -af rootfs.tar.gz "${rootfs_tar}" + ln -s "${MNT}" "${MNT}"/"${rootfs_tar_dir}" + tar x -C "${MNT}" -af "${rootfs_tar}" + unlink "${MNT}"/"${rootfs_tar_dir}" + +#. Enable community repo + + .. code-block:: sh + + sed -i '/edge/d' /etc/apk/repositories + sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories + +#. Generate fstab:: + + apk add arch-install-scripts + genfstab -t PARTUUID "${MNT}" \ + | grep -v swap \ + | sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \ + > "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash + + .. ifconfig:: zfs_root_test + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Unset all shell aliases, which can interfere with installation:: + + unalias -a + +#. Install base packages + + .. code-block:: sh + + dnf -y install @core kernel kernel-devel + + .. ifconfig:: zfs_root_test + + # no firmware for test + dnf -y install --setopt=install_weak_deps=False @core + # kernel-core + +#. Install ZFS packages + + .. code-block:: sh + + dnf -y install \ + https://zfsonlinux.org/fedora/zfs-release-2-4"$(rpm --eval "%{dist}"||true)".noarch.rpm + + dnf -y install zfs zfs-dracut + + .. ifconfig:: zfs_root_test + + # this step will build zfs modules and fail + # no need to test building in chroot + + dnf -y install \ + https://zfsonlinux.org/fedora/zfs-release-2-4"$(rpm --eval "%{dist}"||true)".noarch.rpm + +#. Check whether ZFS modules are successfully built + + .. 
code-block:: sh + + tail -n10 /var/lib/dkms/zfs/**/build/make.log + + # ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_start_io_acct' + # ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_end_io_acct_remapped' + # make[4]: [scripts/Makefile.modpost:138: /var/lib/dkms/zfs/2.1.9/build/module/Module.symvers] Error 1 + # make[3]: [Makefile:1977: modpost] Error 2 + # make[3]: Leaving directory '/usr/src/kernels/6.2.9-100.fc36.x86_64' + # make[2]: [Makefile:55: modules-Linux] Error 2 + # make[2]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build/module' + # make[1]: [Makefile:933: all-recursive] Error 1 + # make[1]: Leaving directory '/var/lib/dkms/zfs/2.1.9/build' + # make: [Makefile:794: all] Error 2 + + If the build failed, you need to install an Long Term Support + kernel and its headers, then rebuild ZFS module + + .. code-block:: sh + + # this is a third-party repo! + # you have been warned. + # + # select a kernel from + # https://copr.fedorainfracloud.org/coprs/kwizart/ + + dnf copr enable -y kwizart/kernel-longterm-VERSION + dnf install -y kernel-longterm kernel-longterm-devel + dnf remove -y kernel-core + + ZFS modules will be built as part of the kernel installation. + Check build log again with ``tail`` command. + +#. Add zfs modules to dracut + + .. code-block:: sh + + echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + + .. ifconfig:: zfs_root_test + + # skip this in chroot, because we did not build zfs module + +#. Add other drivers to dracut:: + + if grep mpt3sas /proc/modules; then + echo 'force_drivers+=" mpt3sas "' >> /etc/dracut.conf.d/zfs.conf + fi + if grep virtio_blk /proc/modules; then + echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf + fi + +#. Build initrd + :: + + find -D exec /lib/modules -maxdepth 1 \ + -mindepth 1 -type d \ + -exec sh -vxc \ + 'if test -e "$1"/modules.dep; + then kernel=$(basename "$1"); + dracut --verbose --force --kver "${kernel}"; + fi' sh {} \; + +#. For SELinux, relabel filesystem on reboot:: + + fixfiles -F onboot + +#. Enable internet time synchronisation:: + + systemctl enable systemd-timesyncd + +#. Generate host id + + .. code-block:: sh + + zgenhostid -f -o /etc/hostid + + .. ifconfig:: zfs_root_test + + # because zfs is not installed, skip this step + +#. Install locale package, example for English locale:: + + dnf install -y glibc-minimal-langpack glibc-langpack-en + +#. Set locale, keymap, timezone, hostname + :: + + rm -f /etc/localtime + rm -f /etc/hostname + systemd-firstboot \ + --force \ + --locale=en_US.UTF-8 \ + --timezone=Etc/UTC \ + --hostname=testhost \ + --keymap=us || true + +#. Set root passwd + :: + + printf 'root:yourpassword' | chpasswd + +Bootloader +--------------------------- + +#. Install rEFInd boot loader:: + + # from http://www.rodsbooks.com/refind/getting.html + # use Binary Zip File option + curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip + + dnf install -y unzip + unzip refind.zip + mkdir -p /boot/EFI/BOOT + find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \ + | xargs -0I{} mv {} /boot/EFI/BOOT/BOOTX64.EFI + rm -rf refind.zip refind-bin-0.14.0.2 + +#. Add boot entry:: + + tee -a /boot/refind-linux.conf <`__. + :: + + umount -Rl "${MNT}" + zfs snapshot -r rpool@initial-installation + +#. Export all pools + + .. code-block:: sh + + zpool export -a + + .. 
ifconfig:: zfs_root_test + + # we are now inside a chroot, where the export will fail + # export pools when we are outside chroot + +#. Reboot + + .. code-block:: sh + + reboot + + + .. ifconfig:: zfs_root_test + + # chroot ends here + ZFS_ROOT_GUIDE_TEST + +Post installaion +--------------------------- + +#. Install package groups + + .. code-block:: sh + + dnf group list --hidden -v # query package groups + dnf group install gnome-desktop + +#. Add new user, configure swap. + +#. Mount other EFI system partitions then set up a service for syncing + their contents. diff --git a/_sources/Getting Started/Fedora/index.rst.txt b/_sources/Getting Started/Fedora/index.rst.txt new file mode 100644 index 000000000..bfdf599e7 --- /dev/null +++ b/_sources/Getting Started/Fedora/index.rst.txt @@ -0,0 +1,95 @@ +Fedora +====== + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Installation +------------ + +Note: this is for installing ZFS on an existing Fedora +installation. To use ZFS as root file system, +see below. + +#. If ``zfs-fuse`` from official Fedora repo is installed, + remove it first. It is not maintained and should not be used + under any circumstance:: + + rpm -e --nodeps zfs-fuse + +#. Add ZFS repo:: + + dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-4$(rpm --eval "%{dist}").noarch.rpm + + List of repos is available `here `__. + +#. Install kernel headers:: + + dnf install -y kernel-devel + + ``kernel-devel`` package must be installed before ``zfs`` package. + +#. Install ZFS packages:: + + dnf install -y zfs + +#. Load kernel module:: + + modprobe zfs + + If kernel module can not be loaded, your kernel version + might be not yet supported by OpenZFS. + + An option is to an LTS kernel from COPR, provided by a third-party. + Use it at your own risk:: + + # this is a third-party repo! + # you have been warned. + # + # select a kernel from + # https://copr.fedorainfracloud.org/coprs/kwizart/ + + dnf copr enable -y kwizart/kernel-longterm-VERSION + dnf install -y kernel-longterm kernel-longterm-devel + + Reboot to new LTS kernel, then load kernel module:: + + modprobe zfs + +#. By default ZFS kernel modules are loaded upon detecting a pool. + To always load the modules at boot:: + + echo zfs > /etc/modules-load.d/zfs.conf + +#. By default ZFS may be removed by kernel package updates. + To lock the kernel version to only ones supported by ZFS to prevent this:: + echo 'zfs' > /etc/dnf/protected.d/zfs.conf + + Pending non-kernel updates can still be applied:: + dnf update --exclude=kernel* + +Testing Repo +-------------------- + +Testing repository, which is disabled by default, contains +the latest version of OpenZFS which is under active development. +These packages +**should not** be used on production systems. + +:: + + dnf config-manager --enable zfs-testing + dnf install zfs + +Root on ZFS +----------- +.. 
toctree:: + :maxdepth: 1 + :glob: + + * diff --git a/_sources/Getting Started/FreeBSD.rst.txt b/_sources/Getting Started/FreeBSD.rst.txt new file mode 100644 index 000000000..6e118b734 --- /dev/null +++ b/_sources/Getting Started/FreeBSD.rst.txt @@ -0,0 +1,142 @@ +FreeBSD +======= + +|ZoF-logo| + +Installation on FreeBSD +----------------------- + +OpenZFS is available pre-packaged as: + +- the zfs-2.0-release branch, in the FreeBSD base system from FreeBSD 13.0-CURRENT forward +- the master branch, in the FreeBSD ports tree as sysutils/openzfs and sysutils/openzfs-kmod from FreeBSD 12.1 forward + +The rest of this document describes the use of OpenZFS either from ports/pkg or built manually from sources for development. + +The ZFS utilities will be installed in /usr/local/sbin/, so make sure +your PATH gets adjusted accordingly. + +To load the module at boot, put ``openzfs_load="YES"`` in +/boot/loader.conf, and remove ``zfs_load="YES"`` if migrating a ZFS +install. + +Beware that the FreeBSD boot loader does not allow booting from root +pools with encryption active (even if it is not in use), so do not try +encryption on a pool you boot from. + +Development on FreeBSD +---------------------- + +The following dependencies are required to build OpenZFS on FreeBSD: + +- FreeBSD sources in /usr/src or elsewhere specified by SYSDIR in env. + If you don't have the sources installed you can install them with + git. + + Install source For FreeBSD 12: + :: + + git clone -b stable/12 https://git.FreeBSD.org/src.git /usr/src + + Install source for FreeBSD Current: + :: + + git clone https://git.FreeBSD.org/src.git /usr/src + +- Packages for build: + :: + + pkg install \ + autoconf \ + automake \ + autotools \ + git \ + gmake + +- Optional packages for build: + :: + + pkg install python + pkg install devel/py-sysctl # needed for arcstat, arc_summary, dbufstat + +- Packages for checks and tests: + :: + + pkg install \ + base64 \ + bash \ + checkbashisms \ + fio \ + hs-ShellCheck \ + ksh93 \ + pamtester \ + devel/py-flake8 \ + sudo + + Your preferred python version may be substituted. The user for + running tests must have NOPASSWD sudo permission. + +To build and install: + +:: + + # as user + git clone https://github.com/openzfs/zfs + cd zfs + ./autogen.sh + env MAKE=gmake ./configure + gmake -j`sysctl -n hw.ncpu` + # as root + gmake install + +To use the OpenZFS kernel module when FreeBSD starts, edit ``/boot/loader.conf`` : + +Replace the line: + +:: + + zfs_load="YES" + +with: + +:: + + openzfs_load="YES" + +The stock FreeBSD ZFS binaries are installed in /sbin. OpenZFS binaries are installed to /usr/local/sbin when installed form ports/pkg or manually from the source. To use OpenZFS binaries, adjust your path so /usr/local/sbin is listed before /sbin. Otherwise the native ZFS binaries will be used. + +For example, make changes to ~/.profile ~/.bashrc ~/.cshrc from this: + +:: + + PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin + +To this: + +:: + + PATH=/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:~/bin + +For rapid development it can be convenient to do a UFS install instead +of ZFS when setting up the work environment. That way the module can be +unloaded and loaded without rebooting. +:: + + reboot + +Though not required, ``WITHOUT_ZFS`` is a useful build option in FreeBSD +to avoid building and installing the legacy zfs tools and kmod - see +``src.conf(5)``. + +Some tests require fdescfs to be mount on /dev/fd. 
This can be done +temporarily with: +:: + + mount -t fdescfs fdescfs /dev/fd + +or an entry can be added to /etc/fstab. +:: + + fdescfs /dev/fd fdescfs rw 0 0 + +.. |ZoF-logo| image:: /_static/img/logo/zof-logo.png diff --git a/_sources/Getting Started/NixOS/Root on ZFS.rst.txt b/_sources/Getting Started/NixOS/Root on ZFS.rst.txt new file mode 100644 index 000000000..48c4a0ffd --- /dev/null +++ b/_sources/Getting Started/NixOS/Root on ZFS.rst.txt @@ -0,0 +1,328 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. code-block:: sh`` are commands intended only for users + +NixOS Root on ZFS +======================================= + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**UEFI support only** + +Only UEFI is supported by this guide. Make sure your computer is +booted in UEFI mode. + +Preparation +--------------------------- + +#. Download `NixOS Live Image + `__ and boot from it. + + .. code-block:: sh + + sha256sum -c ./nixos-*.sha256 + + dd if=input-file of=output-file bs=1M + +#. Connect to the Internet. +#. Set root password or ``/root/.ssh/authorized_keys``. +#. Start SSH server + + .. code-block:: sh + + systemctl restart sshd + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + :: + + # install installation tools + nix-env -f '' -iA nixos-install-tools + + # for github test run, use chroot and loop devices + DISK="$(losetup --all| grep nixos | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + + # if there is no loopdev, then we are using qemu virtualized test + # run, use sata disks instead + if test -z "${DISK}"; then + DISK=$(find /dev/disk/by-id -type l | grep -v DVD-ROM | grep -v -- -part | xargs -t -I '{}' printf '{} ') + fi + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. 
+ + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 1MiB 4GiB \ + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + set 1 esp on \ + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + +#. Setup temporary encrypted swap for this installation only. This is + useful if the available memory is small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3 + mkswap /dev/mapper/"${i##*/}"-part3 + swapon /dev/mapper/"${i##*/}"-part3 + done + + +#. **LUKS only**: Setup encrypted LUKS container for root pool:: + + for i in ${DISK}; do + # see PASSPHRASE PROCESSING section in cryptsetup(8) + printf "YOUR_PASSWD" | cryptsetup luksFormat --type luks2 "${i}"-part2 - + printf "YOUR_PASSWD" | cryptsetup luksOpen "${i}"-part2 luks-rpool-"${i##*/}"-part2 - + done + +#. Create root pool + + - Unencrypted + + .. code-block:: sh + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=none \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + + - LUKS encrypted + + :: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=none \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '/dev/mapper/luks-rpool-%s ' "${i##*/}-part2"; + done) + + If not using a multi-disk setup, remove ``mirror``. + +#. Create root system container: + + :: + + zfs create -o canmount=noauto -o mountpoint=legacy rpool/root + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o mountpoint=legacy rpool/home + mount -o X-mount.mkdir -t zfs rpool/root "${MNT}" + mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home + +#. Format and mount ESP. Only one of them is used as /boot, you need to set up mirroring afterwards + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + done + + for i in ${DISK}; do + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot + break + done + + +System Configuration +--------------------------- + +#. Generate system configuration:: + + nixos-generate-config --root "${MNT}" + +#. Edit system configuration: + + .. code-block:: sh + + nano "${MNT}"/etc/nixos/hardware-configuration.nix + +#. Set networking.hostId: + + .. code-block:: sh + + networking.hostId = "abcd1234"; + +#. If using LUKS, add the output from following command to system + configuration + + .. 
code-block:: sh + + tee <`__ on `Libera Chat +`__. + +If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @ne9z +`__. + +Installation +------------ + +Note: this is for installing ZFS on an existing +NixOS installation. To use ZFS as root file system, +see below. + +NixOS live image ships with ZFS support by default. + +Note that you need to apply these settings even if you don't need +to boot from ZFS. The kernel module 'zfs.ko' will not be available +to modprobe until you make these changes and reboot. + +#. Edit ``/etc/nixos/configuration.nix`` and add the following + options:: + + boot.supportedFilesystems = [ "zfs" ]; + boot.zfs.forceImportRoot = false; + networking.hostId = "yourHostId"; + + Where hostID can be generated with:: + + head -c4 /dev/urandom | od -A none -t x4 + +#. Apply configuration changes:: + + nixos-rebuild boot + +#. Reboot:: + + reboot + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +Contribute +---------- + +You can contribute to this documentation. Fork this repo, edit the +documentation, then opening a pull request. + +#. To test your changes locally, use the devShell in this repo:: + + git clone https://github.com/ne9z/nixos-live openzfs-docs-dev + cd openzfs-docs-dev + nix develop ./openzfs-docs-dev/#docs + +#. Inside the openzfs-docs repo, build pages:: + + make html + +#. Look for errors and warnings in the make output. If there is no + errors:: + + xdg-open _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a + pull request. Mention @ne9z. diff --git a/_sources/Getting Started/RHEL and CentOS.rst.txt b/_sources/Getting Started/RHEL and CentOS.rst.txt new file mode 100644 index 000000000..afe0ef95b --- /dev/null +++ b/_sources/Getting Started/RHEL and CentOS.rst.txt @@ -0,0 +1,6 @@ +:orphan: + +RHEL and CentOS +======================= + +This page has been moved to `RHEL-based distro `__. diff --git a/_sources/Getting Started/RHEL-based distro/Root on ZFS.rst.txt b/_sources/Getting Started/RHEL-based distro/Root on ZFS.rst.txt new file mode 100644 index 000000000..a926f98ba --- /dev/null +++ b/_sources/Getting Started/RHEL-based distro/Root on ZFS.rst.txt @@ -0,0 +1,529 @@ +.. highlight:: sh + +.. ifconfig:: zfs_root_test + + # For the CI/CD test run of this guide, + # Enable verbose logging of bash shell and fail immediately when + # a commmand fails. + set -vxeuf + distro=${1} + + cp /etc/resolv.conf ./"rootfs-${distro}"/etc/resolv.conf + arch-chroot ./"rootfs-${distro}" sh <<-'ZFS_ROOT_GUIDE_TEST' + + set -vxeuf + + # install alpine setup scripts + apk update + apk add alpine-conf curl + +.. In this document, there are three types of code-block markups: + ``::`` are commands intended for both the vm test and the users + ``.. ifconfig:: zfs_root_test`` are commands intended only for vm test + ``.. code-block:: sh`` are commands intended only for users + +Rocky Linux Root on ZFS +======================================= + +**ZFSBootMenu** + +`ZFSBootMenu `__ is an alternative bootloader +free of such limitations and has support for boot environments. Do not +follow instructions on this page if you plan to use ZBM, +as the layouts are not compatible. Refer +to their site for installation details. + +**Customization** + +Unless stated otherwise, it is not recommended to customize system +configuration before reboot. + +**Only use well-tested pool features** + +You should only use well-tested pool features. 
Avoid using new features if data integrity is paramount. See, for example, `this comment `__. + +**UEFI support only** + +Only UEFI is supported by this guide. + +Preparation +--------------------------- + +#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled. +#. Because the kernel of latest Live CD might be incompatible with + ZFS, we will use Alpine Linux Extended, which ships with ZFS by + default. + + Download latest extended variant of `Alpine Linux + live image + `__, + verify `checksum `__ + and boot from it. + + .. code-block:: sh + + gpg --auto-key-retrieve --keyserver hkps://keyserver.ubuntu.com --verify alpine-extended-*.asc + + dd if=input-file of=output-file bs=1M + + .. ifconfig:: zfs_root_test + + # check whether the download page exists + # alpine version must be in sync with ci/cd test chroot tarball + +#. Login as root user. There is no password. +#. Configure Internet + + .. code-block:: sh + + setup-interfaces -r + # You must use "-r" option to start networking services properly + # example: + network interface: wlan0 + WiFi name: + ip address: dhcp + + manual netconfig: n + +#. If you are using wireless network and it is not shown, see `Alpine + Linux wiki + `__ for + further details. ``wpa_supplicant`` can be installed with ``apk + add wpa_supplicant`` without internet connection. + +#. Configure SSH server + + .. code-block:: sh + + setup-sshd + # example: + ssh server: openssh + allow root: "prohibit-password" or "yes" + ssh key: "none" or "" + +#. Set root password or ``/root/.ssh/authorized_keys``. + +#. Connect from another computer + + .. code-block:: sh + + ssh root@192.168.1.91 + +#. Configure NTP client for time synchronization + + .. code-block:: sh + + setup-ntp busybox + + .. ifconfig:: zfs_root_test + + # this step is unnecessary for chroot and returns 1 when executed + +#. Set up apk-repo. A list of available mirrors is shown. + Press space bar to continue + + .. code-block:: sh + + setup-apkrepos + + +#. Throughout this guide, we use predictable disk names generated by + udev + + .. code-block:: sh + + apk update + apk add eudev + setup-devd udev + + .. ifconfig:: zfs_root_test + + # for some reason, udev is extremely slow in chroot + # it is not needed for chroot anyway. so, skip this step + +#. Target disk + + List available disks with + + .. code-block:: sh + + find /dev/disk/by-id/ + + If virtio is used as disk bus, power off the VM and set serial numbers for disk. + For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``. + For libvirt, edit domain XML. See `this page + `__ for examples. + + Declare disk array + + .. code-block:: sh + + DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR' + + For single disk installation, use + + .. code-block:: sh + + DISK='/dev/disk/by-id/disk1' + + .. ifconfig:: zfs_root_test + + # for github test run, use chroot and loop devices + DISK="$(losetup -a| grep rhel | cut -f1 -d: | xargs -t -I '{}' printf '{} ')" + +#. Set a mount point + :: + + MNT=$(mktemp -d) + +#. Set partition size: + + Set swap size in GB, set to 1 if you don't want swap to + take up too much space + + .. code-block:: sh + + SWAPSIZE=4 + + .. ifconfig:: zfs_root_test + + # For the test run, use 1GB swap space to avoid hitting CI/CD + # quota + SWAPSIZE=1 + + Set how much space should be left at the end of the disk, minimum 1GB + + :: + + RESERVE=1 + +#. Install ZFS support from live media:: + + apk add zfs + +#. 
Install partition tool + :: + + apk add parted e2fsprogs cryptsetup util-linux + +System Installation +--------------------------- + +#. Partition the disks. + + Note: you must clear all existing partition tables and data structures from target disks. + + For flash-based storage, this can be done by the blkdiscard command below: + :: + + partition_disk () { + local disk="${1}" + blkdiscard -f "${disk}" || true + + parted --script --align=optimal "${disk}" -- \ + mklabel gpt \ + mkpart EFI 1MiB 4GiB \ + mkpart rpool 4GiB -$((SWAPSIZE + RESERVE))GiB \ + mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \ + set 1 esp on \ + + partprobe "${disk}" + } + + for i in ${DISK}; do + partition_disk "${i}" + done + + .. ifconfig:: zfs_root_test + + :: + + # When working with GitHub chroot runners, we are using loop + # devices as installation target. However, the alias support for + # loop device was just introduced in March 2023. See + # https://github.com/systemd/systemd/pull/26693 + # For now, we will create the aliases maunally as a workaround + looppart="1 2 3 4 5" + for i in ${DISK}; do + for j in ${looppart}; do + if test -e "${i}p${j}"; then + ln -s "${i}p${j}" "${i}-part${j}" + fi + done + done + + +#. Setup temporary encrypted swap for this installation only. This is + useful if the available memory is small:: + + for i in ${DISK}; do + cryptsetup open --type plain --key-file /dev/random "${i}"-part3 "${i##*/}"-part3 + mkswap /dev/mapper/"${i##*/}"-part3 + swapon /dev/mapper/"${i##*/}"-part3 + done + + +#. Load ZFS kernel module + + .. code-block:: sh + + modprobe zfs + +#. Create root pool + + - Unencrypted:: + + # shellcheck disable=SC2046 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -R "${MNT}" \ + -O acltype=posixacl \ + -O canmount=off \ + -O dnodesize=auto \ + -O normalization=formD \ + -O relatime=on \ + -O xattr=sa \ + -O mountpoint=none \ + rpool \ + mirror \ + $(for i in ${DISK}; do + printf '%s ' "${i}-part2"; + done) + +#. Create root system container: + + :: + + # dracut demands system root dataset to have non-legacy mountpoint + zfs create -o canmount=noauto -o mountpoint=/ rpool/root + + Create system datasets, + manage mountpoints with ``mountpoint=legacy`` + :: + + zfs create -o mountpoint=legacy rpool/home + zfs mount rpool/root + mount -o X-mount.mkdir -t zfs rpool/home "${MNT}"/home + +#. Format and mount ESP. Only one of them is used as /boot, you need to set up mirroring afterwards + :: + + for i in ${DISK}; do + mkfs.vfat -n EFI "${i}"-part1 + done + + for i in ${DISK}; do + mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1,X-mount.mkdir "${i}"-part1 "${MNT}"/boot + break + done + +System Configuration +--------------------------- + +#. Download and extract minimal Rhel root filesystem:: + + apk add curl + curl --fail-early --fail -L \ + https://dl.rockylinux.org/vault/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz \ + -o rootfs.tar.gz + curl --fail-early --fail -L \ + https://dl.rockylinux.org/vault/rocky/9.2/images/x86_64/Rocky-9-Container-Base-9.2-20230513.0.x86_64.tar.xz.CHECKSUM \ + -o checksum + + # BusyBox sha256sum treats all lines in the checksum file + # as checksums and requires two spaces " " + # between filename and checksum + + grep 'Container-Base' checksum \ + | grep '^SHA256' \ + | sed -E 's|.*= ([a-z0-9]*)$|\1 rootfs.tar.gz|' > ./sha256checksum + + sha256sum -c ./sha256checksum + + tar x -C "${MNT}" -af rootfs.tar.gz + +#. Enable community repo + + .. 
code-block:: sh + + sed -i '/edge/d' /etc/apk/repositories + sed -i -E 's/#(.*)community/\1community/' /etc/apk/repositories + +#. Generate fstab:: + + apk add arch-install-scripts + genfstab -t PARTUUID "${MNT}" \ + | grep -v swap \ + | sed "s|vfat.*rw|vfat rw,x-systemd.idle-timeout=1min,x-systemd.automount,noauto,nofail|" \ + > "${MNT}"/etc/fstab + +#. Chroot + + .. code-block:: sh + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash + + .. ifconfig:: zfs_root_test + + cp /etc/resolv.conf "${MNT}"/etc/resolv.conf + for i in /dev /proc /sys; do mkdir -p "${MNT}"/"${i}"; mount --rbind "${i}" "${MNT}"/"${i}"; done + chroot "${MNT}" /usr/bin/env DISK="${DISK}" bash <<-'ZFS_ROOT_NESTED_CHROOT' + + set -vxeuf + +#. Unset all shell aliases, which can interfere with installation:: + + unalias -a + +#. Install base packages + + .. code-block:: sh + + dnf -y install --allowerasing @core kernel-core + + .. ifconfig:: zfs_root_test + + # skip installing firmware in test + dnf -y install --allowerasing --setopt=install_weak_deps=False \ + @core kernel-core + +#. Install ZFS packages:: + + dnf install -y https://zfsonlinux.org/epel/zfs-release-2-3"$(rpm --eval "%{dist}"|| true)".noarch.rpm + dnf config-manager --disable zfs + dnf config-manager --enable zfs-kmod + dnf install -y zfs zfs-dracut + +#. Add zfs modules to dracut:: + + echo 'add_dracutmodules+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + echo 'force_drivers+=" zfs "' >> /etc/dracut.conf.d/zfs.conf + +#. Add other drivers to dracut:: + + if grep mpt3sas /proc/modules; then + echo 'force_drivers+=" mpt3sas "' >> /etc/dracut.conf.d/zfs.conf + fi + if grep virtio_blk /proc/modules; then + echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf + fi + +#. Build initrd:: + + find -D exec /lib/modules -maxdepth 1 \ + -mindepth 1 -type d \ + -exec sh -vxc \ + 'if test -e "$1"/modules.dep; + then kernel=$(basename "$1"); + dracut --verbose --force --kver "${kernel}"; + fi' sh {} \; + +#. For SELinux, relabel filesystem on reboot:: + + fixfiles -F onboot + +#. Generate host id:: + + zgenhostid -f -o /etc/hostid + +#. Install locale package, example for English locale:: + + dnf install -y glibc-minimal-langpack glibc-langpack-en + +#. Set locale, keymap, timezone, hostname + + :: + + rm -f /etc/localtime + systemd-firstboot \ + --force \ + --locale=en_US.UTF-8 \ + --timezone=Etc/UTC \ + --hostname=testhost \ + --keymap=us + +#. Set root passwd + :: + + printf 'root:yourpassword' | chpasswd + +Bootloader +--------------------------- + +#. Install rEFInd boot loader:: + + # from http://www.rodsbooks.com/refind/getting.html + # use Binary Zip File option + curl -L http://sourceforge.net/projects/refind/files/0.14.0.2/refind-bin-0.14.0.2.zip/download --output refind.zip + + dnf install -y unzip + unzip refind.zip + mkdir -p /boot/EFI/BOOT + find ./refind-bin-0.14.0.2/ -name 'refind_x64.efi' -print0 \ + | xargs -0I{} mv {} /boot/EFI/BOOT/BOOTX64.EFI + rm -rf refind.zip refind-bin-0.14.0.2 + +#. Add boot entry:: + + tee -a /boot/refind-linux.conf <`__. + :: + + umount -Rl "${MNT}" + zfs snapshot -r rpool@initial-installation + +#. Export all pools + + .. code-block:: sh + + zpool export -a + + .. ifconfig:: zfs_root_test + + # we are now inside a chroot, where the export will fail + # export pools when we are outside chroot + +#. Reboot + + .. code-block:: sh + + reboot + + .. 
ifconfig:: zfs_root_test + + # chroot ends here + ZFS_ROOT_GUIDE_TEST + +Post installaion +--------------------------- + +#. Install package groups + + .. code-block:: sh + + dnf group list --hidden -v # query package groups + dnf group install gnome-desktop + +#. Add new user, configure swap. + +#. Mount other EFI system partitions then set up a service for syncing + their contents. diff --git a/_sources/Getting Started/RHEL-based distro/index.rst.txt b/_sources/Getting Started/RHEL-based distro/index.rst.txt new file mode 100644 index 000000000..edf553070 --- /dev/null +++ b/_sources/Getting Started/RHEL-based distro/index.rst.txt @@ -0,0 +1,181 @@ +RHEL-based distro +======================= + +Contents +-------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +`DKMS`_ and `kABI-tracking kmod`_ style packages are provided for x86_64 RHEL- +and CentOS-based distributions from the OpenZFS repository. These packages +are updated as new versions are released. Only the repository for the current +minor version of each current major release is updated with new packages. + +To simplify installation, a *zfs-release* package is provided which includes +a zfs.repo configuration file and public signing key. All official OpenZFS +packages are signed using this key, and by default yum or dnf will verify a +package's signature before allowing it be to installed. Users are strongly +encouraged to verify the authenticity of the OpenZFS public key using +the fingerprint listed here. + +| **Key location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-openzfs (previously -zfsonlinux) +| **Current release packages:** `EL7`_, `EL8`_, `EL9`_ +| **Archived release packages:** `see repo page `__ + +| **Signing key1 (EL8 and older, Fedora 36 and older)** + `pgp.mit.edu `__ / + `direct link `__ +| **Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620 + +| **Signing key2 (EL9+, Fedora 37+)** + `pgp.mit.edu `__ / + `direct link `__ +| **Fingerprint:** 7DC7 299D CF7C 7FD9 CD87 701B A599 FD5E 9DB8 4141 + +For EL7 run:: + + yum install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm + +and for EL8 and 9:: + + dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm + +After installing the *zfs-release* package and verifying the public key +users can opt to install either the DKMS or kABI-tracking kmod style packages. +DKMS packages are recommended for users running a non-distribution kernel or +for users who wish to apply local customizations to OpenZFS. For most users +the kABI-tracking kmod packages are recommended in order to avoid needing to +rebuild OpenZFS for every kernel update. + +DKMS +---- + +To install DKMS style packages issue the following commands. First add the +`EPEL repository`_ which provides DKMS by installing the *epel-release* +package, then the *kernel-devel* and *zfs* packages. Note that it is +important to make sure that the matching *kernel-devel* package is installed +for the running kernel since DKMS requires it to build OpenZFS. + +For EL6 and 7, separately run:: + + yum install -y epel-release + yum install -y kernel-devel + yum install -y zfs + +And for EL8 and newer, separately run:: + + dnf install -y epel-release + dnf install -y kernel-devel + dnf install -y zfs + +.. note:: + When switching from DKMS to kABI-tracking kmods first uninstall the + existing DKMS packages. This should remove the kernel modules for all + installed kernels, then the kABI-tracking kmods can be installed as + described in the section below. 
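After installing the DKMS packages, it can be worth confirming that the module was actually built for the running kernel and loads cleanly before you depend on it; a minimal check (a sketch, assuming the standard DKMS and OpenZFS command-line tools) is::

    dkms status | grep -i zfs
    modprobe zfs
    zfs version

If ``dkms status`` does not list a zfs module for the running kernel, double-check that the matching *kernel-devel* package is installed.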
+ +kABI-tracking kmod +------------------ + +By default the *zfs-release* package is configured to install DKMS style +packages so they will work with a wide range of kernels. In order to +install the kABI-tracking kmods the default repository must be switched +from *zfs* to *zfs-kmod*. Keep in mind that the kABI-tracking kmods are +only verified to work with the distribution-provided, non-Stream kernel. + +For EL6 and 7 run:: + + yum-config-manager --disable zfs + yum-config-manager --enable zfs-kmod + yum install zfs + +And for EL8 and newer:: + + dnf config-manager --disable zfs + dnf config-manager --enable zfs-kmod + dnf install zfs + +By default the OpenZFS kernel modules are automatically loaded when a ZFS +pool is detected. If you would prefer to always load the modules at boot +time you can create such configuration in ``/etc/modules-load.d``:: + + echo zfs >/etc/modules-load.d/zfs.conf + +.. note:: + When updating to a new EL minor release the existing kmod + packages may not work due to upstream kABI changes in the kernel. + The configuration of the current release package may have already made an + updated package available, but the package manager may not know to install + that package if the version number isn't newer. When upgrading, users + should verify that the *kmod-zfs* package is providing suitable kernel + modules, reinstalling the *kmod-zfs* package if necessary. + +Previous minor EL releases +-------------------------- + +The current release package uses `"${releasever}"` rather than specify a particular +minor release as previous release packages did. Typically `"${releasever}"` will +resolve to just the major version (e.g. `8`), and the resulting repository URL +will be aliased to the current minor version (e.g. `8.7`), but you can specify +`--releasever` to use previous repositories. :: + + [vagrant@localhost ~]$ dnf list available --showduplicates kmod-zfs + Last metadata expiration check: 0:00:08 ago on tor 31 jan 2023 17:50:05 UTC. + Available Packages + kmod-zfs.x86_64 2.1.6-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.7-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.8-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.9-1.el8 zfs-kmod + [vagrant@localhost ~]$ dnf list available --showduplicates --releasever=8.6 kmod-zfs + Last metadata expiration check: 0:16:13 ago on tor 31 jan 2023 17:34:10 UTC. + Available Packages + kmod-zfs.x86_64 2.1.4-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.5-1.el8 zfs-kmod + kmod-zfs.x86_64 2.1.5-2.el8 zfs-kmod + kmod-zfs.x86_64 2.1.6-1.el8 zfs-kmod + [vagrant@localhost ~]$ + +In the above example, the former packages were built for EL8.7, and the latter for EL8.6. + +Testing Repositories +-------------------- + +In addition to the primary *zfs* repository a *zfs-testing* repository +is available. This repository, which is disabled by default, contains +the latest version of OpenZFS which is under active development. These +packages are made available in order to get feedback from users regarding +the functionality and stability of upcoming releases. These packages +**should not** be used on production systems. Packages from the testing +repository can be installed as follows. + +For EL6 and 7 run:: + + yum-config-manager --enable zfs-testing + yum install kernel-devel zfs + +And for EL8 and newer:: + + dnf config-manager --enable zfs-testing + dnf install kernel-devel zfs + +.. note:: + Use *zfs-testing* for DKMS packages and *zfs-testing-kmod* + for kABI-tracking kmod packages. + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * + +.. 
_kABI-tracking kmod: https://elrepoproject.blogspot.com/2016/02/kabi-tracking-kmod-packages.html +.. _DKMS: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support +.. _EL7: https://zfsonlinux.org/epel/zfs-release-2-3.el7.noarch.rpm +.. _EL8: https://zfsonlinux.org/epel/zfs-release-2-3.el8.noarch.rpm +.. _EL9: https://zfsonlinux.org/epel/zfs-release-2-3.el9.noarch.rpm +.. _EPEL repository: https://fedoraproject.org/wiki/EPEL diff --git a/_sources/Getting Started/Slackware/Root on ZFS.rst.txt b/_sources/Getting Started/Slackware/Root on ZFS.rst.txt new file mode 100644 index 000000000..821f7933e --- /dev/null +++ b/_sources/Getting Started/Slackware/Root on ZFS.rst.txt @@ -0,0 +1,235 @@ +Slackware Root on ZFS +===================== + +This page shows some possible ways to configure Slackware to use zfs for the root filesystem. + +There are countless different ways to achieve such setup, particularly with the flexibility that zfs allows. We'll show only a simple recipe and give pointers for further customization. + +Kernel considerations +--------------------- + +For this mini-HOWTO we'll be using the generic kernel and customize the stock initrd. + +If you use the huge kernel, you may want to switch to the generic kernel first, and install both the kernel-generic and mkinitrd packages. This makes things easier since we'll need an initrd. + +If you absolutely do not want to use an initrd, see "Other options" further down. + + +The problem space +----------------- + +In order to have the root filesystem on zfs, two problems need to be addressed: + +#. The boot loader needs to be able to load the kernel and its initrd. + +#. The kernel (or, rather, the initrd) needs to be able to mount the zfs root filesystem and run /sbin/init. + +The second problem is relatively easy to deal with, and only requires slight modifications to the default Slackware initrd scripts. + +For the first problem, however, a variety of scenarios are possible; on a PC, for example, you might be booting: + +#. In UEFI mode, via an additional bootloader like elilo: here, the kernel and its initrd are on (read: have been copied to) the ESP, and the additional bootloader doesn't need to understand zfs. + +#. In UEFI mode, by booting the kernel straight from the firmware. All Slackware kernels are built with EFI_STUB=Y, so if you copy your kernel and initrd to the ESP and configure a boot entry with efibootmgr, you are all set (note that the kernel image must have a .efi extension). + +#. In legacy BIOS mode, using lilo or grub or similar: lilo doesn't understand zfs and even the latest grub understands it with some limitations (for example, no zstd compression). If you're stuck with legacy BIOS mode, the best option is to put /boot on a separate partition that your loader understands (for example, ext4). + +If you are not using a PC, things will likely be quite different, so refer to relevant hardware documentation for your platform; on a Raspberry PI, for example, the firmware loads kernel and initrd from a FAT32 partition, so the situation is similar to a PC booting in UEFI mode. + +The simplest setup, discussed in this recipe, is the one using UEFI. As said above, if you boot in legacy BIOS mode, you will have to ensure that the boot loader of your choice can load the kernel image. + + +Partition layout +---------------- + +Repartitioning an existing system disk in order to make room for a zfs root partition is left as an exercise to the reader (there's nothing specific to zfs). 
+ +As a pointer: if you're starting from a whole-disk ext4 filesystem, you could use resize2fs to shrink it to half of disk size and then relocate it to the second half of the disk with sfdisk. After that, you could create a ZFS partition before it, and copy stuff across using cp or rsync. This approach has the benefit of providing some kind of recovery mechanism in case stuff goes wrong. When you are happy about the final setup, you can then delete the ext4 partition and enlarge the ZFS one. + +In any case you will want to have a rescue cdrom at hand, and one that supports zfs out of the box. A Ubuntu live CD will do. + +For this recipe, we'll be assuming that we're booting in UEFI mode and there's a single disk configured like this: + +.. code-block:: sh + + /dev/sda1 # EFI system partition + /dev/sda2 # zfs pool (contains the "root" filesystem) + +.. + +Since we are creating a zpool inside a disk partition (as opposed to using up a whole disk), make sure that the partition type is set correctly (for GPT, 54 or 67 are good choices). + +When creating the zfs filesystem, you will want to set "mountpoint=legacy" so that the filesystem can be mounted with "mount" in a traditional way; Slackware startup and shutdown scripts expect that. + +Back to our recipe, this is a working example: + +.. code-block:: sh + + zpool create -o ashift=12 -O mountpoint=none tank /dev/sda2 + zfs create -o mountpoint=legacy -o compression=zstd tank/root + # add more as needed: + # zfs create -o mountpoint=legacy [..] tank/home + # zfs create -o mountpoint=legacy [..] tank/usr + # zfs create -o mountpoint=legacy [..] tank/opt + +.. + +Tweak options to taste; while "mountpoint=legacy" is required for the root filesystem, it is not required for any additional filesystems. In the example above we applied it to all of them, but that's a matter of personal preference, as is setting "mountpoint=none" on the pool itself so it's not mounted anywhere by default (do note that zpool's "mountpoint=none" wants an uppercase "-O"). + +You can check your setup with: + +.. code-block:: sh + + zpool list + zfs list + +.. + +Then, adjust /etc/fstab to something like this: + +.. code-block:: sh + + tank/root / zfs defaults 0 0 + # add more as needed: + # tank/home /home zfs defaults 0 0 + # tank/usr /usr zfs defaults 0 0 + # tank/opt /opt zfs defaults 0 0 + +.. + +This allow us to mount and umount them as usual, once we have imported the pool with "zpool import tank". Which leads us to... + + +Patch and rebuild the initrd +---------------------------- + +Since we're using the generic kernel, we already have a usable /boot/initrd-tree/ (if you don't, prepare one by running mkinitrd once). + +Copy the zfs userspace tools to it (/sbin/zfs isn't strictly necessary, but may be handy for rescuing a system that refuses to boot): + +.. code-block:: sh + + install -m755 /sbin/zpool /sbin/zfs /boot/initrd-tree/sbin/ + +.. + +Modify /boot/initrd-tree/init; locate the first "case" statement that sets ROOTDEV; it reads: + +.. code-block:: sh + + root=/dev/*) + ROOTDEV=$(echo $ARG | cut -f2 -d=) + ;; + root=LABEL=*) + ROOTDEV=$(echo $ARG | cut -f2- -d=) + ;; + root=UUID=*) + ROOTDEV=$(echo $ARG | cut -f2- -d=) + ;; +.. + +Replace the three cases with: + +.. code-block:: sh + + root=*) + ROOTDEV=$(echo $ARG | cut -f2 -d=) + ;; + +.. + +This allows us to specify something like "root=tank/root" (if you look carefully at the script, you will notice that you can collapse the /dev/*, LABEL=*, UUID=* and the newly-added case into a single one). 
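If you do collapse them, the single case could look something like this (a sketch; ``cut -f2- -d=`` keeps everything after the first ``=``, so ``LABEL=`` and ``UUID=`` values still come through intact):

.. code-block:: sh

    root=*)
      ROOTDEV=$(echo $ARG | cut -f2- -d=)
      ;;

..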
+ +Further down in the script, locate the section that handles RESUMEDEV ("# Resume state from swap"), and insert the following just before it: + +.. code-block:: sh + + # Support for zfs root filesystem: + if [ x"$ROOTFS" = xzfs ]; then + POOL=${ROOTDEV%%/*} + echo "Importing zfs pool: $POOL" + zpool import -o cachefile=none -N $POOL + fi + +.. + +Finally, rebuild the initrd with something like: + +.. code-block:: sh + + mkinitrd -m zfs + +.. + +It may make sense to use the "-o" option and create an initrd.gz in a different file, just in case. Look at /boot/README.initrd for more details. + +Rebuilding the initrd should also copy in the necessary libraries (libzfs.so, etc.) under /lib/; verify it by running: + +.. code-block:: sh + + chroot /boot/initrd-tree /sbin/zpool --help + +.. + +When you're happy, remember to copy the new initrd.gz to the ESP partition. + +There are other ways to ensure that the zfs binaries and filesystem module are always built into the initrd - see man initrd. + + +Configure the boot loader +------------------------- + +Any of these three options will do: + +#. Append "rootfstype=zfs root=tank/root" to the boot loader configuration (e.g. elilo.conf or equivalent). +#. Modify /boot/initrd-tree/rootdev and /boot/initrd-tree/rootfs in the previous step, then rebuild the initrd. +#. When rebuilding the initrd, add "-f zfs -r tank/root". + +If you're using elilo, it should look something like this: + +.. code-block:: sh + + image=vmlinuz + label=linux + initrd=initrd.gz + append="root=tank/root rootfstype=zfs" + +.. + +Should go without saying, but doublecheck that the file referenced by initrd is the one you just generated (e.g. if you're using the ESP, make sure you copy the newly-built initrd to it). + + +Before rebooting +---------------- + +Make sure you have an emergency kernel around in case something goes wrong. +If you upgrade kernel or packages, make use of snapshosts. + + +Other options +------------- + +You can build zfs support right into the kernel. If you do so and do not want to use an initrd, you can embed a small initramfs in the kernel image that performs the "zpool import" step). + + +Snapshots and boot environments +------------------------------- + +The modifications above also allow you to create a clone of the root filesystem and boot into it; something like this should work: + +.. code-block:: sh + + zfs snapshot tank/root@mysnapshot + zfs clone tank/root@mysnapshot tank/root-clone + zfs set mountpoint=legacy tank/root-clone + zfs promote tank/root-clone + +.. + +Adjust boot parameters to mount "tank/root-clone" instead of "tank/root" (making a copy of the known-good kernel and initrd on the ESP is not a bad idea). + + +Support +------- + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at `#zfsonlinux `__ on `Libera Chat `__. If you have a bug report or feature request related to this HOWTO, please `file a new issue and mention @a-biardi `__. diff --git a/_sources/Getting Started/Slackware/index.rst.txt b/_sources/Getting Started/Slackware/index.rst.txt new file mode 100644 index 000000000..5b1e4bc74 --- /dev/null +++ b/_sources/Getting Started/Slackware/index.rst.txt @@ -0,0 +1,26 @@ +.. highlight:: sh + +Slackware +========= + +.. contents:: Table of Contents + :local: + +Installation +------------ + +In order to build and install the kernel modules and userspace tools, use the +openzfs SlackBuild script (for 15.0, it's at https://slackbuilds.org/repository/15.0/system/openzfs/). 
No special options are required. + + +Root on ZFS +----------- + +ZFS can be used as root file system for Slackware. +An installation guide is available here: + +.. toctree:: + :maxdepth: 1 + :glob: + + *Root on ZFS diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst.txt new file mode 100644 index 000000000..8fc3be062 --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst.txt @@ -0,0 +1,1032 @@ +.. highlight:: sh + +Ubuntu 18.04 Root on ZFS +======================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Ubuntu 20.04 Root on ZFS <./Ubuntu 20.04 Root on ZFS>` for new + installs. This guide is no longer receiving most updates. It continues + to exist for reference for existing installs that followed it. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `Ubuntu 18.04.3 ("Bionic") Desktop + CD `__ + (*not* any server images) +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” + drive) only works with UEFI booting. This not unique to ZFS. `GRUB + does not and will not work on 4Kn with legacy (BIOS) + booting. `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of +memory is recommended for normal performance in basic workloads. If you +wish to use deduplication, you will need `massive amounts of +RAM `__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports two different encryption options: unencrypted and +LUKS (full-disk encryption). With either option, all ZFS features are fully +available. ZFS native encryption is not available in Ubuntu 18.04. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to +the Internet as appropriate (e.g. join your WiFi network). Open a +terminal (press Ctrl-Alt-T). 
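**Hint:** The disk formatting steps below include separate commands for legacy (BIOS) and UEFI booting. If you are unsure which mode the live environment booted in, one way to check (the ``efi`` directory in sysfs only exists when booted via UEFI) is::

    [ -d /sys/firmware/efi ] && echo UEFI || echo "legacy BIOS"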
+ +1.2 Setup and update the repositories:: + + sudo apt-add-repository universe + sudo apt update + +1.3 Optional: Install and start the OpenSSH server in the Live CD +environment: + +If you have a second system, using SSH to access the target system can +be convenient:: + + passwd + # There is no current password; hit enter at that prompt. + sudo apt install --yes openssh-server + +**Hint:** You can find your IP address with +``ip addr show scope global | grep inet``. Then, from your main machine, +connect with ``ssh ubuntu@IP``. + +1.4 Become root:: + + sudo -i + +1.5 Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfs-initramfs + +Step 2: Disk Formatting +----------------------- + +2.1 Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + +Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the +``/dev/sd*`` device nodes directly can cause sporadic import failures, +especially on systems that have more than one storage pool. + +**Hints:** + +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. +- For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. +- When choosing a boot pool size, consider how you will use the space. A kernel + and initrd may consume around 100M. If you have multiple kernels and take + snapshots, you may find yourself low on boot pool space, especially if you + need to regenerate your initramfs images, which may be around 85M each. Size + your boot pool appropriately for your needs. + +2.2 If you are re-using a disk, clear it as necessary: + +If the disk was previously used in an MD array, zero the superblock:: + + apt install --yes mdadm + mdadm --zero-superblock --force $DISK + +Clear the partition table:: + + sgdisk --zap-all $DISK + +2.3 Partition your disk(s): + +Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + +Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + +Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + +Choose one of the following options: + +2.3a Unencrypted:: + + sgdisk -n4:0:0 -t4:BF01 $DISK + +2.3b LUKS:: + + sgdisk -n4:0:0 -t4:8300 $DISK + +If you are creating a mirror or raidz topology, repeat the partitioning +commands for all the disks which will be part of the pool. + +2.4 Create the boot pool:: + + zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt bpool ${DISK}-part3 + +You should not need to customize any of the options for the boot pool. + +GRUB does not support all of the zpool features. See +``spa_feature_names`` in +`grub-core/fs/zfs/zfs.c `__. 
+This step creates a separate boot pool for ``/boot`` with the features +limited to only those that GRUB supports, allowing the root pool to use +any/all features. Note that GRUB opens the pool read-only, so all +read-only compatible features are “supported” by GRUB. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + +**Feature Notes:** + +- As a read-only compatible feature, the ``userobj_accounting`` feature should + be compatible in theory, but in practice, GRUB can fail with an “invalid + dnode type” error. This feature does not matter for ``/boot`` anyway. + +2.5 Create the root pool: + +Choose one of the following options: + +2.5a Unencrypted:: + + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool ${DISK}-part4 + +2.5b LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 + +**Notes:** + +- The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you want to + tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. 
`__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ``xattr=sa`` is + `Linux-specific `__. + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. +- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. + +**Hints:** + +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +3.1 Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +On Solaris systems, the root filesystem is cloned and the suffix is +incremented for major system changes through ``pkg image-update`` or +``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with the +``zsys`` tool, though its dataset layout is more complicated. Even without +such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still be used +for manually created clones. + +3.2 Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu + zfs mount rpool/ROOT/ubuntu + + zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu + zfs mount bpool/BOOT/ubuntu + +With ZFS, it is not normally necessary to use a mount command (either +``mount`` or ``zfs mount``). This situation is an exception because of +``canmount=noauto``. + +3.3 Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + +The datasets below are optional, depending on your preferences and/or +software choices. 
+ +If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + +If you use /opt on this system:: + + zfs create rpool/opt + +If you use /srv on this system:: + + zfs create rpool/srv + +If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + +If this system will have games installed:: + + zfs create rpool/var/games + +If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + +If this system will use Snap packages:: + + zfs create rpool/var/snap + +If you use /var/www on this system:: + + zfs create rpool/var/www + +If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + +If this system will use Docker (which manages its own datasets & +snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + +If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + +A tmpfs is recommended later, but if you want a separate dataset for +``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + +The primary goal of this dataset layout is to separate the OS from user data. +This allows the root filesystem to be rolled back without rolling back user +data. The ``com.sun.auto-snapshot`` setting is used by some ZFS +snapshot utilities to exclude transient data. + +If you do nothing extra, ``/tmp`` will be stored as part of the root +filesystem. Alternatively, you can create a separate dataset for +``/tmp``, as shown above. This keeps the ``/tmp`` data out of snapshots +of your root filesystem. It also allows you to set a quota on +``rpool/tmp``, if you want to limit the maximum space used. Otherwise, +you can use a tmpfs (RAM filesystem) later. + +3.4 Install the minimal system:: + + debootstrap bionic /mnt + zfs set devices=off rpool + +The ``debootstrap`` command leaves the new system in an unconfigured +state. An alternative to using ``debootstrap`` is to copy the entirety +of a working system into the new ZFS root. + +Step 4: System Configuration +---------------------------- + +4.1 Configure the hostname: + +Replace ``HOSTNAME`` with the desired hostname:: + + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts + +.. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + +**Hint:** Use ``nano`` if you find ``vi`` confusing. + +4.2 Configure the network interface: + +Find the interface name:: + + ip addr show + +Adjust NAME below to match your interface name:: + + vi /mnt/etc/netplan/01-netcfg.yaml + +.. code-block:: yaml + + network: + version: 2 + ethernets: + NAME: + dhcp4: true + +Customize this file if the system is not a DHCP client. + +4.3 Configure the package sources:: + + vi /mnt/etc/apt/sources.list + +.. 
code-block:: sourceslist + + deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse + deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse + +4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and ``chroot`` into it:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login + +**Note:** This is using ``--rbind``, not ``--bind``. + +4.5 Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + apt update + +Even if you prefer a non-English system language, always ensure that +``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales + dpkg-reconfigure tzdata + +If you prefer ``nano`` over ``vi``, install it:: + + apt install --yes nano + +4.6 Install ZFS in the chroot environment for the new system:: + + apt install --yes --no-install-recommends linux-image-generic + apt install --yes zfs-initramfs + +**Hint:** For the HWE kernel, install ``linux-image-generic-hwe-18.04`` +instead of ``linux-image-generic``. + +4.7 For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + +The use of ``initramfs`` is a work-around for `cryptsetup does not support ZFS +`__. + +**Hint:** If you are creating a mirror or raidz topology, repeat the +``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +4.8 Install GRUB + +Choose one of the following options: + +4.8a Install GRUB for legacy (BIOS) booting:: + + apt install --yes grub-pc + +Select (using the space bar) all of the disks (not partitions) in your pool. + +4.8b Install GRUB for UEFI booting:: + + apt install dosfstools + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64-signed shim-signed + +**Notes:** + +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. +- For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +4.9 (Optional): Remove os-prober:: + + apt purge --yes os-prober + +This avoids error messages from `update-grub`. `os-prober` is only necessary +in dual-boot configurations. + +4.10 Set a root password:: + + passwd + +4.11 Enable importing bpool + +This ensures that ``bpool`` is always imported, regardless of whether +``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, +or whether ``zfs-import-scan.service`` is enabled. + +:: + + vi /etc/systemd/system/zfs-import-bpool.service + +.. 
code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + + [Install] + WantedBy=zfs-import.target + +:: + + systemctl enable zfs-import-bpool.service + +4.12 Optional (but recommended): Mount a tmpfs to ``/tmp`` + +If you chose to create a ``/tmp`` dataset above, skip this step, as they +are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a +tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + +:: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +4.13 Setup system groups:: + + addgroup --system lpadmin + addgroup --system sambashare + +Step 5: GRUB Installation +------------------------- + +5.1 Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +5.2 Refresh the initrd files:: + + update-initramfs -c -k all + +**Note:** When using LUKS, this will print “WARNING could not determine +root device from /etc/fstab”. This is because `cryptsetup does not +support ZFS +`__. + +5.3 Workaround GRUB's missing zpool-features support:: + + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu" + +5.4 Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + +Later, once the system has rebooted twice and you are sure everything is +working, you can undo these changes, if desired. + +5.5 Update the boot configuration:: + + update-grub + +**Note:** Ignore errors from ``osprober``, if present. + +5.6 Install the boot loader: + +5.6a For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + +Note that you are installing GRUB to the whole disk, not a partition. + +If you are creating a mirror or raidz topology, repeat the +``grub-install`` command for each disk in the pool. + +5.6b For UEFI booting, install GRUB:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + +It is not necessary to specify the disk here. If you are creating a +mirror or raidz topology, the additional disks will be handled later. + +5.7 Fix filesystem mount ordering: + +`Until ZFS gains a systemd mount +generator `__, there are +races between mounting filesystems and starting certain daemons. In +practice, the issues (e.g. +`#5754 `__) seem to be +with certain filesystems in ``/var``, specifically ``/var/log`` and +``/var/tmp``. Setting these to use ``legacy`` mounting, and listing them +in ``/etc/fstab`` makes systemd aware that these are separate +mountpoints. In turn, ``rsyslog.service`` depends on ``var-log.mount`` +by way of ``local-fs.target`` and services using the ``PrivateTmp`` +feature of systemd automatically use ``After=var-tmp.mount``. + +Until there is support for mounting ``/boot`` in the initramfs, we also +need to mount that, because it was marked ``canmount=noauto``. Also, +with UEFI, we need to ensure it is mounted before its child filesystem +``/boot/efi``. + +``rpool`` is guaranteed to be imported by the initramfs, so there is no +point in adding ``x-systemd.requires=zfs-import.target`` to those +filesystems. 
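+
+If you want to see exactly which datasets are affected before switching
+them to ``legacy`` mounting, you can review the current settings first
+(pool names as used in this guide)::
+
+   zfs get -r -t filesystem canmount,mountpoint bpool rpool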
+ +For UEFI booting, unmount /boot/efi first:: + + umount /boot/efi + +Everything else applies to both BIOS and UEFI booting:: + + zfs set mountpoint=legacy bpool/BOOT/ubuntu + echo bpool/BOOT/ubuntu /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab + + zfs set mountpoint=legacy rpool/var/log + echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab + + zfs set mountpoint=legacy rpool/var/spool + echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab + +If you created a /var/tmp dataset:: + + zfs set mountpoint=legacy rpool/var/tmp + echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab + +If you created a /tmp dataset:: + + zfs set mountpoint=legacy rpool/tmp + echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab + +Step 6: First Boot +------------------ + +6.1 Snapshot the initial installation:: + + zfs snapshot bpool/BOOT/ubuntu@install + zfs snapshot rpool/ROOT/ubuntu@install + +In the future, you will likely want to take snapshots before each +upgrade, and remove old snapshots (including this one) at some point to +save space. + +6.2 Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +6.3 Run these commands in the LiveCD environment to unmount all +filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a + +6.4 Reboot:: + + reboot + +Wait for the newly installed system to boot normally. Login as root. + +6.5 Create a user account: + +Replace ``username`` with your desired username:: + + zfs create rpool/home/username + adduser username + + cp -a /etc/skel/. /home/username + chown -R username:username /home/username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username + +6.6 Mirror GRUB + +If you installed to multiple disks, install GRUB on the additional +disks: + +6.6a For legacy (BIOS) booting:: + + dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + +6.6b For UEFI booting:: + + umount /boot/efi + +For the second and subsequent disks (increment ubuntu-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi' + + mount /boot/efi + +Step 7: (Optional) Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. This issue is currently being investigated in: +`https://github.com/zfsonlinux/zfs/issues/7734 `__ + +7.1 Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + +You can adjust the size (the ``4G`` part) to your needs. + +The compression algorithm is set to ``zle`` because it is the cheapest +available algorithm. As this guide recommends ``ashift=12`` (4 kiB +blocks on disk), the common case of a 4 kiB page size means that no +compression algorithm can reduce I/O. The exception is all-zero pages, +which are dropped by ZFS; but some form of compression has to be enabled +to get this behavior. 
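+
+If you want to confirm that the volume was created with the intended
+properties before putting swap on it, a quick check (dataset name as
+created above) is::
+
+   zfs get volsize,compression,logbias,sync,primarycache,secondarycache rpool/swap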
+ +7.2 Configure the swap device: + +**Caution**: Always use long ``/dev/zvol`` aliases in configuration +files. Never use a short ``/dev/zdX`` device name. + +:: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + +The ``RESUME=none`` is necessary to disable resuming from hibernation. +This does not work, as the zvol is not present (because the pool has not +yet been imported) at the time the resume script runs. If it is not +disabled, the boot process hangs for 30 seconds waiting for the swap +zvol to appear. + +7.3 Enable the swap device:: + + swapon -av + +Step 8: Full Software Installation +---------------------------------- + +8.1 Upgrade the minimal system:: + + apt dist-upgrade --yes + +8.2 Install a regular set of software: + +Choose one of the following options: + +8.2a Install a command-line environment only:: + + apt install --yes ubuntu-standard + +8.2b Install a full GUI environment:: + + apt install --yes ubuntu-desktop + vi /etc/gdm3/custom.conf + # In the [daemon] section, add: InitialSetupEnable=false + +**Hint**: If you are installing a full GUI environment, you will likely +want to manage your network with NetworkManager:: + + rm /mnt/etc/netplan/01-netcfg.yaml + vi /etc/netplan/01-network-manager-all.yaml + +.. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +8.3 Optional: Disable log compression: + +As ``/var/log`` is already compressed by ZFS, logrotate’s compression is +going to burn CPU and disk I/O for (in most cases) very little gain. +Also, if you are making snapshots of ``/var/log``, logrotate’s +compression will actually waste space, as the uncompressed data will +live on in the snapshot. You can edit the files in ``/etc/logrotate.d`` +by hand to comment out ``compress``, or use this loop (copy-and-paste +highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +8.4 Reboot:: + + reboot + +Step 9: Final Cleanup +--------------------- + +9.1 Wait for the system to boot normally. Login using the account you +created. Ensure the system (including networking) works normally. + +9.2 Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/ubuntu@install + sudo zfs destroy rpool/ROOT/ubuntu@install + +9.3 Optional: Disable the root password:: + + sudo usermod -p '*' root + +9.4 Optional: Re-enable the graphical boot process: + +If you prefer the graphical boot process, you can re-enable it now. If +you are using LUKS, it makes the prompt look nicer. + +:: + + sudo vi /etc/default/grub + # Uncomment: GRUB_TIMEOUT_STYLE=hidden + # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT + # Comment out: GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + +**Note:** Ignore errors from ``osprober``, if present. + +9.5 Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + +Store that backup somewhere safe (e.g. cloud storage). It is protected +by your LUKS passphrase, but you may wish to use additional encryption. + +**Hint:** If you created a mirror or raidz topology, repeat this for +each LUKS volume (``luks2``, etc.). 
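+
+**Hint:** Once you are satisfied that everything works, this is also a
+convenient point to take a fresh set of snapshots to fall back on; the
+snapshot name below is only a suggestion::
+
+   sudo zfs snapshot bpool/BOOT/ubuntu@configured
+   sudo zfs snapshot rpool/ROOT/ubuntu@configured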
+ +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install +Environment <#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs mount rpool/ROOT/ubuntu + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a + reboot + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that +does slow asynchronous drive initialization, like some IBM M1015 or +OEM-branded cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to +the Linux kernel until after the regular system is started, and ZoL does +not hotplug pool members. See +`https://github.com/zfsonlinux/zfs/issues/330 `__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run +``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit +this error message. + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst.txt new file mode 100644 index 000000000..076eee0dd --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst.txt @@ -0,0 +1,869 @@ +.. highlight:: sh + +Ubuntu 20.04 Root on ZFS for Raspberry Pi +========================================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Newer release available +~~~~~~~~~~~~~~~~~~~~~~~ + +- See :doc:`Ubuntu 22.04 Root on ZFS for Raspberry Pi + <./Ubuntu 22.04 Root on ZFS for Raspberry Pi>` for new installs. This guide + is no longer receiving most updates. 
It continues to exist for reference + for existing installs that followed it. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- A Raspberry Pi 4 B. (If you are looking to install on a regular PC, see + :doc:`Ubuntu 20.04 Root on ZFS`.) +- `Ubuntu Server 20.04.4 (“Focal”) for Raspberry Pi 4 + `__ +- A microSD card or USB disk. For microSD card recommendations, see Jeff + Geerling's `performance comparison + `__. + When using a USB enclosure, `ensure it supports UASP + `__. +- An Ubuntu system (with the ability to write to the microSD card or USB disk) + other than the target Raspberry Pi. + +4 GiB of memory is recommended. Do not use deduplication, as it needs `massive +amounts of RAM `__. +Enabling deduplication is a permanent change that cannot be easily reverted. + +A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +**WARNING:** Encryption has not yet been tested on the Raspberry Pi. + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +USB Disks +~~~~~~~~~ + +The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards: + +#. The Raspberry Pi's Bootloader EEPROM must be dated 2020-09-03 or later. 
+ + To check the bootloader version, power up the Raspberry Pi without an SD + card inserted or a USB boot device attached; the date will be on the + ``bootloader`` line. (If you do not see the ``bootloader`` line, the + bootloader is too old.) Alternatively, run ``sudo rpi-eeprom-update`` + on an existing OS on the Raspberry Pi (which on Ubuntu requires + ``apt install rpi-eeprom``). + + If needed, the bootloader can be updated from an existing OS on the + Raspberry Pi using ``rpi-eeprom-update -a`` and rebooting. + For other options, see `Updating the Bootloader + `_. + +#. The Raspberry Pi must configured for USB boot. The bootloader will show a + ``boot`` line; if ``order`` includes ``4``, USB boot is enabled. + + If not already enabled, it can be enabled from an existing OS on the + Raspberry Pi using ``rpi-eeprom-config -e``: set ``BOOT_ORDER=0xf41`` + and reboot to apply the change. On subsequent reboots, USB boot will be + enabled. + + Otherwise, it can be enabled without an existing OS as follows: + + - Download the `Raspberry Pi Imager Utility + `_. + - Flash the ``USB Boot`` image to a microSD card. The ``USB Boot`` image is + listed under ``Bootload`` in the ``Misc utility images`` folder. + - Boot the Raspberry Pi from the microSD card. USB Boot should be enabled + automatically. + +#. U-Boot on Ubuntu 20.04 does not seem to support the Raspberry Pi USB. + `Ubuntu 20.10 may work + `_. As a + work-around, the Raspberry Pi bootloader is configured to directly boot + Linux. For this to work, the Linux kernel must not be compressed. These + instructions decompress the kernel and add a script to + ``/etc/kernel/postinst.d`` to handle kernel upgrades. + +Step 1: Disk Formatting +----------------------- + +The commands in this step are run on the system other than the Raspberry Pi. + +This guide has you go to some extra work so that the stock ext4 partition can +be deleted. + +#. Download and unpack the official image:: + + curl -O https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz + xz -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz + + # or combine them to decompress as you download: + curl https://cdimage.ubuntu.com/releases/20.04.4/release/ubuntu-20.04.4-preinstalled-server-arm64+raspi.img.xz | \ + xz -d > ubuntu-20.04.4-preinstalled-server-arm64+raspi.img + +#. Dump the partition table for the image:: + + sfdisk -d ubuntu-20.04.4-preinstalled-server-arm64+raspi.img + + That will output this:: + + label: dos + label-id: 0xddbefb06 + device: ubuntu-20.04.4-preinstalled-server-arm64+raspi.img + unit: sectors + + .img1 : start= 2048, size= 524288, type=c, bootable + .img2 : start= 526336, size= 6285628, type=83 + + The important numbers are 524288 and 6285628. Store those in variables:: + + BOOT=524288 + ROOT=6285628 + +#. Create a partition script:: + + cat > partitions << EOF + label: dos + unit: sectors + + 1 : start= 2048, size=$BOOT, type=c, bootable + 2 : start=$((2048+BOOT)), size=$ROOT, type=83 + 3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83 + EOF + +#. Connect the disk: + + Connect the disk to a machine other than the target Raspberry Pi. If any + filesystems are automatically mounted (e.g. by GNOME) unmount them. + Determine the device name. For SD, the device name is almost certainly + ``/dev/mmcblk0``. For USB SSDs, the device name is ``/dev/sdX``, where + ``X`` is a lowercase letter. ``lsblk`` can help determine the device name. 
+ Set the ``DISK`` environment variable to the device name:: + + DISK=/dev/mmcblk0 # microSD card + DISK=/dev/sdX # USB disk + + Because partitions are named differently for ``/dev/mmcblk0`` and ``/dev/sdX`` + devices, set a second variable used when working with partitions:: + + export DISKP=${DISK}p # microSD card + export DISKP=${DISK} # USB disk ($DISKP == $DISK for /dev/sdX devices) + + **Hint**: microSD cards connected using a USB reader also have ``/dev/sdX`` + names. + + **WARNING**: The following steps destroy the existing data on the disk. Ensure + ``DISK`` and ``DISKP`` are correct before proceeding. + +#. Ensure swap partitions are not in use:: + + swapon -v + # If a partition is in use from the disk, disable it: + sudo swapoff THAT_PARTITION + +#. Clear old ZFS labels:: + + sudo zpool labelclear -f ${DISK} + + If a ZFS label still exists from a previous system/attempt, expanding the + pool will result in an unbootable system. + + **Hint:** If you do not already have the ZFS utilities installed, you can + install them with: ``sudo apt install zfsutils-linux`` Alternatively, you + can zero the entire disk with: + ``sudo dd if=/dev/zero of=${DISK} bs=1M status=progress`` + +#. Delete existing partitions:: + + echo "label: dos" | sudo sfdisk ${DISK} + sudo partprobe + ls ${DISKP}* + + Make sure there are no partitions, just the file for the disk itself. This + step is not strictly necessary; it exists to catch problems. + +#. Create the partitions:: + + sudo sfdisk $DISK < partitions + +#. Loopback mount the image:: + + IMG=$(sudo losetup -fP --show \ + ubuntu-20.04.4-preinstalled-server-arm64+raspi.img) + +#. Copy the bootloader data:: + + sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M + +#. Clear old label(s) from partition 2:: + + sudo wipefs -a ${DISKP}2 + + If a filesystem with the ``writable`` label from the Ubuntu image is still + present in partition 2, the system will not boot initially. + +#. Copy the root filesystem data:: + + # NOTE: the destination is p3, not p2. + sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync + +#. Unmount the image:: + + sudo losetup -d $IMG + +#. If setting up a USB disk: + + Decompress the kernel:: + + sudo -sE + + MNT=$(mktemp -d /mnt/XXXXXXXX) + mkdir -p $MNT/boot $MNT/root + mount ${DISKP}1 $MNT/boot + mount ${DISKP}3 $MNT/root + + zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux + + Modify boot config:: + + cat >> $MNT/boot/usercfg.txt << EOF + kernel=vmlinux + initramfs initrd.img followkernel + boot_delay + EOF + + Create a script to automatically decompress the kernel after an upgrade:: + + cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF' + #!/bin/sh + + set -eu + + echo "Updating decompressed kernel..." + [ -e /boot/firmware/vmlinux ] && \ + cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak + vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX) + zcat -qf /boot/vmlinuz > "$vmlinuxtmp" + mv "$vmlinuxtmp" /boot/firmware/vmlinux + EOF + + chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel + + Cleanup:: + + umount $MNT/* + rm -rf $MNT + exit + +#. Boot the Raspberry Pi. + + Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) + with ``ubuntu`` as the username and password. If you are using SSH, note + that it takes a little bit for cloud-init to enable password logins on the + first boot. Set a new password when prompted and login again using that + password. 
If you have your local SSH configured to use ``ControlPersist``, + you will have to kill the existing SSH process before logging in the second + time. + +Step 2: Setup ZFS +----------------- + +#. Become root:: + + sudo -i + +#. Set the DISK and DISKP variables again:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + **WARNING:** Device names can change when moving a device to a different + computer or switching the microSD card from a USB reader to a built-in + slot. Double check the device name before continuing. + +#. Install ZFS:: + + apt update + + apt install pv zfs-initramfs + + **Note:** Since this is the first boot, you may get ``Waiting for cache + lock`` because ``unattended-upgrades`` is running in the background. + Wait for it to finish. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + **WARNING:** Encryption has not yet been tested on the Raspberry Pi. + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -O encryption=aes-256-gcm \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. 
+ - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption defaults to ``aes-256-ccm``, but `the default has + changed upstream + `__ + to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM + `__, + `is faster now + `__, + and `will be even faster in the future + `__. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + +Step 3: System Installation +--------------------------- + +#. Create a filesystem dataset to act as a container:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + +#. Create a filesystem dataset for the root filesystem:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o canmount=noauto -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + zfs mount rpool/ROOT/ubuntu_$UUID + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. 
Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/games + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + + If you want a separate dataset for ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + +#. Optional: Ignore synchronous requests: + + microSD cards are relatively slow. If you want to increase performance + (especially when installing packages) at the cost of some safety, you can + disable flushing of synchronous requests (e.g. ``fsync()``, ``O_[D]SYNC``): + + Choose one of the following options: + + - For the root filesystem, but not user data:: + + zfs set sync=disabled rpool/ROOT + + - For everything:: + + zfs set sync=disabled rpool + + ZFS is transactional, so it will still be crash consistent. However, you + should leave ``sync`` at its default of ``standard`` if this system needs + to guarantee persistence (e.g. if it is a database or NFS server). + +#. Copy the system into the ZFS filesystems:: + + (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \ + pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \ + (cd /mnt ; tar -x) + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Stop ``zed``:: + + systemctl stop zed + +#. Bind the virtual filesystems from the running environment to the new + ZFS environment and ``chroot`` into it:: + + mount --make-private --rbind /boot/firmware /mnt/boot/firmware + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /run /mnt/run + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + +#. 
Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales + dpkg-reconfigure tzdata + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + # cryptsetup is already installed, but this marks it as manually + # installed so it is not automatically removed. + apt install --yes cryptsetup + + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + +#. Optional: Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Setup system groups:: + + addgroup --system lpadmin + addgroup --system sambashare + +#. Patch a dependency loop: + + For ZFS native encryption or LUKS:: + + apt install --yes curl patch + + curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \ + sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1) + + Ignore the failure in Hunk #2 (say ``n`` twice). + + This patch is from `Bug #1875577 Encrypted swap won't load on 20.04 with + zfs root + `__. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Force a cache update:: + + zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID + + Verify that ``zed`` updated the cache by making sure this is not empty, + which will take a few seconds:: + + cat /etc/zfs/zfs-list.cache/rpool + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. Remove old filesystem from ``/etc/fstab``:: + + vi /etc/fstab + # Remove the old root filesystem line: + # LABEL=writable / ext4 ... + +#. Configure kernel command line:: + + cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak + sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \ + /boot/firmware/cmdline.txt + sed -i "s| fixrtc||" /boot/firmware/cmdline.txt + sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt + + The ``fixrtc`` script is not compatible with ZFS and will cause the boot + to hang for 180 seconds. + + The ``init_on_alloc=0`` is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging booting easier:: + + sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt + +#. Reboot:: + + exit + reboot + + Wait for the newly installed system to boot normally. Login as ``ubuntu``. + +Step 5: First Boot +------------------ + +#. Become root:: + + sudo -i + +#. Set the DISK variable again:: + + DISK=/dev/mmcblk0 # microSD card + + DISK=/dev/sdX # USB disk + +#. 
Delete the ext4 partition and expand the ZFS partition:: + + sfdisk $DISK --delete 3 + echo ", +" | sfdisk --no-reread -N 2 $DISK + + **Note:** This does not automatically expand the pool. That will be happen + on reboot. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +#. Reboot:: + + reboot + + Wait for the system to boot normally. Login using the account you + created. + +#. Become root:: + + sudo -i + +#. Expand the ZFS pool: + + Verify the pool expanded:: + + zfs list rpool + + If it did not automatically expand, try to expand it manually:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + zpool online -e rpool ${DISKP}2 + +#. Delete the ``ubuntu`` user:: + + deluser --remove-home ubuntu + +Step 6: Full Software Installation +---------------------------------- + +#. Optional: Remove cloud-init:: + + vi /etc/netplan/01-netcfg.yaml + + .. code-block:: yaml + + network: + version: 2 + ethernets: + eth0: + dhcp4: true + + :: + + rm /etc/netplan/50-cloud-init.yaml + apt purge --autoremove ^cloud-init + rm -rf /etc/cloud + +#. Optional: Remove other storage packages:: + + apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \ + mdadm multipath-tools open-iscsi overlayroot xfsprogs + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Optional: Install a full GUI environment:: + + apt install --yes ubuntu-desktop + echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt + + **Hint**: If you are installing a full GUI environment, you will likely + want to remove cloud-init as discussed above but manage your network with + NetworkManager:: + + rm /etc/netplan/*.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. Optional (but recommended): Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 7: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). 
It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst.txt new file mode 100644 index 000000000..0eab863d2 --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst.txt @@ -0,0 +1,1307 @@ +.. highlight:: sh + +Ubuntu 20.04 Root on ZFS +======================== + +.. contents:: Table of Contents + :local: + +Newer release available +----------------------- + +- See :doc:`Ubuntu 22.04 Root on ZFS <./Ubuntu 22.04 Root on ZFS>` for new + installs. This guide is no longer receiving most updates. It continues + to exist for reference for existing installs that followed it. + +Errata +------ + +If you previously installed using this guide, please apply these fixes if +applicable: + +/boot/grub Not Mounted +~~~~~~~~~~~~~~~~~~~~~~ + +| **Severity:** Normal (previously Grave) +| **Fixed:** 2020-12-05 (previously 2020-05-30) + +For a mirror or raidz topology, ``/boot/grub`` is on a separate dataset. This +was originally ``bpool/grub``, then changed on 2020-05-30 to +``bpool/BOOT/ubuntu_UUID/grub`` to work-around zsys setting ``canmount=off`` +which would result in ``/boot/grub`` not mounting. This work-around lead to +`issues with snapshot restores +`__. The underlying `zsys +issue `__ was fixed and backported +to 20.04, so it is now back to being ``bpool/grub``. + +* If you never applied the 2020-05-30 errata fix, then ``/boot/grub`` is + probably not mounting. Check that:: + + mount | grep /boot/grub + + If it is mounted, everything is fine. Stop. Otherwise:: + + zfs set canmount=on bpool/grub + update-initramfs -c -k all + update-grub + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + + Run this for the additional disk(s), incrementing the “2” to “3” and so on + for both ``/boot/efi2`` and ``ubuntu-2``:: + + cp -a /boot/efi/EFI /boot/efi2 + grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \ + --bootloader-id=ubuntu-2 --recheck --no-floppy + + Check that these have ``set prefix=($root)'/grub@'``:: + + grep prefix= \ + /boot/efi/EFI/ubuntu/grub.cfg \ + /boot/efi2/EFI/ubuntu-2/grub.cfg + +* If you applied the 2020-05-30 errata fix, then you should revert the dataset + rename:: + + umount /boot/grub + zfs rename bpool/BOOT/ubuntu_UUID/grub bpool/grub + zfs set com.ubuntu.zsys:bootfs=no bpool/grub + zfs mount bpool/grub + +AccountsService Not Mounted +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +| **Severity:** Normal +| **Fixed:** 2020-05-28 + +The HOWTO previously had a typo in AccountsService (where Accounts is plural) +as AccountServices (where Services is plural). This means that AccountsService +data will be written to the root filesystem. This is only harmful in the event +of a rollback of the root filesystem that does not include a rollback of the +user data. Check it:: + + zfs list | grep Account + +If the “s” is on “Accounts”, you are good. 
If it is on “Services”, fix it:: + + mv /var/lib/AccountsService /var/lib/AccountsService-old + zfs list -r rpool + # Replace the UUID twice below: + zfs rename rpool/ROOT/ubuntu_UUID/var/lib/AccountServices \ + rpool/ROOT/ubuntu_UUID/var/lib/AccountsService + mv /var/lib/AccountsService-old/* /var/lib/AccountsService + rmdir /var/lib/AccountsService-old + +Overview +-------- + +Ubuntu Installer +~~~~~~~~~~~~~~~~ + +The Ubuntu installer has `support for root-on-ZFS +`__. +This HOWTO produces nearly identical results as the Ubuntu installer because of +`bidirectional collaboration +`__. + +If you want a single-disk, unencrypted, desktop install, use the installer. It +is far easier and faster than doing everything by hand. + +If you want a ZFS native encrypted, desktop install, you can `trivially edit +the installer +`__. +The ``-O recordsize=1M`` there is unrelated to encryption; omit that unless +you understand it. Make sure to use a password that is at least 8 characters +or this hack will crash the installer. Additionally, once the system is +installed, you should switch to encrypted swap:: + + swapon -v + # Note the device, including the partition. + + ls -l /dev/disk/by-id/ + # Find the by-id name of the disk. + + sudo swapoff -a + sudo vi /etc/fstab + # Remove the swap entry. + + sudo apt install --yes cryptsetup + + # Replace DISK-partN as appropriate from above: + echo swap /dev/disk/by-id/DISK-partN /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 | sudo tee -a /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 | sudo tee -a /etc/fstab + +`Hopefully the installer will gain encryption support in +the future +`__. + +If you want to setup a mirror or raidz topology, use LUKS encryption, and/or +install a server (no desktop GUI), use this HOWTO. + +Raspberry Pi +~~~~~~~~~~~~ + +If you are looking to install on a Raspberry Pi, see +:doc:`Ubuntu 20.04 Root on ZFS for Raspberry Pi`. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `Ubuntu 20.04.4 (“Focal”) Desktop CD + `__ + (*not* any server images) +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. 
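**Hint (optional check):** Related to the 4Kn and UEFI notes under System Requirements above, the following read-only commands (standard Ubuntu tools; nothing here is required by this HOWTO) show whether the live environment booted via UEFI and which sector sizes each drive reports::

   # Present only when booted via UEFI:
   ls /sys/firmware/efi

   # Logical and physical sector size per disk (util-linux lsblk):
   lsblk -d -o NAME,SIZE,LOG-SEC,PHY-SEC

If ``/sys/firmware/efi`` does not exist and a drive reports 4096-byte logical sectors, legacy (BIOS) booting will not work on that drive, as noted above.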
+ +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the + Internet as appropriate (e.g. join your WiFi network). Open a terminal + (press Ctrl-Alt-T). + +#. Setup and update the repositories:: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + passwd + # There is no current password. + sudo apt install --yes openssh-server vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that ships in the Live CD environment) over + SSH. + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh ubuntu@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + + systemctl stop zed + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. + - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. 
Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition (e.g. a swap partition per this HOWTO): + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Create bootloader partition(s):: + + sgdisk -n1:1M:+512M -t1:EF00 $DISK + + # For legacy (BIOS) booting: + sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK + + **Note:** While the Ubuntu installer uses an MBR label for legacy (BIOS) + booting, this HOWTO uses GPT partition labels for both UEFI and legacy + (BIOS) booting. This is simpler than having two options. It is also + provides forward compatibility (future proofing). In other words, for + legacy (BIOS) booting, this will allow you to move the disk(s) to a new + system/motherboard in the future without having to rebuild the pool (and + restore your data from a backup). The ESP is created in both cases for + similar reasons. Additionally, the ESP is used for ``/boot/grub`` in + single-disk installs, as :ref:`discussed below `. + +#. Create a partition for swap: + + Previous versions of this HOWTO put swap on a zvol. `Ubuntu recommends + against this configuration due to deadlocks. + `__ There + is `a bug report upstream + `__. + + Putting swap on a partition gives up the benefit of ZFS checksums (for your + swap). That is probably the right trade-off given the reports of ZFS + deadlocks with swap. If you are bothered by this, simply do not enable + swap. + + Choose one of the following options if you want swap: + + - For a single-disk install:: + + sgdisk -n2:0:+500M -t2:8200 $DISK + + - For a mirror or raidz topology:: + + sgdisk -n2:0:+500M -t2:FD00 $DISK + + Adjust the swap swize to your needs. If you wish to enable hiberation + (which only works for unencrypted installs), the swap partition must be + at least as large as the system's RAM. + +#. Create a boot pool partition:: + + sgdisk -n3:0:+2G -t3:BE00 $DISK + + The Ubuntu installer uses 5% of the disk space constrained to a minimum of + 500 MiB and a maximum of 2 GiB. `Making this too small (and 500 MiB might + be too small) can result in an inability to upgrade the kernel. + `__ + +#. Create a root pool partition: + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. 
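**Hint (optional check):** Before creating any pools, you may want to confirm that the partition layout matches what you intended. This is not part of the original procedure, only a read-only sanity check using ``sgdisk`` (already used above) plus standard util-linux tools::

   sgdisk --print $DISK
   ls -l ${DISK}-part*
   # If the -part* symlinks are missing, settle udev first:
   #   partprobe $DISK && udevadm settle

For the unencrypted or ZFS native encryption layout, partition 3 should show type code BE00 and partition 4 BF00 (8309 for LUKS).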
Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -o autotrim=on -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The boot pool name is no longer arbitrary. It _must_ be ``bpool``. + If you really want to rename it, edit ``/etc/grub.d/10_linux_zfs`` later, + after GRUB is installed (and run ``update-grub``). + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
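**Hint (optional check):** If you adapt the boot pool command, it can be worth confirming afterwards which features actually ended up enabled and that the pool is healthy. Both commands are read-only::

   zpool status bpool
   zpool get all bpool | grep feature@

Compare anything unexpected against the ``spa_feature_names`` list linked above; remember that read-only compatible features are also acceptable for GRUB.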
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 -o autotrim=on \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 -o autotrim=on \ + -O encryption=aes-256-gcm \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 -o autotrim=on \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. 
+ - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption defaults to ``aes-256-ccm``, but `the default has + changed upstream + `__ + to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM + `__, + `is faster now + `__, + and `will be even faster in the future + `__. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +#. Create filesystem datasets for the root and boot filesystems:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + + zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID + +#. 
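**Hint (optional check):** At this point the new root and boot datasets should be mounted under the altroot, i.e. at ``/mnt`` and ``/mnt/boot``. A quick read-only check (``findmnt`` is part of util-linux, so it is available in the Live CD environment)::

   zfs list -o name,canmount,mountpoint
   findmnt /mnt
   findmnt /mnt/boot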
Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/games + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + chmod 700 /mnt/root + + For a mirror or raidz topology, create a dataset for ``/boot/grub``:: + + zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + +#. Install the minimal system:: + + debootstrap focal /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/netplan/01-netcfg.yaml + + .. code-block:: yaml + + network: + version: 2 + ethernets: + NAME: + dhcp4: true + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. 
code-block:: sourceslist + + deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse + deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + + Install your preferred text editor:: + + apt install --yes nano + + apt install --yes vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that is installed by ``debootstrap``) over + SSH. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Create the EFI filesystem: + + Perform these steps for both UEFI and legacy (BIOS) booting:: + + apt install --yes dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + For a mirror or raidz topology, repeat the `mkdosfs` for the additional + disks, but do not repeat the other commands. + + **Note:** The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster + size (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + +#. Put ``/boot/grub`` on the EFI System Partition: + + .. _boot-grub-esp: + + For a single-disk install only:: + + mkdir /boot/efi/grub /boot/grub + echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab + mount /boot/grub + + This allows GRUB to write to ``/boot/grub`` (since it is on a FAT-formatted + ESP instead of on ZFS), which means that ``/boot/grub/grubenv`` and the + ``recordfail`` feature works as expected: if the boot fails, the normally + hidden GRUB menu will be shown on the next boot. For a mirror or raidz + topology, we do not want GRUB writing to the EFI System Partition. This is + because we duplicate it at install without a mechanism to update the copies + when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, + we keep ``/boot/grub`` on the boot pool for the mirror or raidz topologies. + This preserves correct mirroring/raidz behavior, at the expense of being + able to write to ``/boot/grub/grubenv`` and thus the ``recordfail`` + behavior. + +#. 
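**Hint (optional check):** Before installing GRUB, you can confirm that the ESP is mounted and that the new ``/etc/fstab`` entries look right; for single-disk installs this includes the ``/boot/grub`` bind mount::

   findmnt /boot/efi
   findmnt /boot/grub    # single-disk installs only
   cat /etc/fstab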
Install GRUB/Linux/ZFS in the chroot environment for the new system: + + Choose one of the following options: + + - Install GRUB/Linux/ZFS for legacy (BIOS) booting:: + + apt install --yes grub-pc linux-image-generic zfs-initramfs zsys + + Select (using the space bar) all of the disks (not partitions) in your + pool. + + - Install GRUB/Linux/ZFS for UEFI booting:: + + apt install --yes \ + grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \ + shim-signed zfs-initramfs zsys + + **Notes:** + + - Ignore any error messages saying ``ERROR: Couldn't resolve device`` and + ``WARNING: Couldn't determine root device``. `cryptsetup does not + support ZFS + `__. + + - Ignore any error messages saying ``Module zfs not found`` and + ``couldn't connect to zsys daemon``. The first seems to occur due to a + version mismatch between the Live CD kernel and the chroot environment, + but this is irrelevant since the module is already loaded. The second + may be caused by the first but either way is irrelevant since ``zed`` + is started manually later. + + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. For some reason, + grub-efi-amd64 does not prompt for ``install_devices`` here, but does + after a reboot. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from ``update-grub``. ``os-prober`` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Configure swap: + + Choose one of the following options if you want swap: + + - For an unencrypted single-disk install:: + + mkswap -f ${DISK}-part2 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + none swap discard 0 0 >> /etc/fstab + swapon -a + + - For an unencrypted mirror or raidz topology:: + + apt install --yes mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + mkswap -f /dev/md0 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \ + none swap discard 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) single-disk install:: + + apt install --yes cryptsetup + + echo swap ${DISK}-part2 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) mirror or raidz + topology:: + + apt install --yes cryptsetup mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + echo swap /dev/md0 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Setup system groups:: + + addgroup --system lpadmin + addgroup --system lxd + addgroup --system sambashare + +#. 
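**Hint (optional check):** To review what the swap step above produced (purely informational; the encrypted variants only become active after reboot)::

   cat /etc/fstab
   cat /etc/crypttab    # encrypted swap variants only
   swapon --show        # unencrypted single-disk case: the swap partition should be listed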
Patch a dependency loop: + + For ZFS native encryption or LUKS:: + + apt install --yes curl patch + + curl https://launchpadlibrarian.net/478315221/2150-fix-systemd-dependency-loops.patch | \ + sed "s|/etc|/lib|;s|\.in$||" | (cd / ; patch -p1) + + Ignore the failure in Hunk #2 (say ``n`` twice). + + This patch is from `Bug #1875577 Encrypted swap won't load on 20.04 with + zfs root + `__. + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Disable memory zeroing:: + + vi /etc/default/grub + # Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT + # Save and quit (or see the next step). + + This is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + Choose one of the following options: + + - For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the + ``grub-install`` command for each disk in the pool. + + - For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + +#. Disable grub-initrd-fallback.service + + For a mirror or raidz topology:: + + systemctl mask grub-initrd-fallback.service + + This is the service for ``/boot/grub/grubenv`` which does not work on + mirrored or raidz topologies. Disabling this keeps it from blocking + subsequent mounts of ``/boot/grub`` if that mount ever fails. + + Another option would be to set ``RequiresMountsFor=/boot/grub`` via a + drop-in unit, but that is more work to do here for no reason. Hopefully + `this bug `__ + will be fixed upstream. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. 
+ + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/ubuntu_$UUID + zfs set canmount=on rpool/ROOT/ubuntu_$UUID + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +Step 6: First Boot +------------------ + +#. Install GRUB to additional disks: + + For a UEFI mirror or raidz topology only:: + + dpkg-reconfigure grub-efi-amd64 + + Select (using the space bar) all of the ESP partitions (partition 1 on + each of the pool disks). + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +Step 7: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software: + + Choose one of the following options: + + - Install a command-line environment only:: + + apt install --yes ubuntu-standard + + - Install a full GUI environment:: + + apt install --yes ubuntu-desktop + + **Hint**: If you are installing a full GUI environment, you will likely + want to manage your network with NetworkManager:: + + rm /etc/netplan/01-netcfg.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 8: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. 
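**Hint (optional check):** A quick ZFS-level health check fits naturally here; both commands are read-only::

   zpool status
   zfs list -o name,used,avail,mountpoint

``zpool status`` should report ``state: ONLINE`` and no errors for both ``bpool`` and ``rpool``.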
Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Uncomment: GRUB_TIMEOUT_STYLE=hidden + # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT + # Comment out: GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + # Replace “UUID” as appropriate; use zfs list to find it: + zfs mount rpool/ROOT/ubuntu_UUID + zfs mount bpool/BOOT/ubuntu_UUID + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. 
``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst.txt new file mode 100644 index 000000000..3d4d523ca --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst.txt @@ -0,0 +1,894 @@ +.. highlight:: sh + +Ubuntu 22.04 Root on ZFS for Raspberry Pi +========================================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +.. note:: + These are beta instructions. The author still needs to test them. + Additionally, it may be possible to use U-Boot now, which would eliminate + some of the customizations. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- A Raspberry Pi 4 B. (If you are looking to install on a regular PC, see + :doc:`Ubuntu 22.04 Root on ZFS`.) +- `Ubuntu Server 22.04 (“Jammy”) for Raspberry Pi 4 + `__ +- A microSD card or USB disk. For microSD card recommendations, see Jeff + Geerling's `performance comparison + `__. + When using a USB enclosure, `ensure it supports UASP + `__. +- An Ubuntu system (with the ability to write to the microSD card or USB disk) + other than the target Raspberry Pi. + +4 GiB of memory is recommended. Do not use deduplication, as it needs `massive +amounts of RAM `__. +Enabling deduplication is a permanent change that cannot be easily reverted. + +A Raspberry Pi 3 B/B+ would probably work (as the Pi 3 is 64-bit, though it +has less RAM), but has not been tested. Please report your results (good or +bad) using the issue link below. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +**WARNING:** Encryption has not yet been tested on the Raspberry Pi. + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. 
With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +USB Disks +~~~~~~~~~ + +The Raspberry Pi 4 runs much faster using a USB Solid State Drive (SSD) than +a microSD card. These instructions can also be used to install Ubuntu on a +USB-connected SSD or other USB disk. USB disks have three requirements that +do not apply to microSD cards: + +#. The Raspberry Pi's Bootloader EEPROM must be dated 2020-09-03 or later. + + To check the bootloader version, power up the Raspberry Pi without an SD + card inserted or a USB boot device attached; the date will be on the + ``bootloader`` line. (If you do not see the ``bootloader`` line, the + bootloader is too old.) Alternatively, run ``sudo rpi-eeprom-update`` + on an existing OS on the Raspberry Pi (which on Ubuntu requires + ``apt install rpi-eeprom``). + + If needed, the bootloader can be updated from an existing OS on the + Raspberry Pi using ``rpi-eeprom-update -a`` and rebooting. + For other options, see `Updating the Bootloader + `_. + +#. The Raspberry Pi must configured for USB boot. The bootloader will show a + ``boot`` line; if ``order`` includes ``4``, USB boot is enabled. + + If not already enabled, it can be enabled from an existing OS on the + Raspberry Pi using ``rpi-eeprom-config -e``: set ``BOOT_ORDER=0xf41`` + and reboot to apply the change. On subsequent reboots, USB boot will be + enabled. + + Otherwise, it can be enabled without an existing OS as follows: + + - Download the `Raspberry Pi Imager Utility + `_. + - Flash the ``USB Boot`` image to a microSD card. The ``USB Boot`` image is + listed under ``Bootload`` in the ``Misc utility images`` folder. + - Boot the Raspberry Pi from the microSD card. USB Boot should be enabled + automatically. + +#. U-Boot on Ubuntu 20.04 does not seem to support the Raspberry Pi USB. + `Ubuntu 20.10 may work + `_. As a + work-around, the Raspberry Pi bootloader is configured to directly boot + Linux. For this to work, the Linux kernel must not be compressed. These + instructions decompress the kernel and add a script to + ``/etc/kernel/postinst.d`` to handle kernel upgrades. + +Step 1: Disk Formatting +----------------------- + +The commands in this step are run on the system other than the Raspberry Pi. + +This guide has you go to some extra work so that the stock ext4 partition can +be deleted. + +#. 
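**Hint (optional check):** To recap the USB-boot prerequisites above: from an existing OS on the Raspberry Pi (with the ``rpi-eeprom`` package installed, as noted), the bootloader date and boot order can be read back with something like::

   sudo rpi-eeprom-update                      # reports the current bootloader date
   sudo rpi-eeprom-config | grep BOOT_ORDER    # an order containing 4 (e.g. 0xf41) enables USB boot

If plain ``rpi-eeprom-config`` does not print the current configuration on your image, ``rpi-eeprom-config -e`` (mentioned above) shows it in an editor instead.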
Download and unpack the official image:: + + curl -O https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz + xz -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz + + # or combine them to decompress as you download: + curl https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz | \ + xz -d > ubuntu-22.04.1-preinstalled-server-arm64+raspi.img + +#. Dump the partition table for the image:: + + sfdisk -d ubuntu-22.04.1-preinstalled-server-arm64+raspi.img + + That will output this:: + + label: dos + label-id: 0x638274e3 + device: ubuntu-22.04.1-preinstalled-server-arm64+raspi.img + unit: sectors + + .img1 : start= 2048, size= 524288, type=c, bootable + .img2 : start= 526336, size= 7193932, type=83 + + The important numbers are 524288 and 7193932. Store those in variables:: + + BOOT=524288 + ROOT=7193932 + +#. Create a partition script:: + + cat > partitions << EOF + label: dos + unit: sectors + + 1 : start= 2048, size=$BOOT, type=c, bootable + 2 : start=$((2048+BOOT)), size=$ROOT, type=83 + 3 : start=$((2048+BOOT+ROOT)), size=$ROOT, type=83 + EOF + +#. Connect the disk: + + Connect the disk to a machine other than the target Raspberry Pi. If any + filesystems are automatically mounted (e.g. by GNOME) unmount them. + Determine the device name. For SD, the device name is almost certainly + ``/dev/mmcblk0``. For USB SSDs, the device name is ``/dev/sdX``, where + ``X`` is a lowercase letter. ``lsblk`` can help determine the device name. + Set the ``DISK`` environment variable to the device name:: + + DISK=/dev/mmcblk0 # microSD card + DISK=/dev/sdX # USB disk + + Because partitions are named differently for ``/dev/mmcblk0`` and ``/dev/sdX`` + devices, set a second variable used when working with partitions:: + + export DISKP=${DISK}p # microSD card + export DISKP=${DISK} # USB disk ($DISKP == $DISK for /dev/sdX devices) + + **Hint**: microSD cards connected using a USB reader also have ``/dev/sdX`` + names. + + **WARNING**: The following steps destroy the existing data on the disk. Ensure + ``DISK`` and ``DISKP`` are correct before proceeding. + +#. Ensure swap partitions are not in use:: + + swapon -v + # If a partition is in use from the disk, disable it: + sudo swapoff THAT_PARTITION + +#. Clear old ZFS labels:: + + sudo zpool labelclear -f ${DISK} + + If a ZFS label still exists from a previous system/attempt, expanding the + pool will result in an unbootable system. + + **Hint:** If you do not already have the ZFS utilities installed, you can + install them with: ``sudo apt install zfsutils-linux`` Alternatively, you + can zero the entire disk with: + ``sudo dd if=/dev/zero of=${DISK} bs=1M status=progress`` + +#. Delete existing partitions:: + + echo "label: dos" | sudo sfdisk ${DISK} + sudo partprobe + ls ${DISKP}* + + Make sure there are no partitions, just the file for the disk itself. This + step is not strictly necessary; it exists to catch problems. + +#. Create the partitions:: + + sudo sfdisk $DISK < partitions + +#. Loopback mount the image:: + + IMG=$(sudo losetup -fP --show \ + ubuntu-22.04.1-preinstalled-server-arm64+raspi.img) + +#. Copy the bootloader data:: + + sudo dd if=${IMG}p1 of=${DISKP}1 bs=1M + +#. Clear old label(s) from partition 2:: + + sudo wipefs -a ${DISKP}2 + + If a filesystem with the ``writable`` label from the Ubuntu image is still + present in partition 2, the system will not boot initially. + +#. 
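**Hint (optional check):** Before copying data, you can confirm that the new partition table matches the ``partitions`` script::

   sudo sfdisk -d $DISK
   lsblk $DISK

Partition 1 should be the FAT boot partition, and partitions 2 and 3 should both have the size stored in ``$ROOT``.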
Copy the root filesystem data:: + + # NOTE: the destination is p3, not p2. + sudo dd if=${IMG}p2 of=${DISKP}3 bs=1M status=progress conv=fsync + +#. Unmount the image:: + + sudo losetup -d $IMG + +#. If setting up a USB disk: + + Decompress the kernel:: + + sudo -sE + + MNT=$(mktemp -d /mnt/XXXXXXXX) + mkdir -p $MNT/boot $MNT/root + mount ${DISKP}1 $MNT/boot + mount ${DISKP}3 $MNT/root + + zcat -qf $MNT/boot/vmlinuz >$MNT/boot/vmlinux + + Modify boot config:: + + cat >> $MNT/boot/usercfg.txt << EOF + kernel=vmlinux + initramfs initrd.img followkernel + boot_delay + EOF + + Create a script to automatically decompress the kernel after an upgrade:: + + cat >$MNT/root/etc/kernel/postinst.d/zz-decompress-kernel << 'EOF' + #!/bin/sh + + set -eu + + echo "Updating decompressed kernel..." + [ -e /boot/firmware/vmlinux ] && \ + cp /boot/firmware/vmlinux /boot/firmware/vmlinux.bak + vmlinuxtmp=$(mktemp /boot/firmware/vmlinux.XXXXXXXX) + zcat -qf /boot/vmlinuz > "$vmlinuxtmp" + mv "$vmlinuxtmp" /boot/firmware/vmlinux + EOF + + chmod +x $MNT/root/etc/kernel/postinst.d/zz-decompress-kernel + + Cleanup:: + + umount $MNT/* + rm -rf $MNT + exit + +#. Boot the Raspberry Pi. + + Move the SD/USB disk to the Raspberry Pi. Boot it and login (e.g. via SSH) + with ``ubuntu`` as the username and password. If you are using SSH, note + that it takes a little bit for cloud-init to enable password logins on the + first boot. Set a new password when prompted and login again using that + password. If you have your local SSH configured to use ``ControlPersist``, + you will have to kill the existing SSH process before logging in the second + time. + +Step 2: Setup ZFS +----------------- + +#. Become root:: + + sudo -i + +#. Set the DISK and DISKP variables again:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + **WARNING:** Device names can change when moving a device to a different + computer or switching the microSD card from a USB reader to a built-in + slot. Double check the device name before continuing. + +#. Install ZFS:: + + apt update + + apt install pv zfs-initramfs + + **Note:** Since this is the first boot, you may get ``Waiting for cache + lock`` because ``unattended-upgrades`` is running in the background. + Wait for it to finish. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + **WARNING:** Encryption has not yet been tested on the Raspberry Pi. + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISKP}2 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISKP}2 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. 
Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + +Step 3: System Installation +--------------------------- + +#. Create a filesystem dataset to act as a container:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + +#. Create a filesystem dataset for the root filesystem:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o canmount=noauto -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + zfs mount rpool/ROOT/ubuntu_$UUID + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. 
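**Hint (optional check):** The root dataset should now be mounted at ``/mnt``. Because it is ``canmount=noauto``, this is worth confirming before creating the remaining datasets::

   zfs get canmount,mountpoint,mounted rpool/ROOT/ubuntu_$UUID
   findmnt /mnt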
Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + chmod 700 /mnt/root + + The datasets below are optional, depending on your preferences and/or + software choices. + + If you wish to separate these to exclude them from snapshots:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/cache + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs + zfs create rpool/ROOT/ubuntu_$UUID/var/tmp + chmod 1777 /mnt/var/tmp + + If desired (the Ubuntu installer creates these):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + + If you use /srv on this system:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + + If you use /usr/local on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + + If this system will have games installed:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/games + + If this system will have a GUI:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + + If this system will use Snap packages:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + + If you use /var/www on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + For a mirror or raidz topology, create a dataset for ``/boot/grub``:: + + zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Optional: Ignore synchronous requests: + + microSD cards are relatively slow. If you want to increase performance + (especially when installing packages) at the cost of some safety, you can + disable flushing of synchronous requests (e.g. 
``fsync()``, ``O_[D]SYNC``): + + Choose one of the following options: + + - For the root filesystem, but not user data:: + + zfs set sync=disabled rpool/ROOT + + - For everything:: + + zfs set sync=disabled rpool + + ZFS is transactional, so it will still be crash consistent. However, you + should leave ``sync`` at its default of ``standard`` if this system needs + to guarantee persistence (e.g. if it is a database or NFS server). + +#. Copy the system into the ZFS filesystems:: + + (cd /; tar -cf - --one-file-system --warning=no-file-ignored .) | \ + pv -p -bs $(du -sxm --apparent-size / | cut -f1)m | \ + (cd /mnt ; tar -x) + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Stop ``zed``:: + + systemctl stop zed + +#. Bind the virtual filesystems from the running environment to the new + ZFS environment and ``chroot`` into it:: + + mount --make-private --rbind /boot/firmware /mnt/boot/firmware + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /run /mnt/run + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + +#. Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales + dpkg-reconfigure tzdata + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + # cryptsetup is already installed, but this marks it as manually + # installed so it is not automatically removed. + apt install --yes cryptsetup + + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + +#. Optional: Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. Setup system groups:: + + addgroup --system lpadmin + addgroup --system sambashare + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Force a cache update:: + + zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID + + Verify that ``zed`` updated the cache by making sure this is not empty, + which will take a few seconds:: + + cat /etc/zfs/zfs-list.cache/rpool + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. 
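Optional: Verify the result of the path fix:
+
+   This is not part of the original procedure; it only confirms that the
+   cache file now lists the final mountpoints with no ``/mnt`` prefix::
+
+      # Mountpoints should read /, /var/log, /var/spool, and so on.
+      cat /etc/zfs/zfs-list.cache/rpool
+
+#. 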
Remove old filesystem from ``/etc/fstab``:: + + vi /etc/fstab + # Remove the old root filesystem line: + # LABEL=writable / ext4 ... + +#. Configure kernel command line:: + + cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak + sed -i "s|root=LABEL=writable rootfstype=ext4|root=ZFS=rpool/ROOT/ubuntu_$UUID|" \ + /boot/firmware/cmdline.txt + sed -i "s| fixrtc||" /boot/firmware/cmdline.txt + sed -i "s|$| init_on_alloc=0|" /boot/firmware/cmdline.txt + + The ``fixrtc`` script is not compatible with ZFS and will cause the boot + to hang for 180 seconds. + + The ``init_on_alloc=0`` is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging booting easier:: + + sed -i "s|$| nosplash|" /boot/firmware/cmdline.txt + +#. Reboot:: + + exit + reboot + + Wait for the newly installed system to boot normally. Login as ``ubuntu``. + +Step 5: First Boot +------------------ + +#. Become root:: + + sudo -i + +#. Set the DISK variable again:: + + DISK=/dev/mmcblk0 # microSD card + + DISK=/dev/sdX # USB disk + +#. Delete the ext4 partition and expand the ZFS partition:: + + sfdisk $DISK --delete 3 + echo ", +" | sfdisk --no-reread -N 2 $DISK + + **Note:** This does not automatically expand the pool. That will be happen + on reboot. + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +#. Reboot:: + + reboot + + Wait for the system to boot normally. Login using the account you + created. + +#. Become root:: + + sudo -i + +#. Expand the ZFS pool: + + Verify the pool expanded:: + + zfs list rpool + + If it did not automatically expand, try to expand it manually:: + + DISK=/dev/mmcblk0 # microSD card + DISKP=${DISK}p # microSD card + + DISK=/dev/sdX # USB disk + DISKP=${DISK} # USB disk + + zpool online -e rpool ${DISKP}2 + +#. Delete the ``ubuntu`` user:: + + deluser --remove-home ubuntu + +Step 6: Full Software Installation +---------------------------------- + +#. Optional: Remove cloud-init:: + + vi /etc/netplan/01-netcfg.yaml + + .. code-block:: yaml + + network: + version: 2 + ethernets: + eth0: + dhcp4: true + + :: + + rm /etc/netplan/50-cloud-init.yaml + apt purge --autoremove ^cloud-init + rm -rf /etc/cloud + +#. Optional: Remove other storage packages:: + + apt purge --autoremove bcache-tools btrfs-progs cloud-guest-utils lvm2 \ + mdadm multipath-tools open-iscsi overlayroot xfsprogs + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Optional: Install a full GUI environment:: + + apt install --yes ubuntu-desktop + echo dtoverlay=vc4-fkms-v3d >> /boot/firmware/usercfg.txt + + **Hint**: If you are installing a full GUI environment, you will likely + want to remove cloud-init as discussed above but manage your network with + NetworkManager:: + + rm /etc/netplan/*.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. 
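Optional: Validate and apply the netplan changes:
+
+   If you edited the netplan configuration above, you can apply it now
+   instead of waiting for the next reboot. This step is only illustrative
+   and assumes the standard ``netplan`` tooling that ships with Ubuntu::
+
+      # Check the YAML for errors, then apply the new configuration.
+      netplan generate
+      netplan apply
+
+#. 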
Optional (but recommended): Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 7: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). diff --git a/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst.txt b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst.txt new file mode 100644 index 000000000..e57b50f98 --- /dev/null +++ b/_sources/Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst.txt @@ -0,0 +1,1229 @@ +.. highlight:: sh + +Ubuntu 22.04 Root on ZFS +======================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Ubuntu Installer +~~~~~~~~~~~~~~~~ + +The Ubuntu installer still has ZFS support, but `it was almost removed for +22.04 `__ +and `it no longer installs zsys +`__. At +the moment, this HOWTO still uses zsys, but that will be probably be removed +in the near future. + +Raspberry Pi +~~~~~~~~~~~~ + +If you are looking to install on a Raspberry Pi, see +:doc:`Ubuntu 22.04 Root on ZFS for Raspberry Pi`. + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `Ubuntu 22.04.1 (“jammy”) Desktop CD + `__ + (*not* any server images) +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please `file a new issue and mention @rlaager +`__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo apt install python3-pip + + pip3 install -r docs/requirements.txt + + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. 
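Optional: Confirm the documentation tooling is on your ``$PATH``:
+
+   This check is not part of the original instructions; it only verifies
+   that the tools installed with ``pip3`` above can be found before
+   building::
+
+      sphinx-build --version
+
+#. 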
Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the Ubuntu Live CD. From the GRUB boot menu, select *Try or Install Ubuntu*. + On the *Welcome* page, select your preferred language and *Try Ubuntu*. + Connect your system to the Internet as appropriate (e.g. join your WiFi network). + Open a terminal (press Ctrl-Alt-T). + +#. Setup and update the repositories:: + + sudo apt update + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + passwd + # There is no current password. + sudo apt install --yes openssh-server vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that ships in the Live CD environment) over + SSH. + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh ubuntu@IP``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + apt install --yes debootstrap gdisk zfsutils-linux + + systemctl stop zed + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + - For a mirror or raidz topology, use ``DISK1``, ``DISK2``, etc. 
+ - When choosing a boot pool size, consider how you will use the space. A + kernel and initrd may consume around 100M. If you have multiple kernels + and take snapshots, you may find yourself low on boot pool space, + especially if you need to regenerate your initramfs images, which may be + around 85M each. Size your boot pool appropriately for your needs. + +#. If you are re-using a disk, clear it as necessary: + + Ensure swap partitions are not in use:: + + swapoff --all + + If the disk was previously used in an MD array:: + + apt install --yes mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition (e.g. a swap partition per this HOWTO): + mdadm --zero-superblock --force ${DISK}-part2 + + If the disk was previously used with zfs:: + + wipefs -a $DISK + + For flash-based storage, if the disk was previously used, you may wish to + do a full-disk discard (TRIM/UNMAP), which can improve performance:: + + blkdiscard -f $DISK + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Create bootloader partition(s):: + + sgdisk -n1:1M:+512M -t1:EF00 $DISK + + # For legacy (BIOS) booting: + sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK + + **Note:** While the Ubuntu installer uses an MBR label for legacy (BIOS) + booting, this HOWTO uses GPT partition labels for both UEFI and legacy + (BIOS) booting. This is simpler than having two options. It is also + provides forward compatibility (future proofing). In other words, for + legacy (BIOS) booting, this will allow you to move the disk(s) to a new + system/motherboard in the future without having to rebuild the pool (and + restore your data from a backup). The ESP is created in both cases for + similar reasons. Additionally, the ESP is used for ``/boot/grub`` in + single-disk installs, as :ref:`discussed below `. + +#. Create a partition for swap: + + Previous versions of this HOWTO put swap on a zvol. `Ubuntu recommends + against this configuration due to deadlocks. + `__ There + is `a bug report upstream + `__. + + Putting swap on a partition gives up the benefit of ZFS checksums (for your + swap). That is probably the right trade-off given the reports of ZFS + deadlocks with swap. If you are bothered by this, simply do not enable + swap. + + Choose one of the following options if you want swap: + + - For a single-disk install:: + + sgdisk -n2:0:+500M -t2:8200 $DISK + + - For a mirror or raidz topology:: + + sgdisk -n2:0:+500M -t2:FD00 $DISK + + Adjust the swap swize to your needs. If you wish to enable hiberation + (which only works for unencrypted installs), the swap partition must be + at least as large as the system's RAM. + +#. Create a boot pool partition:: + + sgdisk -n3:0:+2G -t3:BE00 $DISK + + The Ubuntu installer uses 5% of the disk space constrained to a minimum of + 500 MiB and a maximum of 2 GiB. `Making this too small (and 500 MiB might + be too small) can result in an inability to upgrade the kernel. + `__ + +#. 
Create a root pool partition: + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -o cachefile=/etc/zfs/zpool.cache \ + -o compatibility=grub2 \ + -o feature@livelist=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O devices=off \ + -O acltype=posixacl -O xattr=sa \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + Ignore the warnings about the features “not in specified 'compatibility' + feature set.” + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The boot pool name is no longer arbitrary. It _must_ be ``bpool``. + If you really want to rename it, edit ``/etc/grub.d/10_linux_zfs`` later, + after GRUB is installed (and run ``update-grub``). + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``device_rebuild`` feature should be safe to use (except on raidz, + which it is incompatible with), but the boot pool is small, so this does + not matter in practice. + - The ``log_spacemap`` and ``spacemap_v2`` features have been tested and + are safe to use. The boot pool is small, so these do not matter in + practice. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
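Optional: Verify the boot pool before continuing:
+
+   This check is not part of the original procedure. It only confirms that
+   the pool was created cleanly and that the GRUB-compatible feature set
+   requested above is in effect (the ``compatibility`` pool property is
+   available in OpenZFS 2.1 and later, which Ubuntu 22.04 ships)::
+
+      zpool status bpool
+      # Should report grub2, matching -o compatibility=grub2 used above.
+      zpool get compatibility bpool
+
+#. 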
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O encryption=on -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o ashift=12 \ + -o autotrim=on \ + -O acltype=posixacl -O xattr=sa -O dnodesize=auto \ + -O compression=lz4 \ + -O normalization=formD \ + -O relatime=on \ + -O canmount=off -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + Also, `disabling ACLs apparently breaks umask handling with NFSv4 + `__. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. 
+ - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + +#. Create filesystem datasets for the root and boot filesystems:: + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + + zfs create -o mountpoint=/ \ + -o com.ubuntu.zsys:bootfs=yes \ + -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID + + zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID + +#. Create datasets:: + + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/usr + zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \ + rpool/ROOT/ubuntu_$UUID/var + zfs create rpool/ROOT/ubuntu_$UUID/var/lib + zfs create rpool/ROOT/ubuntu_$UUID/var/log + zfs create rpool/ROOT/ubuntu_$UUID/var/spool + + zfs create -o canmount=off -o mountpoint=/ \ + rpool/USERDATA + zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \ + -o canmount=on -o mountpoint=/root \ + rpool/USERDATA/root_$UUID + chmod 700 /mnt/root + + The datasets below are optional, depending on your preferences and/or + software choices. 
+ + If you wish to separate these to exclude them from snapshots:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/cache + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/nfs + zfs create rpool/ROOT/ubuntu_$UUID/var/tmp + chmod 1777 /mnt/var/tmp + + If desired (the Ubuntu installer creates these):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg + + If you use /srv on this system:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/srv + + If you use /usr/local on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/usr/local + + If this system will have games installed:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/games + + If this system will have a GUI:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountsService + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create rpool/ROOT/ubuntu_$UUID/var/lib/docker + + If this system will store local email in /var/mail:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/mail + + If this system will use Snap packages:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/snap + + If you use /var/www on this system:: + + zfs create rpool/ROOT/ubuntu_$UUID/var/www + + For a mirror or raidz topology, create a dataset for ``/boot/grub``:: + + zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.ubuntu.zsys:bootfs=no \ + rpool/ROOT/ubuntu_$UUID/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + **Note:** If you separate a directory required for booting (e.g. ``/etc``) + into its own dataset, you must add it to + ``ZFS_INITRD_ADDITIONAL_DATASETS`` in ``/etc/default/zfs``. Datasets + with ``canmount=off`` (like ``rpool/usr`` above) do not matter for this. + +#. Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + +#. Install the minimal system:: + + debootstrap jammy /mnt + + The ``debootstrap`` command leaves the new system in an unconfigured state. + An alternative to using ``debootstrap`` is to copy the entirety of a + working system into the new ZFS root. + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + hostname HOSTNAME + hostname > /mnt/etc/hostname + vi /mnt/etc/hosts + + .. code-block:: text + + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Configure the network interface: + + Find the interface name:: + + ip addr show + + Adjust ``NAME`` below to match your interface name:: + + vi /mnt/etc/netplan/01-netcfg.yaml + + .. 
code-block:: yaml + + network: + version: 2 + ethernets: + NAME: + dhcp4: true + + Customize this file if the system is not a DHCP client. + +#. Configure the package sources:: + + vi /mnt/etc/apt/sources.list + + .. code-block:: sourceslist + + deb http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu jammy-updates main restricted universe multiverse + deb http://archive.ubuntu.com/ubuntu jammy-backports main restricted universe multiverse + deb http://security.ubuntu.com/ubuntu jammy-security main restricted universe multiverse + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + apt update + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + dpkg-reconfigure locales tzdata keyboard-configuration console-setup + + Install your preferred text editor:: + + apt install --yes nano + + apt install --yes vim + + Installing the full ``vim`` package fixes terminal problems that occur when + using the ``vim-tiny`` package (that is installed by ``debootstrap``) over + SSH. + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + apt install --yes cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \ + none luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. Create the EFI filesystem: + + Perform these steps for both UEFI and legacy (BIOS) booting:: + + apt install --yes dosfstools + + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part1) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + For a mirror or raidz topology, repeat the `mkdosfs` for the additional + disks, but do not repeat the other commands. + + **Note:** The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster + size (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + +#. Put ``/boot/grub`` on the EFI System Partition: + + .. _boot-grub-esp: + + For a single-disk install only:: + + mkdir /boot/efi/grub /boot/grub + echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab + mount /boot/grub + + This allows GRUB to write to ``/boot/grub`` (since it is on a FAT-formatted + ESP instead of on ZFS), which means that ``/boot/grub/grubenv`` and the + ``recordfail`` feature works as expected: if the boot fails, the normally + hidden GRUB menu will be shown on the next boot. For a mirror or raidz + topology, we do not want GRUB writing to the EFI System Partition. This is + because we duplicate it at install without a mechanism to update the copies + when the GRUB configuration changes (e.g. as the kernel is upgraded). Thus, + we keep ``/boot/grub`` on the boot pool for the mirror or raidz topologies. 
+ This preserves correct mirroring/raidz behavior, at the expense of being + able to write to ``/boot/grub/grubenv`` and thus the ``recordfail`` + behavior. + +#. Install GRUB/Linux/ZFS in the chroot environment for the new system: + + Choose one of the following options: + + - Install GRUB/Linux/ZFS for legacy (BIOS) booting:: + + apt install --yes grub-pc linux-image-generic zfs-initramfs zsys + + Select (using the space bar) all of the disks (not partitions) in your + pool. + + - Install GRUB/Linux/ZFS for UEFI booting:: + + apt install --yes \ + grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \ + shim-signed zfs-initramfs zsys + + **Notes:** + + - Ignore any error messages saying ``ERROR: Couldn't resolve device`` and + ``WARNING: Couldn't determine root device``. `cryptsetup does not + support ZFS + `__. + + - Ignore any error messages saying ``Module zfs not found`` and + ``couldn't connect to zsys daemon``. The first seems to occur due to a + version mismatch between the Live CD kernel and the chroot environment, + but this is irrelevant since the module is already loaded. The second + may be caused by the first but either way is irrelevant since ``zed`` + is started manually later. + + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. For some reason, + grub-efi-amd64 does not prompt for ``install_devices`` here, but does + after a reboot. + +#. Optional: Remove os-prober:: + + apt purge --yes os-prober + + This avoids error messages from ``update-grub``. ``os-prober`` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Configure swap: + + Choose one of the following options if you want swap: + + - For an unencrypted single-disk install:: + + mkswap -f ${DISK}-part2 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part2) \ + none swap discard 0 0 >> /etc/fstab + swapon -a + + - For an unencrypted mirror or raidz topology:: + + apt install --yes mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + mkswap -f /dev/md0 + echo /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/md0) \ + none swap discard 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) single-disk install:: + + apt install --yes cryptsetup + + echo swap ${DISK}-part2 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + + - For an encrypted (LUKS or ZFS native encryption) mirror or raidz + topology:: + + apt install --yes cryptsetup mdadm + + # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and + # raid-devices if necessary and specify the actual devices. + mdadm --create /dev/md0 --metadata=1.2 --level=mirror \ + --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2 + echo swap /dev/md0 /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab + echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + +#. 
Setup system groups:: + + addgroup --system lpadmin + addgroup --system lxd + addgroup --system sambashare + +#. Optional: Install SSH:: + + apt install --yes openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +Step 5: GRUB Installation +------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub-probe /boot + +#. Refresh the initrd files:: + + update-initramfs -c -k all + + **Note:** Ignore any error messages saying ``ERROR: Couldn't resolve + device`` and ``WARNING: Couldn't determine root device``. `cryptsetup + does not support ZFS + `__. + +#. Disable memory zeroing:: + + vi /etc/default/grub + # Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT + # Save and quit (or see the next step). + + This is to address `performance regressions + `__. + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. Install the boot loader: + + Choose one of the following options: + + - For legacy (BIOS) booting, install GRUB to the MBR:: + + grub-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the + ``grub-install`` command for each disk in the pool. + + - For UEFI booting, install GRUB to the ESP:: + + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy + +#. Disable grub-initrd-fallback.service + + For a mirror or raidz topology:: + + systemctl mask grub-initrd-fallback.service + + This is the service for ``/boot/grub/grubenv`` which does not work on + mirrored or raidz topologies. Disabling this keeps it from blocking + subsequent mounts of ``/boot/grub`` if that mount ever fails. + + Another option would be to set ``RequiresMountsFor=/boot/grub`` via a + drop-in unit, but that is more work to do here for no reason. Hopefully + `this bug `__ + will be fixed upstream. + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/ubuntu_$UUID + zfs set canmount=on rpool/ROOT/ubuntu_$UUID + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Once the files have data, stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +#. 
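Optional: Check the cache files after the path fix:
+
+   Not part of the original procedure; this only confirms that both cache
+   files now list the final mountpoints without the ``/mnt`` prefix::
+
+      # /boot entries come from bpool; /, /var/log, etc. come from rpool.
+      cat /etc/zfs/zfs-list.cache/bpool /etc/zfs/zfs-list.cache/rpool
+
+#. 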
Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +Step 6: First Boot +------------------ + +#. Install GRUB to additional disks: + + For a UEFI mirror or raidz topology only:: + + dpkg-reconfigure grub-efi-amd64 + + Select (using the space bar) all of the ESP partitions (partition 1 on + each of the pool disks). + +#. Create a user account: + + Replace ``YOUR_USERNAME`` with your desired username:: + + username=YOUR_USERNAME + + UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | + tr -dc 'a-z0-9' | cut -c-6) + ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}') + zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \ + -o canmount=on -o mountpoint=/home/$username \ + rpool/USERDATA/${username}_$UUID + adduser $username + + cp -a /etc/skel/. /home/$username + chown -R $username:$username /home/$username + usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo $username + +Step 7: Full Software Installation +---------------------------------- + +#. Upgrade the minimal system:: + + apt dist-upgrade --yes + +#. Install a regular set of software: + + Choose one of the following options: + + - Install a command-line environment only:: + + apt install --yes ubuntu-standard + + - Install a full GUI environment:: + + apt install --yes ubuntu-desktop + + **Hint**: If you are installing a full GUI environment, you will likely + want to manage your network with NetworkManager:: + + rm /etc/netplan/01-netcfg.yaml + vi /etc/netplan/01-network-manager-all.yaml + + .. code-block:: yaml + + network: + version: 2 + renderer: NetworkManager + +#. Optional: Disable log compression: + + As ``/var/log`` is already compressed by ZFS, logrotate’s compression is + going to burn CPU and disk I/O for (in most cases) very little gain. Also, + if you are making snapshots of ``/var/log``, logrotate’s compression will + actually waste space, as the uncompressed data will live on in the + snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment + out ``compress``, or use this loop (copy-and-paste highly recommended):: + + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done + +#. Reboot:: + + reboot + +Step 8: Final Cleanup +--------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + sudo vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + sudo systemctl restart ssh + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Uncomment: GRUB_TIMEOUT_STYLE=hidden + # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT + # Comment out: GRUB_TERMINAL=console + # Save and quit. + + sudo update-grub + + **Note:** Ignore errors from ``osprober``, if present. + +#. 
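Optional: Take a manual snapshot of the freshly configured system:
+
+   This is not part of the original procedure, and zsys already creates its
+   own automatic snapshots; a manual, recursive snapshot is shown here
+   purely as an illustration of a restore point you could roll back to
+   later. The ``@install`` snapshot name is arbitrary::
+
+      sudo zfs snapshot -r bpool@install
+      sudo zfs snapshot -r rpool@install
+      sudo zfs list -t snapshot
+
+#. 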
Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + # Replace “UUID” as appropriate; use zfs list to find it: + zfs mount rpool/ROOT/ubuntu_UUID + zfs mount bpool/BOOT/ubuntu_UUID + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + chroot /mnt /bin/bash --login + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.ms.fd:/usr/share/OVMF/OVMF_VARS.ms.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. 
+ Doing this ensures that ``/dev/disk`` aliases are created in the guest. diff --git a/_sources/Getting Started/Ubuntu/index.rst.txt b/_sources/Getting Started/Ubuntu/index.rst.txt new file mode 100644 index 000000000..4d52da52a --- /dev/null +++ b/_sources/Getting Started/Ubuntu/index.rst.txt @@ -0,0 +1,31 @@ +Ubuntu +====== + +.. contents:: Table of Contents + :local: + +Installation +------------ + +.. note:: + If you want to use ZFS as your root filesystem, see the + `Root on ZFS`_ links below instead. + +On Ubuntu, ZFS is included in the default Linux kernel packages. +To install the ZFS utilities, first make sure ``universe`` is enabled in +``/etc/apt/sources.list``:: + + deb http://archive.ubuntu.com/ubuntu main universe + +Then install ``zfsutils-linux``:: + + apt update + apt install zfsutils-linux + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + * diff --git a/_sources/Getting Started/index.rst.txt b/_sources/Getting Started/index.rst.txt new file mode 100644 index 000000000..48669f903 --- /dev/null +++ b/_sources/Getting Started/index.rst.txt @@ -0,0 +1,24 @@ +Getting Started +=============== + +To get started with OpenZFS refer to the provided documentation for your +distribution. It will cover the recommended installation method and any +distribution specific information. First time OpenZFS users are +encouraged to check out Aaron Toponce's `excellent +documentation `__. + +.. toctree:: + :maxdepth: 3 + :glob: + + Alpine Linux/index + Arch Linux/index + Debian/index + Fedora/index + FreeBSD + Gentoo + NixOS/index + openSUSE/index + RHEL-based distro/index + Slackware/index + Ubuntu/index diff --git a/_sources/Getting Started/openSUSE/index.rst.txt b/_sources/Getting Started/openSUSE/index.rst.txt new file mode 100644 index 000000000..c1e097884 --- /dev/null +++ b/_sources/Getting Started/openSUSE/index.rst.txt @@ -0,0 +1,35 @@ +.. highlight:: sh + +openSUSE +======== + +.. contents:: Table of Contents + :local: + +Installation +------------ + +If you want to use ZFS as your root filesystem, see the `Root on ZFS`_ +links below instead. + +ZFS packages are not included in official openSUSE repositories, but repository of `filesystems projects of openSUSE +`__ +includes such packages of filesystems including OpenZFS. + +openSUSE progresses through 3 main distribution branches, these are called Tumbleweed, Leap and SLE. There are ZFS packages available for all three. + + +External Links +-------------- + +* `openSUSE OpenZFS page `__ + +Root on ZFS +----------- +.. toctree:: + :maxdepth: 1 + :glob: + + *Root on ZFS + + diff --git a/_sources/Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst.txt b/_sources/Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst.txt new file mode 100644 index 000000000..e0f1488fe --- /dev/null +++ b/_sources/Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst.txt @@ -0,0 +1,1280 @@ +.. highlight:: sh + +openSUSE Leap Root on ZFS +========================= + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. +- This is not an openSUSE official HOWTO page. This document will be updated if Root on ZFS support of + openSUSE is added in the future. + Also, `openSUSE's default system installer Yast2 does not support zfs `__. 
The method of setting up system + with zypper without Yast2 used in this page is based on openSUSE installation methods written by the + experience of the people in the community. + For more information about this, please look at the external links. + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit openSUSE Leap Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention `@Zaryob `__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo zypper install python3-pip + pip3 install -r docs/requirements.txt + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Notes +~~~~~~~ + +- You can use unofficial script `LroZ `__ (Linux Root On Zfs), which is based on this manual and automates most steps. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the openSUSE Live CD. If prompted, login with the username + ``linux`` without password. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Check your openSUSE Leap release:: + + lsb_release -d + Description: openSUSE Leap {$release} + +#. 
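Optional: Confirm the release number used for the repository URL:
+
+   This check is not part of the original procedure. The next step
+   substitutes ``$(lsb_release -rs)`` into the repository URL, so it may be
+   worth seeing what that expands to::
+
+      # Prints only the numeric release, e.g. 15.4.
+      lsb_release -rs
+
+#. 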
Setup and update the repositories:: + + sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/$(lsb_release -rs)/filesystems.repo + sudo zypper refresh # Refresh all repositories + +#. Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo zypper install openssh-server + sudo systemctl restart sshd.service + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. Do not forget to set the password for user by ``passwd``. + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + zypper install zfs zfs-kmp-default + zypper install gdisk dkms + modprobe zfs + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + +#. If you are re-using a disk, clear it as necessary: + + If the disk was previously used in an MD array:: + + zypper install mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + **Hints:** + + - If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool. + +#. 
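Optional: Confirm the new partitions are visible by-id:
+
+   The following commands refer to ``${DISK}-part3`` and ``${DISK}-part4``,
+   so it can be worth checking that udev has created those aliases. This
+   check is not part of the original procedure::
+
+      # Give udev a moment to create the links, then list them.
+      udevadm settle
+      ls -la ${DISK}-part*
+
+#. 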
Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. 
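Optional: Confirm the boot pool's feature set:
+
+   A quick check (not a required step) that only the explicitly enabled
+   features are active on ``bpool``, since GRUB must be able to read this
+   pool::
+
+     zpool get all bpool | grep feature@
+
+#. 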
Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + zypper install cryptsetup + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. 
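+   - If you want to double-check the property values discussed above, a
+     quick look after the pool is created (optional; this assumes the
+     ``rpool`` name used in this guide) is::
+
+       zfs get acltype,compression,relatime,xattr,normalization rpool
+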
+ - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + - If you want to use grub bootloader, you must set:: + + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + + for your root pool. Relevant for grub 2.04 and Leap 15.3. Don't use zpool + upgrade for this pool or you will lost the possibility to use grub2-install command. + +Step 3: System Installation +--------------------------- + +#. Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with + the ``zsys`` tool, though its dataset layout is more complicated. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse + zfs mount rpool/ROOT/suse + + zfs create -o mountpoint=/boot bpool/BOOT/suse + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. 
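+   If you want to review what has been created so far before choosing from
+   the optional datasets, a quick look (optional) is::
+
+     zfs list -o name,canmount,mountpoint -r rpool
+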
+ + If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /opt on this system:: + + zfs create rpool/opt + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will store local email in /var/mail:: + + zfs create rpool/var/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If this system will use Flatpak packages:: + + zfs create rpool/var/lib/flatpak + + If you use /var/www on this system:: + + zfs create rpool/var/www + + If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs -p + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4. Install System +---------------------- + +#. Add repositories into chrooting directory:: + + zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/non-oss non-oss + zypper --root /mnt ar http://download.opensuse.org/distribution/leap/$(lsb_release -rs)/repo/oss oss + zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/oss update-oss + zypper --root /mnt ar http://download.opensuse.org/update/leap/$(lsb_release -rs)/non-oss update-nonoss + +#. Generate repository indexes:: + + zypper --root /mnt refresh + + + You will get fingerprint exception, click a to say always trust and continue.:: + + New repository or package signing key received: + + Repository: oss + Key Name: openSUSE Project Signing Key + Key Fingerprint: 22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284 + Key Created: Mon May 5 11:37:40 2014 + Key Expires: Thu May 2 11:37:40 2024 + Rpm Name: gpg-pubkey-3dbdc284-53674dd4 + + Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r): + + +#. Install openSUSE Leap with zypper: + + If you install `base` pattern, zypper will install `busybox-grep` which masks default kernel package. + Thats why I recommend you to install `enhanced_base` pattern, if you're new in openSUSE. But in `enhanced_base`, bloats + can annoy you, while you want to use it openSUSE on server. So, you need to select + + a. 
Install base packages of openSUSE Leap with zypper (Recommended for server):: + + zypper --root /mnt install -t pattern base + + + b. Install enhanced base of openSUSE Leap with zypper (Recommended for desktop):: + + zypper --root /mnt install -t pattern enhanced_base + + + +#. Install openSUSE zypper package system into chroot:: + + zypper --root /mnt install zypper + +#. Recommended: Install openSUSE yast2 system into chroot:: + + zypper --root /mnt install yast2 + zypper --root /mnt install -t pattern yast2_basis + + It will make easier to configure network and other configurations for beginners. + +To install a desktop environment, see the `openSUSE wiki +`__ + +Step 5: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts + + Add a line: + + .. code-block:: text + + 127.0.1.1 HOSTNAME + + or if the system has a real name in DNS: + + .. code-block:: text + + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Copy network information:: + + rm /mnt/etc/resolv.conf + cp /etc/resolv.conf /mnt/etc/ + + You will reconfigure network with yast2 later. + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + zypper refresh + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + locale -a + + Output must include that languages: + + * C + * C.utf8 + * en_US.utf8 + * POSIX + + Find yout locale from `locale -a` commands output then set it with following command. + + .. code-block:: text + + localectl set-locale LANG=en_US.UTF-8 + +#. Optional: Reinstallation for stability: + + After installation it may need. Some packages may have minor errors. + For that, do this if you wish. Since there is no command like + dpkg-reconfigure in openSUSE, `zypper install -f stated as a alternative for + it `__ + but it will reinstall packages. + + .. code-block:: text + + zypper install -f permissions-config iputils ca-certificates ca-certificates-mozilla pam shadow dbus libutempter0 suse-module-tools util-linux + + +#. Install kernel:: + + zypper install kernel-default kernel-firmware + + **Note:** If you installed `base` pattern, you need to deinstall busybox-grep to install `kernel-default` package. + +#. Install ZFS in the chroot environment for the new system:: + + zypper install lsb-release + zypper addrepo https://download.opensuse.org/repositories/filesystems/`lsb_release -rs`/filesystems.repo + zypper refresh # Refresh all repositories + zypper install zfs zfs-kmp-default + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + zypper install cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. 
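For LUKS installs only, optionally verify the new ``/etc/crypttab`` entry:
+
+   A quick check (not a required step) that the UUID written above matches
+   the device; this assumes the ``luks1`` entry created in the previous
+   step::
+
+     blkid -s UUID -o value ${DISK}-part4
+     cat /etc/crypttab
+
+#. 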
For LUKS installs only, fix cryptsetup naming for ZFS:: + + echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}" + ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules + +#. Recommended: Generate and setup hostid:: + + cd /root + zypper install wget + wget https://github.com/openzfs/zfs/files/4537537/genhostid.sh.gz + gzip -d genhostid.sh.gz + chmod +x genhostid.sh + zgenhostid `/root/genhostid.sh` + + Check, that generated and system hostid matches:: + + /root/genhostid.sh + hostid + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + zypper install grub2-x86_64-pc + + If your processor is 32bit use `grub2-i386-pc` instead of x86_64 one. + + - Install GRUB for UEFI booting:: + + zypper install grub2-x86_64-efi dosfstools os-prober + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + zypper remove os-prober + + This avoids error messages from `update-bootloader`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/usr/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + +#. Optional (but recommended): Mount a tmpfs to ``/tmp`` + + If you chose to create a ``/tmp`` dataset above, skip this step, as they + are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a + tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. + + :: + + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount + + +Step 6: Kernel Installation +--------------------------- + +#. Add zfs module into dracut:: + + echo 'zfs'>> /etc/modules-load.d/zfs.conf + +#. Kernel version of livecd can differ from currently installed version. Get kernel version of your new OS:: + + kernel_version=$(find /boot/vmlinuz-* | grep -Eo '[[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\-.*-default') + +#. Refresh kernel files:: + + kernel-install add "$kernel_version" /boot/vmlinuz-"$kernel_version" + +#. Refresh the initrd files:: + + mkinitrd + + **Note:** After some installations, LUKS partition cannot seen by dracut, + this will print “Failure occured during following action: + configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED“. 
For fix this + issue you need to check cryptsetup installation. `See for more information `__ + **Note:** Although we add the zfs config to the system module into `/etc/modules.d`, if it is not seen by dracut, we have to add it to dracut by force. + `dracut --kver $(uname -r) --force --add-drivers "zfs"` + + +Step 7: Grub2 Installation +-------------------------- + +#. Verify that the ZFS boot filesystem is recognized:: + + grub2-probe /boot + + Output must be `zfs` + +#. If you having trouble with `grub2-probe` command make this:: + + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile + export ZPOOL_VDEV_NAME_PATH=YES + + then go back to `grub2-probe` step. + +#. Optional (but highly recommended): Make debugging GRUB easier:: + + vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. + + Later, once the system has rebooted twice and you are sure everything is + working, you can undo these changes, if desired. + +#. Update the boot configuration:: + + update-bootloader + + **Note:** Ignore errors from ``osprober``, if present. + **Note:** If you have had trouble with the grub2 installation, I suggest you use systemd-boot. + **Note:** If this command don't gives any output, use classic grub.cfg generation with following command: + ``grub2-mkconfig -o /boot/grub2/grub.cfg`` + +#. Check that ``/boot/grub2/grub.cfg`` have the menuentry ``root=ZFS=rpool/ROOT/suse``, like this:: + + linux /boot@/vmlinuz-5.3.18-150300.59.60-default root=ZFS=rpool/ROOT/suse + + If not, change ``/etc/default/grub``:: + + GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse" + + and repeat previous step. + +#. Install the boot loader: + + #. For legacy (BIOS) booting, install GRUB to the MBR:: + + grub2-install $DISK + + Note that you are installing GRUB to the whole disk, not a partition. + + If you are creating a mirror or raidz topology, repeat the ``grub-install`` + command for each disk in the pool. + + #. For UEFI booting, install GRUB to the ESP:: + + grub2-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=opensuse --recheck --no-floppy + + It is not necessary to specify the disk here. If you are creating a + mirror or raidz topology, the additional disks will be handled later. + +Step 8: Systemd-Boot Installation +--------------------------------- + +**Warning:** This will break your Yast2 Bootloader Configuration. Make sure that you +are not able to fix the problem you are having with grub2. I decided to write this +part because sometimes grub2 doesn't see the rpool pool in some cases. + +#. Install systemd-boot:: + + bootctl install + + Note: Only if previous cmd replied "Failed to get machine id: No medium found", you need: + + systemd-machine-id-setup + + and repeat installation systemd-boot. + +#. Configure bootloader configuration:: + + tee -a /boot/efi/loader/loader.conf << EOF + default openSUSE_Leap.conf + timeout 5 + console-mode auto + EOF + +#. Write Entries:: + + tee -a /boot/efi/loader/entries/openSUSE_Leap.conf << EOF + title openSUSE Leap + linux /EFI/openSUSE/vmlinuz + initrd /EFI/openSUSE/initrd + options root=zfs:rpool/ROOT/suse boot=zfs + EOF + +#. Copy files into EFI:: + + mkdir /boot/efi/EFI/openSUSE + cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE + +#. Update systemd-boot variables:: + + bootctl update + +Step 9: Filesystem Configuration +-------------------------------- + +#. Fix filesystem mount ordering: + + We need to activate ``zfs-mount-generator``. 
This makes systemd aware of + the separate mountpoints, which is important for things like ``/var/log`` + and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount`` + by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature + of systemd automatically use ``After=var-tmp.mount``. + + :: + + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/bpool + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & + + Verify that ``zed`` updated the cache by making sure these are not empty:: + + cat /etc/zfs/zfs-list.cache/bpool + cat /etc/zfs/zfs-list.cache/rpool + + If either is empty, force a cache update and check again:: + + zfs set canmount=on bpool/BOOT/suse + zfs set canmount=noauto rpool/ROOT/suse + + If they are still empty, stop zed (as below), start zed (as above) and try + again. + + Stop ``zed``:: + + fg + Press Ctrl-C. + + Fix the paths to eliminate ``/mnt``:: + + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/* + +Step 10: First Boot +------------------- + +#. Optional: Install SSH:: + + zypper install -y openssh-server + + vi /etc/ssh/sshd_config + # Set: PermitRootLogin yes + +#. Optional: Snapshot the initial installation:: + + zfs snapshot -r bpool/BOOT/suse@install + zfs snapshot -r rpool/ROOT/suse@install + + In the future, you will likely want to take snapshots before each + upgrade, and remove old snapshots (including this one) at some point to + save space. + +#. Exit from the ``chroot`` environment back to the LiveCD environment:: + + exit + +#. Run these commands in the LiveCD environment to unmount all + filesystems:: + + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + +#. Reboot:: + + reboot + + Wait for the newly installed system to boot normally. Login as root. + +#. Create a user account: + + Replace ``username`` with your desired username:: + + zfs create rpool/home/username + adduser username + + cp -a /etc/skel/. /home/username + chown -R username:username /home/username + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username + +#. Mirror GRUB + + If you installed to multiple disks, install GRUB on the additional + disks. + + - For legacy (BIOS) booting:: + Check to be sure we using efi mode: + + .. code-block:: text + + efibootmgr -v + + This must return a message contains `legacy_boot` + + Then reconfigure grub: + + .. code-block:: text + + grub-install $DISK + + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. + + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi' + + mount /boot/efi + +Step 11: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. 
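Optional: Check available memory and any existing swap before sizing the
+   zvol; this is only meant to help you pick the size (the ``4G``) used in
+   the next step::
+
+     free -h
+
+#. 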
Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 12: Final Cleanup +---------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/suse@install + sudo zfs destroy rpool/ROOT/suse@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + systemctl restart sshd + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-bootloader + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). + +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + zypper install cryptsetup + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. 
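+  # For example (comment only; this follows the disk naming used elsewhere
+  # in this guide), a second disk would be opened with:
+  #   cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk2-part4 luks2
+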
+ +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/suse + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo zypper install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. + + +External Links +~~~~~~~~~~~~~~ +* `OpenZFS on openSUSE `__ +* `ZenLinux Blog - How to Setup an openSUSE chroot + `__ diff --git a/_sources/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst.txt b/_sources/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst.txt new file mode 100644 index 000000000..2719502e9 --- /dev/null +++ b/_sources/Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst.txt @@ -0,0 +1,1237 @@ +.. highlight:: sh + +openSUSE Tumbleweed Root on ZFS +=============================== + +.. contents:: Table of Contents + :local: + +Overview +-------- + +Caution +~~~~~~~ + +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. +- This is not an openSUSE official HOWTO page. This document will be updated if Root on ZFS support of + openSUSE is added in the future. 
+ Also, `openSUSE's default system installer Yast2 does not support zfs `__. The method of setting up system + with zypper without Yast2 used in this page is based on openSUSE installation methods written by the + experience of the people in the community. + For more information about this, please look at the external links. + + +System Requirements +~~~~~~~~~~~~~~~~~~~ + +- `64-bit openSUSE Tumbleweed Live CD w/ GUI (e.g. gnome iso) + `__ +- `A 64-bit kernel is strongly encouraged. + `__ +- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) + only works with UEFI booting. This not unique to ZFS. `GRUB does not and + will not work on 4Kn with legacy (BIOS) booting. + `__ + +Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory +is recommended for normal performance in basic workloads. If you wish to use +deduplication, you will need `massive amounts of RAM +`__. Enabling +deduplication is a permanent change that cannot be easily reverted. + +Support +~~~~~~~ + +If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at +`#zfsonlinux `__ on `Libera Chat +`__. If you have a bug report or feature request +related to this HOWTO, please file a new issue and mention `@Zaryob `__. + +Contributing +~~~~~~~~~~~~ + +#. Fork and clone: https://github.com/openzfs/openzfs-docs + +#. Install the tools:: + + sudo zypper install python3-pip + pip3 install -r docs/requirements.txt + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH + +#. Make your changes. + +#. Test:: + + cd docs + make html + sensible-browser _build/html/index.html + +#. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. + +Encryption +~~~~~~~~~~ + +This guide supports three different encryption options: unencrypted, ZFS +native encryption, and LUKS. With any option, all ZFS features are fully +available. + +Unencrypted does not encrypt anything, of course. With no encryption +happening, this option naturally has the best performance. + +ZFS native encryption encrypts the data and most metadata in the root +pool. It does not encrypt dataset or snapshot names or properties. The +boot pool is not encrypted at all, but it only contains the bootloader, +kernel, and initrd. (Unless you put a password in ``/etc/fstab``, the +initrd is unlikely to contain sensitive data.) The system cannot boot +without the passphrase being entered at the console. Performance is +good. As the encryption happens in ZFS, even if multiple disks (mirror +or raidz topologies) are used, the data only has to be encrypted once. + +LUKS encrypts almost everything. The only unencrypted data is the bootloader, +kernel, and initrd. The system cannot boot without the passphrase being +entered at the console. Performance is good, but LUKS sits underneath ZFS, so +if multiple disks (mirror or raidz topologies) are used, the data has to be +encrypted once per disk. + +Step 1: Prepare The Install Environment +--------------------------------------- + +#. Boot the openSUSE Live CD. If prompted, login with the username + ``live`` and password ``live``. Connect your system to the Internet as + appropriate (e.g. join your WiFi network). Open a terminal. + +#. Setup and update the repositories:: + + sudo zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo + sudo zypper refresh # Refresh all repositories + +#. 
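Optional: Verify that the ``filesystems`` repository was added:
+
+   A quick check (not a required step)::
+
+     zypper repos | grep -i filesystems
+
+#. 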
Optional: Install and start the OpenSSH server in the Live CD environment: + + If you have a second system, using SSH to access the target system can be + convenient:: + + sudo zypper install openssh-server + sudo systemctl restart sshd.service + + **Hint:** You can find your IP address with + ``ip addr show scope global | grep inet``. Then, from your main machine, + connect with ``ssh user@IP``. + + +#. Disable automounting: + + If the disk has been used before (with partitions at the same offsets), + previous filesystems (e.g. the ESP) will automount if not disabled:: + + gsettings set org.gnome.desktop.media-handling automount false + + +#. Become root:: + + sudo -i + +#. Install ZFS in the Live CD environment:: + + zypper install zfs zfs-kmp-default + zypper install gdisk + modprobe zfs + +Step 2: Disk Formatting +----------------------- + +#. Set a variable with the disk name:: + + DISK=/dev/disk/by-id/scsi-SATA_disk1 + + Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the + ``/dev/sd*`` device nodes directly can cause sporadic import failures, + especially on systems that have more than one storage pool. + + **Hints:** + + - ``ls -la /dev/disk/by-id`` will list the aliases. + - Are you doing this in a virtual machine? If your virtual disk is missing + from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with + virtio; otherwise, read the `troubleshooting <#troubleshooting>`__ + section. + +#. If you are re-using a disk, clear it as necessary: + + If the disk was previously used in an MD array:: + + zypper install mdadm + + # See if one or more MD arrays are active: + cat /proc/mdstat + # If so, stop them (replace ``md0`` as required): + mdadm --stop /dev/md0 + + # For an array using the whole disk: + mdadm --zero-superblock --force $DISK + # For an array using a partition: + mdadm --zero-superblock --force ${DISK}-part2 + + Clear the partition table:: + + sgdisk --zap-all $DISK + + If you get a message about the kernel still using the old partition table, + reboot and start over (except that you can skip this step). + + +#. Partition your disk(s): + + Run this if you need legacy (BIOS) booting:: + + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK + + Run this for UEFI booting (for use now or in the future):: + + sgdisk -n2:1M:+512M -t2:EF00 $DISK + + Run this for the boot pool:: + + sgdisk -n3:0:+1G -t3:BF01 $DISK + + Choose one of the following options: + + - Unencrypted or ZFS native encryption:: + + sgdisk -n4:0:0 -t4:BF00 $DISK + + - LUKS:: + + sgdisk -n4:0:0 -t4:8309 $DISK + + If you are creating a mirror or raidz topology, repeat the partitioning + commands for all the disks which will be part of the pool. + +#. Create the boot pool:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@zpool_checkpoint=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/boot -R /mnt \ + bpool ${DISK}-part3 + + You should not need to customize any of the options for the boot pool. + + GRUB does not support all of the zpool features. 
See ``spa_feature_names`` + in `grub-core/fs/zfs/zfs.c + `__. + This step creates a separate boot pool for ``/boot`` with the features + limited to only those that GRUB supports, allowing the root pool to use + any/all features. Note that GRUB opens the pool read-only, so all + read-only compatible features are “supported” by GRUB. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + bpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part3 \ + /dev/disk/by-id/scsi-SATA_disk2-part3 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. + + **Feature Notes:** + + - The ``allocation_classes`` feature should be safe to use. However, unless + one is using it (i.e. a ``special`` vdev), there is no point to enabling + it. It is extremely unlikely that someone would use this feature for a + boot pool. If one cares about speeding up the boot pool, it would make + more sense to put the whole pool on the faster disk rather than using it + as a ``special`` vdev. + - The ``project_quota`` feature has been tested and is safe to use. This + feature is extremely unlikely to matter for the boot pool. + - The ``resilver_defer`` should be safe but the boot pool is small enough + that it is unlikely to be necessary. + - The ``spacemap_v2`` feature has been tested and is safe to use. The boot + pool is small, so this does not matter in practice. + - As a read-only compatible feature, the ``userobj_accounting`` feature + should be compatible in theory, but in practice, GRUB can fail with an + “invalid dnode type” error. This feature does not matter for ``/boot`` + anyway. + +#. Create the root pool: + + Choose one of the following options: + + - Unencrypted:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - ZFS native encryption:: + + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O encryption=on \ + -O keylocation=prompt -O keyformat=passphrase \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool ${DISK}-part4 + + - LUKS:: + + zypper install cryptsetup + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create \ + -o cachefile=/etc/zfs/zpool.cache \ + -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on \ + -O xattr=sa -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 + + **Notes:** + + - The use of ``ashift=12`` is recommended here because many drives + today have 4 KiB (or larger) physical sectors, even though they + present 512 B logical sectors. Also, a future replacement drive may + have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4 KiB logical sectors (in which case ``ashift=12`` is required). + - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. 
If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` + for ``/var/log``, as `journald requires ACLs + `__ + - Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only filenames + `__. + - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you + want to tune it (e.g. ``-O recordsize=1M``), see `these + `__ `various + `__ `blog + `__ + `posts + `__. + - Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat’s documentation + `__ + for further information. + - Setting ``xattr=sa`` `vastly improves the performance of extended + attributes + `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI applications. + `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain controller. + `__ + Note that ``xattr=sa`` is `Linux-specific + `__. If you move your + ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, + extended attributes will not be readable (though your data will be). If + portability of extended attributes is important to you, omit the + ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole + pool, it is probably fine to use it for ``/var/log``. + - Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). + - ZFS native encryption `now + `__ + defaults to ``aes-256-gcm``. + - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two + keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256. + - Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup FAQ + `__ + for guidance. + + **Hints:** + + - If you are creating a mirror topology, create the pool using:: + + zpool create \ + ... \ + rpool mirror \ + /dev/disk/by-id/scsi-SATA_disk1-part4 \ + /dev/disk/by-id/scsi-SATA_disk2-part4 + + - For raidz topologies, replace ``mirror`` in the above command with + ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from + the additional disks. + - When using LUKS with mirror or raidz topologies, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have + to create using ``cryptsetup``. + - The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the root + pool is named ``rpool`` by default. + +Step 3: System Installation +--------------------------- + +#. 
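Optional: Confirm that both pools exist and are targeted at ``/mnt``:
+
+   A quick check (not a required step) before any datasets are created::
+
+     zpool list
+     zfs list -o name,mountpoint
+
+#. 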
Create filesystem datasets to act as containers:: + + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT + + On Solaris systems, the root filesystem is cloned and the suffix is + incremented for major system changes through ``pkg image-update`` or + ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with + the ``zsys`` tool, though its dataset layout is more complicated. Even + without such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still + be used for manually created clones. That said, this HOWTO assumes a single + filesystem for ``/boot`` for simplicity. + +#. Create filesystem datasets for the root and boot filesystems:: + + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/suse + zfs mount rpool/ROOT/suse + + zfs create -o mountpoint=/boot bpool/BOOT/suse + + With ZFS, it is not normally necessary to use a mount command (either + ``mount`` or ``zfs mount``). This situation is an exception because of + ``canmount=noauto``. + +#. Create datasets:: + + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + chmod 700 /mnt/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool + + The datasets below are optional, depending on your preferences and/or + software choices. + + If you wish to exclude these from snapshots:: + + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp + + If you use /opt on this system:: + + zfs create rpool/opt + + If you use /srv on this system:: + + zfs create rpool/srv + + If you use /usr/local on this system:: + + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local + + If this system will have games installed:: + + zfs create rpool/var/games + + If this system will store local email in /var/mail:: + + zfs create rpool/var/spool/mail + + If this system will use Snap packages:: + + zfs create rpool/var/snap + + If this system will use Flatpak packages:: + + zfs create rpool/var/lib/flatpak + + If you use /var/www on this system:: + + zfs create rpool/var/www + + If this system will use GNOME:: + + zfs create rpool/var/lib/AccountsService + + If this system will use Docker (which manages its own datasets & + snapshots):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + + If this system will use NFS (locking):: + + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + + + Mount a tmpfs at /run:: + + mkdir /mnt/run + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + + A tmpfs is recommended later, but if you want a separate dataset for + ``/tmp``:: + + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp + + The primary goal of this dataset layout is to separate the OS from user + data. This allows the root filesystem to be rolled back without rolling + back user data. + + If you do nothing extra, ``/tmp`` will be stored as part of the root + filesystem. Alternatively, you can create a separate dataset for ``/tmp``, + as shown above. This keeps the ``/tmp`` data out of snapshots of your root + filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want + to limit the maximum space used. Otherwise, you can use a tmpfs (RAM + filesystem) later. + + +#. Copy in zpool.cache:: + + mkdir /mnt/etc/zfs -p + cp /etc/zfs/zpool.cache /mnt/etc/zfs/ + +Step 4. 
Install System +---------------------- + +#. Add repositories into chrooting directory:: + + zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/non-oss/ non-oss + zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/oss/ oss + +#. Generate repository indexes:: + + zypper --root /mnt refresh + + + You will get fingerprint exception, click a to say always trust and continue.:: + + New repository or package signing key received: + + Repository: oss + Key Name: openSUSE Project Signing Key + Key Fingerprint: 22C07BA5 34178CD0 2EFE22AA B88B2FD4 3DBDC284 + Key Created: Mon May 5 11:37:40 2014 + Key Expires: Thu May 2 11:37:40 2024 + Rpm Name: gpg-pubkey-3dbdc284-53674dd4 + + Do you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r): + + +#. Install openSUSE Tumbleweed with zypper: + + If you install `base` pattern, zypper will install `busybox-grep` which masks default kernel package. + Thats why I recommend you to install `enhanced_base` pattern, if you're new in openSUSE. But in `enhanced_base`, bloats + can annoy you, while you want to use it openSUSE on server. So, you need to select + + a. Install base packages of openSUSE Tumbleweed with zypper (Recommended for server):: + + zypper --root /mnt install -t pattern base + + + b. Install enhanced base of openSUSE Tumbleweed with zypper (Recommended for desktop):: + + zypper --root /mnt install -t pattern enhanced_base + + + +#. Install openSUSE zypper package system into chroot:: + + zypper --root /mnt install zypper + +#. Recommended: Install openSUSE yast2 system into chroot:: + + zypper --root /mnt install yast2 + + + .. note:: If your `/etc/resolv.conf` file is empty, proceed this command. + + echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf + + + It will make easier to configure network and other configurations for beginners. + + + +To install a desktop environment, see the `openSUSE wiki +`__ + +Step 5: System Configuration +---------------------------- + +#. Configure the hostname: + + Replace ``HOSTNAME`` with the desired hostname:: + + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts + + Add a line: + + .. code-block:: text + + 127.0.1.1 HOSTNAME + + or if the system has a real name in DNS: + + .. code-block:: text + + 127.0.1.1 FQDN HOSTNAME + + **Hint:** Use ``nano`` if you find ``vi`` confusing. + +#. Copy network information:: + + cp /etc/resolv.conf /mnt/etc + + You will reconfigure network with yast2. + + .. note:: If your `/etc/resolv.conf` file is empty, proceed this command. + + echo "nameserver 8.8.4.4" | tee -a /mnt/etc/resolv.conf + +#. Bind the virtual filesystems from the LiveCD environment to the new + system and ``chroot`` into it:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + mount -t tmpfs tmpfs /mnt/run + mkdir /mnt/run/lock + + chroot /mnt /usr/bin/env DISK=$DISK bash --login + + **Note:** This is using ``--rbind``, not ``--bind``. + +#. Configure a basic system environment:: + + ln -s /proc/self/mounts /etc/mtab + zypper refresh + + Even if you prefer a non-English system language, always ensure that + ``en_US.UTF-8`` is available:: + + locale -a + + Output must include that languages: + + * C + * C.UTF-8 + * en_US.utf8 + * POSIX + + Find yout locale from `locale -a` commands output then set it with following command. + + .. code-block:: text + + localectl set-locale LANG=en_US.UTF-8 + + +#. 
Optional: Reinstallation for stability: + + After installation it may need. Some packages may have minor errors. + For that, do this if you wish. Since there is no command like + dpkg-reconfigure in openSUSE, `zypper install -f stated as a alternative for + it `__ + but it will reinstall packages. + + .. code-block:: text + + zypper install -f permissions-config iputils ca-certificates ca-certificates-mozilla pam shadow dbus-1 libutempter0 suse-module-tools util-linux + + +#. Install kernel:: + + zypper install kernel-default kernel-firmware + + .. note:: If you installed `base` pattern, you need to deinstall busybox-grep to install `kernel-default` package. + +#. Install ZFS in the chroot environment for the new system:: + + zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Tumbleweed/filesystems.repo + zypper refresh # Refresh all repositories + zypper install zfs + + +#. For LUKS installs only, setup ``/etc/crypttab``:: + + zypper install cryptsetup + + echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab + + The use of ``initramfs`` is a work-around for `cryptsetup does not support + ZFS `__. + + **Hint:** If you are creating a mirror or raidz topology, repeat the + ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. + +#. For LUKS installs only, fix cryptsetup naming for ZFS:: + + echo 'ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}" + ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"' >> /etc/udev/rules.d/99-local-crypt.rules + + +#. Install GRUB + + Choose one of the following options: + + - Install GRUB for legacy (BIOS) booting:: + + zypper install grub2-i386-pc + + - Install GRUB for UEFI booting:: + + zypper install grub2-x86_64-efi dosfstools os-prober + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo /dev/disk/by-uuid/$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat defaults 0 0 >> /etc/fstab + mount /boot/efi + + **Notes:** + + - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present + 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size + (given the partition size of 512 MiB) for FAT32. It also works fine on + drives which present 512 B sectors. + - For a mirror or raidz topology, this step only installs GRUB on the + first disk. The other disk(s) will be handled later. + +#. Optional: Remove os-prober:: + + zypper remove os-prober + + This avoids error messages from `update-bootloader`. `os-prober` is only + necessary in dual-boot configurations. + +#. Set a root password:: + + passwd + +#. Enable importing bpool + + This ensures that ``bpool`` is always imported, regardless of whether + ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, + or whether ``zfs-import-scan.service`` is enabled. + + :: + + vi /etc/systemd/system/zfs-import-bpool.service + + .. code-block:: ini + + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + # Work-around to preserve zpool cache: + ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache + ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache + + [Install] + WantedBy=zfs-import.target + + :: + + systemctl enable zfs-import-bpool.service + +#. 
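Optional: Confirm that the new unit is enabled:
+
+   A quick check (not a required step); ``systemctl is-enabled`` should
+   work inside the chroot, since it only inspects unit files and their
+   symlinks::
+
+     systemctl is-enabled zfs-import-bpool.service
+
+#. 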
+#. Optional (but recommended): Mount a tmpfs to ``/tmp``
+
+   If you chose to create a ``/tmp`` dataset above, skip this step, as they
+   are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
+   tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
+
+   ::
+
+     cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+     systemctl enable tmp.mount
+
+
+Step 6: Kernel Installation
+---------------------------
+
+#. Add the zfs module to the list of modules loaded at boot::
+
+     echo 'zfs' >> /etc/modules-load.d/zfs.conf
+
+#. Refresh the kernel files::
+
+     kernel-install add $(uname -r) /boot/vmlinuz-$(uname -r)
+
+#. Refresh the initrd files::
+
+     mkinitrd
+
+   **Note:** On some installations, the LUKS partition cannot be seen by
+   dracut, and the command prints “Failure occured during following action:
+   configuring encrypted DM device X VOLUME_CRYPTSETUP_FAILED”. To fix this
+   issue, check the cryptsetup installation. `See for more information `__
+
+   **Note:** Although the zfs module is added to `/etc/modules-load.d`, if it
+   is not seen by dracut, add it to the initrd explicitly:
+   ``dracut --kver $(uname -r) --force --add-drivers "zfs"``
+
+
+Step 7: Grub2 Installation
+--------------------------
+
+#. Verify that the ZFS boot filesystem is recognized::
+
+     grub2-probe /boot
+
+   The output must be `zfs`.
+
+#. If you have trouble with the `grub2-probe` command, set the following::
+
+     echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
+     export ZPOOL_VDEV_NAME_PATH=YES
+
+   then return to the `grub2-probe` step.
+
+#. Workaround GRUB's missing zpool-features support::
+
+     vi /etc/default/grub
+     # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/suse"
+
+#. Optional (but highly recommended): Make debugging GRUB easier::
+
+     vi /etc/default/grub
+     # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
+     # Uncomment: GRUB_TERMINAL=console
+     # Save and quit.
+
+   Later, once the system has rebooted twice and you are sure everything is
+   working, you can undo these changes, if desired.
+
+#. Update the boot configuration::
+
+     update-bootloader
+
+   **Note:** Ignore errors from ``osprober``, if present.
+
+   **Note:** If you had trouble with the grub2 installation, consider using
+   systemd-boot instead (see Step 8).
+
+   **Note:** If this command does not give any output, generate a classic
+   grub.cfg with the following command:
+   ``grub2-mkconfig -o /boot/grub2/grub.cfg``
+
+#. Install the boot loader:
+
+   #. For legacy (BIOS) booting, install GRUB to the MBR::
+
+        grub2-install $DISK
+
+      Note that you are installing GRUB to the whole disk, not a partition.
+
+      If you are creating a mirror or raidz topology, repeat the
+      ``grub2-install`` command for each disk in the pool.
+
+   #. For UEFI booting, install GRUB to the ESP::
+
+        grub2-install --target=x86_64-efi --efi-directory=/boot/efi \
+           --bootloader-id=opensuse --recheck --no-floppy
+
+      It is not necessary to specify the disk here. If you are creating a
+      mirror or raidz topology, the additional disks will be handled later.
+
+Step 8: Systemd-Boot Installation
+---------------------------------
+
+**Warning:** This will break your YaST2 bootloader configuration. Only use
+this section if you cannot fix the problems you are having with grub2; it
+exists because grub2 sometimes fails to see the rpool pool.
+
+#. Install systemd-boot::
+
+     bootctl install
+
+#. Configure the boot loader::
+
+     tee -a /boot/efi/loader/loader.conf << EOF
+     default openSUSE_Tumbleweed.conf
+     timeout 5
+     console-mode auto
+     EOF
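+
+   Optionally, double-check the result (a sketch; ``bootctl`` output varies
+   by firmware and may be incomplete inside a chroot)::
+
+     cat /boot/efi/loader/loader.conf
+     bootctl status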
+#. Write the boot entry::
+
+     tee -a /boot/efi/loader/entries/openSUSE_Tumbleweed.conf << EOF
+     title    openSUSE Tumbleweed
+     linux    /EFI/openSUSE/vmlinuz
+     initrd   /EFI/openSUSE/initrd
+     options  root=zfs=rpool/ROOT/suse boot=zfs
+     EOF
+
+#. Copy the files into the EFI partition::
+
+     mkdir /boot/efi/EFI/openSUSE
+     cp /boot/{vmlinuz,initrd} /boot/efi/EFI/openSUSE
+
+#. Update the systemd-boot variables::
+
+     bootctl update
+
+Step 9: Filesystem Configuration
+--------------------------------
+
+#. Fix filesystem mount ordering:
+
+   We need to activate ``zfs-mount-generator``. This makes systemd aware of
+   the separate mountpoints, which is important for things like ``/var/log``
+   and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount``
+   by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature
+   of systemd automatically use ``After=var-tmp.mount``.
+
+   ::
+
+     mkdir /etc/zfs/zfs-list.cache
+     touch /etc/zfs/zfs-list.cache/bpool
+     touch /etc/zfs/zfs-list.cache/rpool
+     ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
+     zed -F &
+
+   Verify that ``zed`` updated the cache by making sure these are not empty::
+
+     cat /etc/zfs/zfs-list.cache/bpool
+     cat /etc/zfs/zfs-list.cache/rpool
+
+   If either is empty, force a cache update and check again::
+
+     zfs set canmount=on bpool/BOOT/suse
+     zfs set canmount=noauto rpool/ROOT/suse
+
+   If they are still empty, stop zed (as below), start zed (as above) and try
+   again.
+
+   Stop ``zed``::
+
+     fg
+     Press Ctrl-C.
+
+   Fix the paths to eliminate ``/mnt``::
+
+     sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
+
+Step 10: First Boot
+-------------------
+
+#. Optional: Install SSH::
+
+     zypper install --yes openssh-server
+
+     vi /etc/ssh/sshd_config
+     # Set: PermitRootLogin yes
+
+#. Optional: Snapshot the initial installation::
+
+     zfs snapshot bpool/BOOT/suse@install
+     zfs snapshot rpool/ROOT/suse@install
+
+   In the future, you will likely want to take snapshots before each
+   upgrade, and remove old snapshots (including this one) at some point to
+   save space.
+
+#. Exit from the ``chroot`` environment back to the LiveCD environment::
+
+     exit
+
+#. Run these commands in the LiveCD environment to unmount all
+   filesystems::
+
+     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+        xargs -i{} umount -lf {}
+     zpool export -a
+
+#. Reboot::
+
+     reboot
+
+   Wait for the newly installed system to boot normally. Login as root.
+
+#. Create a user account:
+
+   Replace ``username`` with your desired username::
+
+     zfs create rpool/home/username
+     adduser username
+
+     cp -a /etc/skel/. /home/username
+     chown -R username:username /home/username
+     usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
+
+#. Mirror GRUB
+
+   If you installed to multiple disks, install GRUB on the additional
+   disks.
+
+   - For legacy (BIOS) booting:
+
+     Check whether the system booted in legacy (BIOS) mode:
+
+     .. code-block:: text
+
+        efibootmgr -v
+
+     The output must contain ``legacy_boot``.
+
+     Then reinstall GRUB, repeating the command for each disk (not
+     partition) in the pool:
+
+     .. code-block:: text
+
+        grub2-install $DISK
+ + - For UEFI booting:: + + umount /boot/efi + + For the second and subsequent disks (increment debian-2 to -3, etc.):: + + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "opensuse-2" -l '\EFI\opensuse\grubx64.efi' + + mount /boot/efi + +Step 11: Optional: Configure Swap +--------------------------------- + +**Caution**: On systems with extremely high memory pressure, using a +zvol for swap can result in lockup, regardless of how much swap is still +available. There is `a bug report upstream +`__. + +#. Create a volume dataset (zvol) for use as a swap device:: + + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap + + You can adjust the size (the ``4G`` part) to your needs. + + The compression algorithm is set to ``zle`` because it is the cheapest + available algorithm. As this guide recommends ``ashift=12`` (4 kiB + blocks on disk), the common case of a 4 kiB page size means that no + compression algorithm can reduce I/O. The exception is all-zero pages, + which are dropped by ZFS; but some form of compression has to be enabled + to get this behavior. + +#. Configure the swap device: + + **Caution**: Always use long ``/dev/zvol`` aliases in configuration + files. Never use a short ``/dev/zdX`` device name. + + :: + + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume + + The ``RESUME=none`` is necessary to disable resuming from hibernation. + This does not work, as the zvol is not present (because the pool has not + yet been imported) at the time the resume script runs. If it is not + disabled, the boot process hangs for 30 seconds waiting for the swap + zvol to appear. + +#. Enable the swap device:: + + swapon -av + +Step 12: Final Cleanup +---------------------- + +#. Wait for the system to boot normally. Login using the account you + created. Ensure the system (including networking) works normally. + +#. Optional: Delete the snapshots of the initial installation:: + + sudo zfs destroy bpool/BOOT/suse@install + sudo zfs destroy rpool/ROOT/suse@install + +#. Optional: Disable the root password:: + + sudo usermod -p '*' root + +#. Optional (but highly recommended): Disable root SSH logins: + + If you installed SSH earlier, revert the temporary change:: + + vi /etc/ssh/sshd_config + # Remove: PermitRootLogin yes + + systemctl restart sshd + +#. Optional: Re-enable the graphical boot process: + + If you prefer the graphical boot process, you can re-enable it now. If + you are using LUKS, it makes the prompt look nicer. + + :: + + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. + + sudo update-bootloader + + **Note:** Ignore errors from ``osprober``, if present. + +#. Optional: For LUKS installs only, backup the LUKS header:: + + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat + + Store that backup somewhere safe (e.g. cloud storage). It is protected by + your LUKS passphrase, but you may wish to use additional encryption. + + **Hint:** If you created a mirror or raidz topology, repeat this for each + LUKS volume (``luks2``, etc.). 
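+
+   For example, in a two-disk mirror the backup of the second volume might
+   look like this (a sketch; the disk ID is a placeholder and must match
+   your actual second disk)::
+
+     sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk2-part4 \
+        --header-backup-file luks2-header.dat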
+ +Troubleshooting +--------------- + +Rescuing using a Live CD +~~~~~~~~~~~~~~~~~~~~~~~~ + +Go through `Step 1: Prepare The Install Environment +<#step-1-prepare-the-install-environment>`__. + +For LUKS, first unlock the disk(s):: + + zypper install cryptsetup + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. + +Mount everything correctly:: + + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/suse + zfs mount -a + +If needed, you can chroot into your installed environment:: + + mount --make-private --rbind /dev /mnt/dev + mount --make-private --rbind /proc /mnt/proc + mount --make-private --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot/efi + mount -a + +Do whatever you need to do to fix your system. + +When done, cleanup:: + + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \ + xargs -i{} umount -lf {} + zpool export -a + reboot + +Areca +~~~~~ + +Systems that require the ``arcsas`` blob driver should add it to the +``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``. + +Upgrade or downgrade the Areca driver if something like +``RIP: 0010:[] [] native_read_tsc+0x6/0x20`` +appears anywhere in kernel log. ZoL is unstable on systems that emit this +error message. + +MPT2SAS +~~~~~~~ + +Most problem reports for this tutorial involve ``mpt2sas`` hardware that does +slow asynchronous drive initialization, like some IBM M1015 or OEM-branded +cards that have been flashed to the reference LSI firmware. + +The basic problem is that disks on these controllers are not visible to the +Linux kernel until after the regular system is started, and ZoL does not +hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330 +`__. + +Most LSI cards are perfectly compatible with ZoL. If your card has this +glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in +``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to +appear before importing the pool. + +QEMU/KVM/XEN +~~~~~~~~~~~~ + +Set a unique serial number on each virtual disk using libvirt or qemu +(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). + +To be able to use UEFI in guests (instead of only BIOS booting), run +this on the host:: + + sudo zypper install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] + +:: + + sudo systemctl restart libvirtd.service + +VMware +~~~~~~ + +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration. + Doing this ensures that ``/dev/disk`` aliases are created in the guest. + + +External Links +~~~~~~~~~~~~~~ +* `OpenZFS on openSUSE `__ +* `ZenLinux Blog - How to Setup an openSUSE chroot + `__ diff --git a/_sources/License.rst.txt b/_sources/License.rst.txt new file mode 100644 index 000000000..d2adea9a2 --- /dev/null +++ b/_sources/License.rst.txt @@ -0,0 +1,41 @@ +License +======= + +- The OpenZFS software is licensed under the Common Development and Distribution License + (`CDDL `__) unless otherwise noted. 
+ +- The OpenZFS documentation content is licensed under a Creative Commons Attribution-ShareAlike + license (`CC BY-SA 3.0 `__) + unless otherwise noted. + +- OpenZFS is an associated project of SPI (`Software in the Public Interest + `__). SPI is a 501(c)(3) nonprofit + organization which handles the donations, finances, and legal holdings of the project. + +.. note:: + The Linux Kernel is licensed under the GNU General Public License + Version 2 (`GPLv2 `__). While + both (OpenZFS and Linux Kernel) are free open source licenses they are + restrictive licenses. The combination of them causes problems because it + prevents using pieces of code exclusively available under one license + with pieces of code exclusively available under the other in the same binary. + In the case of the Linux Kernel, this prevents us from distributing OpenZFS + as part of the Linux Kernel binary. However, there is nothing in either license + that prevents distributing it in the form of a binary module or in the form + of source code. + + Additional reading and opinions: + + - `Software Freedom Law + Center `__ + - `Software Freedom + Conservancy `__ + - `Free Software + Foundation `__ + - `Encouraging closed source + modules `__ + +CC BY-SA 3.0: |Creative Commons License| + +.. |Creative Commons License| image:: https://i.creativecommons.org/l/by-sa/3.0/88x31.png + :target: http://creativecommons.org/licenses/by-sa/3.0/ diff --git a/_sources/Performance and Tuning/Async Write.rst.txt b/_sources/Performance and Tuning/Async Write.rst.txt new file mode 100644 index 000000000..692b72d3c --- /dev/null +++ b/_sources/Performance and Tuning/Async Write.rst.txt @@ -0,0 +1,36 @@ +Async Writes +============ + +The number of concurrent operations issued for the async write I/O class +follows a piece-wise linear function defined by a few adjustable points. + +:: + + | o---------| <-- zfs_vdev_async_write_max_active + ^ | /^ | + | | / | | + active | / | | + I/O | / | | + count | / | | + | / | | + |-------o | | <-- zfs_vdev_async_write_min_active + 0|_______^______|_________| + 0% | | 100% of zfs_dirty_data_max + | | + | `-- zfs_vdev_async_write_active_max_dirty_percent + `--------- zfs_vdev_async_write_active_min_dirty_percent + +Until the amount of dirty data exceeds a minimum percentage of the dirty +data allowed in the pool, the I/O scheduler will limit the number of +concurrent operations to the minimum. As that threshold is crossed, the +number of concurrent operations issued increases linearly to the maximum +at the specified maximum percentage of the dirty data allowed in the +pool. + +Ideally, the amount of dirty data on a busy pool will stay in the sloped +part of the function between +zfs_vdev_async_write_active_min_dirty_percent and +zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the maximum +percentage, this indicates that the rate of incoming data is greater +than the rate that the backend storage can handle. In this case, we must +further throttle incoming writes, as described in the next section. diff --git a/_sources/Performance and Tuning/Hardware.rst.txt b/_sources/Performance and Tuning/Hardware.rst.txt new file mode 100644 index 000000000..0ca9f0280 --- /dev/null +++ b/_sources/Performance and Tuning/Hardware.rst.txt @@ -0,0 +1,808 @@ +Hardware +******** + +.. contents:: Table of Contents + :local: + +Introduction +============ + +Storage before ZFS involved rather expensive hardware that was unable to +protect against silent corruption and did not scale very well. 
The +introduction of ZFS has enabled people to use far less expensive +hardware than previously used in the industry with superior scaling. +This page attempts to provide some basic guidance to people buying +hardware for use in ZFS-based servers and workstations. + +Hardware that adheres to this guidance will enable ZFS to reach its full +potential for performance and reliability. Hardware that does not adhere +to it will serve as a handicap. Unless otherwise stated, such handicaps +apply to all storage stacks and are by no means specific to ZFS. Systems +built using competing storage stacks will also benefit from these +suggestions. + +.. _bios_cpu_microcode_updates: + +BIOS / CPU microcode updates +============================ + +Running the latest BIOS and CPU microcode is highly recommended. + +Background +---------- + +Computer microprocessors are very complex designs that often have bugs, +which are called errata. Modern microprocessors are designed to utilize +microcode. This puts part of the hardware design into quasi-software +that can be patched without replacing the entire chip. Errata are often +resolved through CPU microcode updates. These are often bundled in BIOS +updates. In some cases, the BIOS interactions with the CPU through +machine registers can be modified to fix things with the same microcode. +If a newer microcode is not bundled as part of a BIOS update, it can +often be loaded by the operating system bootloader or the operating +system itself. + +.. _ecc_memory: + +ECC Memory +========== + +Bit flips can have fairly dramatic consequences for all computer +filesystems and ZFS is no exception. No technique used in ZFS (or any +other filesystem) is capable of protecting against bit flips. +Consequently, ECC Memory is highly recommended. + +.. _background_1: + +Background +---------- + +Ordinary background radiation will randomly flip bits in computer +memory, which causes undefined behavior. These are known as "bit flips". +Each bit flip can have any of four possible consequences depending on +which bit is flipped: + +- Bit flips can have no effect. + + - Bit flips that have no effect occur in unused memory. + +- Bit flips can cause runtime failures. + + - This is the case when a bit flip occurs in something read from + disk. + - Failures are typically observed when program code is altered. + - If the bit flip is in a routine within the system's kernel or + /sbin/init, the system will likely crash. Otherwise, reloading the + affected data can clear it. This is typically achieved by a + reboot. + +- It can cause data corruption. + + - This is the case when the bit is in use by data being written to + disk. + - If the bit flip occurs before ZFS' checksum calculation, ZFS will + not realize that the data is corrupt. + - If the bit flip occurs after ZFS' checksum calculation, but before + write-out, ZFS will detect it, but it might not be able to correct + it. + +- It can cause metadata corruption. + + - This is the case when a bit flips in an on-disk structure being + written to disk. + - If the bit flip occurs before ZFS' checksum calculation, ZFS will + not realize that the metadata is corrupt. + - If the bit flip occurs after ZFS' checksum calculation, but before + write-out, ZFS will detect it, but it might not be able to correct + it. + - Recovery from such an event will depend on what was corrupted. In + the worst, case, a pool could be rendered unimportable. + + - All filesystems have poor reliability in their absolute worst + case bit-flip failure scenarios. 
Such scenarios should be + considered extraordinarily rare. + +.. _drive_interfaces: + +Drive Interfaces +================ + +.. _sas_versus_sata: + +SAS versus SATA +--------------- + +ZFS depends on the block device layer for storage. Consequently, ZFS is +affected by the same things that affect other filesystems, such as +driver support and non-working hardware. Consequently, there are a few +things to note: + +- Never place SATA disks into a SAS expander without a SAS interposer. + + - If you do this and it does work, it is the exception, rather than + the rule. + +- Do not expect SAS controllers to be compatible with SATA port + multipliers. + + - This configuration is typically not tested. + - The disks could be unrecognized. + +- Support for SATA port multipliers is inconsistent across OpenZFS + platforms + + - Linux drivers generally support them. + - Illumos drivers generally do not support them. + - FreeBSD drivers are somewhere between Linux and Illumos in terms + of support. + +.. _usb_hard_drives_andor_adapters: + +USB Hard Drives and/or Adapters +------------------------------- + +These have problems involving sector size reporting, SMART passthrough, +the ability to set ERC and other areas. ZFS will perform as well on such +devices as they are capable of allowing, but try to avoid them. They +should not be expected to have the same up-time as SAS and SATA drives +and should be considered unreliable. + +Controllers +=========== + +The ideal storage controller for ZFS has the following attributes: + +- Driver support on major OpenZFS platforms + + - Stability is important. + +- High per-port bandwidth + + - PCI Express interface bandwidth divided by the number of ports + +- Low cost + + - Support for RAID, Battery Backup Units and hardware write caches + is unnecessary. + +Marc Bevand's blog post `From 32 to 2 ports: Ideal SATA/SAS Controllers +for ZFS & Linux MD RAID `__ contains an +excellent list of storage controllers that meet these criteria. He +regularly updates it as newer controllers become available. + +.. _hardware_raid_controllers: + +Hardware RAID controllers +------------------------- + +Hardware RAID controllers should not be used with ZFS. While ZFS will +likely be more reliable than other filesystems on Hardware RAID, it will +not be as reliable as it would be on its own. + +- Hardware RAID will limit opportunities for ZFS to perform self + healing on checksum failures. When ZFS does RAID-Z or mirroring, a + checksum failure on one disk can be corrected by treating the disk + containing the sector as bad for the purpose of reconstructing the + original information. This cannot be done when a RAID controller + handles the redundancy unless a duplicate copy is stored by ZFS in + the case that the corruption involving as metadata, the copies flag + is set or the RAID array is part of a mirror/raid-z vdev within ZFS. + +- Sector size information is not necessarily passed correctly by + hardware RAID on RAID 1. Sector size information cannot be passed + correctly on RAID 5/6. + Hardware RAID 1 is more likely to experience read-modify-write + overhead from partial sector writes while Hardware RAID 5/6 will almost + certainty suffer from partial stripe writes (i.e. the RAID write + hole). ZFS using the disks natively allows it to obtain the + sector size information reported by the disks to avoid + read-modify-write on sectors, while ZFS avoids partial stripe writes + on RAID-Z by design from using copy-on-write. 
+ + - There can be sector alignment problems on ZFS when a drive + misreports its sector size. Such drives are typically NAND-flash + based solid state drives and older SATA drives from the advanced + format (4K sector size) transition before Windows XP EoL occurred. + This can be :ref:`manually corrected ` at + vdev creation. + - It is possible for the RAID header to cause misalignment of sector + writes on RAID 1 by starting the array within a sector on an + actual drive, such that manual correction of sector alignment at + vdev creation does not solve the problem. + +- RAID controller failures can require that the controller be replaced with + the same model, or in less extreme cases, a model from the same + manufacturer. Using ZFS by itself allows any controller to be used. + +- If a hardware RAID controller's write cache is used, an additional + failure point is introduced that can only be partially mitigated by + additional complexity from adding flash to save data in power loss + events. The data can still be lost if the battery fails when it is + required to survive a power loss event or there is no flash and power + is not restored in a timely manner. The loss of the data in the write + cache can severely damage anything stored on a RAID array when many + outstanding writes are cached. In addition, all writes are stored in + the cache rather than just synchronous writes that require a write + cache, which is inefficient, and the write cache is relatively small. + ZFS allows synchronous writes to be written directly to flash, which + should provide similar acceleration to hardware RAID and the ability + to accelerate many more in-flight operations. + +- Behavior during RAID reconstruction when silent corruption damages + data is undefined. There are reports of RAID 5 and 6 arrays being + lost during reconstruction when the controller encounters silent + corruption. ZFS' checksums allow it to avoid this situation by + determining whether enough information exists to reconstruct data. If + not, the file is listed as damaged in zpool status and the + system administrator has the opportunity to restore it from a backup. + +- IO response times will be reduced whenever the OS blocks on IO + operations because the system CPU blocks on a much weaker embedded + CPU used in the RAID controller. This lowers IOPS relative to what + ZFS could have achieved. + +- The controller's firmware is an additional layer of complexity that + cannot be inspected by arbitrary third parties. The ZFS source code + is open source and can be inspected by anyone. + +- If multiple RAID arrays are formed by the same controller and one + fails, the identifiers provided by the arrays exposed to the OS might + become inconsistent. Giving the drives directly to the OS allows this + to be avoided via naming that maps to a unique port or unique drive + identifier. + + - e.g. If you have arrays A, B, C and D; array B dies, the + interaction between the hardware RAID controller and the OS might + rename arrays C and D to look like arrays B and C respectively. + This can fault pools verbatim imported from the cachefile. + - Not all RAID controllers behave this way. This issue has + been observed on both Linux and FreeBSD when system administrators + used single drive RAID 0 arrays, however. It has also been observed + with controllers from different vendors. 
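+
+As an illustrative sketch (the pool and device names below are placeholders),
+handing whole disks to ZFS by their stable ``/dev/disk/by-id`` names avoids
+this renaming problem entirely::
+
+  zpool create tank mirror \
+    /dev/disk/by-id/ata-EXAMPLE_DISK_A \
+    /dev/disk/by-id/ata-EXAMPLE_DISK_B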
+ +One might be inclined to try using single-drive RAID 0 arrays to try to +use a RAID controller like a HBA, but this is not recommended for many +of the reasons listed for other hardware RAID types. It is best to use a +HBA instead of a RAID controller, for both performance and reliability. + +.. _hard_drives: + +Hard drives +=========== + +.. _sector_size: + +Sector Size +----------- + +Historically, all hard drives had 512-byte sectors, with the exception +of some SCSI drives that could be modified to support slightly larger +sectors. In 2009, the industry migrated from 512-byte sectors to +4096-byte "Advanced Format" sectors. Since Windows XP is not compatible +with 4096-byte sectors or drives larger than 2TB, some of the first +advanced format drives implemented hacks to maintain Windows XP +compatibility. + +- The first advanced format drives on the market misreported their + sector size as 512-bytes for Windows XP compatibility. As of 2013, it + is believed that such hard drives are no longer in production. + Advanced format hard drives made during or after this time should + report their true physical sector size. +- Drives storing 2TB and smaller might have a jumper that can be set to + map all sectors off by 1. This to provide proper alignment for + Windows XP, which started its first partition at sector 63. This + jumper setting should be off when using such drives with ZFS. + +As of 2014, there are still 512-byte and 4096-byte drives on the market, +but they are known to properly identify themselves unless behind a USB +to SATA controller. Replacing a 512-byte sector drive with a 4096-byte +sector drives in a vdev created with 512-byte sector drives will +adversely affect performance. Replacing a 4096-byte sector drive with a +512-byte sector drive will have no negative effect on performance. + +.. _error_recovery_control: + +Error recovery control +---------------------- + +ZFS is said to be able to use cheap drives. This was true when it was +introduced and hard drives supported Error recovery control. Since ZFS' +introduction, error recovery control has been removed from low-end +drives from certain manufacturers, most notably Western Digital. +Consistent performance requires hard drives that support error recovery +control. + +.. _background_2: + +Background +~~~~~~~~~~ + +Hard drives store data using small polarized regions a magnetic surface. +Reading from and/or writing to this surface poses a few reliability +problems. One is that imperfections in the surface can corrupt bits. +Another is that vibrations can cause drive heads to miss their targets. +Consequently, hard drive sectors are composed of three regions: + +- A sector number +- The actual data +- ECC + +The sector number and ECC enables hard drives to detect and respond to +such events. When either event occurs during a read, hard drives will +retry the read many times until they either succeed or conclude that the +data cannot be read. The latter case can take a substantial amount of +time and consequently, IO to the drive will stall. + +Enterprise hard drives and some consumer hard drives implement a feature +called Time-Limited Error Recovery (TLER) by Western Digital, Error +Recovery Control (ERC) by Seagate and Command Completion Time Limit by +Hitachi and Samsung, which permits the time drives are willing to spend +on such events to be limited by the system administrator. + +Drives that lack such functionality can be expected to have arbitrarily +high limits. Several minutes is not impossible. 
Drives with this +functionality typically default to 7 seconds. ZFS does not currently +adjust this setting on drives. However, it is advisable to write a +script to set the error recovery time to a low value, such as 0.1 +seconds until ZFS is modified to control it. This must be done on every +boot. + +.. _rpm_speeds: + +RPM Speeds +---------- + +High RPM drives have lower seek times, which is historically regarded as +being desirable. They increase cost and sacrifice storage density in +order to achieve what is typically no more than a factor of 6 +improvement over their lower RPM counterparts. + +To provide some numbers, a 15k RPM drive from a major manufacturer is +rated for 3.4 millisecond average read and 3.9 millisecond average +write. Presumably, this number assumes that the target sector is at most +half the number of drive tracks away from the head and half the disk +away. Being even further away is worst-case 2 times slower. Manufacturer +numbers for 7200 RPM drives are not available, but they average 13 to 16 +milliseconds in empirical measurements. 5400 RPM drives can be expected +to be slower. + +ARC and ZIL are able to mitigate much of the benefit of lower seek +times. Far larger increases in IOPS performance can be obtained by +adding additional RAM for ARC, L2ARC devices and SLOG devices. Even +higher increases in performance can be obtained by replacing hard drives +with solid state storage entirely. Such things are typically more cost +effective than high RPM drives when considering IOPS. + +.. _command_queuing: + +Command Queuing +--------------- + +Drives with command queues are able to reorder IO operations to increase +IOPS. This is called Native Command Queuing on SATA and Tagged Command +Queuing on PATA/SCSI/SAS. ZFS stores objects in metaslabs and it can use +several metastabs at any given time. Consequently, ZFS is not only +designed to take advantage of command queuing, but good ZFS performance +requires command queuing. Almost all drives manufactured within the past +10 years can be expected to support command queuing. The exceptions are: + +- Consumer PATA/IDE drives +- First generation SATA drives, which used IDE to SATA translation + chips, from 2003 to 2004. +- SATA drives operating under IDE emulation that was configured in the + system BIOS. + +Each OpenZFS system has different methods for checking whether command +queuing is supported. On Linux, ``hdparm -I /path/to/device \| grep +Queue`` is used. On FreeBSD, ``camcontrol identify $DEVICE`` is used. + +.. _nand_flash_ssds: + +NAND Flash SSDs +=============== + +As of 2014, Solid state storage is dominated by NAND-flash and most +articles on solid state storage focus on it exclusively. As of 2014, the +most popular form of flash storage used with ZFS involve drives with +SATA interfaces. Enterprise models with SAS interfaces are beginning to +become available. + +As of 2017, Solid state storage using NAND-flash with PCI-E interfaces +are widely available on the market. They are predominantly enterprise +drives that utilize a NVMe interface that has lower overhead than the +ATA used in SATA or SCSI used in SAS. There is also an interface known +as M.2 that is primarily used by consumer SSDs, although not necessarily +limited to them. It can provide electrical connectivity for multiple +buses, such as SATA, PCI-E and USB. M.2 SSDs appear to use either SATA +or NVME. + +.. 
_nvme_low_level_formatting: + +NVMe low level formatting +------------------------- + +Many NVMe SSDs support both 512-byte sectors and 4096-byte sectors. They +often ship with 512-byte sectors, which are less performant than +4096-byte sectors. Some also support metadata for T10/DIF CRC to try to +improve reliability, although this is unnecessary with ZFS. + +NVMe drives should be +`formatted `__ +to use 4096-byte sectors without metadata prior to being given to ZFS +for best performance unless they indicate that 512-byte sectors are as +performant as 4096-byte sectors, although this is unlikely. Lower +numbers in the Rel_Perf of Supported LBA Sizes from ``smartctl -a +/dev/$device_namespace`` (for example ``smartctl -a /dev/nvme1n1``) +indicate higher performance low level formats, with 0 being the best. +The current formatting will be marked by a plus sign under the format +Fmt. + +You may format a drive using ``nvme format /dev/nvme1n1 -l $ID``. The $ID +corresponds to the Id field value from the Supported LBA Sizes SMART +information. + +.. _power_failure_protection: + +Power Failure Protection +------------------------ + +.. _background_3: + +Background +~~~~~~~~~~ + +On-flash data structures are highly complex and traditionally have been +highly vulnerable to corruption. In the past, such corruption would +result in the loss of \*all\* drive data and an event such as a PSU +failure could result in multiple drives simultaneously failing. Since +the drive firmware is not available for review, the traditional +conclusion was that all drives that lack hardware features to avoid +power failure events cannot be trusted, which was found to be the case +multiple times in the +past [#ssd_analysis]_ [#ssd_analysis2]_ [#ssd_analysis3]_. +Discussion of power failures bricking NAND flash SSDs appears to have +vanished from literature following the year 2015. SSD manufacturers now +claim that firmware power loss protection is robust enough to provide +equivalent protection to hardware power loss protection. `Kingston is one +example `__. +Firmware power loss protection is used to guarantee the protection of +flushed data and the drives’ own metadata, which is all that filesystems +such as ZFS need. + +However, those that either need or want strong guarantees that firmware +bugs are unlikely to be able to brick drives following power loss events +should continue to use drives that provide hardware power loss +protection. The basic concept behind how hardware power failure +protection works has been `documented by +Intel `__ +for those who wish to read about the details. As of 2020, use of +hardware power loss protection is now a feature solely of enterprise +SSDs that attempt to protect unflushed data in addition to drive +metadata and flushed data. This additional protection beyond protecting +flushed data and the drive metadata provides no additional benefit to +ZFS, but it does not hurt it. + +It should also be noted that drives in data centers and laptops are +unlikely to experience power loss events, reducing the usefulness of +hardware power loss protection. This is especially the case in +datacenters where redundant power, UPS power and the use of IPMI to do +forced reboots should prevent most drives from experiencing power loss +events. + +Lists of drives that provide hardware power loss protection are +maintained below for those who need/want it. 
Since ZFS, like other +filesystems, only requires power failure protection for flushed data and +drive metadata, older drives that only protect these things are included +on the lists. + +.. _nvme_drives_with_power_failure_protection: + +NVMe drives with power failure protection +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A non-exhaustive list of NVMe drives with power failure protection is as +follows: + +- Intel 750 +- Intel DC P3500/P3600/P3608/P3700 +- Micron 7300/7400/7450 PRO/MAX +- Samsung PM963 (M.2 form factor) +- Samsung PM1725/PM1725a +- Samsung XS1715 +- Toshiba ZD6300 +- Seagate Nytro 5000 M.2 (XP1920LE30002 tested; **read notes below + before buying**) + + - Inexpensive 22110 M.2 enterprise drive using consumer MLC that is + optimized for read mostly workloads. It is not a good choice for a + SLOG device, which is a write mostly workload. + - The + `manual `__ + for this drive specifies airflow requirements. If the drive does + not receive sufficient airflow from case fans, it will overheat at + idle. It's thermal throttling will severely degrade performance + such that write throughput performance will be limited to 1/10 of + the specification and read latencies will reach several hundred + milliseconds. Under continuous load, the device will continue to + become hotter until it suffers a "degraded reliability" event + where all data on at least one NVMe namespace is lost. The NVMe + namespace is then unusable until a secure erase is done. Even with + sufficient airflow under normal circumstances, data loss is + possible under load following the failure of fans in an enterprise + environment. Anyone deploying this into production in an + enterprise environment should be mindful of this failure mode. + - Those who wish to use this drive in a low airflow situation can + workaround this failure mode by placing a passive heatsink such as + `this `__ on the + NAND flash controller. It is the chip under the sticker closest to + the capacitors. This was tested by placing the heatsink over the + sticker (as removing it was considered undesirable). The heatsink + will prevent the drive from overheating to the point of data loss, + but it will not fully alleviate the overheating situation under + load without active airflow. A scrub will cause it to overheat + after a few hundred gigabytes are read. However, the thermal + throttling will quickly cool the drive from 76 degrees Celsius to + 74 degrees Celsius, restoring performance. + + - It might be possible to use the heatsink in an enterprise + environment to provide protection against data loss following + fan failures. However, this was not evaluated. Furthermore, + operating temperatures for consumer NAND flash should be at or + above 40 degrees Celsius for long term data integrity. + Therefore, the use of a heatsink to provide protection against + data loss following fan failures in an enterprise environment + should be evaluated before deploying drives into production to + ensure that the drive is not overcooled. + +.. _sas_drives_with_power_failure_protection: + +SAS drives with power failure protection +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A non-exhaustive list of SAS drives with power failure protection is as +follows: + +- Samsung PM1633/PM1633a +- Samsung SM1625 +- Samsung PM853T +- Toshiba PX05SHB***/PX04SHB***/PX04SHQ**\* +- Toshiba PX05SLB***/PX04SLB***/PX04SLQ**\* +- Toshiba PX05SMB***/PX04SMB***/PX04SMQ**\* +- Toshiba PX05SRB***/PX04SRB***/PX04SRQ**\* +- Toshiba PX05SVB***/PX04SVB***/PX04SVQ**\* + +.. 
_sata_drives_with_power_failure_protection: + +SATA drives with power failure protection +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A non-exhaustive list of SATA drives with power failure protection is as +follows: + +- Crucial MX100/MX200/MX300 +- Crucial M500/M550/M600 +- Intel 320 + + - Early reports claimed that the 330 and 335 had power failure + protection too, `but they do + not `__. + +- Intel 710 +- Intel 730 +- Intel DC S3500/S3510/S3610/S3700/S3710 +- Kingston DC500R/DC500M +- Micron 5210 Ion + + - First QLC drive on the list. High capacity with a low price per + gigabyte. + +- Samsung PM863/PM863a +- Samsung SM843T (do not confuse with SM843) +- Samsung SM863/SM863a +- Samsung 845DC Evo +- Samsung 845DC Pro + + - `High sustained write + IOPS `__ + +- Toshiba HK4E/HK3E2 +- Toshiba HK4R/HK3R2/HK3R + +.. _criteriaprocess_for_inclusion_into_these_lists: + +Criteria/process for inclusion into these lists +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +These lists have been compiled on a volunteer basis by OpenZFS +contributors (mainly Richard Yao) from trustworthy sources of +information. The lists are intended to be vendor neutral and are not +intended to benefit any particular manufacturer. Any perceived bias +toward any manufacturer is caused by a lack of awareness and a lack of +time to research additional options. Confirmation of the presence of +adequate power loss protection by a reliable source is the only +requirement for inclusion into this list. Adequate power loss protection +means that the drive must protect both its own internal metadata and all +flushed data. Protection of unflushed data is irrelevant and therefore +not a requirement. ZFS only expects storage to protect flushed data. +Consequently, solid state drives whose power loss protection only +protects flushed data is sufficient for ZFS to ensure that data remains +safe. + +Anyone who believes an unlisted drive to provide adequate power failure +protection may contact the :ref:`mailing_lists` with +a request for inclusion and substantiation for the claim that power +failure protection is provided. Examples of substantiation include +pictures of drive internals showing the presence of capacitors, +statements by well regarded independent review sites such as Anandtech +and manufacturer specification sheets. The latter are accepted on the +honor system until a manufacturer is found to misstate reality on the +protection of the drives' own internal metadata structures and/or the +protection of flushed data. Thus far, all manufacturers have been +honest. + +.. _flash_pages: + +Flash pages +----------- + +The smallest unit on a NAND chip that can be written is a flash page. +The first NAND-flash SSDs on the market had 4096-byte pages. Further +complicating matters is that the the page size has been doubled twice +since then. NAND flash SSDs **should** report these pages as being +sectors, but so far, all of them incorrectly report 512-byte sectors for +Windows XP compatibility. The consequence is that we have a similar +situation to what we had with early advanced format hard drives. + +As of 2014, most NAND-flash SSDs on the market have 8192-byte page +sizes. However, models using 128-Gbit NAND from certain manufacturers +have a 16384-byte page size. Maximum performance requires that vdevs be +created with correct ashift values (13 for 8192-byte and 14 for +16384-byte). However, not all OpenZFS platforms support this. The Linux +port supports ashift=13, while others are limited to ashift=12 +(4096-byte). 
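+
+For example (a sketch; the pool and device names are placeholders), the
+ashift is set per vdev when the vdev is created::
+
+  zpool create -o ashift=13 tank /dev/disk/by-id/nvme-EXAMPLE_SSD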
+ +As of 2017, NAND-flash SSDs are tuned for 4096-byte IOs. Matching the +flash page size is unnecessary and ashift=12 is usually the correct +choice. Public documentation on flash page size is also nearly +non-existent. + +.. _ata_trim_scsi_unmap: + +ATA TRIM / SCSI UNMAP +--------------------- + +It should be noted that this is a separate case from +discard on zvols or hole punching on filesystems. Those work regardless +of whether ATA TRIM / SCSI UNMAP is sent to the actual block devices. + +.. _ata_trim_performance_issues: + +ATA TRIM Performance Issues +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ATA TRIM command in SATA 3.0 and earlier is a non-queued command. +Issuing a TRIM command on a SATA drive conforming to SATA 3.0 or earlier +will cause the drive to drain its IO queue and stop servicing requests +until it finishes, which hurts performance. SATA 3.1 removed this +limitation, but very few SATA drives on the market are conformant to +SATA 3.1 and it is difficult to distinguish them from SATA 3.0 drives. +At the same time, SCSI UNMAP has no such problems. + +.. _optane_3d_xpoint_ssds: + +Optane / 3D XPoint SSDs +======================= + +These are SSDs with far better latencies and write endurance than NAND +flash SSDs. They are byte addressable, such that ashift=9 is fine for +use on them. Unlike NAND flash SSDs, they do not require any special +power failure protection circuitry for reliability. There is also no +need to run TRIM on them. However, they cost more per GB than NAND flash +(as of 2020). The enterprise models make excellent SLOG devices. Here is +a list of models that are known to perform well: + +- `Intel DC + P4800X `__ + +- `Intel DC + P4801X `__ + +- `Intel DC + P1600X `__ + +Note that SLOG devices rarely have more than 4GB in use at any given +time, so the smaller sized devices are generally the best choice in +terms of cost, with larger sizes giving no benefit. Larger sizes could +be a good choice for other vdev types, depending on performance needs +and cost considerations. + +Power +===== + +Ensuring that computers are properly grounded is highly recommended. +There have been cases in user homes where machines experienced random +failures when plugged into power receptacles that had open grounds (i.e. +no ground wire at all). This can cause random failures on any computer +system, whether it uses ZFS or not. + +Power should also be relatively stable. Large dips in voltages from +brownouts are preferably avoided through the use of UPS units or line +conditioners. Systems subject to unstable power that do not outright +shutdown can exhibit undefined behavior. PSUs with longer hold-up times +should be able to provide partial protection against this, but hold up +times are often undocumented and are not a substitute for a UPS or line +conditioner. + +.. _pwr_ok_signal: + +PWR_OK signal +------------- + +PSUs are supposed to deassert a PWR_OK signal to indicate that provided +voltages are no longer within the rated specification. This should force +an immediate shutdown. However, the system clock of a developer +workstation was observed to significantly deviate from the expected +value following during a series of ~1 second brown outs. This machine +did not use a UPS at the time. However, the PWR_OK mechanism should have +protected against this. The observation of the PWR_OK signal failing to +force a shutdown with adverse consequences (to the system clock in this +case) suggests that the PWR_OK mechanism is not a strict guarantee. + +.. 
_psu_hold_up_times: + +PSU Hold-up Times +----------------- + +A PSU hold-up time is the amount of time that a PSU can continue to +output power at maximum output within standard voltage tolerances +following the loss of input power. This is important for supporting UPS +units because `the transfer +time `__ +taken by a standard UPS to supply power from its battery can leave +machines without power for "5-12 ms". `Intel's ATX Power Supply design +guide `__ +specifies a hold up time of 17 milliseconds at maximum continuous +output. The hold-up time is a inverse function of how much power is +being output by the PSU, with lower power output increasing holdup +times. + +Capacitor aging in PSUs will lower the hold-up time below what it was +when new, which could cause reliability issues as equipment ages. +Machines using substandard PSUs with hold-up times below the +specification therefore require higher end UPS units for protection to +ensure that the transfer time does not exceed the hold-up time. A +hold-up time below the transfer time during a transfer to battery power +can cause undefined behavior should the PWR_OK signal not become +deasserted to force the machine to power off. + +If in doubt, use a double conversion UPS unit. Double conversion UPS +units always run off the battery, such that the transfer time is 0. This +is unless they are high efficiency models that are hybrids between +standard UPS units and double conversion UPS units, although these are +reported to have much lower transfer times than standard PSUs. You could +also contact your PSU manufacturer for the hold up time specification, +but if reliability for years is a requirement, you should use a higher +end UPS with a low transfer time. + +Note that double conversion units are at most 94% efficient unless they +support a high efficiency mode, which adds latency to the time to +transition to battery power. + +.. _ups_batteries: + +UPS batteries +------------- + +The lead acid batteries in UPS units generally need to be replaced +regularly to ensure that they provide power during power outages. For +home systems, this is every 3 to 5 years, although this varies with +temperature [#ups_temp]_. For +enterprise systems, contact your vendor. + + +.. rubric:: Footnotes + +.. [#ssd_analysis] +.. [#ssd_analysis2] +.. [#ssd_analysis3] +.. [#ups_temp] diff --git a/_sources/Performance and Tuning/Module Parameters.rst.txt b/_sources/Performance and Tuning/Module Parameters.rst.txt new file mode 100644 index 000000000..01267e745 --- /dev/null +++ b/_sources/Performance and Tuning/Module Parameters.rst.txt @@ -0,0 +1,9557 @@ +Module Parameters +================= + +Most of the ZFS kernel module parameters are accessible in the SysFS +``/sys/module/zfs/parameters`` directory. Current values can be observed +by + +.. code:: shell + + cat /sys/module/zfs/parameters/PARAMETER + +Many of these can be changed by writing new values. These are denoted by +Change|Dynamic in the PARAMETER details below. + +.. code:: shell + + echo NEWVALUE >> /sys/module/zfs/parameters/PARAMETER + +If the parameter is not dynamically adjustable, an error can occur and +the value will not be set. It can be helpful to check the permissions +for the PARAMETER file in SysFS. + +In some cases, the parameter must be set prior to loading the kernel +modules or it is desired to have the parameters set automatically at +boot time. 
For many distros, this can be accomplished by creating a file +named ``/etc/modprobe.d/zfs.conf`` containing a text line for each +module parameter using the format: + +:: + + # change PARAMETER for workload XZY to solve problem PROBLEM_DESCRIPTION + # changed by YOUR_NAME on DATE + options zfs PARAMETER=VALUE + +Some parameters related to ZFS operations are located in module +parameters other than in the ``zfs`` kernel module. These are documented +in the individual parameter description. Unless otherwise noted, the +tunable applies to the ``zfs`` kernel module. For example, the ``icp`` +kernel module parameters are visible in the +``/sys/module/icp/parameters`` directory and can be set by default at +boot time by changing the ``/etc/modprobe.d/icp.conf`` file. + +See the man page for *modprobe.d* for more information. + +Manual Pages +------------ + +The `zfs(4) <../man/4/zfs.4.html>`_ and `spl(4) <../man/4/spl.4.html>`_ man +pages (previously ``zfs-`` and ``spl-module-parameters(5)``, respectively, +prior to OpenZFS 2.1) contain brief descriptions of +the module parameters. Alas, man pages are not as suitable for quick +reference as documentation pages. This page is intended to be a better +cross-reference and capture some of the wisdom of ZFS developers and +practitioners. + +ZFS Module Parameters +--------------------- + +The ZFS kernel module, ``zfs.ko``, parameters are detailed below. + +To observe the list of parameters along with a short synopsis of each +parameter, use the ``modinfo`` command: + +.. code:: bash + + modinfo zfs + +Tags +---- + +The list of parameters is quite large and resists hierarchical +representation. To assist in finding relevant information +quickly, each module parameter has a "Tags" row with keywords for +frequent searches. 
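+
+For a quick command-line search, filtering the ``modinfo`` output by a
+keyword of interest is often sufficient (``arc`` here is just an example):
+
+.. code:: shell
+
+   modinfo zfs | grep -i arc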
+ +ABD +~~~ + +- `zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__ +- `zfs_abd_scatter_max_order <#zfs-abd-scatter-max-order>`__ +- `zfs_compressed_arc_enabled <#zfs-compressed-arc-enabled>`__ + +allocation +~~~~~~~~~~ + +- `dmu_object_alloc_chunk_shift <#dmu-object-alloc-chunk-shift>`__ +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `metaslab_force_ganging <#metaslab-force-ganging>`__ +- `metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `metaslab_preload_enabled <#metaslab-preload-enabled>`__ +- `zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +- `zfs_metaslab_switch_threshold <#zfs-metaslab-switch-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ +- `spa_asize_inflation <#spa-asize-inflation>`__ +- `spa_load_verify_data <#spa-load-verify-data>`__ +- `spa_slop_shift <#spa-slop-shift>`__ +- `zfs_vdev_default_ms_count <#zfs-vdev-default-ms-count>`__ + +ARC +~~~ + +- `zfs_abd_scatter_min_size <#zfs-abd-scatter-min-size>`__ +- `zfs_arc_average_blocksize <#zfs-arc-average-blocksize>`__ +- `zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__ +- `zfs_arc_dnode_limit_percent <#zfs-arc-dnode-limit-percent>`__ +- `zfs_arc_dnode_reduce_percent <#zfs-arc-dnode-reduce-percent>`__ +- `zfs_arc_evict_batch_limit <#zfs-arc-evict-batch-limit>`__ +- `zfs_arc_grow_retry <#zfs-arc-grow-retry>`__ +- `zfs_arc_lotsfree_percent <#zfs-arc-lotsfree-percent>`__ +- `zfs_arc_max <#zfs-arc-max>`__ +- `zfs_arc_meta_adjust_restarts <#zfs-arc-meta-adjust-restarts>`__ +- `zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ +- `zfs_arc_meta_limit_percent <#zfs-arc-meta-limit-percent>`__ +- `zfs_arc_meta_min <#zfs-arc-meta-min>`__ +- `zfs_arc_meta_prune <#zfs-arc-meta-prune>`__ +- `zfs_arc_meta_strategy <#zfs-arc-meta-strategy>`__ +- `zfs_arc_min <#zfs-arc-min>`__ +- `zfs_arc_min_prefetch_lifespan <#zfs-arc-min-prefetch-lifespan>`__ +- `zfs_arc_min_prefetch_ms <#zfs-arc-min-prefetch-ms>`__ +- `zfs_arc_min_prescient_prefetch_ms <#zfs-arc-min-prescient-prefetch-ms>`__ +- `zfs_arc_overflow_shift <#zfs-arc-overflow-shift>`__ +- `zfs_arc_p_dampener_disable <#zfs-arc-p-dampener-disable>`__ +- `zfs_arc_p_min_shift <#zfs-arc-p-min-shift>`__ +- `zfs_arc_pc_percent <#zfs-arc-pc-percent>`__ +- `zfs_arc_shrink_shift <#zfs-arc-shrink-shift>`__ +- `zfs_arc_sys_free <#zfs-arc-sys-free>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_shift <#dbuf-cache-shift>`__ +- `dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ +- `zfs_disable_dup_eviction <#zfs-disable-dup-eviction>`__ +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `l2arc_feed_again <#l2arc-feed-again>`__ +- `l2arc_feed_min_ms <#l2arc-feed-min-ms>`__ +- `l2arc_feed_secs <#l2arc-feed-secs>`__ +- `l2arc_headroom <#l2arc-headroom>`__ +- `l2arc_headroom_boost <#l2arc-headroom-boost>`__ +- `l2arc_meta_percent <#l2arc-meta-percent>`__ +- `l2arc_mfuonly <#l2arc-mfuonly>`__ +- `l2arc_nocompress <#l2arc-nocompress>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `l2arc_norw <#l2arc-norw>`__ +- `l2arc_rebuild_blocks_min_l2size 
<#l2arc-rebuild-blocks-min-l2size>`__ +- `l2arc_rebuild_enabled <#l2arc-rebuild-enabled>`__ +- `l2arc_trim_ahead <#l2arc-trim-ahead>`__ +- `l2arc_write_boost <#l2arc-write-boost>`__ +- `l2arc_write_max <#l2arc-write-max>`__ +- `zfs_multilist_num_sublists <#zfs-multilist-num-sublists>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ + +channel_programs +~~~~~~~~~~~~~~~~ + +- `zfs_lua_max_instrlimit <#zfs-lua-max-instrlimit>`__ +- `zfs_lua_max_memlimit <#zfs-lua-max-memlimit>`__ + +checkpoint +~~~~~~~~~~ + +- `zfs_spa_discard_memory_limit <#zfs-spa-discard-memory-limit>`__ + +checksum +~~~~~~~~ + +- `zfs_checksums_per_second <#zfs-checksums-per-second>`__ +- `zfs_fletcher_4_impl <#zfs-fletcher-4-impl>`__ +- `zfs_nopwrite_enabled <#zfs-nopwrite-enabled>`__ +- `zfs_qat_checksum_disable <#zfs-qat-checksum-disable>`__ + +compression +~~~~~~~~~~~ + +- `zfs_compressed_arc_enabled <#zfs-compressed-arc-enabled>`__ +- `zfs_qat_compress_disable <#zfs-qat-compress-disable>`__ +- `zfs_qat_disable <#zfs-qat-disable>`__ + +CPU +~~~ + +- `zfs_fletcher_4_impl <#zfs-fletcher-4-impl>`__ +- `zfs_mdcomp_disable <#zfs-mdcomp-disable>`__ +- `spl_kmem_cache_kmem_threads <#spl-kmem-cache-kmem-threads>`__ +- `spl_kmem_cache_magazine_size <#spl-kmem-cache-magazine-size>`__ +- `spl_taskq_thread_bind <#spl-taskq-thread-bind>`__ +- `spl_taskq_thread_priority <#spl-taskq-thread-priority>`__ +- `spl_taskq_thread_sequential <#spl-taskq-thread-sequential>`__ +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ + +dataset +~~~~~~~ + +- `zfs_max_dataset_nesting <#zfs-max-dataset-nesting>`__ + +dbuf_cache +~~~~~~~~~~ + +- `dbuf_cache_hiwater_pct <#dbuf-cache-hiwater-pct>`__ +- `dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_max_shift <#dbuf-cache-max-shift>`__ +- `dbuf_cache_shift <#dbuf-cache-shift>`__ +- `dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +- `dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ + +debug +~~~~~ + +- `zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__ +- `zfs_dbgmsg_maxsize <#zfs-dbgmsg-maxsize>`__ +- `zfs_dbuf_state_index <#zfs-dbuf-state-index>`__ +- `zfs_deadman_checktime_ms <#zfs-deadman-checktime-ms>`__ +- `zfs_deadman_enabled <#zfs-deadman-enabled>`__ +- `zfs_deadman_failmode <#zfs-deadman-failmode>`__ +- `zfs_deadman_synctime_ms <#zfs-deadman-synctime-ms>`__ +- `zfs_deadman_ziotime_ms <#zfs-deadman-ziotime-ms>`__ +- `zfs_flags <#zfs-flags>`__ +- `zfs_free_leak_on_eio <#zfs-free-leak-on-eio>`__ +- `zfs_nopwrite_enabled <#zfs-nopwrite-enabled>`__ +- `zfs_object_mutex_size <#zfs-object-mutex-size>`__ +- `zfs_read_history <#zfs-read-history>`__ +- `zfs_read_history_hits <#zfs-read-history-hits>`__ +- `spl_panic_halt <#spl-panic-halt>`__ +- `zfs_txg_history <#zfs-txg-history>`__ +- `zfs_zevent_cols <#zfs-zevent-cols>`__ +- `zfs_zevent_console <#zfs-zevent-console>`__ +- `zfs_zevent_len_max <#zfs-zevent-len-max>`__ +- `zil_replay_disable <#zil-replay-disable>`__ +- `zio_deadman_log_all <#zio-deadman-log-all>`__ +- `zio_decompress_fail_fraction <#zio-decompress-fail-fraction>`__ +- `zio_delay_max <#zio-delay-max>`__ + +dedup +~~~~~ + +- `zfs_ddt_data_is_special <#zfs-ddt-data-is-special>`__ +- `zfs_disable_dup_eviction <#zfs-disable-dup-eviction>`__ + +delay +~~~~~ + +- `zfs_delays_per_second <#zfs-delays-per-second>`__ + +delete +~~~~~~ + +- `zfs_async_block_max_blocks <#zfs-async-block-max-blocks>`__ +- `zfs_delete_blocks <#zfs-delete-blocks>`__ +- 
`zfs_free_bpobj_enabled <#zfs-free-bpobj-enabled>`__ +- `zfs_free_max_blocks <#zfs-free-max-blocks>`__ +- `zfs_free_min_time_ms <#zfs-free-min-time-ms>`__ +- `zfs_obsolete_min_time_ms <#zfs-obsolete-min-time-ms>`__ +- `zfs_per_txg_dirty_frees_percent <#zfs-per-txg-dirty-frees-percent>`__ + +discard +~~~~~~~ + +- `zvol_max_discard_blocks <#zvol-max-discard-blocks>`__ + +disks +~~~~~ + +- `zfs_nocacheflush <#zfs-nocacheflush>`__ +- `zil_nocacheflush <#zil-nocacheflush>`__ + +DMU +~~~ + +- `zfs_async_block_max_blocks <#zfs-async-block-max-blocks>`__ +- `dmu_object_alloc_chunk_shift <#dmu-object-alloc-chunk-shift>`__ +- `zfs_dmu_offset_next_sync <#zfs-dmu-offset-next-sync>`__ + +encryption +~~~~~~~~~~ + +- `icp_aes_impl <#icp-aes-impl>`__ +- `icp_gcm_impl <#icp-gcm-impl>`__ +- `zfs_key_max_salt_uses <#zfs-key-max-salt-uses>`__ +- `zfs_qat_encrypt_disable <#zfs-qat-encrypt-disable>`__ + +filesystem +~~~~~~~~~~ + +- `zfs_admin_snapshot <#zfs-admin-snapshot>`__ +- `zfs_delete_blocks <#zfs-delete-blocks>`__ +- `zfs_expire_snapshot <#zfs-expire-snapshot>`__ +- `zfs_free_max_blocks <#zfs-free-max-blocks>`__ +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `zfs_read_chunk_size <#zfs-read-chunk-size>`__ + +fragmentation +~~~~~~~~~~~~~ + +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ + +HDD +~~~ + +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ + +hostid +~~~~~~ + +- `spl_hostid <#spl-hostid>`__ +- `spl_hostid_path <#spl-hostid-path>`__ + +import +~~~~~~ + +- `zfs_autoimport_disable <#zfs-autoimport-disable>`__ +- `zfs_max_missing_tvds <#zfs-max-missing-tvds>`__ +- `zfs_multihost_fail_intervals <#zfs-multihost-fail-intervals>`__ +- `zfs_multihost_history <#zfs-multihost-history>`__ +- `zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `zfs_recover <#zfs-recover>`__ +- `spa_config_path <#spa-config-path>`__ +- `spa_load_print_vdev_tree <#spa-load-print-vdev-tree>`__ +- `spa_load_verify_maxinflight <#spa-load-verify-maxinflight>`__ +- `spa_load_verify_metadata <#spa-load-verify-metadata>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ +- `zvol_inhibit_dev <#zvol-inhibit-dev>`__ + +L2ARC +~~~~~ + +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `l2arc_feed_again <#l2arc-feed-again>`__ +- `l2arc_feed_min_ms <#l2arc-feed-min-ms>`__ +- `l2arc_feed_secs <#l2arc-feed-secs>`__ +- `l2arc_headroom <#l2arc-headroom>`__ +- `l2arc_headroom_boost <#l2arc-headroom-boost>`__ +- `l2arc_meta_percent <#l2arc-meta-percent>`__ +- `l2arc_mfuonly <#l2arc-mfuonly>`__ +- `l2arc_nocompress <#l2arc-nocompress>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `l2arc_norw <#l2arc-norw>`__ +- `l2arc_rebuild_blocks_min_l2size <#l2arc-rebuild-blocks-min-l2size>`__ +- `l2arc_rebuild_enabled <#l2arc-rebuild-enabled>`__ +- `l2arc_trim_ahead <#l2arc-trim-ahead>`__ +- `l2arc_write_boost <#l2arc-write-boost>`__ +- `l2arc_write_max <#l2arc-write-max>`__ + +memory +~~~~~~ + +- `zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__ +- `zfs_abd_scatter_max_order <#zfs-abd-scatter-max-order>`__ +- `zfs_arc_average_blocksize 
<#zfs-arc-average-blocksize>`__ +- `zfs_arc_grow_retry <#zfs-arc-grow-retry>`__ +- `zfs_arc_lotsfree_percent <#zfs-arc-lotsfree-percent>`__ +- `zfs_arc_max <#zfs-arc-max>`__ +- `zfs_arc_pc_percent <#zfs-arc-pc-percent>`__ +- `zfs_arc_shrink_shift <#zfs-arc-shrink-shift>`__ +- `zfs_arc_sys_free <#zfs-arc-sys-free>`__ +- `zfs_dedup_prefetch <#zfs-dedup-prefetch>`__ +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `spl_kmem_alloc_max <#spl-kmem-alloc-max>`__ +- `spl_kmem_alloc_warn <#spl-kmem-alloc-warn>`__ +- `spl_kmem_cache_expire <#spl-kmem-cache-expire>`__ +- `spl_kmem_cache_kmem_limit <#spl-kmem-cache-kmem-limit>`__ +- `spl_kmem_cache_kmem_threads <#spl-kmem-cache-kmem-threads>`__ +- `spl_kmem_cache_magazine_size <#spl-kmem-cache-magazine-size>`__ +- `spl_kmem_cache_max_size <#spl-kmem-cache-max-size>`__ +- `spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ +- `spl_kmem_cache_obj_per_slab_min <#spl-kmem-cache-obj-per-slab-min>`__ +- `spl_kmem_cache_reclaim <#spl-kmem-cache-reclaim>`__ +- `spl_kmem_cache_slab_limit <#spl-kmem-cache-slab-limit>`__ + +metadata +~~~~~~~~ + +- `zfs_mdcomp_disable <#zfs-mdcomp-disable>`__ + +metaslab +~~~~~~~~ + +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `metaslab_preload_enabled <#metaslab-preload-enabled>`__ +- `zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +- `zfs_metaslab_switch_threshold <#zfs-metaslab-switch-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_vdev_min_ms_count <#zfs-vdev-min-ms-count>`__ +- `zfs_vdev_ms_count_limit <#zfs-vdev-ms-count-limit>`__ + +mirror +~~~~~~ + +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ + +MMP +~~~ + +- `zfs_multihost_fail_intervals <#zfs-multihost-fail-intervals>`__ +- `zfs_multihost_history <#zfs-multihost-history>`__ +- `zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `spl_hostid <#spl-hostid>`__ +- `spl_hostid_path <#spl-hostid-path>`__ + +panic +~~~~~ + +- `spl_panic_halt <#spl-panic-halt>`__ + +prefetch +~~~~~~~~ + +- `zfs_arc_min_prefetch_ms <#zfs-arc-min-prefetch-ms>`__ +- `zfs_arc_min_prescient_prefetch_ms <#zfs-arc-min-prescient-prefetch-ms>`__ +- `zfs_dedup_prefetch <#zfs-dedup-prefetch>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `zfs_no_scrub_prefetch <#zfs-no-scrub-prefetch>`__ +- `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__ +- `zfs_prefetch_disable <#zfs-prefetch-disable>`__ +- `zfetch_array_rd_sz <#zfetch-array-rd-sz>`__ +- `zfetch_max_distance <#zfetch-max-distance>`__ +- `zfetch_max_streams <#zfetch-max-streams>`__ +- `zfetch_min_sec_reap <#zfetch-min-sec-reap>`__ +- `zvol_prefetch_bytes 
<#zvol-prefetch-bytes>`__ + +QAT +~~~ + +- `zfs_qat_checksum_disable <#zfs-qat-checksum-disable>`__ +- `zfs_qat_compress_disable <#zfs-qat-compress-disable>`__ +- `zfs_qat_disable <#zfs-qat-disable>`__ +- `zfs_qat_encrypt_disable <#zfs-qat-encrypt-disable>`__ + +raidz +~~~~~ + +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ + +receive +~~~~~~~ + +- `zfs_disable_ivset_guid_check <#zfs-disable-ivset-guid-check>`__ +- `zfs_recv_queue_length <#zfs-recv-queue-length>`__ + +remove +~~~~~~ + +- `zfs_obsolete_min_time_ms <#zfs-obsolete-min-time-ms>`__ +- `zfs_remove_max_segment <#zfs-remove-max-segment>`__ + +resilver +~~~~~~~~ + +- `zfs_resilver_delay <#zfs-resilver-delay>`__ +- `zfs_resilver_disable_defer <#zfs-resilver-disable-defer>`__ +- `zfs_resilver_min_time_ms <#zfs-resilver-min-time-ms>`__ +- `zfs_scan_checkpoint_intval <#zfs-scan-checkpoint-intval>`__ +- `zfs_scan_fill_weight <#zfs-scan-fill-weight>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scan_ignore_errors <#zfs-scan-ignore-errors>`__ +- `zfs_scan_issue_strategy <#zfs-scan-issue-strategy>`__ +- `zfs_scan_legacy <#zfs-scan-legacy>`__ +- `zfs_scan_max_ext_gap <#zfs-scan-max-ext-gap>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `zfs_scan_suspend_progress <#zfs-scan-suspend-progress>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ + +scrub +~~~~~ + +- `zfs_no_scrub_io <#zfs-no-scrub-io>`__ +- `zfs_no_scrub_prefetch <#zfs-no-scrub-prefetch>`__ +- `zfs_scan_checkpoint_intval <#zfs-scan-checkpoint-intval>`__ +- `zfs_scan_fill_weight <#zfs-scan-fill-weight>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scan_issue_strategy <#zfs-scan-issue-strategy>`__ +- `zfs_scan_legacy <#zfs-scan-legacy>`__ +- `zfs_scan_max_ext_gap <#zfs-scan-max-ext-gap>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ +- `zfs_scan_min_time_ms <#zfs-scan-min-time-ms>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `zfs_scan_suspend_progress <#zfs-scan-suspend-progress>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_scrub_delay <#zfs-scrub-delay>`__ +- `zfs_scrub_min_time_ms <#zfs-scrub-min-time-ms>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ + +send +~~~~ + +- `ignore_hole_birth <#ignore-hole-birth>`__ +- `zfs_override_estimate_recordsize <#zfs-override-estimate-recordsize>`__ +- `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__ +- `zfs_send_corrupt_data <#zfs-send-corrupt-data>`__ +- `zfs_send_queue_length <#zfs-send-queue-length>`__ +- `zfs_send_unmodified_spill_blocks <#zfs-send-unmodified-spill-blocks>`__ + +snapshot +~~~~~~~~ + +- `zfs_admin_snapshot <#zfs-admin-snapshot>`__ +- `zfs_expire_snapshot <#zfs-expire-snapshot>`__ + +SPA +~~~ + +- `spa_asize_inflation <#spa-asize-inflation>`__ +- `spa_load_print_vdev_tree <#spa-load-print-vdev-tree>`__ +- `spa_load_verify_data <#spa-load-verify-data>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ +- `spa_slop_shift <#spa-slop-shift>`__ +- `zfs_sync_pass_deferred_free <#zfs-sync-pass-deferred-free>`__ +- `zfs_sync_pass_dont_compress <#zfs-sync-pass-dont-compress>`__ +- 
`zfs_sync_pass_rewrite <#zfs-sync-pass-rewrite>`__ +- `zfs_sync_taskq_batch_pct <#zfs-sync-taskq-batch-pct>`__ +- `zfs_txg_timeout <#zfs-txg-timeout>`__ + +special_vdev +~~~~~~~~~~~~ + +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `zfs_ddt_data_is_special <#zfs-ddt-data-is-special>`__ +- `zfs_special_class_metadata_reserve_pct <#zfs-special-class-metadata-reserve-pct>`__ +- `zfs_user_indirect_is_special <#zfs-user-indirect-is-special>`__ + +SSD +~~~ + +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ + +taskq +~~~~~ + +- `spl_max_show_tasks <#spl-max-show-tasks>`__ +- `spl_taskq_kick <#spl-taskq-kick>`__ +- `spl_taskq_thread_bind <#spl-taskq-thread-bind>`__ +- `spl_taskq_thread_dynamic <#spl-taskq-thread-dynamic>`__ +- `spl_taskq_thread_priority <#spl-taskq-thread-priority>`__ +- `spl_taskq_thread_sequential <#spl-taskq-thread-sequential>`__ +- `zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__ +- `zio_taskq_batch_pct <#zio-taskq-batch-pct>`__ + +trim +~~~~ + +- `zfs_trim_extent_bytes_max <#zfs-trim-extent-bytes-max>`__ +- `zfs_trim_extent_bytes_min <#zfs-trim-extent-bytes-min>`__ +- `zfs_trim_metaslab_skip <#zfs-trim-metaslab-skip>`__ +- `zfs_trim_queue_limit <#zfs-trim-queue-limit>`__ +- `zfs_trim_txg_batch <#zfs-trim-txg-batch>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ + +vdev +~~~~ + +- `zfs_checksum_events_per_second <#zfs-checksum-events-per-second>`__ +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_slow_io_events_per_second <#zfs-slow-io-events-per-second>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ +- `zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ +- `zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__ +- `zfs_vdev_async_read_max_active <#zfs-vdev-async-read-max-active>`__ +- `zfs_vdev_async_read_min_active <#zfs-vdev-async-read-min-active>`__ +- `zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +- `zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +- `zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +- `zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +- `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ +- `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__ +- `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ +- `zfs_vdev_initializing_max_active <#zfs-vdev-initializing-max-active>`__ +- `zfs_vdev_initializing_min_active <#zfs-vdev-initializing-min-active>`__ +- `zfs_vdev_max_active <#zfs-vdev-max-active>`__ +- `zfs_vdev_min_ms_count <#zfs-vdev-min-ms-count>`__ +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc 
<#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ +- `zfs_vdev_ms_count_limit <#zfs-vdev-ms-count-limit>`__ +- `zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ +- `zfs_vdev_read_gap_limit <#zfs-vdev-read-gap-limit>`__ +- `zfs_vdev_removal_max_active <#zfs-vdev-removal-max-active>`__ +- `zfs_vdev_removal_min_active <#zfs-vdev-removal-min-active>`__ +- `zfs_vdev_scheduler <#zfs-vdev-scheduler>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ +- `zfs_vdev_sync_read_max_active <#zfs-vdev-sync-read-max-active>`__ +- `zfs_vdev_sync_read_min_active <#zfs-vdev-sync-read-min-active>`__ +- `zfs_vdev_sync_write_max_active <#zfs-vdev-sync-write-max-active>`__ +- `zfs_vdev_sync_write_min_active <#zfs-vdev-sync-write-min-active>`__ +- `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ +- `zfs_vdev_trim_min_active <#zfs-vdev-trim-min-active>`__ +- `vdev_validate_skip <#vdev-validate-skip>`__ +- `zfs_vdev_write_gap_limit <#zfs-vdev-write-gap-limit>`__ +- `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ +- `zio_slow_io_ms <#zio-slow-io-ms>`__ + +vdev_cache +~~~~~~~~~~ + +- `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ +- `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__ +- `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ + +vdev_initialize +~~~~~~~~~~~~~~~ + +- `zfs_initialize_value <#zfs-initialize-value>`__ + +vdev_removal +~~~~~~~~~~~~ + +- `zfs_condense_indirect_commit_entry_delay_ms <#zfs-condense-indirect-commit-entry-delay-ms>`__ +- `zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__ +- `zfs_condense_max_obsolete_bytes <#zfs-condense-max-obsolete-bytes>`__ +- `zfs_condense_min_mapping_bytes <#zfs-condense-min-mapping-bytes>`__ +- `zfs_reconstruct_indirect_combinations_max <#zfs-reconstruct-indirect-combinations-max>`__ +- `zfs_removal_ignore_errors <#zfs-removal-ignore-errors>`__ +- `zfs_removal_suspend_progress <#zfs-removal-suspend-progress>`__ +- `vdev_removal_max_span <#vdev-removal-max-span>`__ + +volume +~~~~~~ + +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `zvol_inhibit_dev <#zvol-inhibit-dev>`__ +- `zvol_major <#zvol-major>`__ +- `zvol_max_discard_blocks <#zvol-max-discard-blocks>`__ +- `zvol_prefetch_bytes <#zvol-prefetch-bytes>`__ +- `zvol_request_sync <#zvol-request-sync>`__ +- `zvol_threads <#zvol-threads>`__ +- `zvol_volmode <#zvol-volmode>`__ + +write_throttle +~~~~~~~~~~~~~~ + +- `zfs_delay_min_dirty_percent <#zfs-delay-min-dirty-percent>`__ +- `zfs_delay_scale <#zfs-delay-scale>`__ +- `zfs_dirty_data_max <#zfs-dirty-data-max>`__ +- `zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ +- `zfs_dirty_data_max_max_percent <#zfs-dirty-data-max-max-percent>`__ +- `zfs_dirty_data_max_percent <#zfs-dirty-data-max-percent>`__ +- `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ +- `zfs_dirty_data_sync_percent <#zfs-dirty-data-sync-percent>`__ + +zed +~~~ + +- `zfs_checksums_per_second <#zfs-checksums-per-second>`__ +- `zfs_delays_per_second <#zfs-delays-per-second>`__ +- `zio_slow_io_ms <#zio-slow-io-ms>`__ + +ZIL +~~~ + +- `zfs_commit_timeout_pct <#zfs-commit-timeout-pct>`__ +- `zfs_immediate_write_sz <#zfs-immediate-write-sz>`__ +- `zfs_zil_clean_taskq_maxalloc <#zfs-zil-clean-taskq-maxalloc>`__ +- `zfs_zil_clean_taskq_minalloc <#zfs-zil-clean-taskq-minalloc>`__ +- `zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__ +- 
`zil_nocacheflush <#zil-nocacheflush>`__ +- `zil_replay_disable <#zil-replay-disable>`__ +- `zil_slog_bulk <#zil-slog-bulk>`__ + +ZIO_scheduler +~~~~~~~~~~~~~ + +- `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ +- `zfs_dirty_data_sync_percent <#zfs-dirty-data-sync-percent>`__ +- `zfs_resilver_delay <#zfs-resilver-delay>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scrub_delay <#zfs-scrub-delay>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_txg_timeout <#zfs-txg-timeout>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ +- `zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ +- `zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__ +- `zfs_vdev_async_read_max_active <#zfs-vdev-async-read-max-active>`__ +- `zfs_vdev_async_read_min_active <#zfs-vdev-async-read-min-active>`__ +- `zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +- `zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +- `zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +- `zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +- `zfs_vdev_initializing_max_active <#zfs-vdev-initializing-max-active>`__ +- `zfs_vdev_initializing_min_active <#zfs-vdev-initializing-min-active>`__ +- `zfs_vdev_max_active <#zfs-vdev-max-active>`__ +- `zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ +- `zfs_vdev_read_gap_limit <#zfs-vdev-read-gap-limit>`__ +- `zfs_vdev_removal_max_active <#zfs-vdev-removal-max-active>`__ +- `zfs_vdev_removal_min_active <#zfs-vdev-removal-min-active>`__ +- `zfs_vdev_scheduler <#zfs-vdev-scheduler>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ +- `zfs_vdev_sync_read_max_active <#zfs-vdev-sync-read-max-active>`__ +- `zfs_vdev_sync_read_min_active <#zfs-vdev-sync-read-min-active>`__ +- `zfs_vdev_sync_write_max_active <#zfs-vdev-sync-write-max-active>`__ +- `zfs_vdev_sync_write_min_active <#zfs-vdev-sync-write-min-active>`__ +- `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ +- `zfs_vdev_trim_min_active <#zfs-vdev-trim-min-active>`__ +- `zfs_vdev_write_gap_limit <#zfs-vdev-write-gap-limit>`__ +- `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ +- `zio_requeue_io_start_cut_in_line <#zio-requeue-io-start-cut-in-line>`__ +- `zio_taskq_batch_pct <#zio-taskq-batch-pct>`__ + +Index +----- + +- `zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__ +- `zfs_abd_scatter_max_order <#zfs-abd-scatter-max-order>`__ +- `zfs_abd_scatter_min_size <#zfs-abd-scatter-min-size>`__ +- `zfs_admin_snapshot <#zfs-admin-snapshot>`__ +- `zfs_arc_average_blocksize <#zfs-arc-average-blocksize>`__ +- `zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__ +- `zfs_arc_dnode_limit_percent <#zfs-arc-dnode-limit-percent>`__ +- `zfs_arc_dnode_reduce_percent <#zfs-arc-dnode-reduce-percent>`__ +- `zfs_arc_evict_batch_limit <#zfs-arc-evict-batch-limit>`__ +- `zfs_arc_grow_retry <#zfs-arc-grow-retry>`__ +- `zfs_arc_lotsfree_percent <#zfs-arc-lotsfree-percent>`__ +- `zfs_arc_max <#zfs-arc-max>`__ +- `zfs_arc_meta_adjust_restarts <#zfs-arc-meta-adjust-restarts>`__ +- `zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ +- `zfs_arc_meta_limit_percent <#zfs-arc-meta-limit-percent>`__ +- `zfs_arc_meta_min <#zfs-arc-meta-min>`__ +- `zfs_arc_meta_prune <#zfs-arc-meta-prune>`__ +- `zfs_arc_meta_strategy <#zfs-arc-meta-strategy>`__ +- `zfs_arc_min <#zfs-arc-min>`__ +- 
`zfs_arc_min_prefetch_lifespan <#zfs-arc-min-prefetch-lifespan>`__ +- `zfs_arc_min_prefetch_ms <#zfs-arc-min-prefetch-ms>`__ +- `zfs_arc_min_prescient_prefetch_ms <#zfs-arc-min-prescient-prefetch-ms>`__ +- `zfs_arc_overflow_shift <#zfs-arc-overflow-shift>`__ +- `zfs_arc_p_dampener_disable <#zfs-arc-p-dampener-disable>`__ +- `zfs_arc_p_min_shift <#zfs-arc-p-min-shift>`__ +- `zfs_arc_pc_percent <#zfs-arc-pc-percent>`__ +- `zfs_arc_shrink_shift <#zfs-arc-shrink-shift>`__ +- `zfs_arc_sys_free <#zfs-arc-sys-free>`__ +- `zfs_async_block_max_blocks <#zfs-async-block-max-blocks>`__ +- `zfs_autoimport_disable <#zfs-autoimport-disable>`__ +- `zfs_checksum_events_per_second <#zfs-checksum-events-per-second>`__ +- `zfs_checksums_per_second <#zfs-checksums-per-second>`__ +- `zfs_commit_timeout_pct <#zfs-commit-timeout-pct>`__ +- `zfs_compressed_arc_enabled <#zfs-compressed-arc-enabled>`__ +- `zfs_condense_indirect_commit_entry_delay_ms <#zfs-condense-indirect-commit-entry-delay-ms>`__ +- `zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__ +- `zfs_condense_max_obsolete_bytes <#zfs-condense-max-obsolete-bytes>`__ +- `zfs_condense_min_mapping_bytes <#zfs-condense-min-mapping-bytes>`__ +- `zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__ +- `zfs_dbgmsg_maxsize <#zfs-dbgmsg-maxsize>`__ +- `dbuf_cache_hiwater_pct <#dbuf-cache-hiwater-pct>`__ +- `dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__ +- `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +- `dbuf_cache_max_shift <#dbuf-cache-max-shift>`__ +- `dbuf_cache_shift <#dbuf-cache-shift>`__ +- `dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +- `dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ +- `zfs_dbuf_state_index <#zfs-dbuf-state-index>`__ +- `zfs_ddt_data_is_special <#zfs-ddt-data-is-special>`__ +- `zfs_deadman_checktime_ms <#zfs-deadman-checktime-ms>`__ +- `zfs_deadman_enabled <#zfs-deadman-enabled>`__ +- `zfs_deadman_failmode <#zfs-deadman-failmode>`__ +- `zfs_deadman_synctime_ms <#zfs-deadman-synctime-ms>`__ +- `zfs_deadman_ziotime_ms <#zfs-deadman-ziotime-ms>`__ +- `zfs_dedup_prefetch <#zfs-dedup-prefetch>`__ +- `zfs_delay_min_dirty_percent <#zfs-delay-min-dirty-percent>`__ +- `zfs_delay_scale <#zfs-delay-scale>`__ +- `zfs_delays_per_second <#zfs-delays-per-second>`__ +- `zfs_delete_blocks <#zfs-delete-blocks>`__ +- `zfs_dirty_data_max <#zfs-dirty-data-max>`__ +- `zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ +- `zfs_dirty_data_max_max_percent <#zfs-dirty-data-max-max-percent>`__ +- `zfs_dirty_data_max_percent <#zfs-dirty-data-max-percent>`__ +- `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ +- `zfs_dirty_data_sync_percent <#zfs-dirty-data-sync-percent>`__ +- `zfs_disable_dup_eviction <#zfs-disable-dup-eviction>`__ +- `zfs_disable_ivset_guid_check <#zfs-disable-ivset-guid-check>`__ +- `dmu_object_alloc_chunk_shift <#dmu-object-alloc-chunk-shift>`__ +- `zfs_dmu_offset_next_sync <#zfs-dmu-offset-next-sync>`__ +- `zfs_expire_snapshot <#zfs-expire-snapshot>`__ +- `zfs_flags <#zfs-flags>`__ +- `zfs_fletcher_4_impl <#zfs-fletcher-4-impl>`__ +- `zfs_free_bpobj_enabled <#zfs-free-bpobj-enabled>`__ +- `zfs_free_leak_on_eio <#zfs-free-leak-on-eio>`__ +- `zfs_free_max_blocks <#zfs-free-max-blocks>`__ +- `zfs_free_min_time_ms <#zfs-free-min-time-ms>`__ +- `icp_aes_impl <#icp-aes-impl>`__ +- `icp_gcm_impl <#icp-gcm-impl>`__ +- `ignore_hole_birth <#ignore-hole-birth>`__ +- `zfs_immediate_write_sz <#zfs-immediate-write-sz>`__ +- `zfs_initialize_value <#zfs-initialize-value>`__ +- `zfs_key_max_salt_uses 
<#zfs-key-max-salt-uses>`__ +- `l2arc_exclude_special <#l2arc-exclude-special>`__ +- `l2arc_feed_again <#l2arc-feed-again>`__ +- `l2arc_feed_min_ms <#l2arc-feed-min-ms>`__ +- `l2arc_feed_secs <#l2arc-feed-secs>`__ +- `l2arc_headroom <#l2arc-headroom>`__ +- `l2arc_headroom_boost <#l2arc-headroom-boost>`__ +- `l2arc_meta_percent <#l2arc-meta-percent>`__ +- `l2arc_mfuonly <#l2arc-mfuonly>`__ +- `l2arc_nocompress <#l2arc-nocompress>`__ +- `l2arc_noprefetch <#l2arc-noprefetch>`__ +- `l2arc_norw <#l2arc-norw>`__ +- `l2arc_rebuild_blocks_min_l2size <#l2arc-rebuild-blocks-min-l2size>`__ +- `l2arc_rebuild_enabled <#l2arc-rebuild-enabled>`__ +- `l2arc_trim_ahead <#l2arc-trim-ahead>`__ +- `l2arc_write_boost <#l2arc-write-boost>`__ +- `l2arc_write_max <#l2arc-write-max>`__ +- `zfs_lua_max_instrlimit <#zfs-lua-max-instrlimit>`__ +- `zfs_lua_max_memlimit <#zfs-lua-max-memlimit>`__ +- `zfs_max_dataset_nesting <#zfs-max-dataset-nesting>`__ +- `zfs_max_missing_tvds <#zfs-max-missing-tvds>`__ +- `zfs_max_recordsize <#zfs-max-recordsize>`__ +- `zfs_mdcomp_disable <#zfs-mdcomp-disable>`__ +- `metaslab_aliquot <#metaslab-aliquot>`__ +- `metaslab_bias_enabled <#metaslab-bias-enabled>`__ +- `metaslab_debug_load <#metaslab-debug-load>`__ +- `metaslab_debug_unload <#metaslab-debug-unload>`__ +- `metaslab_force_ganging <#metaslab-force-ganging>`__ +- `metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +- `zfs_metaslab_fragmentation_threshold <#zfs-metaslab-fragmentation-threshold>`__ +- `metaslab_lba_weighting_enabled <#metaslab-lba-weighting-enabled>`__ +- `metaslab_preload_enabled <#metaslab-preload-enabled>`__ +- `zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +- `zfs_metaslab_switch_threshold <#zfs-metaslab-switch-threshold>`__ +- `metaslabs_per_vdev <#metaslabs-per-vdev>`__ +- `zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ +- `zfs_mg_noalloc_threshold <#zfs-mg-noalloc-threshold>`__ +- `zfs_multihost_fail_intervals <#zfs-multihost-fail-intervals>`__ +- `zfs_multihost_history <#zfs-multihost-history>`__ +- `zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__ +- `zfs_multihost_interval <#zfs-multihost-interval>`__ +- `zfs_multilist_num_sublists <#zfs-multilist-num-sublists>`__ +- `zfs_no_scrub_io <#zfs-no-scrub-io>`__ +- `zfs_no_scrub_prefetch <#zfs-no-scrub-prefetch>`__ +- `zfs_nocacheflush <#zfs-nocacheflush>`__ +- `zfs_nopwrite_enabled <#zfs-nopwrite-enabled>`__ +- `zfs_object_mutex_size <#zfs-object-mutex-size>`__ +- `zfs_obsolete_min_time_ms <#zfs-obsolete-min-time-ms>`__ +- `zfs_override_estimate_recordsize <#zfs-override-estimate-recordsize>`__ +- `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__ +- `zfs_per_txg_dirty_frees_percent <#zfs-per-txg-dirty-frees-percent>`__ +- `zfs_prefetch_disable <#zfs-prefetch-disable>`__ +- `zfs_qat_checksum_disable <#zfs-qat-checksum-disable>`__ +- `zfs_qat_compress_disable <#zfs-qat-compress-disable>`__ +- `zfs_qat_disable <#zfs-qat-disable>`__ +- `zfs_qat_encrypt_disable <#zfs-qat-encrypt-disable>`__ +- `zfs_read_chunk_size <#zfs-read-chunk-size>`__ +- `zfs_read_history <#zfs-read-history>`__ +- `zfs_read_history_hits <#zfs-read-history-hits>`__ +- `zfs_reconstruct_indirect_combinations_max <#zfs-reconstruct-indirect-combinations-max>`__ +- `zfs_recover <#zfs-recover>`__ +- `zfs_recv_queue_length <#zfs-recv-queue-length>`__ +- `zfs_removal_ignore_errors <#zfs-removal-ignore-errors>`__ +- `zfs_removal_suspend_progress <#zfs-removal-suspend-progress>`__ +- `zfs_remove_max_segment 
<#zfs-remove-max-segment>`__ +- `zfs_resilver_delay <#zfs-resilver-delay>`__ +- `zfs_resilver_disable_defer <#zfs-resilver-disable-defer>`__ +- `zfs_resilver_min_time_ms <#zfs-resilver-min-time-ms>`__ +- `zfs_scan_checkpoint_intval <#zfs-scan-checkpoint-intval>`__ +- `zfs_scan_fill_weight <#zfs-scan-fill-weight>`__ +- `zfs_scan_idle <#zfs-scan-idle>`__ +- `zfs_scan_ignore_errors <#zfs-scan-ignore-errors>`__ +- `zfs_scan_issue_strategy <#zfs-scan-issue-strategy>`__ +- `zfs_scan_legacy <#zfs-scan-legacy>`__ +- `zfs_scan_max_ext_gap <#zfs-scan-max-ext-gap>`__ +- `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ +- `zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ +- `zfs_scan_min_time_ms <#zfs-scan-min-time-ms>`__ +- `zfs_scan_strict_mem_lim <#zfs-scan-strict-mem-lim>`__ +- `zfs_scan_suspend_progress <#zfs-scan-suspend-progress>`__ +- `zfs_scan_vdev_limit <#zfs-scan-vdev-limit>`__ +- `zfs_scrub_delay <#zfs-scrub-delay>`__ +- `zfs_scrub_min_time_ms <#zfs-scrub-min-time-ms>`__ +- `zfs_send_corrupt_data <#zfs-send-corrupt-data>`__ +- `send_holes_without_birth_time <#send-holes-without-birth-time>`__ +- `zfs_send_queue_length <#zfs-send-queue-length>`__ +- `zfs_send_unmodified_spill_blocks <#zfs-send-unmodified-spill-blocks>`__ +- `zfs_slow_io_events_per_second <#zfs-slow-io-events-per-second>`__ +- `spa_asize_inflation <#spa-asize-inflation>`__ +- `spa_config_path <#spa-config-path>`__ +- `zfs_spa_discard_memory_limit <#zfs-spa-discard-memory-limit>`__ +- `spa_load_print_vdev_tree <#spa-load-print-vdev-tree>`__ +- `spa_load_verify_data <#spa-load-verify-data>`__ +- `spa_load_verify_maxinflight <#spa-load-verify-maxinflight>`__ +- `spa_load_verify_metadata <#spa-load-verify-metadata>`__ +- `spa_load_verify_shift <#spa-load-verify-shift>`__ +- `spa_slop_shift <#spa-slop-shift>`__ +- `zfs_special_class_metadata_reserve_pct <#zfs-special-class-metadata-reserve-pct>`__ +- `spl_hostid <#spl-hostid>`__ +- `spl_hostid_path <#spl-hostid-path>`__ +- `spl_kmem_alloc_max <#spl-kmem-alloc-max>`__ +- `spl_kmem_alloc_warn <#spl-kmem-alloc-warn>`__ +- `spl_kmem_cache_expire <#spl-kmem-cache-expire>`__ +- `spl_kmem_cache_kmem_limit <#spl-kmem-cache-kmem-limit>`__ +- `spl_kmem_cache_kmem_threads <#spl-kmem-cache-kmem-threads>`__ +- `spl_kmem_cache_magazine_size <#spl-kmem-cache-magazine-size>`__ +- `spl_kmem_cache_max_size <#spl-kmem-cache-max-size>`__ +- `spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ +- `spl_kmem_cache_obj_per_slab_min <#spl-kmem-cache-obj-per-slab-min>`__ +- `spl_kmem_cache_reclaim <#spl-kmem-cache-reclaim>`__ +- `spl_kmem_cache_slab_limit <#spl-kmem-cache-slab-limit>`__ +- `spl_max_show_tasks <#spl-max-show-tasks>`__ +- `spl_panic_halt <#spl-panic-halt>`__ +- `spl_taskq_kick <#spl-taskq-kick>`__ +- `spl_taskq_thread_bind <#spl-taskq-thread-bind>`__ +- `spl_taskq_thread_dynamic <#spl-taskq-thread-dynamic>`__ +- `spl_taskq_thread_priority <#spl-taskq-thread-priority>`__ +- `spl_taskq_thread_sequential <#spl-taskq-thread-sequential>`__ +- `zfs_sync_pass_deferred_free <#zfs-sync-pass-deferred-free>`__ +- `zfs_sync_pass_dont_compress <#zfs-sync-pass-dont-compress>`__ +- `zfs_sync_pass_rewrite <#zfs-sync-pass-rewrite>`__ +- `zfs_sync_taskq_batch_pct <#zfs-sync-taskq-batch-pct>`__ +- `zfs_top_maxinflight <#zfs-top-maxinflight>`__ +- `zfs_trim_extent_bytes_max <#zfs-trim-extent-bytes-max>`__ +- `zfs_trim_extent_bytes_min <#zfs-trim-extent-bytes-min>`__ +- `zfs_trim_metaslab_skip <#zfs-trim-metaslab-skip>`__ +- `zfs_trim_queue_limit <#zfs-trim-queue-limit>`__ +- 
`zfs_trim_txg_batch <#zfs-trim-txg-batch>`__ +- `zfs_txg_history <#zfs-txg-history>`__ +- `zfs_txg_timeout <#zfs-txg-timeout>`__ +- `zfs_unlink_suspend_progress <#zfs-unlink-suspend-progress>`__ +- `zfs_user_indirect_is_special <#zfs-user-indirect-is-special>`__ +- `zfs_vdev_aggregate_trim <#zfs-vdev-aggregate-trim>`__ +- `zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ +- `zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__ +- `zfs_vdev_async_read_max_active <#zfs-vdev-async-read-max-active>`__ +- `zfs_vdev_async_read_min_active <#zfs-vdev-async-read-min-active>`__ +- `zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +- `zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +- `zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +- `zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +- `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ +- `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__ +- `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ +- `zfs_vdev_default_ms_count <#zfs-vdev-default-ms-count>`__ +- `zfs_vdev_initializing_max_active <#zfs-vdev-initializing-max-active>`__ +- `zfs_vdev_initializing_min_active <#zfs-vdev-initializing-min-active>`__ +- `zfs_vdev_max_active <#zfs-vdev-max-active>`__ +- `zfs_vdev_min_ms_count <#zfs-vdev-min-ms-count>`__ +- `zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +- `zfs_vdev_mirror_non_rotating_seek_inc <#zfs-vdev-mirror-non-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ +- `zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__ +- `zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ +- `zfs_vdev_ms_count_limit <#zfs-vdev-ms-count-limit>`__ +- `zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ +- `zfs_vdev_raidz_impl <#zfs-vdev-raidz-impl>`__ +- `zfs_vdev_read_gap_limit <#zfs-vdev-read-gap-limit>`__ +- `zfs_vdev_removal_max_active <#zfs-vdev-removal-max-active>`__ +- `vdev_removal_max_span <#vdev-removal-max-span>`__ +- `zfs_vdev_removal_min_active <#zfs-vdev-removal-min-active>`__ +- `zfs_vdev_scheduler <#zfs-vdev-scheduler>`__ +- `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +- `zfs_vdev_scrub_min_active <#zfs-vdev-scrub-min-active>`__ +- `zfs_vdev_sync_read_max_active <#zfs-vdev-sync-read-max-active>`__ +- `zfs_vdev_sync_read_min_active <#zfs-vdev-sync-read-min-active>`__ +- `zfs_vdev_sync_write_max_active <#zfs-vdev-sync-write-max-active>`__ +- `zfs_vdev_sync_write_min_active <#zfs-vdev-sync-write-min-active>`__ +- `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ +- `zfs_vdev_trim_min_active <#zfs-vdev-trim-min-active>`__ +- `vdev_validate_skip <#vdev-validate-skip>`__ +- `zfs_vdev_write_gap_limit <#zfs-vdev-write-gap-limit>`__ +- `zfs_zevent_cols <#zfs-zevent-cols>`__ +- `zfs_zevent_console <#zfs-zevent-console>`__ +- `zfs_zevent_len_max <#zfs-zevent-len-max>`__ +- `zfetch_array_rd_sz <#zfetch-array-rd-sz>`__ +- `zfetch_max_distance <#zfetch-max-distance>`__ +- `zfetch_max_streams <#zfetch-max-streams>`__ +- `zfetch_min_sec_reap <#zfetch-min-sec-reap>`__ +- `zfs_zil_clean_taskq_maxalloc <#zfs-zil-clean-taskq-maxalloc>`__ +- `zfs_zil_clean_taskq_minalloc <#zfs-zil-clean-taskq-minalloc>`__ +- `zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__ +- `zil_nocacheflush <#zil-nocacheflush>`__ +- `zil_replay_disable 
<#zil-replay-disable>`__ +- `zil_slog_bulk <#zil-slog-bulk>`__ +- `zio_deadman_log_all <#zio-deadman-log-all>`__ +- `zio_decompress_fail_fraction <#zio-decompress-fail-fraction>`__ +- `zio_delay_max <#zio-delay-max>`__ +- `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ +- `zio_requeue_io_start_cut_in_line <#zio-requeue-io-start-cut-in-line>`__ +- `zio_slow_io_ms <#zio-slow-io-ms>`__ +- `zio_taskq_batch_pct <#zio-taskq-batch-pct>`__ +- `zvol_inhibit_dev <#zvol-inhibit-dev>`__ +- `zvol_major <#zvol-major>`__ +- `zvol_max_discard_blocks <#zvol-max-discard-blocks>`__ +- `zvol_prefetch_bytes <#zvol-prefetch-bytes>`__ +- `zvol_request_sync <#zvol-request-sync>`__ +- `zvol_threads <#zvol-threads>`__ +- `zvol_volmode <#zvol-volmode>`__ + +.. _zfs-module-parameters-1: + +Module Parameters +----------------- + +ignore_hole_birth +~~~~~~~~~~~~~~~~~ + +When set, the hole_birth optimization will not be used and all holes +will always be sent by ``zfs send`` In the source code, +ignore_hole_birth is an alias for and SysFS PARAMETER for +`send_holes_without_birth_time <#send-holes-without-birth-time>`__. + ++-------------------+-------------------------------------------------+ +| ignore_hole_birth | Notes | ++===================+=================================================+ +| Tags | `send <#send>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Enable if you suspect your datasets are | +| | affected by a bug in hole_birth during | +| | ``zfs send`` operations | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=disabled, 1=enabled | ++-------------------+-------------------------------------------------+ +| Default | 1 (hole birth optimization is ignored) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | TBD | ++-------------------+-------------------------------------------------+ + +l2arc_exclude_special +~~~~~~~~~~~~~~~~~~~~~ + +Controls whether buffers present on special vdevs are eligible for +caching into L2ARC. + ++-----------------------+-------------------------------------------------+ +| l2arc_exclude_special | Notes | ++=======================+=================================================+ +| Tags | `ARC <#arc>`__, | +| | `L2ARC <#l2arc>`__, | +| | `special_vdev <#special-vdev>`__, | ++-----------------------+-------------------------------------------------+ +| When to change | If cache and special devices exist and caching | +| | data on special devices in L2ARC is not desired | ++-----------------------+-------------------------------------------------+ +| Data Type | boolean | ++-----------------------+-------------------------------------------------+ +| Range | 0=disabled, 1=enabled | ++-----------------------+-------------------------------------------------+ +| Default | 0 | ++-----------------------+-------------------------------------------------+ +| Change | Dynamic | ++-----------------------+-------------------------------------------------+ +| Versions Affected | TBD | ++-----------------------+-------------------------------------------------+ + +l2arc_feed_again +~~~~~~~~~~~~~~~~ + +Turbo L2ARC cache warm-up. When the L2ARC is cold the fill interval will +be set to aggressively fill as fast as possible. 
+ ++-------------------+-------------------------------------------------+ +| l2arc_feed_again | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If cache devices exist and it is desired to | +| | fill them as fast as possible | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=disabled, 1=enabled | ++-------------------+-------------------------------------------------+ +| Default | 1 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | TBD | ++-------------------+-------------------------------------------------+ + +l2arc_feed_min_ms +~~~~~~~~~~~~~~~~~ + +Minimum time period for aggressively feeding the L2ARC. The L2ARC feed +thread wakes up once per second (see +`l2arc_feed_secs <#l2arc-feed-secs>`__) to look for data to feed into +the L2ARC. ``l2arc_feed_min_ms`` only affects the turbo L2ARC cache +warm-up and allows the aggressiveness to be adjusted. + ++-------------------+-------------------------------------------------+ +| l2arc_feed_min_ms | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If cache devices exist and | +| | `l2arc_feed_again <#l2arc-feed-again>`__ and | +| | the feed is too aggressive, then this tunable | +| | can be adjusted to reduce the impact of the | +| | fill | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | milliseconds | ++-------------------+-------------------------------------------------+ +| Range | 0 to (1000 \* l2arc_feed_secs) | ++-------------------+-------------------------------------------------+ +| Default | 200 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | 0.6 and later | ++-------------------+-------------------------------------------------+ + +l2arc_feed_secs +~~~~~~~~~~~~~~~ + +Seconds between waking the L2ARC feed thread. One feed thread works for +all cache devices in turn. + +If the pool that owns a cache device is imported readonly, then the feed +thread is delayed 5 \* `l2arc_feed_secs <#l2arc-feed-secs>`__ before +moving onto the next cache device. If multiple pools are imported with +cache devices and one pool with cache is imported readonly, the L2ARC +feed rate to all caches can be slowed. 
+ +================= ================================== +l2arc_feed_secs Notes +================= ================================== +Tags `ARC <#arc>`__, `L2ARC <#l2arc>`__ +When to change Do not change +Data Type uint64 +Units seconds +Range 1 to UINT64_MAX +Default 1 +Change Dynamic +Versions Affected 0.6 and later +================= ================================== + +l2arc_headroom +~~~~~~~~~~~~~~ + +How far through the ARC lists to search for L2ARC cacheable content, +expressed as a multiplier of `l2arc_write_max <#l2arc-write-max>`__ + ++-------------------+-------------------------------------------------+ +| l2arc_headroom | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the rate of change in the ARC is faster than | +| | the overall L2ARC feed rate, then increasing | +| | l2arc_headroom can increase L2ARC efficiency. | +| | Setting the value too large can cause the L2ARC | +| | feed thread to consume more CPU time looking | +| | for data to feed. | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | unit | ++-------------------+-------------------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 2 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | 0.6 and later | ++-------------------+-------------------------------------------------+ + +l2arc_headroom_boost +~~~~~~~~~~~~~~~~~~~~ + +Percentage scale for `l2arc_headroom <#l2arc-headroom>`__ when L2ARC +contents are being successfully compressed before writing. + ++----------------------+----------------------------------------------+ +| l2arc_headroom_boost | Notes | ++======================+==============================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++----------------------+----------------------------------------------+ +| When to change | If average compression efficiency is greater | +| | than 2:1, then increasing | +| | `l2a | +| | rc_headroom_boost <#l2arc-headroom-boost>`__ | +| | can increase the L2ARC feed rate | ++----------------------+----------------------------------------------+ +| Data Type | uint64 | ++----------------------+----------------------------------------------+ +| Units | percent | ++----------------------+----------------------------------------------+ +| Range | 100 to UINT64_MAX, when set to 100, the | +| | L2ARC headroom boost feature is effectively | +| | disabled | ++----------------------+----------------------------------------------+ +| Default | 200 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +l2arc_nocompress +~~~~~~~~~~~~~~~~ + +Disable writing compressed data to cache devices. Disabling allows the +legacy behavior of writing decompressed data to cache devices. 
+
++-------------------+-------------------------------------------------+
+| l2arc_nocompress  | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++-------------------+-------------------------------------------------+
+| When to change    | When testing the compressed L2ARC feature       |
++-------------------+-------------------------------------------------+
+| Data Type         | boolean                                         |
++-------------------+-------------------------------------------------+
+| Range             | 0=store compressed blocks in cache device,      |
+|                   | 1=store uncompressed blocks in cache device     |
++-------------------+-------------------------------------------------+
+| Default           | 0                                               |
++-------------------+-------------------------------------------------+
+| Change            | Dynamic                                         |
++-------------------+-------------------------------------------------+
+| Versions Affected | deprecated in v0.7.0 by the new compressed ARC  |
+|                   | design                                          |
++-------------------+-------------------------------------------------+
+
+l2arc_meta_percent
+~~~~~~~~~~~~~~~~~~
+
+Percent of ARC size allowed for L2ARC-only headers.
+Since L2ARC buffers are not evicted on memory pressure, too many headers
+on a system with an irrationally large L2ARC can render it slow or
+unusable. This parameter limits L2ARC writes and rebuild so that the
+limit is not exceeded.
+
++--------------------+------------------------------------------------+
+| l2arc_meta_percent | Notes                                          |
++====================+================================================+
+| Tags               | `ARC <#arc>`__, `L2ARC <#l2arc>`__             |
++--------------------+------------------------------------------------+
+| When to change     | When the workload really requires an enormous  |
+|                    | L2ARC.                                         |
++--------------------+------------------------------------------------+
+| Data Type          | int                                            |
++--------------------+------------------------------------------------+
+| Range              | 0 to 100                                       |
++--------------------+------------------------------------------------+
+| Default            | 33                                             |
++--------------------+------------------------------------------------+
+| Change             | Dynamic                                        |
++--------------------+------------------------------------------------+
+| Versions Affected  | v2.0 and later                                 |
++--------------------+------------------------------------------------+
+
+l2arc_mfuonly
+~~~~~~~~~~~~~
+
+Controls whether only MFU metadata and data are cached from ARC into L2ARC.
+This may be desirable to avoid wasting space on L2ARC when reading/writing
+large amounts of data that are not expected to be accessed more than once.
+By default, both MRU and MFU data and metadata are cached in the L2ARC.
+
++-------------------+-------------------------------------------------+
+| l2arc_mfuonly     | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++-------------------+-------------------------------------------------+
+| When to change    | When accessing a large amount of data only      |
+|                   | once.                                           
| ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=store MRU and MFU blocks in cache device, | +| | 1=store MFU blocks in cache device | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v2.0 and later | ++-------------------+-------------------------------------------------+ + +l2arc_noprefetch +~~~~~~~~~~~~~~~~ + +Disables writing prefetched, but unused, buffers to cache devices. + ++-------------------+-------------------------------------------------+ +| l2arc_noprefetch | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__, | +| | `prefetch <#prefetch>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Setting to 0 can increase L2ARC hit rates for | +| | workloads where the ARC is too small for a read | +| | workload that benefits from prefetching. Also, | +| | if the main pool devices are very slow, setting | +| | to 0 can improve some workloads such as | +| | backups. | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=write prefetched but unused buffers to cache | +| | devices, 1=do not write prefetched but unused | +| | buffers to cache devices | ++-------------------+-------------------------------------------------+ +| Default | 1 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++-------------------+-------------------------------------------------+ + +l2arc_norw +~~~~~~~~~~ + +Disables writing to cache devices while they are being read. + ++-------------------+-------------------------------------------------+ +| l2arc_norw | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | In the early days of SSDs, some devices did not | +| | perform well when reading and writing | +| | simultaneously. Modern SSDs do not have these | +| | issues. | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=read and write simultaneously, 1=avoid writes | +| | when reading for antique SSDs | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +l2arc_rebuild_blocks_min_l2size +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The minimum required size (in bytes) of an L2ARC device in order to +write log blocks in it. The log blocks are used upon importing the pool +to rebuild the persistent L2ARC. 
For L2ARC devices less than 1GB, the
+overhead involved offsets most of the benefit, so log blocks are not
+written for cache devices smaller than this.
+
++---------------------------------+-----------------------------------+
+| l2arc_rebuild_blocks_min_l2size | Notes                             |
++=================================+===================================+
+| Tags                            | `ARC <#arc>`__,                   |
+|                                 | `L2ARC <#l2arc>`__                |
++---------------------------------+-----------------------------------+
+| When to change                  | The cache device is small and     |
+|                                 | the pool is frequently imported.  |
++---------------------------------+-----------------------------------+
+| Data Type                       | bytes                             |
++---------------------------------+-----------------------------------+
+| Range                           | 0 to UINT64_MAX                   |
++---------------------------------+-----------------------------------+
+| Default                         | 1,073,741,824                     |
++---------------------------------+-----------------------------------+
+| Change                          | Dynamic                           |
++---------------------------------+-----------------------------------+
+| Versions Affected               | v2.0 and later                    |
++---------------------------------+-----------------------------------+
+
+l2arc_rebuild_enabled
+~~~~~~~~~~~~~~~~~~~~~
+
+Rebuild the persistent L2ARC when importing a pool.
+
++-----------------------+---------------------------------------------+
+| l2arc_rebuild_enabled | Notes                                       |
++=======================+=============================================+
+| Tags                  | `ARC <#arc>`__, `L2ARC <#l2arc>`__          |
++-----------------------+---------------------------------------------+
+| When to change        | If there are problems importing a pool or   |
+|                       | attaching an L2ARC device.                  |
++-----------------------+---------------------------------------------+
+| Data Type             | boolean                                     |
++-----------------------+---------------------------------------------+
+| Range                 | 0=disable persistent L2ARC rebuild,         |
+|                       | 1=enable persistent L2ARC rebuild           |
++-----------------------+---------------------------------------------+
+| Default               | 1                                           |
++-----------------------+---------------------------------------------+
+| Change                | Dynamic                                     |
++-----------------------+---------------------------------------------+
+| Versions Affected     | v2.0 and later                              |
++-----------------------+---------------------------------------------+
+
+l2arc_trim_ahead
+~~~~~~~~~~~~~~~~
+
+Once the cache device has been filled, TRIM ahead of the current write
+size (``l2arc_write_max``) on L2ARC devices by this percentage of the
+write size. This can speed up future writes depending on the performance
+characteristics of the cache device.
+
+When set to 100%, TRIM twice the space required to accommodate upcoming
+writes. A minimum of 64MB will be trimmed. If set, this also enables
+TRIM of the whole L2ARC device when it is added to a pool. By default,
+this option is disabled since it can put significant stress on the
+underlying storage devices.
+
++-------------------+-------------------------------------------------+
+| l2arc_trim_ahead  | Notes                                           |
++===================+=================================================+
+| Tags              | `ARC <#arc>`__, `L2ARC <#l2arc>`__              |
++-------------------+-------------------------------------------------+
+| When to change    | Consider setting for cache devices which        |
+|                   | efficiently handle TRIM commands.               
| ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | percent of l2arc_write_max | ++-------------------+-------------------------------------------------+ +| Range | 0 to 100 | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v2.0 and later | ++-------------------+-------------------------------------------------+ + +l2arc_write_boost +~~~~~~~~~~~~~~~~~ + +Until the ARC fills, increases the L2ARC fill rate +`l2arc_write_max <#l2arc-write-max>`__ by ``l2arc_write_boost``. + ++-------------------+-------------------------------------------------+ +| l2arc_write_boost | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | To fill the cache devices more aggressively | +| | after pool import. | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 8,388,608 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +l2arc_write_max +~~~~~~~~~~~~~~~ + +Maximum number of bytes to be written to each cache device for each +L2ARC feed thread interval (see `l2arc_feed_secs <#l2arc-feed-secs>`__). +The actual limit can be adjusted by +`l2arc_write_boost <#l2arc-write-boost>`__. By default +`l2arc_feed_secs <#l2arc-feed-secs>`__ is 1 second, delivering a maximum +write workload to cache devices of 8 MiB/sec. + ++-------------------+-------------------------------------------------+ +| l2arc_write_max | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `L2ARC <#l2arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the cache devices can sustain the write | +| | workload, increasing the rate of cache device | +| | fill when workloads generate new data at a rate | +| | higher than l2arc_write_max can increase L2ARC | +| | hit rate | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 1 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 8,388,608 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +metaslab_aliquot +~~~~~~~~~~~~~~~~ + +Sets the metaslab granularity. 
Nominally, ZFS will try to allocate this +amount of data to a top-level vdev before moving on to the next +top-level vdev. This is roughly similar to what would be referred to as +the "stripe size" in traditional RAID arrays. + +When tuning for HDDs, it can be more efficient to have a few larger, +sequential writes to a device rather than switching to the next device. +Monitoring the size of contiguous writes to the disks relative to the +write throughput can be used to determine if increasing +``metaslab_aliquot`` can help. For modern devices, it is unlikely that +decreasing ``metaslab_aliquot`` from the default will help. + +If there is only one top-level vdev, this tunable is not used. + ++-------------------+-------------------------------------------------+ +| metaslab_aliquot | Notes | ++===================+=================================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If write performance increases as devices more | +| | efficiently write larger, contiguous blocks | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------+-------------------------------------------------+ +| Default | 524,288 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +metaslab_bias_enabled +~~~~~~~~~~~~~~~~~~~~~ + +Enables metaslab group biasing based on a top-level vdev's utilization +relative to the pool. Nominally, all top-level devs are the same size +and the allocation is spread evenly. When the top-level vdevs are not of +the same size, for example if a new (empty) top-level is added to the +pool, this allows the new top-level vdev to get a larger portion of new +allocations. 
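+
+For example, on Linux the imbalance between top-level vdevs can be inspected,
+and the bias toggled at runtime, with the commands below; the pool name
+``tank`` is only a placeholder::
+
+   # show capacity and allocation per top-level vdev
+   zpool list -v tank
+
+   # 0 = spread new allocations evenly rather than favoring emptier vdevs
+   # (writing to /sys/module/zfs/parameters requires root)
+   echo 0 > /sys/module/zfs/parameters/metaslab_bias_enabled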
+ ++-----------------------+---------------------------------------------+ +| metaslab_bias_enabled | Notes | ++=======================+=============================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++-----------------------+---------------------------------------------+ +| When to change | If a new top-level vdev is added and you do | +| | not want to bias new allocations to the new | +| | top-level vdev | ++-----------------------+---------------------------------------------+ +| Data Type | boolean | ++-----------------------+---------------------------------------------+ +| Range | 0=spread evenly across top-level vdevs, | +| | 1=bias spread to favor less full top-level | +| | vdevs | ++-----------------------+---------------------------------------------+ +| Default | 1 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-----------------------+---------------------------------------------+ + +zfs_metaslab_segment_weight_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enables metaslab allocation based on largest free segment rather than +total amount of free space. The goal is to avoid metaslabs that exhibit +free space fragmentation: when there is a lot of small free spaces, but +few larger free spaces. + +If ``zfs_metaslab_segment_weight_enabled`` is enabled, then +`metaslab_fragmentation_factor_enabled <#metaslab-fragmentation-factor-enabled>`__ +is ignored. + ++----------------------------------+----------------------------------+ +| zfs | Notes | +| _metaslab_segment_weight_enabled | | ++==================================+==================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++----------------------------------+----------------------------------+ +| When to change | When testing allocation and | +| | fragmentation | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=do not consider metaslab | +| | fragmentation, 1=avoid metaslabs | +| | where free space is highly | +| | fragmented | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_metaslab_switch_threshold +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When using segment-based metaslab selection (see +`zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__), +continue allocating from the active metaslab until +``zfs_metaslab_switch_threshold`` worth of free space buckets have been +exhausted. 
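+
+Before experimenting with either segment-weighting tunable, the current
+values can be read back from sysfs on Linux (both parameters exist in v0.7.0
+and later); a minimal sketch::
+
+   grep . /sys/module/zfs/parameters/zfs_metaslab_segment_weight_enabled \
+          /sys/module/zfs/parameters/zfs_metaslab_switch_threshold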
+ ++-------------------------------+-------------------------------------+ +| zfs_metaslab_switch_threshold | Notes | ++===============================+=====================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++-------------------------------+-------------------------------------+ +| When to change | When testing allocation and | +| | fragmentation | ++-------------------------------+-------------------------------------+ +| Data Type | uint64 | ++-------------------------------+-------------------------------------+ +| Units | free spaces | ++-------------------------------+-------------------------------------+ +| Range | 0 to UINT64_MAX | ++-------------------------------+-------------------------------------+ +| Default | 2 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-------------------------------+-------------------------------------+ + +metaslab_debug_load +~~~~~~~~~~~~~~~~~~~ + +When enabled, all metaslabs are loaded into memory during pool import. +Nominally, metaslab space map information is loaded and unloaded as +needed (see `metaslab_debug_unload <#metaslab-debug-unload>`__) + +It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory. + +Enabling ``metaslab_debug_load`` can increase pool import time. + ++---------------------+-----------------------------------------------+ +| metaslab_debug_load | Notes | ++=====================+===============================================+ +| Tags | `allocation <#allocation>`__, | +| | `memory <#memory>`__, | +| | `metaslab <#metaslab>`__ | ++---------------------+-----------------------------------------------+ +| When to change | When RAM is plentiful and pool import time is | +| | not a consideration | ++---------------------+-----------------------------------------------+ +| Data Type | boolean | ++---------------------+-----------------------------------------------+ +| Range | 0=do not load all metaslab info at pool | +| | import, 1=dynamically load metaslab info as | +| | needed | ++---------------------+-----------------------------------------------+ +| Default | 0 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------+-----------------------------------------------+ + +metaslab_debug_unload +~~~~~~~~~~~~~~~~~~~~~ + +When enabled, prevents metaslab information from being dynamically +unloaded from RAM. Nominally, metaslab space map information is loaded +and unloaded as needed (see +`metaslab_debug_load <#metaslab-debug-load>`__) + +It is difficult to predict how much RAM is required to store a space +map. An empty or completely full metaslab has a small space map. +However, a highly fragmented space map can consume significantly more +memory. + +Enabling ``metaslab_debug_unload`` consumes RAM that would otherwise be +freed. 
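+
+For example, to test with all metaslab space maps held in RAM, at the cost of
+memory and a slower import, both debug tunables can be set before the pool is
+imported. This is a sketch for a Linux test system, not a production
+recommendation; ``tank`` is a placeholder pool name::
+
+   echo 1 > /sys/module/zfs/parameters/metaslab_debug_load
+   echo 1 > /sys/module/zfs/parameters/metaslab_debug_unload
+   zpool import tank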
+ ++-----------------------+---------------------------------------------+ +| metaslab_debug_unload | Notes | ++=======================+=============================================+ +| Tags | `allocation <#allocation>`__, | +| | `memory <#memory>`__, | +| | `metaslab <#metaslab>`__ | ++-----------------------+---------------------------------------------+ +| When to change | When RAM is plentiful and the penalty for | +| | dynamically reloading metaslab info from | +| | the pool is high | ++-----------------------+---------------------------------------------+ +| Data Type | boolean | ++-----------------------+---------------------------------------------+ +| Range | 0=dynamically unload metaslab info, | +| | 1=unload metaslab info only upon pool | +| | export | ++-----------------------+---------------------------------------------+ +| Default | 0 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-----------------------+---------------------------------------------+ + +metaslab_fragmentation_factor_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enable use of the fragmentation metric in computing metaslab weights. + +In version v0.7.0, if +`zfs_metaslab_segment_weight_enabled <#zfs-metaslab-segment-weight-enabled>`__ +is enabled, then ``metaslab_fragmentation_factor_enabled`` is ignored. + ++----------------------------------+----------------------------------+ +| metas | Notes | +| lab_fragmentation_factor_enabled | | ++==================================+==================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++----------------------------------+----------------------------------+ +| When to change | To test metaslab fragmentation | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=do not consider metaslab free | +| | space fragmentation, 1=try to | +| | avoid fragmented metaslabs | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +metaslabs_per_vdev +~~~~~~~~~~~~~~~~~~ + +When a vdev is added, it will be divided into approximately, but no more +than, this number of metaslabs. 
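+
+Because the value is only consulted when a top-level vdev is created (see the
+change point in the table below), it must be in place before ``zpool create``
+or ``zpool add`` runs. A hypothetical Linux example, with placeholder pool and
+device names and an illustrative value of 400, on releases that still carry
+this tunable::
+
+   echo 400 > /sys/module/zfs/parameters/metaslabs_per_vdev
+   zpool add tank mirror sdc sdd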
+ ++--------------------+------------------------------------------------+ +| metaslabs_per_vdev | Notes | ++====================+================================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++--------------------+------------------------------------------------+ +| When to change | When testing metaslab allocation | ++--------------------+------------------------------------------------+ +| Data Type | uint64 | ++--------------------+------------------------------------------------+ +| Units | metaslabs | ++--------------------+------------------------------------------------+ +| Range | 16 to UINT64_MAX | ++--------------------+------------------------------------------------+ +| Default | 200 | ++--------------------+------------------------------------------------+ +| Change | Prior to pool creation or adding new top-level | +| | vdevs | ++--------------------+------------------------------------------------+ +| Versions Affected | all | ++--------------------+------------------------------------------------+ + +metaslab_preload_enabled +~~~~~~~~~~~~~~~~~~~~~~~~ + +Enable metaslab group preloading. Each top-level vdev has a metaslab +group. By default, up to 3 copies of metadata can exist and are +distributed across multiple top-level vdevs. +``metaslab_preload_enabled`` allows the corresponding metaslabs to be +preloaded, thus improving allocation efficiency. + ++--------------------------+------------------------------------------+ +| metaslab_preload_enabled | Notes | ++==========================+==========================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__ | ++--------------------------+------------------------------------------+ +| When to change | When testing metaslab allocation | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not preload metaslab info, | +| | 1=preload up to 3 metaslabs | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------+------------------------------------------+ + +metaslab_lba_weighting_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Modern HDDs have uniform bit density and constant angular velocity. +Therefore, the outer recording zones are faster (higher bandwidth) than +the inner zones by the ratio of outer to inner track diameter. The +difference in bandwidth can be 2:1, and is often available in the HDD +detailed specifications or drive manual. For HDDs when +``metaslab_lba_weighting_enabled`` is true, write allocation preference +is given to the metaslabs representing the outer recording zones. Thus +the allocation to metaslabs prefers faster bandwidth over free space. + +If the devices are not rotational, yet misrepresent themselves to the OS +as rotational, then disabling ``metaslab_lba_weighting_enabled`` can +result in more even, free-space-based allocation. 
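+
+A minimal check, using the verification path listed in the table below: if a
+device is really solid-state but reports itself as rotational, LBA weighting
+can be switched off at runtime (``sda`` is a placeholder device name)::
+
+   # 1 = the kernel believes the device is rotational
+   cat /sys/block/sda/queue/rotational
+
+   # disable outer-zone preference for such devices
+   echo 0 > /sys/module/zfs/parameters/metaslab_lba_weighting_enabled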
+ ++--------------------------------+------------------------------------+ +| metaslab_lba_weighting_enabled | Notes | ++================================+====================================+ +| Tags | `allocation <#allocation>`__, | +| | `metaslab <#metaslab>`__, | +| | `HDD <#hdd>`__, `SSD <#ssd>`__ | ++--------------------------------+------------------------------------+ +| When to change | disable if using only SSDs and | +| | version v0.6.4 or earlier | ++--------------------------------+------------------------------------+ +| Data Type | boolean | ++--------------------------------+------------------------------------+ +| Range | 0=do not use LBA weighting, 1=use | +| | LBA weighting | ++--------------------------------+------------------------------------+ +| Default | 1 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Verification | The rotational setting described | +| | by a block device in sysfs by | +| | observing | +| | ``/sys/ | +| | block/DISK_NAME/queue/rotational`` | ++--------------------------------+------------------------------------+ +| Versions Affected | prior to v0.6.5, the check for | +| | non-rotation media did not exist | ++--------------------------------+------------------------------------+ + +spa_config_path +~~~~~~~~~~~~~~~ + +By default, the ``zpool import`` command searches for pool information +in the ``zpool.cache`` file. If the pool to be imported has an entry in +``zpool.cache`` then the devices do not have to be scanned to determine +if they are pool members. The path to the cache file is spa_config_path. + +For more information on ``zpool import`` and the ``-o cachefile`` and +``-d`` options, see the man page for zpool(8) + +See also `zfs_autoimport_disable <#zfs-autoimport-disable>`__ + ++-------------------+-------------------------------------------------+ +| spa_config_path | Notes | ++===================+=================================================+ +| Tags | `import <#import>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If creating a non-standard distribution and the | +| | cachefile property is inconvenient | ++-------------------+-------------------------------------------------+ +| Data Type | string | ++-------------------+-------------------------------------------------+ +| Default | ``/etc/zfs/zpool.cache`` | ++-------------------+-------------------------------------------------+ +| Change | Dynamic, applies only to the next invocation of | +| | ``zpool import`` | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +spa_asize_inflation +~~~~~~~~~~~~~~~~~~~ + +Multiplication factor used to estimate actual disk consumption from the +size of data being written. The default value is a worst case estimate, +but lower values may be valid for a given pool depending on its +configuration. Pool administrators who understand the factors involved +may wish to specify a more realistic inflation factor, particularly if +they operate close to quota or capacity limits. + +The worst case space requirement for allocation is single-sector +max-parity RAIDZ blocks, in which case the space requirement is exactly +4 times the size, accounting for a maximum of 3 parity blocks. 
This is +added to the maximum number of ZFS ``copies`` parameter (copies max=3). +Additional space is required if the block could impact deduplication +tables. Altogether, the worst case is 24. + +If the estimation is not correct, then quotas or out-of-space conditions +can lead to optimistic expectations of the ability to allocate. +Applications are typically not prepared to deal with such failures and +can misbehave. + ++---------------------+-----------------------------------------------+ +| spa_asize_inflation | Notes | ++=====================+===============================================+ +| Tags | `allocation <#allocation>`__, `SPA <#spa>`__ | ++---------------------+-----------------------------------------------+ +| When to change | If the allocation requirements for the | +| | workload are well known and quotas are used | ++---------------------+-----------------------------------------------+ +| Data Type | uint64 | ++---------------------+-----------------------------------------------+ +| Units | unit | ++---------------------+-----------------------------------------------+ +| Range | 1 to 24 | ++---------------------+-----------------------------------------------+ +| Default | 24 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.3 and later | ++---------------------+-----------------------------------------------+ + +spa_load_verify_data +~~~~~~~~~~~~~~~~~~~~ + +An extreme rewind import (see ``zpool import -X``) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal skips non-metadata blocks. It can +be toggled once the import has started to stop or start the traversal of +non-metadata blocks. See also +`spa_load_verify_metadata <#spa-load-verify-metadata>`__. + ++----------------------+----------------------------------------------+ +| spa_load_verify_data | Notes | ++======================+==============================================+ +| Tags | `allocation <#allocation>`__, `SPA <#spa>`__ | ++----------------------+----------------------------------------------+ +| When to change | At the risk of data integrity, to speed | +| | extreme import of large pool | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=do not verify data upon pool import, | +| | 1=verify pool data upon import | ++----------------------+----------------------------------------------+ +| Default | 1 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------+----------------------------------------------+ + +spa_load_verify_metadata +~~~~~~~~~~~~~~~~~~~~~~~~ + +An extreme rewind import (see ``zpool import -X``) normally performs a +full traversal of all blocks in the pool for verification. If this +parameter is set to 0, the traversal is not performed. It can be toggled +once the import has started to stop or start the traversal. 
See +`spa_load_verify_data <#spa-load-verify-data>`__ + ++--------------------------+------------------------------------------+ +| spa_load_verify_metadata | Notes | ++==========================+==========================================+ +| Tags | `import <#import>`__ | ++--------------------------+------------------------------------------+ +| When to change | At the risk of data integrity, to speed | +| | extreme import of large pool | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not verify metadata upon pool | +| | import, 1=verify pool metadata upon | +| | import | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------+------------------------------------------+ + +spa_load_verify_maxinflight +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum number of concurrent I/Os during the data verification performed +during an extreme rewind import (see ``zpool import -X``) + ++-----------------------------+---------------------------------------+ +| spa_load_verify_maxinflight | Notes | ++=============================+=======================================+ +| Tags | `import <#import>`__ | ++-----------------------------+---------------------------------------+ +| When to change | During an extreme rewind import, to | +| | match the concurrent I/O capabilities | +| | of the pool devices | ++-----------------------------+---------------------------------------+ +| Data Type | int | ++-----------------------------+---------------------------------------+ +| Units | I/Os | ++-----------------------------+---------------------------------------+ +| Range | 1 to MAX_INT | ++-----------------------------+---------------------------------------+ +| Default | 10,000 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-----------------------------+---------------------------------------+ + +spa_slop_shift +~~~~~~~~~~~~~~ + +Normally, the last 3.2% (1/(2^\ ``spa_slop_shift``)) of pool space is +reserved to ensure the pool doesn't run completely out of space, due to +unaccounted changes (e.g. to the MOS). This also limits the worst-case +time to allocate space. When less than this amount of free space exists, +most ZPL operations (e.g. write, create) return error:no space (ENOSPC). + +Changing spa_slop_shift affects the currently loaded ZFS module and all +imported pools. spa_slop_shift is not stored on disk. Beware when +importing full pools on systems with larger spa_slop_shift can lead to +over-full conditions. + +The minimum SPA slop space is limited to 128 MiB. +The maximum SPA slop space is limited to 128 GiB. 
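+
+As a worked example: with the default ``spa_slop_shift`` of 5, a 1 TiB pool
+reserves 1 TiB / 2^5 = 32 GiB; raising the shift to 7 cuts the reservation to
+8 GiB, still within the 128 MiB to 128 GiB clamp described above. A runtime
+sketch for Linux, with ``tank`` as a placeholder pool name::
+
+   echo 7 > /sys/module/zfs/parameters/spa_slop_shift
+   zfs list -o name,used,avail tank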
+ ++-------------------+-------------------------------------------------+ +| spa_slop_shift | Notes | ++===================+=================================================+ +| Tags | `allocation <#allocation>`__, `SPA <#spa>`__ | ++-------------------+-------------------------------------------------+ +| When to change | For large pools, when 3.2% may be too | +| | conservative and more usable space is desired, | +| | consider increasing ``spa_slop_shift`` | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | shift | ++-------------------+-------------------------------------------------+ +| Range | 1 to MAX_INT, however the practical upper limit | +| | is 15 for a system with 4TB of RAM | ++-------------------+-------------------------------------------------+ +| Default | 5 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.5 and later (max. slop space since v2.1.0) | ++-------------------+-------------------------------------------------+ + +zfetch_array_rd_sz +~~~~~~~~~~~~~~~~~~ + +If prefetching is enabled, do not prefetch blocks larger than +``zfetch_array_rd_sz`` size. + +================== ================================================= +zfetch_array_rd_sz Notes +================== ================================================= +Tags `prefetch <#prefetch>`__ +When to change To allow prefetching when using large block sizes +Data Type unsigned long +Units bytes +Range 0 to MAX_ULONG +Default 1,048,576 (1 MiB) +Change Dynamic +Versions Affected all +================== ================================================= + +zfetch_max_distance +~~~~~~~~~~~~~~~~~~~ + +Limits the maximum number of bytes to prefetch per stream. + ++---------------------+-----------------------------------------------+ +| zfetch_max_distance | Notes | ++=====================+===============================================+ +| Tags | `prefetch <#prefetch>`__ | ++---------------------+-----------------------------------------------+ +| When to change | Consider increasing read workloads that use | +| | large blocks and exhibit high prefetch hit | +| | ratios | ++---------------------+-----------------------------------------------+ +| Data Type | uint | ++---------------------+-----------------------------------------------+ +| Units | bytes | ++---------------------+-----------------------------------------------+ +| Range | 0 to UINT_MAX | ++---------------------+-----------------------------------------------+ +| Default | 8,388,608 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 | ++---------------------+-----------------------------------------------+ + +zfetch_max_streams +~~~~~~~~~~~~~~~~~~ + +Maximum number of prefetch streams per file. + +For version v0.7.0 and later, when prefetching small files the number of +prefetch streams is automatically reduced below to prevent the streams +from overlapping. 
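+
+Whether more streams would actually help can be estimated from the prefetch
+kstats before changing anything; the sketch below assumes a Linux system and
+uses 16 only as an example value::
+
+   # prefetch hit and miss counters
+   grep -E 'hits|misses' /proc/spl/kstat/zfs/zfetchstats
+
+   # allow more concurrent prefetch streams per file
+   echo 16 > /sys/module/zfs/parameters/zfetch_max_streams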
+ ++--------------------+------------------------------------------------+ +| zfetch_max_streams | Notes | ++====================+================================================+ +| Tags | `prefetch <#prefetch>`__ | ++--------------------+------------------------------------------------+ +| When to change | If the workload benefits from prefetching and | +| | has more than ``zfetch_max_streams`` | +| | concurrent reader threads | ++--------------------+------------------------------------------------+ +| Data Type | uint | ++--------------------+------------------------------------------------+ +| Units | streams | ++--------------------+------------------------------------------------+ +| Range | 1 to MAX_UINT | ++--------------------+------------------------------------------------+ +| Default | 8 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | all | ++--------------------+------------------------------------------------+ + +zfetch_min_sec_reap +~~~~~~~~~~~~~~~~~~~ + +Prefetch streams that have been accessed in ``zfetch_min_sec_reap`` +seconds are automatically stopped. + +=================== =========================== +zfetch_min_sec_reap Notes +=================== =========================== +Tags `prefetch <#prefetch>`__ +When to change To test prefetch efficiency +Data Type uint +Units seconds +Range 0 to MAX_UINT +Default 2 +Change Dynamic +Versions Affected all +=================== =========================== + +zfs_arc_dnode_limit_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Percentage of ARC metadata space that can be used for dnodes. + +The value calculated for ``zfs_arc_dnode_limit_percent`` can be +overridden by `zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__. + ++-----------------------------+---------------------------------------+ +| zfs_arc_dnode_limit_percent | Notes | ++=============================+=======================================+ +| Tags | `ARC <#arc>`__ | ++-----------------------------+---------------------------------------+ +| When to change | Consider increasing if ``arc_prune`` | +| | is using excessive system time and | +| | ``/proc/spl/kstat/zfs/arcstats`` | +| | shows ``arc_dnode_size`` is near or | +| | over ``arc_dnode_limit`` | ++-----------------------------+---------------------------------------+ +| Data Type | int | ++-----------------------------+---------------------------------------+ +| Units | percent of arc_meta_limit | ++-----------------------------+---------------------------------------+ +| Range | 0 to 100 | ++-----------------------------+---------------------------------------+ +| Default | 10 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-----------------------------+---------------------------------------+ + +zfs_arc_dnode_limit +~~~~~~~~~~~~~~~~~~~ + +When the number of bytes consumed by dnodes in the ARC exceeds +``zfs_arc_dnode_limit`` bytes, demand for new metadata can take from the +space consumed by dnodes. + +The default value 0, indicates that a percent which is based on +`zfs_arc_dnode_limit_percent <#zfs-arc-dnode-limit-percent>`__ of the +ARC meta buffers that may be used for dnodes. + +``zfs_arc_dnode_limit`` is similar to +`zfs_arc_meta_prune <#zfs-arc-meta-prune>`__ which serves a similar +purpose for metadata. 
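+
+The dnode pressure described in the "When to change" guidance below can be
+confirmed from the ARC kstats before raising the limit; a Linux sketch, where
+the 2 GiB figure is only an example::
+
+   # compare dnode usage against the current limit
+   grep dnode /proc/spl/kstat/zfs/arcstats
+
+   # give dnodes an explicit 2 GiB budget instead of the percentage default
+   echo $((2 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_dnode_limit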
+ ++---------------------+-----------------------------------------------+ +| zfs_arc_dnode_limit | Notes | ++=====================+===============================================+ +| Tags | `ARC <#arc>`__ | ++---------------------+-----------------------------------------------+ +| When to change | Consider increasing if ``arc_prune`` is using | +| | excessive system time and | +| | ``/proc/spl/kstat/zfs/arcstats`` shows | +| | ``arc_dnode_size`` is near or over | +| | ``arc_dnode_limit`` | ++---------------------+-----------------------------------------------+ +| Data Type | uint64 | ++---------------------+-----------------------------------------------+ +| Units | bytes | ++---------------------+-----------------------------------------------+ +| Range | 0 to MAX_UINT64 | ++---------------------+-----------------------------------------------+ +| Default | 0 (uses | +| | `zfs_arc_dnode_lim | +| | it_percent <#zfs-arc-dnode-limit-percent>`__) | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------+-----------------------------------------------+ + +zfs_arc_dnode_reduce_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Percentage of ARC dnodes to try to evict in response to demand for +non-metadata when the number of bytes consumed by dnodes exceeds +`zfs_arc_dnode_limit <#zfs-arc-dnode-limit>`__. + ++------------------------------+--------------------------------------+ +| zfs_arc_dnode_reduce_percent | Notes | ++==============================+======================================+ +| Tags | `ARC <#arc>`__ | ++------------------------------+--------------------------------------+ +| When to change | Testing dnode cache efficiency | ++------------------------------+--------------------------------------+ +| Data Type | uint64 | ++------------------------------+--------------------------------------+ +| Units | percent of size of dnode space used | +| | above | +| | `zfs_arc_d | +| | node_limit <#zfs-arc-dnode-limit>`__ | ++------------------------------+--------------------------------------+ +| Range | 0 to 100 | ++------------------------------+--------------------------------------+ +| Default | 10 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------------+--------------------------------------+ + +zfs_arc_average_blocksize +~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ARC's buffer hash table is sized based on the assumption of an +average block size of ``zfs_arc_average_blocksize``. The default of 8 +KiB uses approximately 1 MiB of hash table per 1 GiB of physical memory +with 8-byte pointers. 
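+
+Because the change point listed in the table below is prior to module load,
+this setting belongs in the module options rather than sysfs. For instance,
+doubling the assumed average block size to 16 KiB roughly halves the hash
+table footprint (about 0.5 MiB per GiB of RAM); a sketch for
+``/etc/modprobe.d/zfs.conf``::
+
+   options zfs zfs_arc_average_blocksize=16384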
+ ++---------------------------+-----------------------------------------+ +| zfs_arc_average_blocksize | Notes | ++===========================+=========================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++---------------------------+-----------------------------------------+ +| When to change | For workloads where the known average | +| | blocksize is larger, increasing | +| | ``zfs_arc_average_blocksize`` can | +| | reduce memory usage | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Units | bytes | ++---------------------------+-----------------------------------------+ +| Range | 512 to 16,777,216 | ++---------------------------+-----------------------------------------+ +| Default | 8,192 | ++---------------------------+-----------------------------------------+ +| Change | Prior to zfs module load | ++---------------------------+-----------------------------------------+ +| Versions Affected | all | ++---------------------------+-----------------------------------------+ + +zfs_arc_evict_batch_limit +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Number ARC headers to evict per sublist before proceeding to another +sublist. This batch-style operation prevents entire sublists from being +evicted at once but comes at a cost of additional unlocking and locking. + +========================= ============================== +zfs_arc_evict_batch_limit Notes +========================= ============================== +Tags `ARC <#arc>`__ +When to change Testing ARC multilist features +Data Type int +Units count of ARC headers +Range 1 to INT_MAX +Default 10 +Change Dynamic +Versions Affected v0.6.5 and later +========================= ============================== + +zfs_arc_grow_retry +~~~~~~~~~~~~~~~~~~ + +When the ARC is shrunk due to memory demand, do not retry growing the +ARC for ``zfs_arc_grow_retry`` seconds. This operates as a damper to +prevent oscillating grow/shrink cycles when there is memory pressure. + +If ``zfs_arc_grow_retry`` = 0, the internal default of 5 seconds is +used. + +================== ==================================== +zfs_arc_grow_retry Notes +================== ==================================== +Tags `ARC <#arc>`__, `memory <#memory>`__ +When to change TBD +Data Type int +Units seconds +Range 1 to MAX_INT +Default 0 +Change Dynamic +Versions Affected v0.6.5 and later +================== ==================================== + +zfs_arc_lotsfree_percent +~~~~~~~~~~~~~~~~~~~~~~~~ + +Throttle ARC memory consumption, effectively throttling I/O, when free +system memory drops below this percentage of total system memory. +Setting ``zfs_arc_lotsfree_percent`` to 0 disables the throttle. + +The arcstat_memory_throttle_count counter in +``/proc/spl/kstat/arcstats`` can indicate throttle activity. + +======================== ==================================== +zfs_arc_lotsfree_percent Notes +======================== ==================================== +Tags `ARC <#arc>`__, `memory <#memory>`__ +When to change TBD +Data Type int +Units percent +Range 0 to 100 +Default 10 +Change Dynamic +Versions Affected v0.6.5 and later +======================== ==================================== + +zfs_arc_max +~~~~~~~~~~~ + +Maximum size of ARC in bytes. 
+ +If set to 0 then the maximum size of ARC +is determined by the amount of system memory installed: + +* **Linux**: 1/2 of system memory +* **FreeBSD**: the larger of ``all_system_memory - 1GB`` and ``5/8 × all_system_memory`` + +``zfs_arc_max`` can be changed dynamically with some caveats. It cannot +be set back to 0 while running and reducing it below the current ARC +size will not cause the ARC to shrink without memory pressure to induce +shrinking. + ++-------------------+-------------------------------------------------+ +| zfs_arc_max | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Reduce if ARC competes too much with other | +| | applications, increase if ZFS is the primary | +| | application and can use more RAM | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 67,108,864 to RAM size in bytes | ++-------------------+-------------------------------------------------+ +| Default | 0 (see description above, OS-dependent) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic (see description above) | ++-------------------+-------------------------------------------------+ +| Verification | ``c`` column in ``arcstats.py`` or | +| | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``c_max`` | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_arc_meta_adjust_restarts +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The number of restart passes to make while scanning the ARC attempting +the free buffers in order to stay below the +`zfs_arc_meta_limit <#zfs-arc-meta-limit>`__. + +============================ ======================================= +zfs_arc_meta_adjust_restarts Notes +============================ ======================================= +Tags `ARC <#arc>`__ +When to change Testing ARC metadata adjustment feature +Data Type int +Units restarts +Range 0 to INT_MAX +Default 4,096 +Change Dynamic +Versions Affected v0.6.5 and later +============================ ======================================= + +zfs_arc_meta_limit +~~~~~~~~~~~~~~~~~~ + +Sets the maximum allowed size metadata buffers in the ARC. When +`zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ is reached metadata buffers +are reclaimed, even if the overall ``c_max`` has not been reached. 
+ +In version v0.7.0, with a default value = 0, +``zfs_arc_meta_limit_percent`` is used to set ``arc_meta_limit`` + ++--------------------+------------------------------------------------+ +| zfs_arc_meta_limit | Notes | ++====================+================================================+ +| Tags | `ARC <#arc>`__ | ++--------------------+------------------------------------------------+ +| When to change | For workloads where the metadata to data ratio | +| | in the ARC can be changed to improve ARC hit | +| | rates | ++--------------------+------------------------------------------------+ +| Data Type | uint64 | ++--------------------+------------------------------------------------+ +| Units | bytes | ++--------------------+------------------------------------------------+ +| Range | 0 to ``c_max`` | ++--------------------+------------------------------------------------+ +| Default | 0 | ++--------------------+------------------------------------------------+ +| Change | Dynamic, except that it cannot be set back to | +| | 0 for a specific percent of the ARC; it must | +| | be set to an explicit value | ++--------------------+------------------------------------------------+ +| Verification | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``arc_meta_limit`` | ++--------------------+------------------------------------------------+ +| Versions Affected | all | ++--------------------+------------------------------------------------+ + +zfs_arc_meta_limit_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Sets the limit to ARC metadata, ``arc_meta_limit``, as a percentage of +the maximum size target of the ARC, ``c_max`` + +Prior to version v0.7.0, the +`zfs_arc_meta_limit <#zfs-arc-meta-limit>`__ was used to set the limit +as a fixed size. ``zfs_arc_meta_limit_percent`` provides a more +convenient interface for setting the limit. + ++----------------------------+----------------------------------------+ +| zfs_arc_meta_limit_percent | Notes | ++============================+========================================+ +| Tags | `ARC <#arc>`__ | ++----------------------------+----------------------------------------+ +| When to change | For workloads where the metadata to | +| | data ratio in the ARC can be changed | +| | to improve ARC hit rates | ++----------------------------+----------------------------------------+ +| Data Type | uint64 | ++----------------------------+----------------------------------------+ +| Units | percent of ``c_max`` | ++----------------------------+----------------------------------------+ +| Range | 0 to 100 | ++----------------------------+----------------------------------------+ +| Default | 75 | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Verification | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``arc_meta_limit`` | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------+----------------------------------------+ + +zfs_arc_meta_min +~~~~~~~~~~~~~~~~ + +The minimum allowed size in bytes that metadata buffers may consume in +the ARC. This value defaults to 0 which disables a floor on the amount +of the ARC devoted meta data. + +When evicting data from the ARC, if the ``metadata_size`` is less than +``arc_meta_min`` then data is evicted instead of metadata. 
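+
+For example, to guarantee that at least 1 GiB of ARC space stays available to
+metadata regardless of data pressure, and to verify the result from the ARC
+kstats on Linux (1 GiB is only an illustrative value)::
+
+   echo $((1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_meta_min
+   grep -E 'arc_meta_min|arc_meta_used|arc_meta_limit' /proc/spl/kstat/zfs/arcstats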
+ ++-------------------+---------------------------------------------------------+ +| zfs_arc_meta_min | Notes | ++===================+=========================================================+ +| Tags | `ARC <#arc>`__ | ++-------------------+---------------------------------------------------------+ +| When to change | | ++-------------------+---------------------------------------------------------+ +| Data Type | uint64 | ++-------------------+---------------------------------------------------------+ +| Units | bytes | ++-------------------+---------------------------------------------------------+ +| Range | 16,777,216 to ``c_max`` | ++-------------------+---------------------------------------------------------+ +| Default | 0 (use internal default 16 MiB) | ++-------------------+---------------------------------------------------------+ +| Change | Dynamic | ++-------------------+---------------------------------------------------------+ +| Verification | ``/proc/spl/kstat/zfs/arcstats`` entry ``arc_meta_min`` | ++-------------------+---------------------------------------------------------+ +| Versions Affected | all | ++-------------------+---------------------------------------------------------+ + +zfs_arc_meta_prune +~~~~~~~~~~~~~~~~~~ + +``zfs_arc_meta_prune`` sets the number of dentries and znodes to be +scanned looking for entries which can be dropped. This provides a +mechanism to ensure the ARC can honor the ``arc_meta_limit and`` reclaim +otherwise pinned ARC buffers. Pruning may be required when the ARC size +drops to ``arc_meta_limit`` because dentries and znodes can pin buffers +in the ARC. Increasing this value will cause to dentry and znode caches +to be pruned more aggressively and the arc_prune thread becomes more +active. Setting ``zfs_arc_meta_prune`` to 0 will disable pruning. + ++--------------------+------------------------------------------------+ +| zfs_arc_meta_prune | Notes | ++====================+================================================+ +| Tags | `ARC <#arc>`__ | ++--------------------+------------------------------------------------+ +| When to change | TBD | ++--------------------+------------------------------------------------+ +| Data Type | uint64 | ++--------------------+------------------------------------------------+ +| Units | entries | ++--------------------+------------------------------------------------+ +| Range | 0 to INT_MAX | ++--------------------+------------------------------------------------+ +| Default | 10,000 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| ! Verification | Prune activity is counted by the | +| | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``arc_prune`` | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_arc_meta_strategy +~~~~~~~~~~~~~~~~~~~~~ + +Defines the strategy for ARC metadata eviction (meta reclaim strategy). +A value of 0 (META_ONLY) will evict only the ARC metadata. A value of 1 +(BALANCED) indicates that additional data may be evicted if required in +order to evict the requested amount of metadata. 
+ ++-----------------------+---------------------------------------------+ +| zfs_arc_meta_strategy | Notes | ++=======================+=============================================+ +| Tags | `ARC <#arc>`__ | ++-----------------------+---------------------------------------------+ +| When to change | Testing ARC metadata eviction | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | enum | ++-----------------------+---------------------------------------------+ +| Range | 0=evict metadata only, 1=also evict data | +| | buffers if they can free metadata buffers | +| | for eviction | ++-----------------------+---------------------------------------------+ +| Default | 1 (BALANCED) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++-----------------------+---------------------------------------------+ + +zfs_arc_min +~~~~~~~~~~~ + +Minimum ARC size limit. When the ARC is asked to shrink, it will stop +shrinking at ``c_min`` as tuned by ``zfs_arc_min``. + ++-------------------+-------------------------------------------------+ +| zfs_arc_min | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the primary focus of the system is ZFS, then | +| | increasing can ensure the ARC gets a minimum | +| | amount of RAM | ++-------------------+-------------------------------------------------+ +| Data Type | uint64 | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 33,554,432 to ``c_max`` | ++-------------------+-------------------------------------------------+ +| Default | For kernel: greater of 33,554,432 (32 MiB) and | +| | memory size / 32. For user-land: greater of | +| | 33,554,432 (32 MiB) and ``c_max`` / 2. | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Verification | ``/proc/spl/kstat/zfs/arcstats`` entry | +| | ``c_min`` | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_arc_min_prefetch_ms +~~~~~~~~~~~~~~~~~~~~~~~ + +Minimum time prefetched blocks are locked in the ARC. + +A value of 0 represents the default of 1 second. However, once changed, +dynamically setting to 0 will not return to the default. + +======================= ======================================== +zfs_arc_min_prefetch_ms Notes +======================= ======================================== +Tags `ARC <#arc>`__, `prefetch <#prefetch>`__ +When to change TBD +Data Type int +Units milliseconds +Range 1 to INT_MAX +Default 0 (use internal default of 1000 ms) +Change Dynamic +Versions Affected v0.8.0 and later +======================= ======================================== + +zfs_arc_min_prescient_prefetch_ms +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Minimum time "prescient prefetched" blocks are locked in the ARC. These +blocks are meant to be prefetched fairly aggressively ahead of the code +that may use them. 
+ +A value of 0 represents the default of 6 seconds. However, once changed, +dynamically setting to 0 will not return to the default. + ++----------------------------------+----------------------------------+ +| z | Notes | +| fs_arc_min_prescient_prefetch_ms | | ++==================================+==================================+ +| Tags | `ARC <#arc>`__, | +| | `prefetch <#prefetch>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | milliseconds | ++----------------------------------+----------------------------------+ +| Range | 1 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 0 (use internal default of 6000 | +| | ms) | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.8.0 and later | ++----------------------------------+----------------------------------+ + +zfs_multilist_num_sublists +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To allow more fine-grained locking, each ARC state contains a series of +lists (sublists) for both data and metadata objects. Locking is +performed at the sublist level. This parameters controls the number of +sublists per ARC state, and also applies to other uses of the multilist +data structure. + ++----------------------------+----------------------------------------+ +| zfs_multilist_num_sublists | Notes | ++============================+========================================+ +| Tags | `ARC <#arc>`__ | ++----------------------------+----------------------------------------+ +| When to change | TBD | ++----------------------------+----------------------------------------+ +| Data Type | int | ++----------------------------+----------------------------------------+ +| Units | lists | ++----------------------------+----------------------------------------+ +| Range | 1 to INT_MAX | ++----------------------------+----------------------------------------+ +| Default | 0 (internal value is greater of number | +| | of online CPUs or 4) | ++----------------------------+----------------------------------------+ +| Change | Prior to zfs module load | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------+----------------------------------------+ + +zfs_arc_overflow_shift +~~~~~~~~~~~~~~~~~~~~~~ + +The ARC size is considered to be overflowing if it exceeds the current +ARC target size (``/proc/spl/kstat/zfs/arcstats`` entry ``c``) by a +threshold determined by ``zfs_arc_overflow_shift``. The threshold is +calculated as a fraction of c using the formula: (ARC target size) +``c >> zfs_arc_overflow_shift`` + +The default value of 8 causes the ARC to be considered to be overflowing +if it exceeds the target size by 1/256th (0.3%) of the target size. + +When the ARC is overflowing, new buffer allocations are stalled until +the reclaim thread catches up and the overflow condition no longer +exists. 
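+
+As a worked example, with an ARC target size ``c`` of 16 GiB and the default
+shift of 8, the overflow margin is 16 GiB / 256 = 64 MiB. The current margin
+can be computed from the live kstats on a Linux system; a sketch::
+
+   # read the ARC target size in bytes and shift it by the current setting
+   c=$(awk '$1 == "c" {print $3}' /proc/spl/kstat/zfs/arcstats)
+   s=$(cat /sys/module/zfs/parameters/zfs_arc_overflow_shift)
+   echo "overflow margin: $((c >> s)) bytes"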
+
+====================== ================
+zfs_arc_overflow_shift Notes
+====================== ================
+Tags                   `ARC <#arc>`__
+When to change         TBD
+Data Type              int
+Units                  shift
+Range                  1 to INT_MAX
+Default                8
+Change                 Dynamic
+Versions Affected      v0.6.5 and later
+====================== ================
+
+zfs_arc_p_min_shift
+~~~~~~~~~~~~~~~~~~~
+
+arc_p_min_shift is used as a shift of the ARC target size
+(``/proc/spl/kstat/zfs/arcstats`` entry ``c``) when calculating both the
+minimum and maximum most recently used (MRU) target sizes
+(``/proc/spl/kstat/zfs/arcstats`` entry ``p``).
+
+A value of 0 represents the default setting of ``arc_p_min_shift`` = 4.
+However, once changed, dynamically setting ``zfs_arc_p_min_shift`` to 0
+will not return to the default.
+
++---------------------+-----------------------------------------------+
+| zfs_arc_p_min_shift | Notes                                         |
++=====================+===============================================+
+| Tags                | `ARC <#arc>`__                                |
++---------------------+-----------------------------------------------+
+| When to change      | TBD                                           |
++---------------------+-----------------------------------------------+
+| Data Type           | int                                           |
++---------------------+-----------------------------------------------+
+| Units               | shift                                         |
++---------------------+-----------------------------------------------+
+| Range               | 1 to INT_MAX                                  |
++---------------------+-----------------------------------------------+
+| Default             | 0 (internal default = 4)                      |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Verification        | Observe changes to                            |
+|                     | ``/proc/spl/kstat/zfs/arcstats`` entry ``p``  |
++---------------------+-----------------------------------------------+
+| Versions Affected   | all                                           |
++---------------------+-----------------------------------------------+
+
+zfs_arc_p_dampener_disable
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When data is being added to the ghost lists, the MRU target size is
+adjusted. The amount of adjustment is based on the ratio of the MRU/MFU
+sizes. When enabled, the ratio is capped to 10, avoiding large
+adjustments.
+
++----------------------------+----------------------------------------+
+| zfs_arc_p_dampener_disable | Notes                                  |
++============================+========================================+
+| Tags                       | `ARC <#arc>`__                         |
++----------------------------+----------------------------------------+
+| When to change             | Testing ARC ghost list behaviour       |
++----------------------------+----------------------------------------+
+| Data Type                  | boolean                                |
++----------------------------+----------------------------------------+
+| Range                      | 0=avoid large adjustments, 1=permit    |
+|                            | large adjustments                      |
++----------------------------+----------------------------------------+
+| Default                    | 1                                      |
++----------------------------+----------------------------------------+
+| Change                     | Dynamic                                |
++----------------------------+----------------------------------------+
+| Versions Affected          | v0.6.4 and later                       |
++----------------------------+----------------------------------------+
+
+zfs_arc_shrink_shift
+~~~~~~~~~~~~~~~~~~~~
+
+``arc_shrink_shift`` is used to adjust the ARC target sizes when a large
+reduction is required. The current ARC target size, ``c``, and MRU size
+``p`` can each be reduced by the current ``size >> arc_shrink_shift``.
+For the default value of 7, this reduces the target by approximately
+0.8%.
+
+A value of 0 represents the default setting of arc_shrink_shift = 7.
+However, once changed, dynamically setting arc_shrink_shift to 0 will +not return to the default. + ++----------------------+----------------------------------------------+ +| zfs_arc_shrink_shift | Notes | ++======================+==============================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++----------------------+----------------------------------------------+ +| When to change | During memory shortfall, reducing | +| | ``zfs_arc_shrink_shift`` increases the rate | +| | of ARC shrinkage | ++----------------------+----------------------------------------------+ +| Data Type | int | ++----------------------+----------------------------------------------+ +| Units | shift | ++----------------------+----------------------------------------------+ +| Range | 1 to INT_MAX | ++----------------------+----------------------------------------------+ +| Default | 0 (``arc_shrink_shift`` = 7) | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +zfs_arc_pc_percent +~~~~~~~~~~~~~~~~~~ + +``zfs_arc_pc_percent`` allows ZFS arc to play more nicely with the +kernel's LRU pagecache. It can guarantee that the arc size won't +collapse under scanning pressure on the pagecache, yet still allows arc +to be reclaimed down to zfs_arc_min if necessary. This value is +specified as percent of pagecache size (as measured by +``NR_FILE_PAGES``) where that percent may exceed 100. This only operates +during memory pressure/reclaim. + ++--------------------+------------------------------------------------+ +| zfs_arc_pc_percent | Notes | ++====================+================================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++--------------------+------------------------------------------------+ +| When to change | When using file systems under memory | +| | shortfall, if the page scanner causes the ARC | +| | to shrink too fast, then adjusting | +| | ``zfs_arc_pc_percent`` can reduce the shrink | +| | rate | ++--------------------+------------------------------------------------+ +| Data Type | int | ++--------------------+------------------------------------------------+ +| Units | percent | ++--------------------+------------------------------------------------+ +| Range | 0 to 100 | ++--------------------+------------------------------------------------+ +| Default | 0 (disabled) | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------+------------------------------------------------+ + +zfs_arc_sys_free +~~~~~~~~~~~~~~~~ + +``zfs_arc_sys_free`` is the target number of bytes the ARC should leave +as free memory on the system. Defaults to the larger of 1/64 of physical +memory or 512K. Setting this option to a non-zero value will override +the default. + +A value of 0 represents the default setting of larger of 1/64 of +physical memory or 512 KiB. However, once changed, dynamically setting +zfs_arc_sys_free to 0 will not return to the default. 
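+
+Because the parameter is dynamic, it can be adjusted at runtime through
+the module parameters directory. The 2 GiB value below is only an
+illustration, not a recommendation::
+
+   # reserve roughly 2 GiB of free memory for the rest of the system
+   echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_sys_free
+
+   # confirm the new setting
+   cat /sys/module/zfs/parameters/zfs_arc_sys_free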
+ ++-------------------+-------------------------------------------------+ +| zfs_arc_sys_free | Notes | ++===================+=================================================+ +| Tags | `ARC <#arc>`__, `memory <#memory>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Change if more free memory is desired as a | +| | margin against memory demand by applications | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to ULONG_MAX | ++-------------------+-------------------------------------------------+ +| Default | 0 (default to larger of 1/64 of physical memory | +| | or 512 KiB) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++-------------------+-------------------------------------------------+ + +zfs_autoimport_disable +~~~~~~~~~~~~~~~~~~~~~~ + +Disable reading zpool.cache file (see +`spa_config_path <#spa-config-path>`__) when loading the zfs module. + ++------------------------+--------------------------------------------+ +| zfs_autoimport_disable | Notes | ++========================+============================================+ +| Tags | `import <#import>`__ | ++------------------------+--------------------------------------------+ +| When to change | Leave as default so that zfs behaves as | +| | other Linux kernel modules | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0=read ``zpool.cache`` at module load, | +| | 1=do not read ``zpool.cache`` at module | +| | load | ++------------------------+--------------------------------------------+ +| Default | 1 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++------------------------+--------------------------------------------+ + +zfs_commit_timeout_pct +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_commit_timeout_pct`` controls the amount of time that a log (ZIL) +write block (lwb) remains "open" when it isn't "full" and it has a +thread waiting to commit to stable storage. The timeout is scaled based +on a percentage of the last lwb latency to avoid significantly impacting +the latency of each individual intent log transaction (itx). + +====================== ============== +zfs_commit_timeout_pct Notes +====================== ============== +Tags `ZIL <#zil>`__ +When to change TBD +Data Type int +Units percent +Range 1 to 100 +Default 5 +Change Dynamic +Versions Affected v0.8.0 +====================== ============== + +zfs_dbgmsg_enable +~~~~~~~~~~~~~~~~~ + +| Internally ZFS keeps a small log to facilitate debugging. The contents + of the log are in the ``/proc/spl/kstat/zfs/dbgmsg`` file. +| Writing 0 to ``/proc/spl/kstat/zfs/dbgmsg`` file clears the log. 
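+
+A typical debugging session using the paths described above therefore
+looks like this::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # start logging
+   cat /proc/spl/kstat/zfs/dbgmsg                          # read the log
+   echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the log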
+ +See also `zfs_dbgmsg_maxsize <#zfs-dbgmsg-maxsize>`__ + +================= ================================================= +zfs_dbgmsg_enable Notes +================= ================================================= +Tags `debug <#debug>`__ +When to change To view ZFS internal debug log +Data Type boolean +Range 0=do not log debug messages, 1=log debug messages +Default 0 (1 for debug builds) +Change Dynamic +Versions Affected v0.6.5 and later +================= ================================================= + +zfs_dbgmsg_maxsize +~~~~~~~~~~~~~~~~~~ + +The ``/proc/spl/kstat/zfs/dbgmsg`` file size limit is set by +zfs_dbgmsg_maxsize. + +See also zfs_dbgmsg_enable + +================== ================== +zfs_dbgmsg_maxsize Notes +================== ================== +Tags `debug <#debug>`__ +When to change TBD +Data Type int +Units bytes +Range 0 to INT_MAX +Default 4 MiB +Change Dynamic +Versions Affected v0.6.5 and later +================== ================== + +zfs_dbuf_state_index +~~~~~~~~~~~~~~~~~~~~ + +The ``zfs_dbuf_state_index`` feature is currently unused. It is normally +used for controlling values in the ``/proc/spl/kstat/zfs/dbufs`` file. + +==================== ================== +zfs_dbuf_state_index Notes +==================== ================== +Tags `debug <#debug>`__ +When to change Do not change +Data Type int +Units TBD +Range TBD +Default 0 +Change Dynamic +Versions Affected v0.6.5 and later +==================== ================== + +zfs_deadman_enabled +~~~~~~~~~~~~~~~~~~~ + +When a pool sync operation takes longer than zfs_deadman_synctime_ms +milliseconds, a "slow spa_sync" message is logged to the debug log (see +`zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__). If ``zfs_deadman_enabled`` +is set to 1, then all pending IO operations are also checked and if any +haven't completed within zfs_deadman_synctime_ms milliseconds, a "SLOW +IO" message is logged to the debug log and a "deadman" system event (see +zpool events command) with the details of the hung IO is posted. + +=================== ===================================== +zfs_deadman_enabled Notes +=================== ===================================== +Tags `debug <#debug>`__ +When to change To disable logging of slow I/O +Data Type boolean +Range 0=do not log slow I/O, 1=log slow I/O +Default 1 +Change Dynamic +Versions Affected v0.8.0 +=================== ===================================== + +zfs_deadman_checktime_ms +~~~~~~~~~~~~~~~~~~~~~~~~ + +Once a pool sync operation has taken longer than +`zfs_deadman_synctime_ms <#zfs-deadman-synctime-ms>`__ milliseconds, +continue to check for slow operations every +`zfs_deadman_checktime_ms <#zfs-deadman-synctime-ms>`__ milliseconds. + +======================== ======================= +zfs_deadman_checktime_ms Notes +======================== ======================= +Tags `debug <#debug>`__ +When to change When debugging slow I/O +Data Type ulong +Units milliseconds +Range 1 to ULONG_MAX +Default 60,000 (1 minute) +Change Dynamic +Versions Affected v0.8.0 +======================== ======================= + +zfs_deadman_ziotime_ms +~~~~~~~~~~~~~~~~~~~~~~ + +When an individual I/O takes longer than ``zfs_deadman_ziotime_ms`` +milliseconds, then the operation is considered to be "hung". If +`zfs_deadman_enabled <#zfs-deadman-enabled>`__ is set then the deadman +behaviour is invoked as described by the +`zfs_deadman_failmode <#zfs-deadman-failmode>`__ option. 
+ +====================== ==================== +zfs_deadman_ziotime_ms Notes +====================== ==================== +Tags `debug <#debug>`__ +When to change Testing ABD features +Data Type ulong +Units milliseconds +Range 1 to ULONG_MAX +Default 300,000 (5 minutes) +Change Dynamic +Versions Affected v0.8.0 +====================== ==================== + +zfs_deadman_synctime_ms +~~~~~~~~~~~~~~~~~~~~~~~ + +The I/O deadman timer expiration time has two meanings + +1. determines when the ``spa_deadman()`` logic should fire, indicating + the txg sync has not completed in a timely manner +2. determines if an I/O is considered "hung" + +In version v0.8.0, any I/O that has not completed in +``zfs_deadman_synctime_ms`` is considered "hung" resulting in one of +three behaviors controlled by the +`zfs_deadman_failmode <#zfs-deadman-failmode>`__ parameter. + +``zfs_deadman_synctime_ms`` takes effect if +`zfs_deadman_enabled <#zfs-deadman-enabled>`__ = 1. + +======================= ======================= +zfs_deadman_synctime_ms Notes +======================= ======================= +Tags `debug <#debug>`__ +When to change When debugging slow I/O +Data Type ulong +Units milliseconds +Range 1 to ULONG_MAX +Default 600,000 (10 minutes) +Change Dynamic +Versions Affected v0.6.5 and later +======================= ======================= + +zfs_deadman_failmode +~~~~~~~~~~~~~~~~~~~~ + +zfs_deadman_failmode controls the behavior of the I/O deadman timer when +it detects a "hung" I/O. Valid values are: + +- wait - Wait for the "hung" I/O (default) +- continue - Attempt to recover from a "hung" I/O +- panic - Panic the system + +==================== =============================================== +zfs_deadman_failmode Notes +==================== =============================================== +Tags `debug <#debug>`__ +When to change In some cluster cases, panic can be appropriate +Data Type string +Range *wait*, *continue*, or *panic* +Default wait +Change Dynamic +Versions Affected v0.8.0 +==================== =============================================== + +zfs_dedup_prefetch +~~~~~~~~~~~~~~~~~~ + +ZFS can prefetch deduplication table (DDT) entries. +``zfs_dedup_prefetch`` allows DDT prefetches to be enabled. + ++--------------------+------------------------------------------------+ +| zfs_dedup_prefetch | Notes | ++====================+================================================+ +| Tags | `prefetch <#prefetch>`__, `memory <#memory>`__ | ++--------------------+------------------------------------------------+ +| When to change | For systems with limited RAM using the dedup | +| | feature, disabling deduplication table | +| | prefetch can reduce memory pressure | ++--------------------+------------------------------------------------+ +| Data Type | boolean | ++--------------------+------------------------------------------------+ +| Range | 0=do not prefetch, 1=prefetch dedup table | +| | entries | ++--------------------+------------------------------------------------+ +| Default | 0 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_delete_blocks +~~~~~~~~~~~~~~~~~ + +``zfs_delete_blocks`` defines a large file for the purposes of delete. 
+Files containing more than ``zfs_delete_blocks`` will be deleted +asynchronously while smaller files are deleted synchronously. Decreasing +this value reduces the time spent in an ``unlink(2)`` system call at the +expense of a longer delay before the freed space is available. + +The ``zfs_delete_blocks`` value is specified in blocks, not bytes. The +size of blocks can vary and is ultimately limited by the filesystem's +recordsize property. + ++-------------------+-------------------------------------------------+ +| zfs_delete_blocks | Notes | ++===================+=================================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `delete <#delete>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If applications delete large files and blocking | +| | on ``unlink(2)`` is not desired | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | blocks | ++-------------------+-------------------------------------------------+ +| Range | 1 to ULONG_MAX | ++-------------------+-------------------------------------------------+ +| Default | 20,480 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_delay_min_dirty_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFS write throttle begins to delay each transaction when the amount +of dirty data reaches the threshold ``zfs_delay_min_dirty_percent`` of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__. This value should be >= +`zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__. + +=========================== ==================================== +zfs_delay_min_dirty_percent Notes +=========================== ==================================== +Tags `write_throttle <#write-throttle>`__ +When to change See section "ZFS TRANSACTION DELAY" +Data Type int +Units percent +Range 0 to 100 +Default 60 +Change Dynamic +Versions Affected v0.6.4 and later +=========================== ==================================== + +zfs_delay_scale +~~~~~~~~~~~~~~~ + +``zfs_delay_scale`` controls how quickly the ZFS write throttle +transaction delay approaches infinity. Larger values cause longer delays +for a given amount of dirty data. + +For the smoothest delay, this value should be about 1 billion divided by +the maximum number of write operations per second the pool can sustain. +The throttle will smoothly handle between 10x and 1/10th +``zfs_delay_scale``. + +Note: ``zfs_delay_scale`` \* +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ must be < 2^64. + +================= ==================================== +zfs_delay_scale Notes +================= ==================================== +Tags `write_throttle <#write-throttle>`__ +When to change See section "ZFS TRANSACTION DELAY" +Data Type ulong +Units scalar (nanoseconds) +Range 0 to ULONG_MAX +Default 500,000 +Change Dynamic +Versions Affected v0.6.4 and later +================= ==================================== + +zfs_dirty_data_max +~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max`` is the ZFS write throttle dirty space limit. Once +this limit is exceeded, new writes are delayed until space is freed by +writes being committed to the pool. 
+ +zfs_dirty_data_max takes precedence over +`zfs_dirty_data_max_percent <#zfs-dirty-data-max-percent>`__. + ++--------------------+------------------------------------------------+ +| zfs_dirty_data_max | Notes | ++====================+================================================+ +| Tags | `write_throttle <#write-throttle>`__ | ++--------------------+------------------------------------------------+ +| When to change | See section "ZFS TRANSACTION DELAY" | ++--------------------+------------------------------------------------+ +| Data Type | ulong | ++--------------------+------------------------------------------------+ +| Units | bytes | ++--------------------+------------------------------------------------+ +| Range | 1 to | +| | `zfs_d | +| | irty_data_max_max <#zfs-dirty-data-max-max>`__ | ++--------------------+------------------------------------------------+ +| Default | 10% of physical RAM | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------+------------------------------------------------+ + +zfs_dirty_data_max_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max_percent`` is an alternative method of specifying +`zfs_dirty_data_max <#zfs-dirty-data-max>`__, the ZFS write throttle +dirty space limit. Once this limit is exceeded, new writes are delayed +until space is freed by writes being committed to the pool. + +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ takes precedence over +``zfs_dirty_data_max_percent``. + ++----------------------------+----------------------------------------+ +| zfs_dirty_data_max_percent | Notes | ++============================+========================================+ +| Tags | `write_throttle <#write-throttle>`__ | ++----------------------------+----------------------------------------+ +| When to change | See section "ZFS TRANSACTION DELAY" | ++----------------------------+----------------------------------------+ +| Data Type | int | ++----------------------------+----------------------------------------+ +| Units | percent | ++----------------------------+----------------------------------------+ +| Range | 1 to 100 | ++----------------------------+----------------------------------------+ +| Default | 10% of physical RAM | ++----------------------------+----------------------------------------+ +| Change | Prior to zfs module load or a memory | +| | hot plug event | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------+----------------------------------------+ + +zfs_dirty_data_max_max +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max_max`` is the maximum allowable value of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__. + +``zfs_dirty_data_max_max`` takes precedence over +`zfs_dirty_data_max_max_percent <#zfs-dirty-data-max-max-percent>`__. 
+ +====================== ==================================== +zfs_dirty_data_max_max Notes +====================== ==================================== +Tags `write_throttle <#write-throttle>`__ +When to change See section "ZFS TRANSACTION DELAY" +Data Type ulong +Units bytes +Range 1 to physical RAM size +Default physical_ram/4 + + **since v0.7:** min(physical_ram/4, 4GiB) + + **since v2.0 for 32-bit systems:** min(physical_ram/4, 1GiB) +Change Prior to zfs module load +Versions Affected v0.6.4 and later +====================== ==================================== + +zfs_dirty_data_max_max_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_dirty_data_max_max_percent`` an alternative to +`zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ for setting the +maximum allowable value of `zfs_dirty_data_max <#zfs-dirty-data-max>`__ + +`zfs_dirty_data_max_max <#zfs-dirty-data-max-max>`__ takes precedence +over ``zfs_dirty_data_max_max_percent`` + +============================== ==================================== +zfs_dirty_data_max_max_percent Notes +============================== ==================================== +Tags `write_throttle <#write-throttle>`__ +When to change See section "ZFS TRANSACTION DELAY" +Data Type int +Units percent +Range 1 to 100 +Default 25% of physical RAM +Change Prior to zfs module load +Versions Affected v0.6.4 and later +============================== ==================================== + +zfs_dirty_data_sync +~~~~~~~~~~~~~~~~~~~ + +When there is at least ``zfs_dirty_data_sync`` dirty data, a transaction +group sync is started. This allows a transaction group sync to occur +more frequently than the transaction group timeout interval (see +`zfs_txg_timeout <#zfs-txg-timeout>`__) when there is dirty data to be +written. + ++---------------------+-----------------------------------------------+ +| zfs_dirty_data_sync | Notes | ++=====================+===============================================+ +| Tags | `write_throttle <#write-throttle>`__, | +| | `ZIO_scheduler <#ZIO-scheduler>`__ | ++---------------------+-----------------------------------------------+ +| When to change | TBD | ++---------------------+-----------------------------------------------+ +| Data Type | ulong | ++---------------------+-----------------------------------------------+ +| Units | bytes | ++---------------------+-----------------------------------------------+ +| Range | 1 to ULONG_MAX | ++---------------------+-----------------------------------------------+ +| Default | 67,108,864 (64 MiB) | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.4 through v0.8.x, deprecation planned | +| | for v2 | ++---------------------+-----------------------------------------------+ + +zfs_dirty_data_sync_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When there is at least ``zfs_dirty_data_sync_percent`` of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ dirty data, a transaction +group sync is started. This allows a transaction group sync to occur +more frequently than the transaction group timeout interval (see +`zfs_txg_timeout <#zfs-txg-timeout>`__) when there is dirty data to be +written. 
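+
+As a worked example, the sketch below uses hypothetical numbers (a 64
+GiB system with both parameters at their defaults) to compute the
+dirty-data level at which a txg sync is requested::
+
+   # zfs_dirty_data_max defaults to 10% of physical RAM
+   dirty_data_max=$(( 64 * 1024 * 1024 * 1024 / 10 ))    # ~6.4 GiB
+   # zfs_dirty_data_sync_percent defaults to 20
+   sync_threshold=$(( dirty_data_max * 20 / 100 ))       # ~1.3 GiB
+   echo "txg sync requested at ${sync_threshold} bytes of dirty data"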
+ ++-----------------------------+---------------------------------------+ +| zfs_dirty_data_sync_percent | Notes | ++=============================+=======================================+ +| Tags | `write_throttle <#write-throttle>`__, | +| | `ZIO_scheduler <#ZIO-scheduler>`__ | ++-----------------------------+---------------------------------------+ +| When to change | TBD | ++-----------------------------+---------------------------------------+ +| Data Type | int | ++-----------------------------+---------------------------------------+ +| Units | percent | ++-----------------------------+---------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_async_write_ac | +| | tive_min_dirty_percent <#zfs_vdev_asy | +| | nc_write_active_min_dirty_percent>`__ | ++-----------------------------+---------------------------------------+ +| Default | 20 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2, deprecates | +| | `zfs_dirt | +| | y_data_sync <#zfs-dirty-data-sync>`__ | ++-----------------------------+---------------------------------------+ + +zfs_fletcher_4_impl +~~~~~~~~~~~~~~~~~~~ + +Fletcher-4 is the default checksum algorithm for metadata and data. When +the zfs kernel module is loaded, a set of microbenchmarks are run to +determine the fastest algorithm for the current hardware. The +``zfs_fletcher_4_impl`` parameter allows a specific implementation to be +specified other than the default (fastest). Selectors other than +*fastest* and *scalar* require instruction set extensions to be +available and will only appear if ZFS detects their presence. The +*scalar* implementation works on all processors. + +The results of the microbenchmark are visible in the +``/proc/spl/kstat/zfs/fletcher_4_bench`` file. Larger numbers indicate +better performance. Since ZFS is processor endian-independent, the +microbenchmark is run against both big and little-endian transformation. 
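+
+For example, the benchmark results and the current selection can be
+inspected, and a specific implementation chosen, roughly as follows
+(the choice of ``scalar`` is only an illustration)::
+
+   cat /proc/spl/kstat/zfs/fletcher_4_bench             # microbenchmark results
+   cat /sys/module/zfs/parameters/zfs_fletcher_4_impl   # current selection
+   echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl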
+ ++---------------------+-----------------------------------------------+ +| zfs_fletcher_4_impl | Notes | ++=====================+===============================================+ +| Tags | `CPU <#cpu>`__, `checksum <#checksum>`__ | ++---------------------+-----------------------------------------------+ +| When to change | Testing Fletcher-4 algorithms | ++---------------------+-----------------------------------------------+ +| Data Type | string | ++---------------------+-----------------------------------------------+ +| Range | *fastest*, *scalar*, *superscalar*, | +| | *superscalar4*, *sse2*, *ssse3*, *avx2*, | +| | *avx512f*, or *aarch64_neon* depending on | +| | hardware support | ++---------------------+-----------------------------------------------+ +| Default | fastest | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------+-----------------------------------------------+ + +zfs_free_bpobj_enabled +~~~~~~~~~~~~~~~~~~~~~~ + +The processing of the free_bpobj object can be enabled by +``zfs_free_bpobj_enabled`` + ++------------------------+--------------------------------------------+ +| zfs_free_bpobj_enabled | Notes | ++========================+============================================+ +| Tags | `delete <#delete>`__ | ++------------------------+--------------------------------------------+ +| When to change | If there's a problem with processing | +| | free_bpobj (e.g. i/o error or bug) | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0=do not process free_bpobj objects, | +| | 1=process free_bpobj objects | ++------------------------+--------------------------------------------+ +| Default | 1 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------+--------------------------------------------+ + +zfs_free_max_blocks +~~~~~~~~~~~~~~~~~~~ + +``zfs_free_max_blocks`` sets the maximum number of blocks to be freed in +a single transaction group (txg). For workloads that delete (free) large +numbers of blocks in a short period of time, the processing of the frees +can negatively impact other operations, including txg commits. +``zfs_free_max_blocks`` acts as a limit to reduce the impact. 
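+
+Limits like this are often set persistently at module load time. A
+minimal sketch, assuming a lower limit of 50,000 blocks suits the
+workload (the value is only an illustration)::
+
+   # /etc/modprobe.d/zfs.conf
+   options zfs zfs_free_max_blocks=50000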
+ ++---------------------+-----------------------------------------------+ +| zfs_free_max_blocks | Notes | ++=====================+===============================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `delete <#delete>`__ | ++---------------------+-----------------------------------------------+ +| When to change | For workloads that delete large files, | +| | ``zfs_free_max_blocks`` can be adjusted to | +| | meet performance requirements while reducing | +| | the impacts of deletion | ++---------------------+-----------------------------------------------+ +| Data Type | ulong | ++---------------------+-----------------------------------------------+ +| Units | blocks | ++---------------------+-----------------------------------------------+ +| Range | 1 to ULONG_MAX | ++---------------------+-----------------------------------------------+ +| Default | 100,000 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------+-----------------------------------------------+ + +zfs_vdev_async_read_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum asynchronous read I/Os active to each device. + ++--------------------------------+------------------------------------+ +| zfs_vdev_async_read_max_active | Notes | ++================================+====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------------+------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------------+------------------------------------+ +| Data Type | uint32 | ++--------------------------------+------------------------------------+ +| Units | I/O operations | ++--------------------------------+------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_ma | +| | x_active <#zfs-vdev-max-active>`__ | ++--------------------------------+------------------------------------+ +| Default | 3 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_vdev_async_read_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Minimum asynchronous read I/Os active to each device. 
+ ++--------------------------------+------------------------------------+ +| zfs_vdev_async_read_min_active | Notes | ++================================+====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------------+------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------------+------------------------------------+ +| Data Type | uint32 | ++--------------------------------+------------------------------------+ +| Units | I/O operations | ++--------------------------------+------------------------------------+ +| Range | 1 to | +| | ( | +| | `zfs_vdev_async_read_max_active <# | +| | zfs_vdev_async_read_max_active>`__ | +| | - 1) | ++--------------------------------+------------------------------------+ +| Default | 1 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_vdev_async_write_active_max_dirty_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When the amount of dirty data exceeds the threshold +``zfs_vdev_async_write_active_max_dirty_percent`` of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ dirty data, then +`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ +is used to limit active async writes. If the dirty data is between +`zfs_vdev_async_write_active_min_dirty_percent <#zfs-vdev-async-write-active-min-dirty-percent>`__ +and ``zfs_vdev_async_write_active_max_dirty_percent``, the active I/O +limit is linearly interpolated between +`zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +and +`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ + ++----------------------------------+----------------------------------+ +| zfs_vdev_asyn | Notes | +| c_write_active_max_dirty_percent | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | percent of | +| | `zfs_dirty_d | +| | ata_max <#zfs-dirty-data-max>`__ | ++----------------------------------+----------------------------------+ +| Range | 0 to 100 | ++----------------------------------+----------------------------------+ +| Default | 60 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_async_write_active_min_dirty_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the amount of dirty data is between +``zfs_vdev_async_write_active_min_dirty_percent`` and +`zfs_vdev_async_write_active_max_dirty_percent <#zfs-vdev-async-write-active-max-dirty-percent>`__ +of `zfs_dirty_data_max <#zfs-dirty-data-max>`__, the active I/O limit is +linearly interpolated between +`zfs_vdev_async_write_min_active <#zfs-vdev-async-write-min-active>`__ +and 
+`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ + ++----------------------------------+----------------------------------+ +| zfs_vdev_asyn | Notes | +| c_write_active_min_dirty_percent | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | percent of zfs_dirty_data_max | ++----------------------------------+----------------------------------+ +| Range | 0 to | +| | (`z | +| | fs_vdev_async_write_active_max_d | +| | irty_percent <#zfs_vdev_async_wr | +| | ite_active_max_dirty_percent>`__ | +| | - 1) | ++----------------------------------+----------------------------------+ +| Default | 30 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_async_write_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_async_write_max_active`` sets the maximum asynchronous write +I/Os active to each device. + ++---------------------------------+-----------------------------------+ +| zfs_vdev_async_write_max_active | Notes | ++=================================+===================================+ +| Tags | `vdev <#vdev>`__, | +| | ` | +| | ZIO_scheduler <#zio-scheduler>`__ | ++---------------------------------+-----------------------------------+ +| When to change | See `ZFS I/O | +| | S | +| | cheduler `__ | ++---------------------------------+-----------------------------------+ +| Data Type | uint32 | ++---------------------------------+-----------------------------------+ +| Units | I/O operations | ++---------------------------------+-----------------------------------+ +| Range | 1 to | +| | `zfs_vdev_max | +| | _active <#zfs-vdev-max-active>`__ | ++---------------------------------+-----------------------------------+ +| Default | 10 | ++---------------------------------+-----------------------------------+ +| Change | Dynamic | ++---------------------------------+-----------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------------+-----------------------------------+ + +zfs_vdev_async_write_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_async_write_min_active`` sets the minimum asynchronous write +I/Os active to each device. + +Lower values are associated with better latency on rotational media but +poorer resilver performance. The default value of 2 was chosen as a +compromise. A value of 3 has been shown to improve resilver performance +further at a cost of further increasing latency. 
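+
+For example, the per-queue limits can be inspected and, for the duration
+of a long resilver, the minimum raised to 3 as discussed above (a
+sketch; revert the change if interactive latency suffers)::
+
+   grep . /sys/module/zfs/parameters/zfs_vdev_*_active
+   echo 3 > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active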
+ ++---------------------------------+-----------------------------------+ +| zfs_vdev_async_write_min_active | Notes | ++=================================+===================================+ +| Tags | `vdev <#vdev>`__, | +| | ` | +| | ZIO_scheduler <#zio-scheduler>`__ | ++---------------------------------+-----------------------------------+ +| When to change | See `ZFS I/O | +| | S | +| | cheduler `__ | ++---------------------------------+-----------------------------------+ +| Data Type | uint32 | ++---------------------------------+-----------------------------------+ +| Units | I/O operations | ++---------------------------------+-----------------------------------+ +| Range | 1 to | +| | `zfs | +| | _vdev_async_write_max_active <#zf | +| | s_vdev_async_write_max_active>`__ | ++---------------------------------+-----------------------------------+ +| Default | 1 for v0.6.x, 2 for v0.7.0 and | +| | later | ++---------------------------------+-----------------------------------+ +| Change | Dynamic | ++---------------------------------+-----------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------------+-----------------------------------+ + +zfs_vdev_max_active +~~~~~~~~~~~~~~~~~~~ + +The maximum number of I/Os active to each device. Ideally, +``zfs_vdev_max_active`` >= the sum of each queue's max_active. + +Once queued to the device, the ZFS I/O scheduler is no longer able to +prioritize I/O operations. The underlying device drivers have their own +scheduler and queue depth limits. Values larger than the device's +maximum queue depth can have the affect of increased latency as the I/Os +are queued in the intervening device driver layers. + ++---------------------+-----------------------------------------------+ +| zfs_vdev_max_active | Notes | ++=====================+===============================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++---------------------+-----------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++---------------------+-----------------------------------------------+ +| Data Type | uint32 | ++---------------------+-----------------------------------------------+ +| Units | I/O operations | ++---------------------+-----------------------------------------------+ +| Range | sum of each queue's min_active to UINT32_MAX | ++---------------------+-----------------------------------------------+ +| Default | 1,000 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------+-----------------------------------------------+ + +zfs_vdev_scrub_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_scrub_max_active`` sets the maximum scrub or scan read I/Os +active to each device. 
+ ++---------------------------+-----------------------------------------+ +| zfs_vdev_scrub_max_active | Notes | ++===========================+=========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__, | +| | `scrub <#scrub>`__, | +| | `resilver <#resilver>`__ | ++---------------------------+-----------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++---------------------------+-----------------------------------------+ +| Data Type | uint32 | ++---------------------------+-----------------------------------------+ +| Units | I/O operations | ++---------------------------+-----------------------------------------+ +| Range | 1 to | +| | `zfs_vd | +| | ev_max_active <#zfs-vdev-max-active>`__ | ++---------------------------+-----------------------------------------+ +| Default | 2 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------+-----------------------------------------+ + +zfs_vdev_scrub_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_scrub_min_active`` sets the minimum scrub or scan read I/Os +active to each device. + ++---------------------------+-----------------------------------------+ +| zfs_vdev_scrub_min_active | Notes | ++===========================+=========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__, | +| | `scrub <#scrub>`__, | +| | `resilver <#resilver>`__ | ++---------------------------+-----------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++---------------------------+-----------------------------------------+ +| Data Type | uint32 | ++---------------------------+-----------------------------------------+ +| Units | I/O operations | ++---------------------------+-----------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_scrub_max | +| | _active <#zfs-vdev-scrub-max-active>`__ | ++---------------------------+-----------------------------------------+ +| Default | 1 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | v0.6.4 and later | ++---------------------------+-----------------------------------------+ + +zfs_vdev_sync_read_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum synchronous read I/Os active to each device. 
+ ++-------------------------------+-------------------------------------+ +| zfs_vdev_sync_read_max_active | Notes | ++===============================+=====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------------------+-------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-------------------------------+-------------------------------------+ +| Data Type | uint32 | ++-------------------------------+-------------------------------------+ +| Units | I/O operations | ++-------------------------------+-------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_m | +| | ax_active <#zfs-vdev-max-active>`__ | ++-------------------------------+-------------------------------------+ +| Default | 10 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-------------------------------+-------------------------------------+ + +zfs_vdev_sync_read_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_sync_read_min_active`` sets the minimum synchronous read I/Os +active to each device. + ++-------------------------------+-------------------------------------+ +| zfs_vdev_sync_read_min_active | Notes | ++===============================+=====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------------------+-------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-------------------------------+-------------------------------------+ +| Data Type | uint32 | ++-------------------------------+-------------------------------------+ +| Units | I/O operations | ++-------------------------------+-------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_sync_read_max_active | +| | <#zfs-vdev-sync-read-max-active>`__ | ++-------------------------------+-------------------------------------+ +| Default | 10 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-------------------------------+-------------------------------------+ + +zfs_vdev_sync_write_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_sync_write_max_active`` sets the maximum synchronous write +I/Os active to each device. 
+ ++--------------------------------+------------------------------------+ +| zfs_vdev_sync_write_max_active | Notes | ++================================+====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------------+------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------------+------------------------------------+ +| Data Type | uint32 | ++--------------------------------+------------------------------------+ +| Units | I/O operations | ++--------------------------------+------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_ma | +| | x_active <#zfs-vdev-max-active>`__ | ++--------------------------------+------------------------------------+ +| Default | 10 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_vdev_sync_write_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_sync_write_min_active`` sets the minimum synchronous write +I/Os active to each device. + ++--------------------------------+------------------------------------+ +| zfs_vdev_sync_write_min_active | Notes | ++================================+====================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------------+------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------------+------------------------------------+ +| Data Type | uint32 | ++--------------------------------+------------------------------------+ +| Units | I/O operations | ++--------------------------------+------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_sync_write_max_active <# | +| | zfs_vdev_sync_write_max_active>`__ | ++--------------------------------+------------------------------------+ +| Default | 10 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_vdev_queue_depth_pct +~~~~~~~~~~~~~~~~~~~~~~~~ + +Maximum number of queued allocations per top-level vdev expressed as a +percentage of +`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__. +This allows the system to detect devices that are more capable of +handling allocations and to allocate more blocks to those devices. It +also allows for dynamic allocation distribution when devices are +imbalanced as fuller devices will tend to be slower than empty devices. +Once the queue depth reaches (``zfs_vdev_queue_depth_pct`` \* +`zfs_vdev_async_write_max_active <#zfs-vdev-async-write-max-active>`__ / +100) then allocator will stop allocating blocks on that top-level device +and switch to the next. 
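+
+As a worked example of the formula above, with the default
+``zfs_vdev_queue_depth_pct`` of 1000 and the default
+``zfs_vdev_async_write_max_active`` of 10::
+
+   echo $(( 1000 * 10 / 100 ))   # 100 queued allocations per top-level vdev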
+ +See also `zio_dva_throttle_enabled <#zio-dva-throttle-enabled>`__ + ++--------------------------+------------------------------------------+ +| zfs_vdev_queue_depth_pct | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------+------------------------------------------+ +| Data Type | uint32 | ++--------------------------+------------------------------------------+ +| Units | I/O operations | ++--------------------------+------------------------------------------+ +| Range | 1 to UINT32_MAX | ++--------------------------+------------------------------------------+ +| Default | 1,000 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_disable_dup_eviction +~~~~~~~~~~~~~~~~~~~~~~~~ + +Disable duplicate buffer eviction from ARC. + ++--------------------------+------------------------------------------+ +| zfs_disable_dup_eviction | Notes | ++==========================+==========================================+ +| Tags | `ARC <#arc>`__, `dedup <#dedup>`__ | ++--------------------------+------------------------------------------+ +| When to change | TBD | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=duplicate buffers can be evicted, 1=do | +| | not evict duplicate buffers | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.6.5, deprecated in v0.7.0 | ++--------------------------+------------------------------------------+ + +zfs_expire_snapshot +~~~~~~~~~~~~~~~~~~~ + +Snapshots of filesystems are normally automounted under the filesystem's +``.zfs/snapshot`` subdirectory. When not in use, snapshots are unmounted +after zfs_expire_snapshot seconds. 
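+
+For example, simply listing a snapshot directory triggers the automount,
+and the snapshot is unmounted again roughly ``zfs_expire_snapshot``
+seconds after its last use (the pool, dataset, and snapshot names below
+are hypothetical)::
+
+   ls /tank/data/.zfs/snapshot/monday/        # automounts the snapshot
+   cat /sys/module/zfs/parameters/zfs_expire_snapshot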
+ ++---------------------+-----------------------------------------------+ +| zfs_expire_snapshot | Notes | ++=====================+===============================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `snapshot <#snapshot>`__ | ++---------------------+-----------------------------------------------+ +| When to change | TBD | ++---------------------+-----------------------------------------------+ +| Data Type | int | ++---------------------+-----------------------------------------------+ +| Units | seconds | ++---------------------+-----------------------------------------------+ +| Range | 0 disables automatic unmounting, maximum time | +| | is INT_MAX | ++---------------------+-----------------------------------------------+ +| Default | 300 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.1 and later | ++---------------------+-----------------------------------------------+ + +zfs_admin_snapshot +~~~~~~~~~~~~~~~~~~ + +Allow the creation, removal, or renaming of entries in the +``.zfs/snapshot`` subdirectory to cause the creation, destruction, or +renaming of snapshots. When enabled this functionality works both +locally and over NFS exports which have the "no_root_squash" option set. + ++--------------------+------------------------------------------------+ +| zfs_admin_snapshot | Notes | ++====================+================================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `snapshot <#snapshot>`__ | ++--------------------+------------------------------------------------+ +| When to change | TBD | ++--------------------+------------------------------------------------+ +| Data Type | boolean | ++--------------------+------------------------------------------------+ +| Range | 0=do not allow snapshot manipulation via the | +| | filesystem, 1=allow snapshot manipulation via | +| | the filesystem | ++--------------------+------------------------------------------------+ +| Default | 1 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_flags +~~~~~~~~~ + +Set additional debugging flags (see +`zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__) + ++------------+---------------------------+---------------------------+ +| flag value | symbolic name | description | ++============+===========================+===========================+ +| 0x1 | ZFS_DEBUG_DPRINTF | Enable dprintf entries in | +| | | the debug log | ++------------+---------------------------+---------------------------+ +| 0x2 | ZFS_DEBUG_DBUF_VERIFY | Enable extra dnode | +| | | verifications | ++------------+---------------------------+---------------------------+ +| 0x4 | ZFS_DEBUG_DNODE_VERIFY | Enable extra dnode | +| | | verifications | ++------------+---------------------------+---------------------------+ +| 0x8 | ZFS_DEBUG_SNAPNAMES | Enable snapshot name | +| | | verification | ++------------+---------------------------+---------------------------+ +| 0x10 | ZFS_DEBUG_MODIFY | Check for illegally | +| | | modified ARC buffers | ++------------+---------------------------+---------------------------+ +| 0x20 | ZFS_DEBUG_SPA | Enable spa_dbgmsg entries | +| | | in the debug log | 
++------------+---------------------------+---------------------------+
+| 0x40 | ZFS_DEBUG_ZIO_FREE | Enable verification of |
+| | | block frees |
++------------+---------------------------+---------------------------+
+| 0x80 | Z | Enable extra spacemap |
+| | FS_DEBUG_HISTOGRAM_VERIFY | histogram verifications |
++------------+---------------------------+---------------------------+
+| 0x100 | ZFS_DEBUG_METASLAB_VERIFY | Verify space accounting |
+| | | on disk matches in-core |
+| | | range_trees |
++------------+---------------------------+---------------------------+
+| 0x200 | ZFS_DEBUG_SET_ERROR | Enable SET_ERROR and |
+| | | dprintf entries in the |
+| | | debug log |
++------------+---------------------------+---------------------------+
+
++-------------------+-------------------------------------------------+
+| zfs_flags | Notes |
++===================+=================================================+
+| Tags | `debug <#debug>`__ |
++-------------------+-------------------------------------------------+
+| When to change | When debugging ZFS |
++-------------------+-------------------------------------------------+
+| Data Type | int |
++-------------------+-------------------------------------------------+
+| Default | 0 (no debug flags set); for debug builds: all |
+| | except ZFS_DEBUG_DPRINTF and ZFS_DEBUG_SPA |
++-------------------+-------------------------------------------------+
+| Change | Dynamic |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.6.4 and later |
++-------------------+-------------------------------------------------+
+
+zfs_free_leak_on_eio
+~~~~~~~~~~~~~~~~~~~~
+
+If destroy encounters an I/O error (EIO) while reading metadata (eg
+indirect blocks), space referenced by the missing metadata cannot be
+freed. Normally, this causes the background destroy to become "stalled",
+as the destroy is unable to make forward progress. While in this stalled
+state, all remaining space to free from the error-encountering
+filesystem is temporarily leaked. Set ``zfs_free_leak_on_eio = 1`` to
+ignore the EIO, permanently leak the space from indirect blocks that
+cannot be read, and continue to free everything else that it can.
+
+The default stalling behavior is useful if the storage partially fails
+(eg some but not all I/Os fail), and then later recovers. In this case,
+we will be able to continue pool operations while it is partially
+failed, and when it recovers, we can continue to free the space, with no
+leaks. However, note that this case is rare.
+
+Typically pools either:
+
+1. fail completely (but perhaps temporarily, eg a top-level vdev going
+   offline)
+
+2. have localized, permanent errors (eg disk returns the wrong data due
+   to bit flip or firmware bug)
+
+In case (1), the ``zfs_free_leak_on_eio`` setting does not matter
+because the pool will be suspended and the sync thread will not be able
+to make forward progress. In case (2), because the error is permanent,
+the best we can do is leak the minimum amount of space. Therefore, it is
+reasonable for ``zfs_free_leak_on_eio`` to be set, but by default the
+more conservative approach is taken, so that there is no possibility of
+leaking space in the "partial temporary" failure case.
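+
+For example (a sketch only; the space leaked this way is permanent), a
+destroy that is stalled by an unreadable indirect block on permanently
+damaged media can be allowed to finish::
+
+   # accept leaking the unreadable space so the destroy can make progress
+   echo 1 > /sys/module/zfs/parameters/zfs_free_leak_on_eio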
+ ++----------------------+----------------------------------------------+ +| zfs_free_leak_on_eio | Notes | ++======================+==============================================+ +| Tags | `debug <#debug>`__ | ++----------------------+----------------------------------------------+ +| When to change | When debugging I/O errors during destroy | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=normal behavior, 1=ignore error and | +| | permanently leak space | ++----------------------+----------------------------------------------+ +| Default | 0 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++----------------------+----------------------------------------------+ + +zfs_free_min_time_ms +~~~~~~~~~~~~~~~~~~~~ + +During a ``zfs destroy`` operation using ``feature@async_destroy`` a +minimum of ``zfs_free_min_time_ms`` time will be spent working on +freeing blocks per txg commit. + +==================== ============================== +zfs_free_min_time_ms Notes +==================== ============================== +Tags `delete <#delete>`__ +When to change TBD +Data Type int +Units milliseconds +Range 1 to (zfs_txg_timeout \* 1000) +Default 1,000 +Change Dynamic +Versions Affected v0.6.0 and later +==================== ============================== + +zfs_immediate_write_sz +~~~~~~~~~~~~~~~~~~~~~~ + +If a pool does not have a log device, data blocks equal to or larger +than ``zfs_immediate_write_sz`` are treated as if the dataset being +written to had the property setting ``logbias=throughput`` + +Terminology note: ``logbias=throughput`` writes the blocks in "indirect +mode" to the ZIL where the data is written to the pool and a pointer to +the data is written to the ZIL. + ++------------------------+--------------------------------------------+ +| zfs_immediate_write_sz | Notes | ++========================+============================================+ +| Tags | `ZIL <#zil>`__ | ++------------------------+--------------------------------------------+ +| When to change | TBD | ++------------------------+--------------------------------------------+ +| Data Type | long | ++------------------------+--------------------------------------------+ +| Units | bytes | ++------------------------+--------------------------------------------+ +| Range | 512 to 16,777,216 (valid block sizes) | ++------------------------+--------------------------------------------+ +| Default | 32,768 (32 KiB) | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Verification | Data blocks that exceed | +| | ``zfs_immediate_write_sz`` or are written | +| | as ``logbias=throughput`` increment the | +| | ``zil_itx_indirect_count`` entry in | +| | ``/proc/spl/kstat/zfs/zil`` | ++------------------------+--------------------------------------------+ +| Versions Affected | all | ++------------------------+--------------------------------------------+ + +zfs_max_recordsize +~~~~~~~~~~~~~~~~~~ + +ZFS supports logical record (block) sizes from 512 bytes to 16 MiB. The +benefits of larger blocks, and thus larger average I/O sizes, can be +weighed against the cost of copy-on-write of large block to modify one +byte. 
Additionally, very large blocks can have a negative impact on both +I/O latency at the device level and the memory allocator. The +``zfs_max_recordsize`` parameter limits the upper bound of the dataset +volblocksize and recordsize properties. + +Larger blocks can be created by enabling ``zpool`` ``large_blocks`` +feature and changing this ``zfs_max_recordsize``. Pools with larger +blocks can always be imported and used, regardless of the value of +``zfs_max_recordsize``. + +For 32-bit systems, ``zfs_max_recordsize`` also limits the size of +kernel virtual memory caches used in the ZFS I/O pipeline (``zio_buf_*`` +and ``zio_data_buf_*``). + +See also the ``zpool`` ``large_blocks`` feature. + ++--------------------+------------------------------------------------+ +| zfs_max_recordsize | Notes | ++====================+================================================+ +| Tags | `filesystem <#filesystem>`__, | +| | `memory <#memory>`__, `volume <#volume>`__ | ++--------------------+------------------------------------------------+ +| When to change | To create datasets with larger volblocksize or | +| | recordsize | ++--------------------+------------------------------------------------+ +| Data Type | int | ++--------------------+------------------------------------------------+ +| Units | bytes | ++--------------------+------------------------------------------------+ +| Range | 512 to 16,777,216 (valid block sizes) | ++--------------------+------------------------------------------------+ +| Default | 1,048,576 | ++--------------------+------------------------------------------------+ +| Change | Dynamic, set prior to creating volumes or | +| | changing filesystem recordsize | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.5 and later | ++--------------------+------------------------------------------------+ + +zfs_mdcomp_disable +~~~~~~~~~~~~~~~~~~ + +``zfs_mdcomp_disable`` allows metadata compression to be disabled. + +================== =============================================== +zfs_mdcomp_disable Notes +================== =============================================== +Tags `CPU <#cpu>`__, `metadata <#metadata>`__ +When to change When CPU cycles cost less than I/O +Data Type boolean +Range 0=compress metadata, 1=do not compress metadata +Default 0 +Change Dynamic +Versions Affected from v0.6.0 to v0.8.0 +================== =============================================== + +zfs_metaslab_fragmentation_threshold +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Allow metaslabs to keep their active state as long as their +fragmentation percentage is less than or equal to this value. When +writing, an active metaslab whose fragmentation percentage exceeds +``zfs_metaslab_fragmentation_threshold`` is avoided allowing metaslabs +with less fragmentation to be preferred. + +Metaslab fragmentation is used to calculate the overall pool +``fragmentation`` property value. However, individual metaslab +fragmentation levels are observable using the ``zdb`` with the ``-mm`` +option. + +``zfs_metaslab_fragmentation_threshold`` works at the metaslab level and +each top-level vdev has approximately +`metaslabs_per_vdev <#metaslabs-per-vdev>`__ metaslabs. 
See also +`zfs_mg_fragmentation_threshold <#zfs-mg-fragmentation-threshold>`__ + ++----------------------------------+----------------------------------+ +| zfs_metaslab_fragmentation_thresh| Notes | +| old | | ++==================================+==================================+ +| Tags | `allocation <#allocation>`__, | +| | `fr | +| | agmentation <#fragmentation>`__, | +| | `vdev <#vdev>`__ | ++----------------------------------+----------------------------------+ +| When to change | Testing metaslab allocation | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | percent | ++----------------------------------+----------------------------------+ +| Range | 1 to 100 | ++----------------------------------+----------------------------------+ +| Default | 70 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.6.4 and later | ++----------------------------------+----------------------------------+ + +zfs_mg_fragmentation_threshold +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Metaslab groups (top-level vdevs) are considered eligible for +allocations if their fragmentation percentage metric is less than or +equal to ``zfs_mg_fragmentation_threshold``. If a metaslab group exceeds +this threshold then it will be skipped unless all metaslab groups within +the metaslab class have also crossed the +``zfs_mg_fragmentation_threshold`` threshold. + ++--------------------------------+------------------------------------+ +| zfs_mg_fragmentation_threshold | Notes | ++================================+====================================+ +| Tags | `allocation <#allocation>`__, | +| | ` | +| | fragmentation <#fragmentation>`__, | +| | `vdev <#vdev>`__ | ++--------------------------------+------------------------------------+ +| When to change | Testing metaslab allocation | ++--------------------------------+------------------------------------+ +| Data Type | int | ++--------------------------------+------------------------------------+ +| Units | percent | ++--------------------------------+------------------------------------+ +| Range | 1 to 100 | ++--------------------------------+------------------------------------+ +| Default | 85 | ++--------------------------------+------------------------------------+ +| Change | Dynamic | ++--------------------------------+------------------------------------+ +| Versions Affected | v0.6.4 and later | ++--------------------------------+------------------------------------+ + +zfs_mg_noalloc_threshold +~~~~~~~~~~~~~~~~~~~~~~~~ + +Metaslab groups (top-level vdevs) with free space percentage greater +than ``zfs_mg_noalloc_threshold`` are eligible for new allocations. If a +metaslab group's free space is less than or equal to the threshold, the +allocator avoids allocating to that group unless all groups in the pool +have reached the threshold. Once all metaslab groups have reached the +threshold, all metaslab groups are allowed to accept allocations. The +default value of 0 disables the feature and causes all metaslab groups +to be eligible for allocations. + +This parameter allows one to deal with pools having heavily imbalanced +vdevs such as would be the case when a new vdev has been added. 
Setting +the threshold to a non-zero percentage will stop allocations from being +made to vdevs that aren't filled to the specified percentage and allow +lesser filled vdevs to acquire more allocations than they otherwise +would under the older ``zfs_mg_alloc_failures`` facility. + ++--------------------------+------------------------------------------+ +| zfs_mg_noalloc_threshold | Notes | ++==========================+==========================================+ +| Tags | `allocation <#allocation>`__, | +| | `fragmentation <#fragmentation>`__, | +| | `vdev <#vdev>`__ | ++--------------------------+------------------------------------------+ +| When to change | To force rebalancing as top-level vdevs | +| | are added or expanded | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | percent | ++--------------------------+------------------------------------------+ +| Range | 0 to 100 | ++--------------------------+------------------------------------------+ +| Default | 0 (disabled) | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_multihost_history +~~~~~~~~~~~~~~~~~~~~~ + +The pool ``multihost`` multimodifier protection (MMP) subsystem can +record historical updates in the +``/proc/spl/kstat/zfs/POOL_NAME/multihost`` file for debugging purposes. +The number of lines of history is determined by zfs_multihost_history. + +===================== ==================================== +zfs_multihost_history Notes +===================== ==================================== +Tags `MMP <#mmp>`__, `import <#import>`__ +When to change When testing multihost feature +Data Type int +Units lines +Range 0 to INT_MAX +Default 0 +Change Dynamic +Versions Affected v0.7.0 and later +===================== ==================================== + +zfs_multihost_interval +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_multihost_interval`` controls the frequency of multihost writes +performed by the pool multihost multimodifier protection (MMP) +subsystem. The multihost write period is (``zfs_multihost_interval`` / +number of leaf-vdevs) milliseconds. Thus on average a multihost write +will be issued for each leaf vdev every ``zfs_multihost_interval`` +milliseconds. In practice, the observed period can vary with the I/O +load and this observed value is the delay which is stored in the +uberblock. + +On import the multihost activity check waits a minimum amount of time +determined by (``zfs_multihost_interval`` \* +`zfs_multihost_import_intervals <#zfs-multihost-import-intervals>`__) +with a lower bound of 1 second. The activity check time may be further +extended if the value of mmp delay found in the best uberblock indicates +actual multihost updates happened at longer intervals than +``zfs_multihost_interval`` + +Note: the multihost protection feature applies to storage devices that +can be shared between multiple systems. 
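+
+As a worked sketch (the 10-disk pool is hypothetical), with the default
+``zfs_multihost_interval = 1000`` and 10 leaf vdevs, some leaf vdev is
+written roughly every 1000 / 10 = 100 milliseconds, while each
+individual leaf vdev sees a multihost write about once per second. The
+interval can be raised to reduce this background I/O::
+
+   # issue multihost writes half as often
+   echo 2000 > /sys/module/zfs/parameters/zfs_multihost_interval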
+
++------------------------+--------------------------------------------+
+| zfs_multihost_interval | Notes |
++========================+============================================+
+| Tags | `MMP <#mmp>`__, `import <#import>`__, |
+| | `vdev <#vdev>`__ |
++------------------------+--------------------------------------------+
+| When to change | To optimize pool import time against |
+| | possibility of simultaneous import by |
+| | another system |
++------------------------+--------------------------------------------+
+| Data Type | ulong |
++------------------------+--------------------------------------------+
+| Units | milliseconds |
++------------------------+--------------------------------------------+
+| Range | 100 to ULONG_MAX |
++------------------------+--------------------------------------------+
+| Default | 1000 |
++------------------------+--------------------------------------------+
+| Change | Dynamic |
++------------------------+--------------------------------------------+
+| Versions Affected | v0.7.0 and later |
++------------------------+--------------------------------------------+
+
+zfs_multihost_import_intervals
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_multihost_import_intervals`` controls the duration of the activity
+test on pool import for the multihost multimodifier protection (MMP)
+subsystem. The activity test can be expected to take a minimum time of
+(``zfs_multihost_import_intervals`` \*
+`zfs_multihost_interval <#zfs-multihost-interval>`__ \* ``random(25%)``)
+milliseconds. The random period of up to 25% improves simultaneous
+import detection. For example, if two hosts are rebooted at the same
+time and automatically attempt to import the pool, then it is highly
+probable that one host will win.
+
+Smaller values of ``zfs_multihost_import_intervals`` reduce the import
+time but increase the risk of failing to detect an active pool. The
+total activity check time is never allowed to drop below one second.
+
+Note: the multihost protection feature applies to storage devices that
+can be shared between multiple systems.
+
+============================== ====================================
+zfs_multihost_import_intervals Notes
+============================== ====================================
+Tags `MMP <#mmp>`__, `import <#import>`__
+When to change TBD
+Data Type uint
+Units intervals
+Range 1 to UINT_MAX
+Default 20 since v0.8, previously 10
+Change Dynamic
+Versions Affected v0.7.0 and later
+============================== ====================================
+
+zfs_multihost_fail_intervals
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_multihost_fail_intervals`` controls the behavior of the pool when
+write failures are detected in the multihost multimodifier protection
+(MMP) subsystem.
+
+If ``zfs_multihost_fail_intervals = 0`` then multihost write failures
+are ignored. The write failures are reported to the ZFS event daemon
+(``zed``) which can take action such as suspending the pool or offlining
+a device.
+
+| If ``zfs_multihost_fail_intervals > 0`` then sequential multihost
+  write failures will cause the pool to be suspended. This occurs when
+  (``zfs_multihost_fail_intervals`` \*
+  `zfs_multihost_interval <#zfs-multihost-interval>`__) milliseconds
+  have passed since the last successful multihost write.
+| This guarantees the activity test will see multihost writes if the
+  pool is attempted to be imported by another system.
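+
+A worked example using the defaults quoted in these tables: with
+``zfs_multihost_fail_intervals = 10`` and
+``zfs_multihost_interval = 1000``, the pool is suspended after
+10 \* 1000 ms = 10 seconds pass without a successful multihost write,
+while an importing host waits at least 1000 ms \* 20 intervals = 20
+seconds for its activity check, so the original writer is suspended
+well before a competing import can complete.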
+ +============================ ==================================== +zfs_multihost_fail_intervals Notes +============================ ==================================== +Tags `MMP <#mmp>`__, `import <#import>`__ +When to change TBD +Data Type uint +Units intervals +Range 0 to UINT_MAX +Default 10 since v0.8, previously 5 +Change Dynamic +Versions Affected v0.7.0 and later +============================ ==================================== + +zfs_delays_per_second +~~~~~~~~~~~~~~~~~~~~~ + +The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. ``zfs_delays_per_second`` limits the +rate of delay events reported to zed. + ++-----------------------+---------------------------------------------+ +| zfs_delays_per_second | Notes | ++=======================+=============================================+ +| Tags | `zed <#zed>`__, `delay <#delay>`__ | ++-----------------------+---------------------------------------------+ +| When to change | If processing delay events at a higher rate | +| | is desired | ++-----------------------+---------------------------------------------+ +| Data Type | uint | ++-----------------------+---------------------------------------------+ +| Units | events per second | ++-----------------------+---------------------------------------------+ +| Range | 0 to UINT_MAX | ++-----------------------+---------------------------------------------+ +| Default | 20 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.7.7 and later | ++-----------------------+---------------------------------------------+ + +zfs_checksums_per_second +~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFS Event Daemon (zed) processes events from ZFS. However, it can be +overwhelmed by high rates of error reports which can be generated by +failing, high-performance devices. ``zfs_checksums_per_second`` limits +the rate of checksum events reported to zed. + +Note: do not set this value lower than the SERD limit for ``checksum`` +in zed. By default, ``checksum_N`` = 10 and ``checksum_T`` = 10 minutes, +resulting in a practical lower limit of 1. 
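+
+A minimal sketch (the value of 50 is illustrative): the limit can be
+read and raised at runtime if zed should see more of the checksum error
+stream from a noisy device::
+
+   cat /sys/module/zfs/parameters/zfs_checksums_per_second
+   echo 50 > /sys/module/zfs/parameters/zfs_checksums_per_second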
+ ++--------------------------+------------------------------------------+ +| zfs_checksums_per_second | Notes | ++==========================+==========================================+ +| Tags | `zed <#zed>`__, `checksum <#checksum>`__ | ++--------------------------+------------------------------------------+ +| When to change | If processing checksum error events at a | +| | higher rate is desired | ++--------------------------+------------------------------------------+ +| Data Type | uint | ++--------------------------+------------------------------------------+ +| Units | events per second | ++--------------------------+------------------------------------------+ +| Range | 0 to UINT_MAX | ++--------------------------+------------------------------------------+ +| Default | 20 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.7 and later | ++--------------------------+------------------------------------------+ + +zfs_no_scrub_io +~~~~~~~~~~~~~~~ + +When ``zfs_no_scrub_io = 1`` scrubs do not actually scrub data and +simply doing a metadata crawl of the pool instead. + +================= =============================================== +zfs_no_scrub_io Notes +================= =============================================== +Tags `scrub <#scrub>`__ +When to change Testing scrub feature +Data Type boolean +Range 0=perform scrub I/O, 1=do not perform scrub I/O +Default 0 +Change Dynamic +Versions Affected v0.6.0 and later +================= =============================================== + +zfs_no_scrub_prefetch +~~~~~~~~~~~~~~~~~~~~~ + +When ``zfs_no_scrub_prefetch = 1``, prefetch is disabled for scrub I/Os. + ++-----------------------+-----------------------------------------------------+ +| zfs_no_scrub_prefetch | Notes | ++=======================+=====================================================+ +| Tags | `prefetch <#prefetch>`__, `scrub <#scrub>`__ | ++-----------------------+-----------------------------------------------------+ +| When to change | Testing scrub feature | ++-----------------------+-----------------------------------------------------+ +| Data Type | boolean | ++-----------------------+-----------------------------------------------------+ +| Range | 0=prefetch scrub I/Os, 1=do not prefetch scrub I/Os | ++-----------------------+-----------------------------------------------------+ +| Default | 0 | ++-----------------------+-----------------------------------------------------+ +| Change | Dynamic | ++-----------------------+-----------------------------------------------------+ +| Versions Affected | v0.6.4 and later | ++-----------------------+-----------------------------------------------------+ + +zfs_nocacheflush +~~~~~~~~~~~~~~~~ + +ZFS uses barriers (volatile cache flush commands) to ensure data is +committed to permanent media by devices. This ensures consistent +on-media state for devices where caches are volatile (eg HDDs). + +For devices with nonvolatile caches, the cache flush operation can be a +no-op. However, in some RAID arrays, cache flushes can cause the entire +cache to be flushed to the backing devices. + +To ensure on-media consistency, keep cache flush enabled. 
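+
+If every vdev genuinely has a nonvolatile (power-protected) write cache,
+the flushes can be disabled persistently. A sketch, assuming the usual
+``/etc/modprobe.d`` mechanism for ZFS module options::
+
+   # runtime change
+   echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush
+
+   # persist across reboots
+   echo "options zfs zfs_nocacheflush=1" >> /etc/modprobe.d/zfs.conf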
+ ++-------------------+-------------------------------------------------+ +| zfs_nocacheflush | Notes | ++===================+=================================================+ +| Tags | `disks <#disks>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the storage device has nonvolatile cache, | +| | then disabling cache flush can save the cost of | +| | occasional cache flush commands | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=send cache flush commands, 1=do not send | +| | cache flush commands | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_nopwrite_enabled +~~~~~~~~~~~~~~~~~~~~ + +The NOP-write feature is enabled by default when a +crytographically-secure checksum algorithm is in use by the dataset. +``zfs_nopwrite_enabled`` allows the NOP-write feature to be completely +disabled. + ++----------------------+----------------------------------------------+ +| zfs_nopwrite_enabled | Notes | ++======================+==============================================+ +| Tags | `checksum <#checksum>`__, `debug <#debug>`__ | ++----------------------+----------------------------------------------+ +| When to change | TBD | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=disable NOP-write feature, 1=enable | +| | NOP-write feature | ++----------------------+----------------------------------------------+ +| Default | 1 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++----------------------+----------------------------------------------+ + +zfs_dmu_offset_next_sync +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_dmu_offset_next_sync`` enables forcing txg sync to find holes. +This causes ZFS to act like older versions when ``SEEK_HOLE`` or +``SEEK_DATA`` flags are used: when a dirty dnode causes txgs to be +synced so the previous data can be found. 
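+
+A short sketch of the trade-off: with the parameter enabled, a
+``SEEK_HOLE``/``SEEK_DATA`` query on a recently written (dirty) file may
+force a txg sync so that holes are reported accurately; with it
+disabled, the same query stays cheap but dirty regions may be reported
+as data rather than holes::
+
+   # favor accurate hole reporting over avoiding the occasional txg sync
+   echo 1 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync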
+ ++--------------------------+------------------------------------------+ +| zfs_dmu_offset_next_sync | Notes | ++==========================+==========================================+ +| Tags | `DMU <#dmu>`__ | ++--------------------------+------------------------------------------+ +| When to change | to exchange strict hole reporting for | +| | performance | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not force txg sync to find holes, | +| | 1=force txg sync to find holes | ++--------------------------+------------------------------------------+ +| Default | 1 since v2.1.5, previously 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_pd_bytes_max +~~~~~~~~~~~~~~~~ + +``zfs_pd_bytes_max`` limits the number of bytes prefetched during a pool +traversal (eg ``zfs send`` or other data crawling operations). These +prefetches are referred to as "prescient prefetches" and are always 100% +hit rate. The traversal operations do not use the default data or +metadata prefetcher. + +================= ========================================== +zfs_pd_bytes_max Notes +================= ========================================== +Tags `prefetch <#prefetch>`__, `send <#send>`__ +When to change TBD +Data Type int32 +Units bytes +Range 0 to INT32_MAX +Default 52,428,800 (50 MiB) +Change Dynamic +Versions Affected TBD +================= ========================================== + +zfs_per_txg_dirty_frees_percent +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_per_txg_dirty_frees_percent`` as a percentage of +`zfs_dirty_data_max <#zfs-dirty-data-max>`__ controls the percentage of +dirtied blocks from frees in one txg. After the threshold is crossed, +additional dirty blocks from frees wait until the next txg. Thus, when +deleting large files, filling consecutive txgs with deletes/frees, does +not throttle other, perhaps more important, writes. + +A side effect of this throttle can impact ``zfs receive`` workloads that +contain a large number of frees and the +`ignore_hole_birth <#ignore-hole-birth>`__ optimization is disabled. The +symptom is that the receive workload causes an increase in the frequency +of txg commits. The frequency of txg commits is observable via the +``otime`` column of ``/proc/spl/kstat/zfs/POOLNAME/txgs``. Since txg +commits also flush data from volatile caches in HDDs to media, HDD +performance can be negatively impacted. Also, since the frees do not +consume much bandwidth over the pipe, the pipe can appear to stall. Thus +the overall progress of receives is slower than expected. + +A value of zero will disable this throttle. + ++---------------------------------+-----------------------------------+ +| zfs_per_txg_dirty_frees_percent | Notes | ++=================================+===================================+ +| Tags | `delete <#delete>`__ | ++---------------------------------+-----------------------------------+ +| When to change | For ``zfs receive`` workloads, | +| | consider increasing or disabling. 
| +| | See section `ZFS I/O | +| | S | +| | cheduler `__ | ++---------------------------------+-----------------------------------+ +| Data Type | ulong | ++---------------------------------+-----------------------------------+ +| Units | percent | ++---------------------------------+-----------------------------------+ +| Range | 0 to 100 | ++---------------------------------+-----------------------------------+ +| Default | 30 | ++---------------------------------+-----------------------------------+ +| Change | Dynamic | ++---------------------------------+-----------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------------------+-----------------------------------+ + +zfs_prefetch_disable +~~~~~~~~~~~~~~~~~~~~ + +``zfs_prefetch_disable`` controls the predictive prefetcher. + +Note that it leaves "prescient" prefetch (eg prefetch for ``zfs send``) +intact (see `zfs_pd_bytes_max <#zfs-pd-bytes-max>`__) + ++----------------------+----------------------------------------------+ +| zfs_prefetch_disable | Notes | ++======================+==============================================+ +| Tags | `prefetch <#prefetch>`__ | ++----------------------+----------------------------------------------+ +| When to change | In some case where the workload is | +| | completely random reads, overall performance | +| | can be better if prefetch is disabled | ++----------------------+----------------------------------------------+ +| Data Type | boolean | ++----------------------+----------------------------------------------+ +| Range | 0=prefetch enabled, 1=prefetch disabled | ++----------------------+----------------------------------------------+ +| Default | 0 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Verification | prefetch efficacy is observed by | +| | ``arcstat``, ``arc_summary``, and the | +| | relevant entries in | +| | ``/proc/spl/kstat/zfs/arcstats`` | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +zfs_read_chunk_size +~~~~~~~~~~~~~~~~~~~ + +``zfs_read_chunk_size`` is the limit for ZFS filesystem reads. 
If an +application issues a ``read()`` larger than ``zfs_read_chunk_size``, +then the ``read()`` is divided into multiple operations no larger than +``zfs_read_chunk_size`` + +=================== ============================ +zfs_read_chunk_size Notes +=================== ============================ +Tags `filesystem <#filesystem>`__ +When to change TBD +Data Type ulong +Units bytes +Range 512 to ULONG_MAX +Default 1,048,576 +Change Dynamic +Versions Affected all +=================== ============================ + +zfs_read_history +~~~~~~~~~~~~~~~~ + +Historical statistics for the last ``zfs_read_history`` reads are +available in ``/proc/spl/kstat/zfs/POOL_NAME/reads`` + +================= ================================= +zfs_read_history Notes +================= ================================= +Tags `debug <#debug>`__ +When to change To observe read operation details +Data Type int +Units lines +Range 0 to INT_MAX +Default 0 +Change Dynamic +Versions Affected all +================= ================================= + +zfs_read_history_hits +~~~~~~~~~~~~~~~~~~~~~ + +When `zfs_read_history <#zfs-read-history>`__\ ``> 0``, +zfs_read_history_hits controls whether ARC hits are displayed in the +read history file, ``/proc/spl/kstat/zfs/POOL_NAME/reads`` + ++-----------------------+---------------------------------------------+ +| zfs_read_history_hits | Notes | ++=======================+=============================================+ +| Tags | `debug <#debug>`__ | ++-----------------------+---------------------------------------------+ +| When to change | To observe read operation details with ARC | +| | hits | ++-----------------------+---------------------------------------------+ +| Data Type | boolean | ++-----------------------+---------------------------------------------+ +| Range | 0=do not include data for ARC hits, | +| | 1=include ARC hit data | ++-----------------------+---------------------------------------------+ +| Default | 0 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | all | ++-----------------------+---------------------------------------------+ + +zfs_recover +~~~~~~~~~~~ + +``zfs_recover`` can be set to true (1) to attempt to recover from +otherwise-fatal errors, typically caused by on-disk corruption. 
When +set, calls to ``zfs_panic_recover()`` will turn into warning messages +rather than calling ``panic()`` + ++-------------------+-------------------------------------------------+ +| zfs_recover | Notes | ++===================+=================================================+ +| Tags | `import <#import>`__ | ++-------------------+-------------------------------------------------+ +| When to change | zfs_recover should only be used as a last | +| | resort, as it typically results in leaked | +| | space, or worse | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=normal operation, 1=attempt recovery zpool | +| | import | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Verification | check output of ``dmesg`` and other logs for | +| | details | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.4 or later | ++-------------------+-------------------------------------------------+ + +zfs_resilver_min_time_ms +~~~~~~~~~~~~~~~~~~~~~~~~ + +Resilvers are processed by the sync thread in syncing context. While +resilvering, ZFS spends at least ``zfs_resilver_min_time_ms`` time +working on a resilver between txg commits. + +The `zfs_txg_timeout <#zfs-txg-timeout>`__ tunable sets a nominal +timeout value for the txg commits. By default, this timeout is 5 seconds +and the ``zfs_resilver_min_time_ms`` is 3 seconds. However, many +variables contribute to changing the actual txg times. The measured txg +interval is observed as the ``otime`` column (in nanoseconds) in the +``/proc/spl/kstat/zfs/POOL_NAME/txgs`` file. + +See also `zfs_txg_timeout <#zfs-txg-timeout>`__ and +`zfs_scan_min_time_ms <#zfs-scan-min-time-ms>`__ + ++--------------------------+------------------------------------------+ +| zfs_resilver_min_time_ms | Notes | ++==========================+==========================================+ +| Tags | `resilver <#resilver>`__ | ++--------------------------+------------------------------------------+ +| When to change | In some resilvering cases, increasing | +| | ``zfs_resilver_min_time_ms`` can result | +| | in faster completion | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | milliseconds | ++--------------------------+------------------------------------------+ +| Range | 1 to | +| | `zfs_txg_timeout <#zfs-txg-timeout>`__ | +| | converted to milliseconds | ++--------------------------+------------------------------------------+ +| Default | 3,000 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | all | ++--------------------------+------------------------------------------+ + +zfs_scan_min_time_ms +~~~~~~~~~~~~~~~~~~~~ + +Scrubs are processed by the sync thread in syncing context. While +scrubbing, ZFS spends at least ``zfs_scan_min_time_ms`` time working on +a scrub between txg commits. 
+ +See also `zfs_txg_timeout <#zfs-txg-timeout>`__ and +`zfs_resilver_min_time_ms <#zfs-resilver-min-time-ms>`__ + ++----------------------+----------------------------------------------+ +| zfs_scan_min_time_ms | Notes | ++======================+==============================================+ +| Tags | `scrub <#scrub>`__ | ++----------------------+----------------------------------------------+ +| When to change | In some scrub cases, increasing | +| | ``zfs_scan_min_time_ms`` can result in | +| | faster completion | ++----------------------+----------------------------------------------+ +| Data Type | int | ++----------------------+----------------------------------------------+ +| Units | milliseconds | ++----------------------+----------------------------------------------+ +| Range | 1 to `zfs_txg_timeout <#zfs-txg-timeout>`__ | +| | converted to milliseconds | ++----------------------+----------------------------------------------+ +| Default | 1,000 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | all | ++----------------------+----------------------------------------------+ + +zfs_scan_checkpoint_intval +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To preserve progress across reboots the sequential scan algorithm +periodically needs to stop metadata scanning and issue all the +verifications I/Os to disk every ``zfs_scan_checkpoint_intval`` seconds. + +========================== ============================================ +zfs_scan_checkpoint_intval Notes +========================== ============================================ +Tags `resilver <#resilver>`__, `scrub <#scrub>`__ +When to change TBD +Data Type int +Units seconds +Range 1 to INT_MAX +Default 7,200 (2 hours) +Change Dynamic +Versions Affected v0.8.0 and later +========================== ============================================ + +zfs_scan_fill_weight +~~~~~~~~~~~~~~~~~~~~ + +This tunable affects how scrub and resilver I/O segments are ordered. A +higher number indicates that we care more about how filled in a segment +is, while a lower number indicates we care more about the size of the +extent without considering the gaps within a segment. + +==================== ============================================ +zfs_scan_fill_weight Notes +==================== ============================================ +Tags `resilver <#resilver>`__, `scrub <#scrub>`__ +When to change Testing sequential scrub and resilver +Data Type int +Units scalar +Range 0 to INT_MAX +Default 3 +Change Prior to zfs module load +Versions Affected v0.8.0 and later +==================== ============================================ + +zfs_scan_issue_strategy +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_issue_strategy`` controls the order of data verification +while scrubbing or resilvering. + ++-------+-------------------------------------------------------------+ +| value | description | ++=======+=============================================================+ +| 0 | fs will use strategy 1 during normal verification and | +| | strategy 2 while taking a checkpoint | ++-------+-------------------------------------------------------------+ +| 1 | data is verified as sequentially as possible, given the | +| | amount of memory reserved for scrubbing (see | +| | `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__). This | +| | can improve scrub performance if the pool's data is heavily | +| | fragmented. 
|
++-------+-------------------------------------------------------------+
+| 2 | the largest mostly-contiguous chunk of found data is |
+| | verified first. By deferring scrubbing of small segments, |
+| | we may later find adjacent data to coalesce and increase |
+| | the segment size. |
++-------+-------------------------------------------------------------+
+
+======================= ============================================
+zfs_scan_issue_strategy Notes
+======================= ============================================
+Tags `resilver <#resilver>`__, `scrub <#scrub>`__
+When to change TBD
+Data Type enum
+Range 0 to 2
+Default 0
+Change Dynamic
+Versions Affected TBD
+======================= ============================================
+
+zfs_scan_legacy
+~~~~~~~~~~~~~~~
+
+Setting ``zfs_scan_legacy = 1`` enables the legacy scan and scrub
+behavior instead of the newer sequential behavior.
+
++-------------------+-------------------------------------------------+
+| zfs_scan_legacy | Notes |
++===================+=================================================+
+| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__ |
++-------------------+-------------------------------------------------+
+| When to change | In some cases, the new scan mode can consume |
+| | more memory as it collects and sorts I/Os; |
+| | using the legacy algorithm can be more memory |
+| | efficient at the expense of HDD read efficiency |
++-------------------+-------------------------------------------------+
+| Data Type | boolean |
++-------------------+-------------------------------------------------+
+| Range | 0=use the new method: scrubs and resilvers |
+| | gather metadata in memory before issuing |
+| | sequential I/O, 1=use the legacy algorithm, |
+| | where I/O is initiated as soon as it is |
+| | discovered |
++-------------------+-------------------------------------------------+
+| Default | 0 |
++-------------------+-------------------------------------------------+
+| Change | Dynamic, however changing to 0 does not affect |
+| | in-progress scrubs or resilvers |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.8.0 and later |
++-------------------+-------------------------------------------------+
+
+zfs_scan_max_ext_gap
+~~~~~~~~~~~~~~~~~~~~
+
+``zfs_scan_max_ext_gap`` limits the largest gap in bytes between scrub
+and resilver I/Os that will still be considered sequential for sorting
+purposes.
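+
+For instance (a sketch; the 4 MiB figure is illustrative rather than a
+recommendation), the gap can be widened so that more nearly-adjacent
+scrub I/Os are sorted into one sequential run::
+
+   echo 4194304 > /sys/module/zfs/parameters/zfs_scan_max_ext_gap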
+ ++----------------------+----------------------------------------------+ +| zfs_scan_max_ext_gap | Notes | ++======================+==============================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__ | ++----------------------+----------------------------------------------+ +| When to change | TBD | ++----------------------+----------------------------------------------+ +| Data Type | ulong | ++----------------------+----------------------------------------------+ +| Units | bytes | ++----------------------+----------------------------------------------+ +| Range | 512 to ULONG_MAX | ++----------------------+----------------------------------------------+ +| Default | 2,097,152 (2 MiB) | ++----------------------+----------------------------------------------+ +| Change | Dynamic, however changing to 0 does not | +| | affect in-progress scrubs or resilvers | ++----------------------+----------------------------------------------+ +| Versions Affected | v0.8.0 and later | ++----------------------+----------------------------------------------+ + +zfs_scan_mem_lim_fact +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_mem_lim_fact`` limits the maximum fraction of RAM used for +I/O sorting by sequential scan algorithm. When the limit is reached +scanning metadata is stopped and data verification I/O is started. Data +verification I/O continues until the memory used by the sorting +algorithm drops by +`zfs_scan_mem_lim_soft_fact <#zfs-scan-mem-lim-soft-fact>`__ + +Memory used by the sequential scan algorithm can be observed as the kmem +sio_cache. This is visible from procfs as +``grep sio_cache /proc/slabinfo`` and can be monitored using +slab-monitoring tools such as ``slabtop`` + ++-----------------------+---------------------------------------------+ +| zfs_scan_mem_lim_fact | Notes | ++=======================+=============================================+ +| Tags | `memory <#memory>`__, | +| | `resilver <#resilver>`__, | +| | `scrub <#scrub>`__ | ++-----------------------+---------------------------------------------+ +| When to change | TBD | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | divisor of physical RAM | ++-----------------------+---------------------------------------------+ +| Range | TBD | ++-----------------------+---------------------------------------------+ +| Default | 20 (physical RAM / 20 or 5%) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.8.0 and later | ++-----------------------+---------------------------------------------+ + +zfs_scan_mem_lim_soft_fact +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_mem_lim_soft_fact`` sets the fraction of the hard limit, +`zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__, used to determined +the RAM soft limit for I/O sorting by the sequential scan algorithm. 
+After `zfs_scan_mem_lim_fact <#zfs-scan-mem-lim-fact>`__ has been +reached, metadata scanning is stopped until the RAM usage drops by +``zfs_scan_mem_lim_soft_fact`` + ++----------------------------+----------------------------------------+ +| zfs_scan_mem_lim_soft_fact | Notes | ++============================+========================================+ +| Tags | `resilver <#resilver>`__, | +| | `scrub <#scrub>`__ | ++----------------------------+----------------------------------------+ +| When to change | TBD | ++----------------------------+----------------------------------------+ +| Data Type | int | ++----------------------------+----------------------------------------+ +| Units | divisor of (physical RAM / | +| | `zfs_scan_mem | +| | _lim_fact <#zfs-scan-mem-lim-fact>`__) | ++----------------------------+----------------------------------------+ +| Range | 1 to INT_MAX | ++----------------------------+----------------------------------------+ +| Default | 20 (for default | +| | `zfs_scan_mem | +| | _lim_fact <#zfs-scan-mem-lim-fact>`__, | +| | 0.25% of physical RAM) | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.8.0 and later | ++----------------------------+----------------------------------------+ + +zfs_scan_vdev_limit +~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_vdev_limit`` is the maximum amount of data that can be +concurrently issued at once for scrubs and resilvers per leaf vdev. +``zfs_scan_vdev_limit`` attempts to strike a balance between keeping the +leaf vdev queues full of I/Os while not overflowing the queues causing +high latency resulting in long txg sync times. While +``zfs_scan_vdev_limit`` represents a bandwidth limit, the existing I/O +limit of `zfs_vdev_scrub_max_active <#zfs-vdev-scrub-max-active>`__ +remains in effect, too. + ++---------------------+-----------------------------------------------+ +| zfs_scan_vdev_limit | Notes | ++=====================+===============================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__, | +| | `vdev <#vdev>`__ | ++---------------------+-----------------------------------------------+ +| When to change | TBD | ++---------------------+-----------------------------------------------+ +| Data Type | ulong | ++---------------------+-----------------------------------------------+ +| Units | bytes | ++---------------------+-----------------------------------------------+ +| Range | 512 to ULONG_MAX | ++---------------------+-----------------------------------------------+ +| Default | 4,194,304 (4 MiB) | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.8.0 and later | ++---------------------+-----------------------------------------------+ + +zfs_send_corrupt_data +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_send_corrupt_data`` enables ``zfs send`` to send of corrupt data +by ignoring read and checksum errors. 
The corrupted or unreadable blocks +are replaced with the value ``0x2f5baddb10c`` (ZFS bad block) + ++-----------------------+---------------------------------------------+ +| zfs_send_corrupt_data | Notes | ++=======================+=============================================+ +| Tags | `send <#send>`__ | ++-----------------------+---------------------------------------------+ +| When to change | When data corruption exists and an attempt | +| | to recover at least some data via | +| | ``zfs send`` is needed | ++-----------------------+---------------------------------------------+ +| Data Type | boolean | ++-----------------------+---------------------------------------------+ +| Range | 0=do not send corrupt data, 1=replace | +| | corrupt data with cookie | ++-----------------------+---------------------------------------------+ +| Default | 0 | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++-----------------------+---------------------------------------------+ + +zfs_sync_pass_deferred_free +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The SPA sync process is performed in multiple passes. Once the pass +number reaches ``zfs_sync_pass_deferred_free``, frees are no long +processed and must wait for the next SPA sync. + +The ``zfs_sync_pass_deferred_free`` value is expected to be removed as a +tunable once the optimal value is determined during field testing. + +The ``zfs_sync_pass_deferred_free`` pass must be greater than 1 to +ensure that regular blocks are not deferred. + +=========================== ======================== +zfs_sync_pass_deferred_free Notes +=========================== ======================== +Tags `SPA <#spa>`__ +When to change Testing SPA sync process +Data Type int +Units SPA sync passes +Range 1 to INT_MAX +Default 2 +Change Dynamic +Versions Affected all +=========================== ======================== + +zfs_sync_pass_dont_compress +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The SPA sync process is performed in multiple passes. Once the pass +number reaches ``zfs_sync_pass_dont_compress``, data block compression +is no longer processed and must wait for the next SPA sync. + +The ``zfs_sync_pass_dont_compress`` value is expected to be removed as a +tunable once the optimal value is determined during field testing. + +=========================== ======================== +zfs_sync_pass_dont_compress Notes +=========================== ======================== +Tags `SPA <#spa>`__ +When to change Testing SPA sync process +Data Type int +Units SPA sync passes +Range 1 to INT_MAX +Default 5 +Change Dynamic +Versions Affected all +=========================== ======================== + +zfs_sync_pass_rewrite +~~~~~~~~~~~~~~~~~~~~~ + +The SPA sync process is performed in multiple passes. Once the pass +number reaches ``zfs_sync_pass_rewrite``, blocks can be split into gang +blocks. + +The ``zfs_sync_pass_rewrite`` value is expected to be removed as a +tunable once the optimal value is determined during field testing. 
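+
+When experimenting with the SPA sync passes, the three related tunables
+can be listed together; a sketch relying only on the standard module
+parameter directory::
+
+   grep . /sys/module/zfs/parameters/zfs_sync_pass_*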
+ +===================== ======================== +zfs_sync_pass_rewrite Notes +===================== ======================== +Tags `SPA <#spa>`__ +When to change Testing SPA sync process +Data Type int +Units SPA sync passes +Range 1 to INT_MAX +Default 2 +Change Dynamic +Versions Affected all +===================== ======================== + +zfs_sync_taskq_batch_pct +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_sync_taskq_batch_pct`` controls the number of threads used by the +DSL pool sync taskq, ``dp_sync_taskq`` + ++--------------------------+------------------------------------------+ +| zfs_sync_taskq_batch_pct | Notes | ++==========================+==========================================+ +| Tags | `SPA <#spa>`__ | ++--------------------------+------------------------------------------+ +| When to change | to adjust the number of | +| | ``dp_sync_taskq`` threads | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | percent of number of online CPUs | ++--------------------------+------------------------------------------+ +| Range | 1 to 100 | ++--------------------------+------------------------------------------+ +| Default | 75 | ++--------------------------+------------------------------------------+ +| Change | Prior to zfs module load | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zfs_txg_history +~~~~~~~~~~~~~~~ + +Historical statistics for the last ``zfs_txg_history`` txg commits are +available in ``/proc/spl/kstat/zfs/POOL_NAME/txgs`` + +The work required to measure the txg commit (SPA statistics) is low. +However, for debugging purposes, it can be useful to observe the SPA +statistics. + +================= ====================================================== +zfs_txg_history Notes +================= ====================================================== +Tags `debug <#debug>`__ +When to change To observe details of SPA sync behavior. +Data Type int +Units lines +Range 0 to INT_MAX +Default 0 for version v0.6.0 to v0.7.6, 100 for version v0.8.0 +Change Dynamic +Versions Affected all +================= ====================================================== + +zfs_txg_timeout +~~~~~~~~~~~~~~~ + +The open txg is committed to the pool periodically (SPA sync) and +``zfs_txg_timeout`` represents the default target upper limit. + +txg commits can occur more frequently and a rapid rate of txg commits +often indicates a busy write workload, quota limits reached, or the free +space is critically low. + +Many variables contribute to changing the actual txg times. txg commits +can also take longer than ``zfs_txg_timeout`` if the ZFS write throttle +is not properly tuned or the time to sync is otherwise delayed (eg slow +device). Shorter txg commit intervals can occur due to +`zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ for write-intensive +workloads. The measured txg interval is observed as the ``otime`` column +(in nanoseconds) in the ``/proc/spl/kstat/zfs/POOL_NAME/txgs`` file. 
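+
+For example (the pool name ``tank`` is a placeholder), the commit
+cadence can be observed and the target interval raised::
+
+   # otime (nanoseconds) shows how long each recent txg stayed open
+   tail /proc/spl/kstat/zfs/tank/txgs
+
+   # target a 10 second commit interval instead of 5
+   echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout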
+ +See also `zfs_dirty_data_sync <#zfs-dirty-data-sync>`__ and +`zfs_txg_history <#zfs-txg-history>`__ + ++-------------------+-------------------------------------------------+ +| zfs_txg_timeout | Notes | ++===================+=================================================+ +| Tags | `SPA <#spa>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------+-------------------------------------------------+ +| When to change | To optimize the work done by txg commit | +| | relative to the pool requirements. See also | +| | section `ZFS I/O | +| | Scheduler `__ | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | seconds | ++-------------------+-------------------------------------------------+ +| Range | 1 to INT_MAX | ++-------------------+-------------------------------------------------+ +| Default | 5 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | all | ++-------------------+-------------------------------------------------+ + +zfs_vdev_aggregation_limit +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To reduce IOPs, small, adjacent I/Os can be aggregated (coalesced) into +a large I/O. For reads, aggregations occur across small adjacency gaps. +For writes, aggregation can occur at the ZFS or disk level. +``zfs_vdev_aggregation_limit`` is the upper bound on the size of the +larger, aggregated I/O. + +Setting ``zfs_vdev_aggregation_limit = 0`` effectively disables +aggregation by ZFS. However, the block device scheduler can still merge +(aggregate) I/Os. Also, many devices, such as modern HDDs, contain +schedulers that can aggregate I/Os. + +In general, I/O aggregation can improve performance for devices, such as +HDDs, where ordering I/O operations for contiguous LBAs is a benefit. +For random access devices, such as SSDs, aggregation might not improve +performance relative to the CPU cycles needed to aggregate. 
For devices
that represent themselves as having no rotation, the
`zfs_vdev_aggregation_limit_non_rotating <#zfs-vdev-aggregation-limit-non-rotating>`__
parameter is used instead of ``zfs_vdev_aggregation_limit``

+----------------------------+----------------------------------------+
| zfs_vdev_aggregation_limit | Notes                                  |
+============================+========================================+
| Tags                       | `vdev <#vdev>`__,                      |
|                            | `ZIO_scheduler <#zio-scheduler>`__     |
+----------------------------+----------------------------------------+
| When to change             | If the workload does not benefit from  |
|                            | aggregation, the                       |
|                            | ``zfs_vdev_aggregation_limit`` can be  |
|                            | reduced to avoid aggregation attempts  |
+----------------------------+----------------------------------------+
| Data Type                  | int                                    |
+----------------------------+----------------------------------------+
| Units                      | bytes                                  |
+----------------------------+----------------------------------------+
| Range                      | 0 to 1,048,576 (default) or 16,777,216 |
|                            | (if ``zpool`` ``large_blocks`` feature |
|                            | is enabled)                            |
+----------------------------+----------------------------------------+
| Default                    | 1,048,576 (131,072 in some releases)   |
+----------------------------+----------------------------------------+
| Change                     | Dynamic                                |
+----------------------------+----------------------------------------+
| Versions Affected          | all                                    |
+----------------------------+----------------------------------------+

zfs_vdev_cache_size
~~~~~~~~~~~~~~~~~~~

``zfs_vdev_cache_size`` is the size, in bytes, of the read-ahead cache
kept for each vdev.

Note: with the current ZFS code, the vdev cache is not helpful and in
some cases actually harmful. Thus it is disabled by default by setting
``zfs_vdev_cache_size = 0``

+---------------------+-----------------------------------------------+
| zfs_vdev_cache_size | Notes                                         |
+=====================+===============================================+
| Tags                | `vdev <#vdev>`__,                             |
|                     | `vdev_cache <#vdev-cache>`__                  |
+---------------------+-----------------------------------------------+
| When to change      | Do not change                                 |
+---------------------+-----------------------------------------------+
| Data Type           | int                                           |
+---------------------+-----------------------------------------------+
| Units               | bytes                                         |
+---------------------+-----------------------------------------------+
| Range               | 0 to MAX_INT                                  |
+---------------------+-----------------------------------------------+
| Default             | 0 (vdev cache is disabled)                    |
+---------------------+-----------------------------------------------+
| Change              | Dynamic                                       |
+---------------------+-----------------------------------------------+
| Verification        | vdev cache statistics are available in the    |
|                     | ``/proc/spl/kstat/zfs/vdev_cache_stats`` file |
+---------------------+-----------------------------------------------+
| Versions Affected   | all                                           |
+---------------------+-----------------------------------------------+

zfs_vdev_cache_bshift
~~~~~~~~~~~~~~~~~~~~~

Note: with the current ZFS code, the vdev cache is not helpful and in
some cases actually harmful. Thus it is disabled by setting the
`zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ to zero. This related
tunable is, by default, inoperative.

All read I/Os smaller than `zfs_vdev_cache_max <#zfs-vdev-cache-max>`__
are turned into (``1 << zfs_vdev_cache_bshift``) byte reads by the vdev
cache. At most `zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ bytes will
be kept in each vdev's cache.

===================== ==============================================
zfs_vdev_cache_bshift Notes
===================== ==============================================
Tags                  `vdev <#vdev>`__, `vdev_cache <#vdev-cache>`__
When to change        Do not change
Data Type             int
Units                 shift
Range                 1 to INT_MAX
Default               16 (65,536 bytes)
Change                Dynamic
Versions Affected     all
===================== ==============================================

zfs_vdev_cache_max
~~~~~~~~~~~~~~~~~~

Note: with the current ZFS code, the vdev cache is not helpful and in
some cases actually harmful. Thus it is disabled by setting the
`zfs_vdev_cache_size <#zfs-vdev-cache-size>`__ to zero. This related
tunable is, by default, inoperative.
+ +All read I/Os smaller than zfs_vdev_cache_max will be turned into +(``1 <<``\ `zfs_vdev_cache_bshift <#zfs-vdev-cache-bshift>`__ byte reads +by the vdev cache. At most ``zfs_vdev_cache_size`` bytes will be kept in +each vdev's cache. + +================== ============================================== +zfs_vdev_cache_max Notes +================== ============================================== +Tags `vdev <#vdev>`__, `vdev_cache <#vdev-cache>`__ +When to change Do not change +Data Type int +Units bytes +Range 512 to INT_MAX +Default 16,384 (16 KiB) +Change Dynamic +Versions Affected all +================== ============================================== + +zfs_vdev_mirror_rotating_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +``zfs_vdev_mirror_rotating_inc`` for rotating media and +`zfs_vdev_mirror_non_rotating_inc <#zfs-vdev-mirror-non-rotating-inc>`__ +for nonrotating media. + +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++------------------------------+--------------------------------------+ +| zfs_vdev_mirror_rotating_inc | Notes | ++==============================+======================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, `HDD <#hdd>`__ | ++------------------------------+--------------------------------------+ +| When to change | Increasing for mirrors with both | +| | rotating and nonrotating media more | +| | strongly favors the nonrotating | +| | media | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | scalar | ++------------------------------+--------------------------------------+ +| Range | 0 to MAX_INT | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------------+--------------------------------------+ + +zfs_vdev_mirror_non_rotating_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The mirror read algorithm uses current load and an incremental weighting +value to determine the vdev to service a read operation. Lower values +determine the preferred vdev. The weighting value is +`zfs_vdev_mirror_rotating_inc <#zfs-vdev-mirror-rotating-inc>`__ for +rotating media and ``zfs_vdev_mirror_non_rotating_inc`` for nonrotating +media. 
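Whether a given mirror member is weighted as rotating or nonrotating
follows the kernel's classification of the underlying device; a minimal
sketch of checking it, using ``sda`` and ``nvme0n1`` purely as example
device names::

   # 1 = rotating media (HDD), 0 = nonrotating media (SSD/NVMe)
   cat /sys/block/sda/queue/rotational
   cat /sys/block/nvme0n1/queue/rotational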
+ +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| zfs_vdev_mirror_non_rotating_inc | Notes | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `SSD <#ssd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | scalar | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 0 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_mirror_rotating_seek_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For rotating media in a mirror, if the next I/O offset is within +`zfs_vdev_mirror_rotating_seek_offset <#zfs-vdev-mirror-rotating-seek-offset>`__ +then the weighting factor is incremented by +(``zfs_vdev_mirror_rotating_seek_inc / 2``). Otherwise the weighting +factor is increased by ``zfs_vdev_mirror_rotating_seek_inc``. This +algorithm prefers rotating media with lower seek distance. + +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| z | Notes | +| fs_vdev_mirror_rotating_seek_inc | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `HDD <#hdd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | scalar | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 5 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_mirror_rotating_seek_offset +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For rotating media in a mirror, if the next I/O offset is within +``zfs_vdev_mirror_rotating_seek_offset`` then the weighting factor is +incremented by +(`zfs_vdev_mirror_rotating_seek_inc <#zfs-vdev-mirror-rotating-seek-inc>`__\ ``/ 2``). +Otherwise the weighting factor is increased by +``zfs_vdev_mirror_rotating_seek_inc``. This algorithm prefers rotating +media with lower seek distance. 
+ +Verify the rotational setting described by a block device in sysfs by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| zfs_vdev_mirror_rotating_seek_off| Notes | +| set | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `HDD <#hdd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | bytes | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 1,048,576 (1 MiB) | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_mirror_non_rotating_seek_inc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +For nonrotating media in a mirror, a seek penalty is applied as +sequential I/O's can be aggregated into fewer operations, avoiding +unnecessary per-command overhead, often boosting performance. + +Verify the rotational setting described by a block device in SysFS by +observing ``/sys/block/DISK_NAME/queue/rotational`` + ++----------------------------------+----------------------------------+ +| zfs_v | Notes | +| dev_mirror_non_rotating_seek_inc | | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `mirror <#mirror>`__, | +| | `SSD <#ssd>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | scalar | ++----------------------------------+----------------------------------+ +| Range | 0 to INT_MAX | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------------+----------------------------------+ + +zfs_vdev_read_gap_limit +~~~~~~~~~~~~~~~~~~~~~~~ + +To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into +into a large I/O. 
For reads, aggregations occur across small adjacency +gaps where the gap is less than ``zfs_vdev_read_gap_limit`` + ++-------------------------+-------------------------------------------+ +| zfs_vdev_read_gap_limit | Notes | ++=========================+===========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------------+-------------------------------------------+ +| When to change | TBD | ++-------------------------+-------------------------------------------+ +| Data Type | int | ++-------------------------+-------------------------------------------+ +| Units | bytes | ++-------------------------+-------------------------------------------+ +| Range | 0 to INT_MAX | ++-------------------------+-------------------------------------------+ +| Default | 32,768 (32 KiB) | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | all | ++-------------------------+-------------------------------------------+ + +zfs_vdev_write_gap_limit +~~~~~~~~~~~~~~~~~~~~~~~~ + +To reduce IOPs, small, adjacent I/Os are aggregated (coalesced) into +into a large I/O. For writes, aggregations occur across small adjacency +gaps where the gap is less than ``zfs_vdev_write_gap_limit`` + ++--------------------------+------------------------------------------+ +| zfs_vdev_write_gap_limit | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | TBD | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | bytes | ++--------------------------+------------------------------------------+ +| Range | 0 to INT_MAX | ++--------------------------+------------------------------------------+ +| Default | 4,096 (4 KiB) | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | all | ++--------------------------+------------------------------------------+ + +zfs_vdev_scheduler +~~~~~~~~~~~~~~~~~~ + +Prior to version 0.8.3, when the pool is imported, for whole disk vdevs, +the block device I/O scheduler is set to ``zfs_vdev_scheduler``. +The most common schedulers are: *noop*, *cfq*, *bfq*, and *deadline*. +In some cases, the scheduler is not changeable using this method. +Known schedulers that cannot be changed are: *scsi_mq* and *none*. +In these cases, the scheduler is unchanged and an error message can be +reported to logs. + +The parameter was disabled in v0.8.3 but left in place to avoid breaking +loading of the ``zfs`` module if the parameter is specified in modprobe +configuration on existing installations. It is recommended that users +leave the default scheduler "`unless you're encountering a specific +problem, or have clearly measured a performance improvement for your +workload +`__," +and if so, to change it via the ``/sys/block//queue/scheduler`` +interface and/or udev rule. 
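If a different block device scheduler genuinely helps a workload, it can
be selected outside of ZFS; a minimal sketch, assuming a whole-disk vdev
on ``sda`` (the device name and rule file name are examples only)::

   # list available schedulers; the active one is shown in brackets
   cat /sys/block/sda/queue/scheduler

   # change it for the current boot only
   echo none > /sys/block/sda/queue/scheduler

   # or persist the choice with a udev rule, e.g. in
   # /etc/udev/rules.d/66-io-scheduler.rules:
   #   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"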
+ ++--------------------+------------------------------------------------+ +| zfs_vdev_scheduler | Notes | ++====================+================================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------+------------------------------------------------+ +| When to change | since ZFS has its own I/O scheduler, using a | +| | simple scheduler can result in more consistent | +| | performance | ++--------------------+------------------------------------------------+ +| Data Type | string | ++--------------------+------------------------------------------------+ +| Range | expected: *noop*, *cfq*, *bfq*, and *deadline* | ++--------------------+------------------------------------------------+ +| Default | *noop* | ++--------------------+------------------------------------------------+ +| Change | Dynamic, but takes effect upon pool creation | +| | or import | ++--------------------+------------------------------------------------+ +| Versions Affected | all, but no effect since v0.8.3 | ++--------------------+------------------------------------------------+ + +zfs_vdev_raidz_impl +~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_raidz_impl`` overrides the raidz parity algorithm. By +default, the algorithm is selected at zfs module load time by the +results of a microbenchmark of algorithms based on the current hardware. + +Once the module is loaded, the content of +``/sys/module/zfs/parameters/zfs_vdev_raidz_impl`` shows available +options with the currently selected enclosed in ``[]``. Details of the +results of the microbenchmark are observable in the +``/proc/spl/kstat/zfs/vdev_raidz_bench`` file. + ++----------------+----------------------+-------------------------+ +| algorithm | architecture | description | ++================+======================+=========================+ +| fastest | all | fastest implementation | +| | | selected by | +| | | microbenchmark | ++----------------+----------------------+-------------------------+ +| original | all | original raidz | +| | | implementation | ++----------------+----------------------+-------------------------+ +| scalar | all | scalar raidz | +| | | implementation | ++----------------+----------------------+-------------------------+ +| sse2 | 64-bit x86 | uses SSE2 instruction | +| | | set | ++----------------+----------------------+-------------------------+ +| ssse3 | 64-bit x86 | uses SSSE3 instruction | +| | | set | ++----------------+----------------------+-------------------------+ +| avx2 | 64-bit x86 | uses AVX2 instruction | +| | | set | ++----------------+----------------------+-------------------------+ +| avx512f | 64-bit x86 | uses AVX512F | +| | | instruction set | ++----------------+----------------------+-------------------------+ +| avx512bw | 64-bit x86 | uses AVX512F & AVX512BW | +| | | instruction sets | ++----------------+----------------------+-------------------------+ +| aarch64_neon | aarch64/64 bit ARMv8 | uses NEON | ++----------------+----------------------+-------------------------+ +| aarch64_neonx2 | aarch64/64 bit ARMv8 | uses NEON with more | +| | | unrolling | ++----------------+----------------------+-------------------------+ + +=================== ==================================================== +zfs_vdev_raidz_impl Notes +=================== ==================================================== +Tags `CPU <#cpu>`__, `raidz <#raidz>`__, `vdev <#vdev>`__ +When to change testing raidz algorithms +Data Type string +Range see table above +Default 
*fastest* +Change Dynamic +Versions Affected v0.7.0 and later +=================== ==================================================== + +zfs_zevent_cols +~~~~~~~~~~~~~~~ + +``zfs_zevent_cols`` is a soft wrap limit in columns (characters) for ZFS +events logged to the console. + +================= ========================== +zfs_zevent_cols Notes +================= ========================== +Tags `debug <#debug>`__ +When to change if 80 columns isn't enough +Data Type int +Units characters +Range 1 to INT_MAX +Default 80 +Change Dynamic +Versions Affected all +================= ========================== + +zfs_zevent_console +~~~~~~~~~~~~~~~~~~ + +If ``zfs_zevent_console`` is true (1), then ZFS events are logged to the +console. + +More logging and log filtering capabilities are provided by ``zed`` + +================== ========================================= +zfs_zevent_console Notes +================== ========================================= +Tags `debug <#debug>`__ +When to change to log ZFS events to the console +Data Type boolean +Range 0=do not log to console, 1=log to console +Default 0 +Change Dynamic +Versions Affected all +================== ========================================= + +zfs_zevent_len_max +~~~~~~~~~~~~~~~~~~ + +``zfs_zevent_len_max`` is the maximum ZFS event queue length. A value of +0 results in a calculated value (16 \* number of CPUs) with a minimum of +64. Events in the queue can be viewed with the ``zpool events`` command. + +================== ================================ +zfs_zevent_len_max Notes +================== ================================ +Tags `debug <#debug>`__ +When to change increase to see more ZFS events +Data Type int +Units events +Range 0 to INT_MAX +Default 0 (calculate as described above) +Change Dynamic +Versions Affected all +================== ================================ + +zfs_zil_clean_taskq_maxalloc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(``dp_zil_clean_taskq``). +`zfs_zil_clean_taskq_minalloc <#zfs-zil-clean-taskq-minalloc>`__ is the +minimum and ``zfs_zil_clean_taskq_maxalloc`` is the maximum number of +cached taskq entries for ``dp_zil_clean_taskq``. The actual number of +taskq entries dynamically varies between these values. + +When ``zfs_zil_clean_taskq_maxalloc`` is exceeded transaction records +(itxs) are cleaned synchronously with possible negative impact to the +performance of SPA sync. + +Ideally taskq entries are pre-allocated prior to being needed by +``zil_clean()``, thus avoiding dynamic allocation of new taskq entries. 
+ ++------------------------------+--------------------------------------+ +| zfs_zil_clean_taskq_maxalloc | Notes | ++==============================+======================================+ +| Tags | `ZIL <#zil>`__ | ++------------------------------+--------------------------------------+ +| When to change | If more ``dp_zil_clean_taskq`` | +| | entries are needed to prevent the | +| | itxs from being synchronously | +| | cleaned | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | ``dp_zil_clean_taskq`` taskq entries | ++------------------------------+--------------------------------------+ +| Range | `zfs_zil_clean_taskq_minallo | +| | c <#zfs-zil-clean-taskq-minalloc>`__ | +| | to ``INT_MAX`` | ++------------------------------+--------------------------------------+ +| Default | 1,048,576 | ++------------------------------+--------------------------------------+ +| Change | Dynamic, takes effect per-pool when | +| | the pool is imported | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.8.0 | ++------------------------------+--------------------------------------+ + +zfs_zil_clean_taskq_minalloc +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During a SPA sync, intent log transaction groups (itxg) are cleaned. The +cleaning work is dispatched to the DSL pool ZIL clean taskq +(``dp_zil_clean_taskq``). ``zfs_zil_clean_taskq_minalloc`` is the +minimum and +`zfs_zil_clean_taskq_maxalloc <#zfs-zil-clean-taskq-maxalloc>`__ is the +maximum number of cached taskq entries for ``dp_zil_clean_taskq``. The +actual number of taskq entries dynamically varies between these values. + +``zfs_zil_clean_taskq_minalloc`` is the minimum number of ZIL +transaction records (itxs). + +Ideally taskq entries are pre-allocated prior to being needed by +``zil_clean()``, thus avoiding dynamic allocation of new taskq entries. + ++------------------------------+--------------------------------------+ +| zfs_zil_clean_taskq_minalloc | Notes | ++==============================+======================================+ +| Tags | `ZIL <#zil>`__ | ++------------------------------+--------------------------------------+ +| When to change | TBD | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | dp_zil_clean_taskq taskq entries | ++------------------------------+--------------------------------------+ +| Range | 1 to | +| | `zfs_zil_clean_taskq_maxallo | +| | c <#zfs-zil-clean-taskq-maxalloc>`__ | ++------------------------------+--------------------------------------+ +| Default | 1,024 | ++------------------------------+--------------------------------------+ +| Change | Dynamic, takes effect per-pool when | +| | the pool is imported | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.8.0 | ++------------------------------+--------------------------------------+ + +zfs_zil_clean_taskq_nthr_pct +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_zil_clean_taskq_nthr_pct`` controls the number of threads used by +the DSL pool ZIL clean taskq (``dp_zil_clean_taskq``). The default value +of 100% will create a maximum of one thread per cpu. 
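Because the thread count is applied per pool at import time, a changed
value is only picked up after the pool is re-imported; a minimal sketch,
assuming a test pool named ``tank``::

   # testing only: size dp_zil_clean_taskq at 50% of CPUs
   echo 50 > /sys/module/zfs/parameters/zfs_zil_clean_taskq_nthr_pct

   zpool export tank
   zpool import tank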
+ ++------------------------------+--------------------------------------+ +| zfs_zil_clean_taskq_nthr_pct | Notes | ++==============================+======================================+ +| Tags | `taskq <#taskq>`__, `ZIL <#zil>`__ | ++------------------------------+--------------------------------------+ +| When to change | Testing ZIL clean and SPA sync | +| | performance | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | percent of number of CPUs | ++------------------------------+--------------------------------------+ +| Range | 1 to 100 | ++------------------------------+--------------------------------------+ +| Default | 100 | ++------------------------------+--------------------------------------+ +| Change | Dynamic, takes effect per-pool when | +| | the pool is imported | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.8.0 | ++------------------------------+--------------------------------------+ + +zil_replay_disable +~~~~~~~~~~~~~~~~~~ + +If ``zil_replay_disable = 1``, then when a volume or filesystem is +brought online, no attempt to replay the ZIL is made and any existing +ZIL is destroyed. This can result in loss of data without notice. + +================== ================================== +zil_replay_disable Notes +================== ================================== +Tags `debug <#debug>`__, `ZIL <#zil>`__ +When to change Do not change +Data Type boolean +Range 0=replay ZIL, 1=destroy ZIL +Default 0 +Change Dynamic +Versions Affected v0.6.5 +================== ================================== + +zil_slog_bulk +~~~~~~~~~~~~~ + +``zil_slog_bulk`` is the log device write size limit per commit executed +with synchronous priority. Writes below ``zil_slog_bulk`` are executed +with synchronous priority. Writes above ``zil_slog_bulk`` are executed +with lower (asynchronous) priority to reduct potential log device abuse +by a single active ZIL writer. + ++-------------------+-------------------------------------------------+ +| zil_slog_bulk | Notes | ++===================+=================================================+ +| Tags | `ZIL <#zil>`__ | ++-------------------+-------------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Units | bytes | ++-------------------+-------------------------------------------------+ +| Range | 0 to ULONG_MAX | ++-------------------+-------------------------------------------------+ +| Default | 786,432 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.8.0 | ++-------------------+-------------------------------------------------+ + +zio_delay_max +~~~~~~~~~~~~~ + +If a ZFS I/O operation takes more than ``zio_delay_max`` milliseconds to +complete, then an event is logged. Note that this is only a logging +facility, not a timeout on operations. 
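These events can be observed as they occur; a minimal sketch (the event
class shown is typical for delayed I/Os, but verify the class names
reported on your release)::

   # follow new events with full details
   zpool events -vf

   # delayed I/Os are normally posted with a class such as
   #   ereport.fs.zfs.delay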
See also ``zpool events`` + +================= ======================= +zio_delay_max Notes +================= ======================= +Tags `debug <#debug>`__ +When to change when debugging slow I/O +Data Type int +Units milliseconds +Range 1 to INT_MAX +Default 30,000 (30 seconds) +Change Dynamic +Versions Affected all +================= ======================= + +zio_dva_throttle_enabled +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zio_dva_throttle_enabled`` controls throttling of block allocations in +the ZFS I/O (ZIO) pipeline. When enabled, the maximum number of pending +allocations per top-level vdev is limited by +`zfs_vdev_queue_depth_pct <#zfs-vdev-queue-depth-pct>`__ + ++--------------------------+------------------------------------------+ +| zio_dva_throttle_enabled | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | Testing ZIO block allocation algorithms | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=do not throttle ZIO block allocations, | +| | 1=throttle ZIO block allocations | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++--------------------------+------------------------------------------+ + +zio_requeue_io_start_cut_in_line +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zio_requeue_io_start_cut_in_line`` controls prioritization of a +re-queued ZFS I/O (ZIO) in the ZIO pipeline by the ZIO taskq. + ++----------------------------------+----------------------------------+ +| zio_requeue_io_start_cut_in_line | Notes | ++==================================+==================================+ +| Tags | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | Do not change | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=don't prioritize re-queued | +| | I/Os, 1=prioritize re-queued | +| | I/Os | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | all | ++----------------------------------+----------------------------------+ + +zio_taskq_batch_pct +~~~~~~~~~~~~~~~~~~~ + +``zio_taskq_batch_pct`` sets the number of I/O worker threads as a +percentage of online CPUs. These workers threads are responsible for IO +work such as compression and checksum calculations. + +Each block is handled by one worker thread, so maximum overall worker +thread throughput is function of the number of concurrent blocks being +processed, the number of worker threads, and the algorithms used. The +default value of 75% is chosen to avoid using all CPUs which can result +in latency issues and inconsistent application performance, especially +when high compression is enabled. 
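Because the value is only read when the module loads, it is normally set
in modprobe configuration and then verified by counting the worker
threads; a minimal sketch (the file name is an example)::

   # set in modprobe configuration, e.g. /etc/modprobe.d/zfs.conf:
   #   options zfs zio_taskq_batch_pct=50

   # after the module is reloaded, count the write-issue worker threads
   ps -e | grep -c z_wr_iss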
+ +The taskq batch processes are: + ++-------------+--------------+---------------------------------------+ +| taskq | process name | Notes | ++=============+==============+=======================================+ +| Write issue | z_wr_iss[_#] | Can be CPU intensive, runs at lower | +| | | priority than other taskqs | ++-------------+--------------+---------------------------------------+ + +Other taskqs exist, but most have fixed numbers of instances and +therefore require recompiling the kernel module to adjust. + ++---------------------+-----------------------------------------------+ +| zio_taskq_batch_pct | Notes | ++=====================+===============================================+ +| Tags | `taskq <#taskq>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++---------------------+-----------------------------------------------+ +| When to change | To tune parallelism in multiprocessor systems | ++---------------------+-----------------------------------------------+ +| Data Type | int | ++---------------------+-----------------------------------------------+ +| Units | percent of number of CPUs | ++---------------------+-----------------------------------------------+ +| Range | 1 to 100, fractional number of CPUs are | +| | rounded down | ++---------------------+-----------------------------------------------+ +| Default | 75 | ++---------------------+-----------------------------------------------+ +| Change | Prior to zfs module load | ++---------------------+-----------------------------------------------+ +| Verification | The number of taskqs for each batch group can | +| | be observed using ``ps`` and counting the | +| | threads | ++---------------------+-----------------------------------------------+ +| Versions Affected | TBD | ++---------------------+-----------------------------------------------+ + +zvol_inhibit_dev +~~~~~~~~~~~~~~~~ + +``zvol_inhibit_dev`` controls the creation of volume device nodes upon +pool import. + ++-------------------+-------------------------------------------------+ +| zvol_inhibit_dev | Notes | ++===================+=================================================+ +| Tags | `import <#import>`__, `volume <#volume>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Inhibiting can slightly improve startup time on | +| | systems with a very large number of volumes | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=create volume device nodes, 1=do not create | +| | volume device nodes | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic, takes effect per-pool when the pool is | +| | imported | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 and later | ++-------------------+-------------------------------------------------+ + +zvol_major +~~~~~~~~~~ + +``zvol_major`` is the default major number for volume devices. 
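The major number is visible on the volume device nodes themselves; a
minimal sketch, assuming at least one volume is exposed as ``/dev/zd0``
(illustrative output)::

   ls -l /dev/zd0
   # brw-rw---- 1 root disk 230, 0 Jan  1 12:00 /dev/zd0   <- major 230, minor 0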
+-------------------+-------------------------------------------------+
| zvol_major        | Notes                                           |
+===================+=================================================+
| Tags              | `volume <#volume>`__                            |
+-------------------+-------------------------------------------------+
| When to change    | Do not change                                   |
+-------------------+-------------------------------------------------+
| Data Type         | uint                                            |
+-------------------+-------------------------------------------------+
| Default           | 230                                             |
+-------------------+-------------------------------------------------+
| Change            | Dynamic, takes effect per-pool when the pool is |
|                   | imported or volumes are created                 |
+-------------------+-------------------------------------------------+
| Versions Affected | all                                             |
+-------------------+-------------------------------------------------+

zvol_max_discard_blocks
~~~~~~~~~~~~~~~~~~~~~~~

Discard (aka ATA TRIM or SCSI UNMAP) operations on volumes are processed
in batches of ``zvol_max_discard_blocks`` blocks. The block size is
determined by the ``volblocksize`` property of a volume.

Some applications, such as ``mkfs``, discard the whole volume at once
using the maximum possible discard size. As a result, many gigabytes of
discard requests are not uncommon. Unfortunately, if a large amount of
data is already allocated in the volume, ZFS can be quite slow to
process discard requests. This is especially true if the volblocksize is
small (e.g. the default of 8 KiB). As a result, very large discard
requests can take a very long time (perhaps minutes under heavy load) to
complete. This can cause a number of problems, most notably if the
volume is accessed remotely (e.g. via iSCSI), in which case the client
has a high probability of timing out on the request.

``zvol_max_discard_blocks`` limits the discard workload by setting the
``discard_max_bytes`` and ``discard_max_hw_bytes`` values for the
volume's block device in sysfs. These values are readable by volume
device consumers.
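The resulting limit can be read back from the volume's block device
queue; a minimal sketch, assuming the volume is exposed as ``/dev/zd0``
and uses the default 8 KiB ``volblocksize`` (16,384 blocks x 8 KiB =
128 MiB)::

   # maximum discard request size advertised to consumers of the volume
   cat /sys/block/zd0/queue/discard_max_bytes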
+-------------------------+-------------------------------------------+
| zvol_max_discard_blocks | Notes                                     |
+=========================+===========================================+
| Tags                    | `discard <#discard>`__,                   |
|                         | `volume <#volume>`__                      |
+-------------------------+-------------------------------------------+
| When to change          | if volume discard activity severely       |
|                         | impacts other workloads                   |
+-------------------------+-------------------------------------------+
| Data Type               | ulong                                     |
+-------------------------+-------------------------------------------+
| Units                   | number of blocks of size volblocksize     |
+-------------------------+-------------------------------------------+
| Range                   | 0 to ULONG_MAX                            |
+-------------------------+-------------------------------------------+
| Default                 | 16,384                                    |
+-------------------------+-------------------------------------------+
| Change                  | Dynamic, takes effect per-pool when the   |
|                         | pool is imported or volumes are created   |
+-------------------------+-------------------------------------------+
| Verification            | Observe value of ``/sys/block/            |
|                         | VOLUME_INSTANCE/queue/discard_max_bytes`` |
+-------------------------+-------------------------------------------+
| Versions Affected       | v0.6.0 and later                          |
+-------------------------+-------------------------------------------+

zvol_prefetch_bytes
~~~~~~~~~~~~~~~~~~~

When importing a pool with volumes or adding a volume to a pool,
``zvol_prefetch_bytes`` are prefetched from the start and end of the
volume. Prefetching these regions of the volume is desirable because
they are likely to be accessed immediately by ``blkid(8)`` or by the
kernel scanning for a partition table.

=================== ==============================================
zvol_prefetch_bytes Notes
=================== ==============================================
Tags                `prefetch <#prefetch>`__, `volume <#volume>`__
When to change      TBD
Data Type           uint
Units               bytes
Range               0 to UINT_MAX
Default             131,072
Change              Dynamic
Versions Affected   v0.6.5 and later
=================== ==============================================

zvol_request_sync
~~~~~~~~~~~~~~~~~

When ``zvol_request_sync`` is set to 1, I/O requests for a volume are
submitted synchronously. This effectively limits the queue depth to 1
for each I/O submitter. When set to 0, requests are handled
asynchronously by the "zvol" thread pool.
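A minimal sketch of checking and, for testing, toggling this behaviour
at runtime::

   # 0 = asynchronous (default), 1 = synchronous (queue depth 1 per submitter)
   cat /sys/module/zfs/parameters/zvol_request_sync
   echo 1 > /sys/module/zfs/parameters/zvol_request_sync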
+ +See also `zvol_threads <#zvol-threads>`__ + ++-------------------+-------------------------------------------------+ +| zvol_request_sync | Notes | ++===================+=================================================+ +| Tags | `volume <#volume>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Testing concurrent volume requests | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=do concurrent (async) volume requests, 1=do | +| | sync volume requests | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7.2 and later | ++-------------------+-------------------------------------------------+ + +zvol_threads +~~~~~~~~~~~~ + +zvol_threads controls the maximum number of threads handling concurrent +volume I/O requests. + +The default of 32 threads behaves similarly to a disk with a 32-entry +command queue. The actual number of threads required can vary widely by +workload and available CPUs. If lock analysis shows high contention in +the zvol taskq threads, then reducing the number of zvol_threads or +workload queue depth can improve overall throughput. + +See also `zvol_request_sync <#zvol-request-sync>`__ + ++-------------------+-------------------------------------------------+ +| zvol_threads | Notes | ++===================+=================================================+ +| Tags | `volume <#volume>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Matching the number of concurrent volume | +| | requests with workload requirements can improve | +| | concurrency | ++-------------------+-------------------------------------------------+ +| Data Type | uint | ++-------------------+-------------------------------------------------+ +| Units | threads | ++-------------------+-------------------------------------------------+ +| Range | 1 to UINT_MAX | ++-------------------+-------------------------------------------------+ +| Default | 32 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic, takes effect per-volume when the pool | +| | is imported or volumes are created | ++-------------------+-------------------------------------------------+ +| Verification | ``iostat`` using ``avgqu-sz`` or ``aqu-sz`` | +| | results | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-------------------+-------------------------------------------------+ + +zvol_volmode +~~~~~~~~~~~~ + +``zvol_volmode`` defines volume block devices behaviour when the +``volmode`` property is set to ``default`` + +Note: to maintain compatibility with ZFS on BSD, "geom" is synonymous +with "full" + +===== ======= =========================================== +value volmode Description +===== ======= =========================================== +1 full legacy fully functional behaviour (default) +2 dev hide partitions on volume block devices +3 none not exposing volumes outside ZFS +===== ======= =========================================== + +================= ==================== +zvol_volmode Notes +================= ==================== +Tags `volume 
<#volume>`__ +When to change TBD +Data Type enum +Range 1, 2, or 3 +Default 1 +Change Dynamic +Versions Affected v0.7.0 and later +================= ==================== + +zfs_qat_disable +~~~~~~~~~~~~~~~ + +``zfs_qat_disable`` controls the Intel QuickAssist Technology (QAT) +driver providing hardware acceleration for gzip compression. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT. + ++-------------------+-------------------------------------------------+ +| zfs_qat_disable | Notes | ++===================+=================================================+ +| Tags | `compression <#compression>`__, `QAT <#qat>`__ | ++-------------------+-------------------------------------------------+ +| When to change | Testing QAT functionality | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=use QAT acceleration if available, 1=do not | +| | use QAT acceleration | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7, renamed to | +| | `zfs_qat_ | +| | compress_disable <#zfs-qat-compress-disable>`__ | +| | in v0.8 | ++-------------------+-------------------------------------------------+ + +zfs_qat_checksum_disable +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_qat_checksum_disable`` controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for checksums. When the QAT +hardware is present and qat driver available, the default behaviour is +to enable QAT. + ++--------------------------+------------------------------------------+ +| zfs_qat_checksum_disable | Notes | ++==========================+==========================================+ +| Tags | `checksum <#checksum>`__, `QAT <#qat>`__ | ++--------------------------+------------------------------------------+ +| When to change | Testing QAT functionality | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=use QAT acceleration if available, | +| | 1=do not use QAT acceleration | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.8.0 | ++--------------------------+------------------------------------------+ + +zfs_qat_compress_disable +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_qat_compress_disable`` controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for gzip compression. When +the QAT hardware is present and qat driver available, the default +behaviour is to enable QAT. 
+ ++--------------------------+------------------------------------------+ +| zfs_qat_compress_disable | Notes | ++==========================+==========================================+ +| Tags | `compression <#compression>`__, | +| | `QAT <#qat>`__ | ++--------------------------+------------------------------------------+ +| When to change | Testing QAT functionality | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0=use QAT acceleration if available, | +| | 1=do not use QAT acceleration | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | v0.8.0 | ++--------------------------+------------------------------------------+ + +zfs_qat_encrypt_disable +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_qat_encrypt_disable`` controls the Intel QuickAssist Technology +(QAT) driver providing hardware acceleration for encryption. When the +QAT hardware is present and qat driver available, the default behaviour +is to enable QAT. + ++-------------------------+-------------------------------------------+ +| zfs_qat_encrypt_disable | Notes | ++=========================+===========================================+ +| Tags | `encryption <#encryption>`__, | +| | `QAT <#qat>`__ | ++-------------------------+-------------------------------------------+ +| When to change | Testing QAT functionality | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=use QAT acceleration if available, 1=do | +| | not use QAT acceleration | ++-------------------------+-------------------------------------------+ +| Default | 0 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | v0.8.0 | ++-------------------------+-------------------------------------------+ + +dbuf_cache_hiwater_pct +~~~~~~~~~~~~~~~~~~~~~~ + +The ``dbuf_cache_hiwater_pct`` and +`dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__ define the +operating range for dbuf cache evict thread. The hiwater and lowater are +percentages of the `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +value. When the dbuf cache grows above ((100% + +``dbuf_cache_hiwater_pct``) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread begins evicting. When the dbug cache falls below ((100% - +`dbuf_cache_lowater_pct <#dbuf-cache-lowater-pct>`__) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread stops evicting. + +====================== ============================= +dbuf_cache_hiwater_pct Notes +====================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type uint +Units percent +Range 0 to UINT_MAX +Default 10 +Change Dynamic +Versions Affected v0.7.0 and later +====================== ============================= + +dbuf_cache_lowater_pct +~~~~~~~~~~~~~~~~~~~~~~ + +The dbuf_cache_hiwater_pct and dbuf_cache_lowater_pct define the +operating range for dbuf cache evict thread. 
The hiwater and lowater are +percentages of the `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ +value. When the dbuf cache grows above ((100% + +`dbuf_cache_hiwater_pct <#dbuf-cache-hiwater-pct>`__) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread begins evicting. When the dbug cache falls below ((100% - +``dbuf_cache_lowater_pct``) \* +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__) then the dbuf cache +thread stops evicting. + +====================== ============================= +dbuf_cache_lowater_pct Notes +====================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type uint +Units percent +Range 0 to UINT_MAX +Default 10 +Change Dynamic +Versions Affected v0.7.0 and later +====================== ============================= + +dbuf_cache_max_bytes +~~~~~~~~~~~~~~~~~~~~ + +The dbuf cache maintains a list of dbufs that are not currently held but +have been recently released. These dbufs are not eligible for ARC +eviction until they are aged out of the dbuf cache. Dbufs are added to +the dbuf cache once the last hold is released. If a dbuf is later +accessed and still exists in the dbuf cache, then it will be removed +from the cache and later re-added to the head of the cache. Dbufs that +are aged out of the cache will be immediately destroyed and become +eligible for ARC eviction. + +The size of the dbuf cache is set by ``dbuf_cache_max_bytes``. The +actual size is dynamically adjusted to the minimum of current ARC target +size (``c``) >> `dbuf_cache_max_shift <#dbuf-cache-max-shift>`__ and the +default ``dbuf_cache_max_bytes`` + +==================== ============================= +dbuf_cache_max_bytes Notes +==================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type ulong +Units bytes +Range 16,777,216 to ULONG_MAX +Default 104,857,600 (100 MiB) +Change Dynamic +Versions Affected v0.7.0 and later +==================== ============================= + +dbuf_cache_max_shift +~~~~~~~~~~~~~~~~~~~~ + +The `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ minimum is the +lesser of `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ and the +current ARC target size (``c``) >> ``dbuf_cache_max_shift`` + +==================== ============================= +dbuf_cache_max_shift Notes +==================== ============================= +Tags `dbuf_cache <#dbuf-cache>`__ +When to change Testing dbuf cache algorithms +Data Type int +Units shift +Range 1 to 63 +Default 5 +Change Dynamic +Versions Affected v0.7.0 and later +==================== ============================= + +dmu_object_alloc_chunk_shift +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Each of the concurrent object allocators grabs +``2^dmu_object_alloc_chunk_shift`` dnode slots at a time. The default is +to grab 128 slots, or 4 blocks worth. This default value was +experimentally determined to be the lowest value that eliminates the +measurable effect of lock contention in the DMU object allocation code +path. 
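As a rough illustration of the default, assuming the standard 512-byte
dnode slot size and 16 KiB metadnode blocks::

   # 2^7 = 128 dnode slots per allocator grab, i.e. 4 blocks worth
   echo $(( (1 << 7) * 512 / 16384 ))    # prints 4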
+ ++------------------------------+--------------------------------------+ +| dmu_object_alloc_chunk_shift | Notes | ++==============================+======================================+ +| Tags | `allocation <#allocation>`__, | +| | `DMU <#dmu>`__ | ++------------------------------+--------------------------------------+ +| When to change | If the workload creates many files | +| | concurrently on a system with many | +| | CPUs, then increasing | +| | ``dmu_object_alloc_chunk_shift`` can | +| | decrease lock contention | ++------------------------------+--------------------------------------+ +| Data Type | int | ++------------------------------+--------------------------------------+ +| Units | shift | ++------------------------------+--------------------------------------+ +| Range | 7 to 9 | ++------------------------------+--------------------------------------+ +| Default | 7 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | v0.7.0 and later | ++------------------------------+--------------------------------------+ + +send_holes_without_birth_time +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Alias for `ignore_hole_birth <#ignore-hole-birth>`__ + +zfs_abd_scatter_enabled +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_abd_scatter_enabled`` controls the ARC Buffer Data (ABD) +scatter/gather feature. + +When disabled, the legacy behaviour is selected using linear buffers. +For linear buffers, all the data in the ABD is stored in one contiguous +buffer in memory (from a ``zio_[data_]buf_*`` kmem cache). + +When enabled (default), the data in the ABD is split into equal-sized +chunks (from the ``abd_chunk_cache`` kmem_cache), with pointers to the +chunks recorded in an array at the end of the ABD structure. This allows +more efficient memory allocation for buffers, especially when large +recordsizes are used. + ++-------------------------+-------------------------------------------+ +| zfs_abd_scatter_enabled | Notes | ++=========================+===========================================+ +| Tags | `ABD <#abd>`__, `memory <#memory>`__ | ++-------------------------+-------------------------------------------+ +| When to change | Testing ABD | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=use linear allocation only, 1=allow | +| | scatter/gather | ++-------------------------+-------------------------------------------+ +| Default | 1 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Verification | ABD statistics are observable in | +| | ``/proc/spl/kstat/zfs/abdstats``. Slab | +| | allocations are observable in | +| | ``/proc/slabinfo`` | ++-------------------------+-------------------------------------------+ +| Versions Affected | v0.7.0 and later | ++-------------------------+-------------------------------------------+ + +zfs_abd_scatter_max_order +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_abd_scatter_max_order`` sets the maximum order for physical page +allocation when ABD is enabled (see +`zfs_abd_scatter_enabled <#zfs-abd-scatter-enabled>`__) + +See also Buddy Memory Allocation in the Linux kernel documentation. 
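The order is in units of contiguous physical pages (order n covers 2^n
pages), so with 4 KiB pages the default of 10 permits allocations of up
to 4 MiB. Current behaviour can be checked through the module parameter
and the ABD kstats; a minimal sketch::

   cat /sys/module/zfs/parameters/zfs_abd_scatter_max_order
   cat /proc/spl/kstat/zfs/abdstats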
+ ++---------------------------+-----------------------------------------+ +| zfs_abd_scatter_max_order | Notes | ++===========================+=========================================+ +| Tags | `ABD <#abd>`__, `memory <#memory>`__ | ++---------------------------+-----------------------------------------+ +| When to change | Testing ABD features | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Units | orders | ++---------------------------+-----------------------------------------+ +| Range | 1 to 10 (upper limit is | +| | hardware-dependent) | ++---------------------------+-----------------------------------------+ +| Default | 10 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Verification | ABD statistics are observable in | +| | ``/proc/spl/kstat/zfs/abdstats`` | ++---------------------------+-----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++---------------------------+-----------------------------------------+ + +zfs_compressed_arc_enabled +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When compression is enabled for a dataset, later reads of the data can +store the blocks in ARC in their on-disk, compressed state. This can +increse the effective size of the ARC, as counted in blocks, and thus +improve the ARC hit ratio. + ++----------------------------+----------------------------------------+ +| zfs_compressed_arc_enabled | Notes | ++============================+========================================+ +| Tags | `ABD <#abd>`__, | +| | `compression <#compression>`__ | ++----------------------------+----------------------------------------+ +| When to change | Testing ARC compression feature | ++----------------------------+----------------------------------------+ +| Data Type | boolean | ++----------------------------+----------------------------------------+ +| Range | 0=compressed ARC disabled (legacy | +| | behaviour), 1=compress ARC data | ++----------------------------+----------------------------------------+ +| Default | 1 | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Verification | raw ARC statistics are observable in | +| | ``/proc/spl/kstat/zfs/arcstats`` and | +| | ARC hit ratios can be observed using | +| | ``arcstat`` | ++----------------------------+----------------------------------------+ +| Versions Affected | v0.7.0 and later | ++----------------------------+----------------------------------------+ + +zfs_key_max_salt_uses +~~~~~~~~~~~~~~~~~~~~~ + +For encrypted datasets, the salt is regenerated every +``zfs_key_max_salt_uses`` blocks. This automatic regeneration reduces +the probability of collisions due to the Birthday problem. When set to +the default (400,000,000) the probability of collision is approximately +1 in 1 trillion. 
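A brief sketch of inspecting the limit and, when testing the encryption
code, lowering it at runtime::

   cat /sys/module/zfs/parameters/zfs_key_max_salt_uses
   echo 100000000 > /sys/module/zfs/parameters/zfs_key_max_salt_uses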
===================== ============================
zfs_key_max_salt_uses Notes
===================== ============================
Tags                  `encryption <#encryption>`__
When to change        Testing encryption features
Data Type             ulong
Units                 blocks encrypted
Range                 1 to ULONG_MAX
Default               400,000,000
Change                Dynamic
Versions Affected     v0.8.0 and later
===================== ============================

zfs_object_mutex_size
~~~~~~~~~~~~~~~~~~~~~

``zfs_object_mutex_size`` facilitates resizing the per-dataset znode
mutex array for testing deadlocks therein.

===================== ===================================
zfs_object_mutex_size Notes
===================== ===================================
Tags                  `debug <#debug>`__
When to change        Testing znode mutex array deadlocks
Data Type             uint
Units                 orders
Range                 1 to UINT_MAX
Default               64
Change                Dynamic
Versions Affected     v0.7.0 and later
===================== ===================================

zfs_scan_strict_mem_lim
~~~~~~~~~~~~~~~~~~~~~~~

When scrubbing or resilvering, by default, ZFS checks to ensure it is
not over the hard memory limit before each txg commit. If finer-grained
control of this is needed, ``zfs_scan_strict_mem_lim`` can be set to 1
to enable checking before scanning each block.

+-------------------------+-------------------------------------------+
| zfs_scan_strict_mem_lim | Notes                                     |
+=========================+===========================================+
| Tags                    | `memory <#memory>`__,                     |
|                         | `resilver <#resilver>`__,                 |
|                         | `scrub <#scrub>`__                        |
+-------------------------+-------------------------------------------+
| When to change          | Do not change                             |
+-------------------------+-------------------------------------------+
| Data Type               | boolean                                   |
+-------------------------+-------------------------------------------+
| Range                   | 0=normal scan behaviour, 1=check hard     |
|                         | memory limit strictly during scan         |
+-------------------------+-------------------------------------------+
| Default                 | 0                                         |
+-------------------------+-------------------------------------------+
| Change                  | Dynamic                                   |
+-------------------------+-------------------------------------------+
| Versions Affected       | v0.8.0                                    |
+-------------------------+-------------------------------------------+

zfs_send_queue_length
~~~~~~~~~~~~~~~~~~~~~

``zfs_send_queue_length`` is the maximum number of bytes allowed in the
zfs send queue.
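To make a larger queue persistent across module reloads, the usual
approach is a module option; the sketch below assumes 16 MiB records are
in use and therefore sets the queue to 32 MiB, twice the largest block
size, per the Range note in the table below::

   # /etc/modprobe.d/zfs.conf
   options zfs zfs_send_queue_length=33554432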
+ ++-----------------------+---------------------------------------------+ +| zfs_send_queue_length | Notes | ++=======================+=============================================+ +| Tags | `send <#send>`__ | ++-----------------------+---------------------------------------------+ +| When to change | When using the largest recordsize or | +| | volblocksize (16 MiB), increasing can | +| | improve send efficiency | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | bytes | ++-----------------------+---------------------------------------------+ +| Range | Must be at least twice the maximum | +| | recordsize or volblocksize in use | ++-----------------------+---------------------------------------------+ +| Default | 16,777,216 bytes (16 MiB) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.8.1 | ++-----------------------+---------------------------------------------+ + +zfs_recv_queue_length +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_recv_queue_length`` is the maximum number of bytes allowed in the +zfs receive queue. + ++-----------------------+---------------------------------------------+ +| zfs_recv_queue_length | Notes | ++=======================+=============================================+ +| Tags | `receive <#receive>`__ | ++-----------------------+---------------------------------------------+ +| When to change | When using the largest recordsize or | +| | volblocksize (16 MiB), increasing can | +| | improve receive efficiency | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | bytes | ++-----------------------+---------------------------------------------+ +| Range | Must be at least twice the maximum | +| | recordsize or volblocksize in use | ++-----------------------+---------------------------------------------+ +| Default | 16,777,216 bytes (16 MiB) | ++-----------------------+---------------------------------------------+ +| Change | Dynamic | ++-----------------------+---------------------------------------------+ +| Versions Affected | v0.8.1 | ++-----------------------+---------------------------------------------+ + +zfs_arc_min_prefetch_lifespan +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``arc_min_prefetch_lifespan`` is the minimum time for a prefetched block +to remain in ARC before it is eligible for eviction. + +============================= ====================================== +zfs_arc_min_prefetch_lifespan Notes +============================= ====================================== +Tags `ARC <#ARC>`__ +When to change TBD +Data Type int +Units clock ticks +Range 0 = use default value +Default 1 second (as expressed in clock ticks) +Change Dynamic +Versions Affected v0.7.0 +============================= ====================================== + +zfs_scan_ignore_errors +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_ignore_errors`` allows errors discovered during scrub or +resilver to be ignored. This can be tuned as a workaround to remove the +dirty time list (DTL) when completing a pool scan. It is intended to be +used during pool repair or recovery to prevent resilvering when the pool +is imported. 
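A recovery-time sketch (the pool name ``tank`` is a placeholder)::

   # ignore scan errors so the DTL can be cleared, then import the damaged pool
   echo 1 > /sys/module/zfs/parameters/zfs_scan_ignore_errors
   zpool import tank

   # restore the default behaviour once repair work is complete
   echo 0 > /sys/module/zfs/parameters/zfs_scan_ignore_errors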
+ ++------------------------+--------------------------------------------+ +| zfs_scan_ignore_errors | Notes | ++========================+============================================+ +| Tags | `resilver <#resilver>`__ | ++------------------------+--------------------------------------------+ +| When to change | See description above | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0 = do not ignore errors, 1 = ignore | +| | errors during pool scrub or resilver | ++------------------------+--------------------------------------------+ +| Default | 0 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | v0.8.1 | ++------------------------+--------------------------------------------+ + +zfs_top_maxinflight +~~~~~~~~~~~~~~~~~~~ + +``zfs_top_maxinflight`` is used to limit the maximum number of I/Os +queued to top-level vdevs during scrub or resilver operations. The +actual top-level vdev limit is calculated by multiplying the number of +child vdevs by ``zfs_top_maxinflight`` This limit is an additional cap +over and above the scan limits + ++---------------------+-----------------------------------------------+ +| zfs_top_maxinflight | Notes | ++=====================+===============================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++---------------------+-----------------------------------------------+ +| When to change | for modern ZFS versions, the ZIO scheduler | +| | limits usually take precedence | ++---------------------+-----------------------------------------------+ +| Data Type | int | ++---------------------+-----------------------------------------------+ +| Units | I/O operations | ++---------------------+-----------------------------------------------+ +| Range | 1 to MAX_INT | ++---------------------+-----------------------------------------------+ +| Default | 32 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | v0.6.0 | ++---------------------+-----------------------------------------------+ + +zfs_resilver_delay +~~~~~~~~~~~~~~~~~~ + +``zfs_resilver_delay`` sets a time-based delay for resilver I/Os. This +delay is in addition to the ZIO scheduler's treatment of scrub +workloads. 
See also `zfs_scan_idle <#zfs-scan-idle>`__ + ++--------------------+------------------------------------------------+ +| zfs_resilver_delay | Notes | ++====================+================================================+ +| Tags | `resilver <#resilver>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------+------------------------------------------------+ +| When to change | increasing can reduce impact of resilver | +| | workload on dynamic workloads | ++--------------------+------------------------------------------------+ +| Data Type | int | ++--------------------+------------------------------------------------+ +| Units | clock ticks | ++--------------------+------------------------------------------------+ +| Range | 0 to MAX_INT | ++--------------------+------------------------------------------------+ +| Default | 2 | ++--------------------+------------------------------------------------+ +| Change | Dynamic | ++--------------------+------------------------------------------------+ +| Versions Affected | v0.6.0 | ++--------------------+------------------------------------------------+ + +zfs_scrub_delay +~~~~~~~~~~~~~~~ + +``zfs_scrub_delay`` sets a time-based delay for scrub I/Os. This delay +is in addition to the ZIO scheduler's treatment of scrub workloads. See +also `zfs_scan_idle <#zfs-scan-idle>`__ + ++-------------------+-------------------------------------------------+ +| zfs_scrub_delay | Notes | ++===================+=================================================+ +| Tags | `scrub <#scrub>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------+-------------------------------------------------+ +| When to change | increasing can reduce impact of scrub workload | +| | on dynamic workloads | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | clock ticks | ++-------------------+-------------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 4 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 | ++-------------------+-------------------------------------------------+ + +zfs_scan_idle +~~~~~~~~~~~~~ + +When a non-scan I/O has occurred in the past ``zfs_scan_idle`` clock +ticks, then `zfs_resilver_delay <#zfs-resilver-delay>`__ or +`zfs_scrub_delay <#zfs-scrub-delay>`__ are enabled. 
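On older releases that still honour these delays, the three tunables are
typically adjusted together; a sketch that throttles scan I/O harder
while applications are active::

   # treat the pool as "busy" if application I/O occurred within the last 50 ticks
   echo 50 > /sys/module/zfs/parameters/zfs_scan_idle

   # while busy, delay each scrub I/O by 8 ticks and each resilver I/O by 4
   echo 8 > /sys/module/zfs/parameters/zfs_scrub_delay
   echo 4 > /sys/module/zfs/parameters/zfs_resilver_delay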
+ ++-------------------+-------------------------------------------------+ +| zfs_scan_idle | Notes | ++===================+=================================================+ +| Tags | `resilver <#resilver>`__, `scrub <#scrub>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-------------------+-------------------------------------------------+ +| When to change | as part of a resilver/scrub tuning effort | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | clock ticks | ++-------------------+-------------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 50 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.0 | ++-------------------+-------------------------------------------------+ + +icp_aes_impl +~~~~~~~~~~~~ + +By default, ZFS will choose the highest performance, hardware-optimized +implementation of the AES encryption algorithm. The ``icp_aes_impl`` +tunable overrides this automatic choice. + +Note: ``icp_aes_impl`` is set in the ``icp`` kernel module, not the +``zfs`` kernel module. + +To observe the available options +``cat /sys/module/icp/parameters/icp_aes_impl`` The default option is +shown in brackets '[]' + +================= ==================================== +icp_aes_impl Notes +================= ==================================== +Tags `encryption <#encryption>`__ +Kernel module icp +When to change debugging ZFS encryption on hardware +Data Type string +Range varies by hardware +Default automatic, depends on the hardware +Change dynamic +Versions Affected planned for v2 +================= ==================================== + +icp_gcm_impl +~~~~~~~~~~~~ + +By default, ZFS will choose the highest performance, hardware-optimized +implementation of the GCM encryption algorithm. The ``icp_gcm_impl`` +tunable overrides this automatic choice. + +Note: ``icp_gcm_impl`` is set in the ``icp`` kernel module, not the +``zfs`` kernel module. + +To observe the available options +``cat /sys/module/icp/parameters/icp_gcm_impl`` The default option is +shown in brackets '[]' + +================= ==================================== +icp_gcm_impl Notes +================= ==================================== +Tags `encryption <#encryption>`__ +Kernel module icp +When to change debugging ZFS encryption on hardware +Data Type string +Range varies by hardware +Default automatic, depends on the hardware +Change Dynamic +Versions Affected planned for v2 +================= ==================================== + +zfs_abd_scatter_min_size +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_abd_scatter_min_size`` changes the ARC buffer data (ABD) +allocator's threshold for using linear or page-based scatter buffers. +Allocations smaller than ``zfs_abd_scatter_min_size`` use linear ABDs. + +Scatter ABD's use at least one page each, so sub-page allocations waste +some space when allocated as scatter allocations. For example, 2KB +scatter allocation wastes half of each page. Using linear ABD's for +small allocations results in slabs containing many allocations. 
This can +improve memory efficiency, at the expense of more work for ARC evictions +attempting to free pages, because all the buffers on one slab need to be +freed in order to free the slab and its underlying pages. + +Typically, 512B and 1KB kmem caches have 16 buffers per slab, so it's +possible for them to actually waste more memory than scatter +allocations: + +- one page per buf = wasting 3/4 or 7/8 +- one buf per slab = wasting 15/16 + +Spill blocks are typically 512B and are heavily used on systems running +*selinux* with the default dnode size and the ``xattr=sa`` property set. + +By default, linear allocations for 512B and 1KB, and scatter allocations +for larger (>= 1.5KB) allocation requests. + ++--------------------------+------------------------------------------+ +| zfs_abd_scatter_min_size | Notes | ++==========================+==========================================+ +| Tags | `ARC <#ARC>`__ | ++--------------------------+------------------------------------------+ +| When to change | debugging memory allocation, especially | +| | for large pages | ++--------------------------+------------------------------------------+ +| Data Type | int | ++--------------------------+------------------------------------------+ +| Units | bytes | ++--------------------------+------------------------------------------+ +| Range | 0 to MAX_INT | ++--------------------------+------------------------------------------+ +| Default | 1536 (512B and 1KB allocations will be | +| | linear) | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_unlink_suspend_progress +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_unlink_suspend_progress`` changes the policy for removing pending +unlinks. When enabled, files will not be asynchronously removed from the +list of pending unlinks and the space they consume will be leaked. Once +this option has been disabled and the dataset is remounted, the pending +unlinks will be processed and the freed space returned to the pool. + ++-----------------------------+---------------------------------------+ +| zfs_unlink_suspend_progress | Notes | ++=============================+=======================================+ +| Tags | | ++-----------------------------+---------------------------------------+ +| When to change | used by the ZFS test suite (ZTS) to | +| | facilitate testing | ++-----------------------------+---------------------------------------+ +| Data Type | boolean | ++-----------------------------+---------------------------------------+ +| Range | 0 = use async unlink removal, 1 = do | +| | not async unlink thus leaking space | ++-----------------------------+---------------------------------------+ +| Default | 0 | ++-----------------------------+---------------------------------------+ +| Change | prior to dataset mount | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------------+---------------------------------------+ + +spa_load_verify_shift +~~~~~~~~~~~~~~~~~~~~~ + +``spa_load_verify_shift`` sets the fraction of ARC that can be used by +inflight I/Os when verifying the pool during import. This value is a +"shift" representing the fraction of ARC target size +(``grep -w c /proc/spl/kstat/zfs/arcstats``). 
The ARC target size is +shifted to the right. Thus a value of '2' results in the fraction = 1/4, +while a value of '4' results in the fraction = 1/8. + +For large memory machines, pool import can consume large amounts of ARC: +much larger than the value of maxinflight. This can result in +`spa_load_verify_maxinflight <#spa-load-verify-maxinflight>`__ having a +value of 0 causing the system to hang. Setting ``spa_load_verify_shift`` +can reduce this limit and allow importing without hanging. + ++-----------------------+---------------------------------------------+ +| spa_load_verify_shift | Notes | ++=======================+=============================================+ +| Tags | `import <#import>`__, `ARC <#ARC>`__, | +| | `SPA <#SPA>`__ | ++-----------------------+---------------------------------------------+ +| When to change | troubleshooting pool import on large memory | +| | machines | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | shift | ++-----------------------+---------------------------------------------+ +| Range | 1 to MAX_INT | ++-----------------------+---------------------------------------------+ +| Default | 4 | ++-----------------------+---------------------------------------------+ +| Change | prior to importing a pool | ++-----------------------+---------------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------+---------------------------------------------+ + +spa_load_print_vdev_tree +~~~~~~~~~~~~~~~~~~~~~~~~ + +``spa_load_print_vdev_tree`` enables printing of the attempted pool +import's vdev tree to kernel message to the ZFS debug message log +``/proc/spl/kstat/zfs/dbgmsg`` Both the provided vdev tree and MOS vdev +tree are printed, which can be useful for debugging problems with the +zpool ``cachefile`` + ++--------------------------+------------------------------------------+ +| spa_load_print_vdev_tree | Notes | ++==========================+==========================================+ +| Tags | `import <#import>`__, `SPA <#SPA>`__ | ++--------------------------+------------------------------------------+ +| When to change | troubleshooting pool import failures | ++--------------------------+------------------------------------------+ +| Data Type | boolean | ++--------------------------+------------------------------------------+ +| Range | 0 = do not print pool configuration in | +| | logs, 1 = print pool configuration in | +| | logs | ++--------------------------+------------------------------------------+ +| Default | 0 | ++--------------------------+------------------------------------------+ +| Change | prior to pool import | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_max_missing_tvds +~~~~~~~~~~~~~~~~~~~~ + +When importing a pool in readonly mode +(``zpool import -o readonly=on ...``) then up to +``zfs_max_missing_tvds`` top-level vdevs can be missing, but the import +can attempt to progress. + +Note: This is strictly intended for advanced pool recovery cases since +missing data is almost inevitable. 
Pools with missing devices can only +be imported read-only for safety reasons, and the pool's ``failmode`` +property is automatically set to ``continue`` + +The expected use case is to recover pool data immediately after +accidentally adding a non-protected vdev to a protected pool. + +- With 1 missing top-level vdev, ZFS should be able to import the pool + and mount all datasets. User data that was not modified after the + missing device has been added should be recoverable. Thus snapshots + created prior to the addition of that device should be completely + intact. + +- With 2 missing top-level vdevs, some datasets may fail to mount since + there are dataset statistics that are stored as regular metadata. + Some data might be recoverable if those vdevs were added recently. + +- With 3 or more top-level missing vdevs, the pool is severely damaged + and MOS entries may be missing entirely. Chances of data recovery are + very low. Note that there are also risks of performing an inadvertent + rewind as we might be missing all the vdevs with the latest + uberblocks. + +==================== ========================================== +zfs_max_missing_tvds Notes +==================== ========================================== +Tags `import <#import>`__ +When to change troubleshooting pools with missing devices +Data Type int +Units missing top-level vdevs +Range 0 to MAX_INT +Default 0 +Change prior to pool import +Versions Affected planned for v2 +==================== ========================================== + +dbuf_metadata_cache_shift +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``dbuf_metadata_cache_shift`` sets the size of the dbuf metadata cache +as a fraction of ARC target size. This is an alternate method for +setting dbuf metadata cache size than +`dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__. + +`dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +overrides ``dbuf_metadata_cache_shift`` + +This value is a "shift" representing the fraction of ARC target size +(``grep -w c /proc/spl/kstat/zfs/arcstats``). The ARC target size is +shifted to the right. Thus a value of '2' results in the fraction = 1/4, +while a value of '6' results in the fraction = 1/64. + ++---------------------------+-----------------------------------------+ +| dbuf_metadata_cache_shift | Notes | ++===========================+=========================================+ +| Tags | `ARC <#ARC>`__, | +| | `dbuf_cache <#dbuf-cache>`__ | ++---------------------------+-----------------------------------------+ +| When to change | | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Units | shift | ++---------------------------+-----------------------------------------+ +| Range | practical range is | +| | (` | +| | dbuf_cache_shift <#dbuf-cache-shift>`__ | +| | + 1) to MAX_INT | ++---------------------------+-----------------------------------------+ +| Default | 6 | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +dbuf_metadata_cache_max_bytes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``dbuf_metadata_cache_max_bytes`` sets the size of the dbuf metadata +cache as a number of bytes. 
This is an alternate method for setting dbuf +metadata cache size than +`dbuf_metadata_cache_shift <#dbuf-metadata-cache-shift>`__ + +`dbuf_metadata_cache_max_bytes <#dbuf-metadata-cache-max-bytes>`__ +overrides ``dbuf_metadata_cache_shift`` + ++-------------------------------+-------------------------------------+ +| dbuf_metadata_cache_max_bytes | Notes | ++===============================+=====================================+ +| Tags | `dbuf_cache <#dbuf-cache>`__ | ++-------------------------------+-------------------------------------+ +| When to change | | ++-------------------------------+-------------------------------------+ +| Data Type | int | ++-------------------------------+-------------------------------------+ +| Units | bytes | ++-------------------------------+-------------------------------------+ +| Range | 0 = use | +| | `dbuf_metadata_cache_sh | +| | ift <#dbuf-metadata-cache-shift>`__ | +| | to ARC ``c_max`` | ++-------------------------------+-------------------------------------+ +| Default | 0 | ++-------------------------------+-------------------------------------+ +| Change | Dynamic | ++-------------------------------+-------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------------+-------------------------------------+ + +dbuf_cache_shift +~~~~~~~~~~~~~~~~ + +``dbuf_cache_shift`` sets the size of the dbuf cache as a fraction of +ARC target size. This is an alternate method for setting dbuf cache size +than `dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__. + +`dbuf_cache_max_bytes <#dbuf-cache-max-bytes>`__ overrides +``dbuf_cache_shift`` + +This value is a "shift" representing the fraction of ARC target size +(``grep -w c /proc/spl/kstat/zfs/arcstats``). The ARC target size is +shifted to the right. Thus a value of '2' results in the fraction = 1/4, +while a value of '5' results in the fraction = 1/32. + +Performance tuning of dbuf cache can be monitored using: + +- ``dbufstat`` command +- `node_exporter `__ ZFS + module for prometheus environments +- `telegraf `__ ZFS plugin for + general-purpose metric collection +- ``/proc/spl/kstat/zfs/dbufstats`` kstat + ++-------------------+-------------------------------------------------+ +| dbuf_cache_shift | Notes | ++===================+=================================================+ +| Tags | `ARC <#ARC>`__, `dbuf_cache <#dbuf-cache>`__ | ++-------------------+-------------------------------------------------+ +| When to change | to improve performance of read-intensive | +| | channel programs | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | shift | ++-------------------+-------------------------------------------------+ +| Range | 5 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 5 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------+-------------------------------------------------+ + +.. _dbuf_cache_max_bytes-1: + +dbuf_cache_max_bytes +~~~~~~~~~~~~~~~~~~~~ + +``dbuf_cache_max_bytes`` sets the size of the dbuf cache in bytes. 
This +is an alternate method for setting dbuf cache size than +`dbuf_cache_shift <#dbuf-cache-shift>`__ + +Performance tuning of dbuf cache can be monitored using: + +- ``dbufstat`` command +- `node_exporter `__ ZFS + module for prometheus environments +- `telegraf `__ ZFS plugin for + general-purpose metric collection +- ``/proc/spl/kstat/zfs/dbufstats`` kstat + ++----------------------+----------------------------------------------+ +| dbuf_cache_max_bytes | Notes | ++======================+==============================================+ +| Tags | `ARC <#ARC>`__, `dbuf_cache <#dbuf-cache>`__ | ++----------------------+----------------------------------------------+ +| When to change | | ++----------------------+----------------------------------------------+ +| Data Type | int | ++----------------------+----------------------------------------------+ +| Units | bytes | ++----------------------+----------------------------------------------+ +| Range | 0 = use | +| | `dbuf_cache_shift <#dbuf-cache-shift>`__ to | +| | ARC ``c_max`` | ++----------------------+----------------------------------------------+ +| Default | 0 | ++----------------------+----------------------------------------------+ +| Change | Dynamic | ++----------------------+----------------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------+----------------------------------------------+ + +metaslab_force_ganging +~~~~~~~~~~~~~~~~~~~~~~ + +When testing allocation code, ``metaslab_force_ganging`` forces blocks +above the specified size to be ganged. + +====================== ========================================== +metaslab_force_ganging Notes +====================== ========================================== +Tags `allocation <#allocation>`__ +When to change for development testing purposes only +Data Type ulong +Units bytes +Range SPA_MINBLOCKSIZE to (SPA_MAXBLOCKSIZE + 1) +Default SPA_MAXBLOCKSIZE + 1 (16,777,217 bytes) +Change Dynamic +Versions Affected planned for v2 +====================== ========================================== + +zfs_vdev_default_ms_count +~~~~~~~~~~~~~~~~~~~~~~~~~ + +When adding a top-level vdev, ``zfs_vdev_default_ms_count`` is the +target number of metaslabs. + ++---------------------------+-----------------------------------------+ +| zfs_vdev_default_ms_count | Notes | ++===========================+=========================================+ +| Tags | `allocation <#allocation>`__ | ++---------------------------+-----------------------------------------+ +| When to change | for development testing purposes only | ++---------------------------+-----------------------------------------+ +| Data Type | int | ++---------------------------+-----------------------------------------+ +| Range | 16 to MAX_INT | ++---------------------------+-----------------------------------------+ +| Default | 200 | ++---------------------------+-----------------------------------------+ +| Change | prior to creating a pool or adding a | +| | top-level vdev | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +vdev_removal_max_span +~~~~~~~~~~~~~~~~~~~~~ + +During top-level vdev removal, chunks of data are copied from the vdev +which may include free space in order to trade bandwidth for IOPS. +``vdev_removal_max_span`` sets the maximum span of free space included +as unnecessary data in a chunk of copied data. 
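For example, before removing a top-level vdev the current span limit can
be checked, and the evacuation watched afterwards (the pool and vdev
names are placeholders)::

   cat /sys/module/zfs/parameters/vdev_removal_max_span

   # begin the removal and watch the copy progress
   zpool remove tank mirror-1
   zpool status tank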
===================== ================================
vdev_removal_max_span Notes
===================== ================================
Tags                  `vdev_removal <#vdev-removal>`__
When to change        TBD
Data Type             int
Units                 bytes
Range                 0 to MAX_INT
Default               32,768 (32 KiB)
Change                Dynamic
Versions Affected     planned for v2
===================== ================================

zfs_removal_ignore_errors
~~~~~~~~~~~~~~~~~~~~~~~~~

When removing a device, ``zfs_removal_ignore_errors`` controls the
process for handling hard I/O errors. When set, if a device encounters
a hard I/O error during the removal process, the removal will not be
cancelled. This can result in a normally recoverable block becoming
permanently damaged and is not recommended. This should only be used as
a last resort when the pool cannot be returned to a healthy state prior
to removing the device.

+---------------------------+-----------------------------------------+
| zfs_removal_ignore_errors | Notes                                   |
+===========================+=========================================+
| Tags                      | `vdev_removal <#vdev-removal>`__        |
+---------------------------+-----------------------------------------+
| When to change            | See description for caveat             |
+---------------------------+-----------------------------------------+
| Data Type                 | boolean                                 |
+---------------------------+-----------------------------------------+
| Range                     | during device removal: 0 = hard errors  |
|                           | are not ignored, 1 = hard errors are    |
|                           | ignored                                 |
+---------------------------+-----------------------------------------+
| Default                   | 0                                       |
+---------------------------+-----------------------------------------+
| Change                    | Dynamic                                 |
+---------------------------+-----------------------------------------+
| Versions Affected         | planned for v2                          |
+---------------------------+-----------------------------------------+

zfs_removal_suspend_progress
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``zfs_removal_suspend_progress`` is used during automated testing of the
ZFS code to increase test coverage.

============================ ======================================
zfs_removal_suspend_progress Notes
============================ ======================================
Tags                         `vdev_removal <#vdev-removal>`__
When to change               do not change
Data Type                    boolean
Range                        0 = do not suspend during vdev removal
Default                      0
Change                       Dynamic
Versions Affected            planned for v2
============================ ======================================

zfs_condense_indirect_commit_entry_delay_ms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

During vdev removal, the vdev indirection layer sleeps for
``zfs_condense_indirect_commit_entry_delay_ms`` milliseconds during
mapping generation. This parameter is used during automated testing of
the ZFS code to improve test coverage.
+ ++----------------------------------+----------------------------------+ +| zfs_condens | Notes | +| e_indirect_commit_entry_delay_ms | | ++==================================+==================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++----------------------------------+----------------------------------+ +| When to change | do not change | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | milliseconds | ++----------------------------------+----------------------------------+ +| Range | 0 to MAX_INT | ++----------------------------------+----------------------------------+ +| Default | 0 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_condense_indirect_vdevs_enable +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +During vdev removal, condensing process is an attempt to save memory by +removing obsolete mappings. ``zfs_condense_indirect_vdevs_enable`` +enables condensing indirect vdev mappings. When set, ZFS attempts to +condense indirect vdev mappings if the mapping uses more than +`zfs_condense_min_mapping_bytes <#zfs-condense-min-mapping-bytes>`__ +bytes of memory and if the obsolete space map object uses more than +`zfs_condense_max_obsolete_bytes <#zfs-condense-max-obsolete-bytes>`__ +bytes on disk. + ++----------------------------------+----------------------------------+ +| zf | Notes | +| s_condense_indirect_vdevs_enable | | ++==================================+==================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0 = do not save memory, 1 = save | +| | memory by condensing obsolete | +| | mapping after vdev removal | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_condense_max_obsolete_bytes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After vdev removal, ``zfs_condense_max_obsolete_bytes`` sets the limit +for beginning the condensing process. Condensing begins if the obsolete +space map takes up more than ``zfs_condense_max_obsolete_bytes`` of +space on disk (logically). The default of 1 GiB is small enough relative +to a typical pool that the space consumed by the obsolete space map is +minimal. 
See also
`zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__

=============================== ================================
zfs_condense_max_obsolete_bytes Notes
=============================== ================================
Tags                            `vdev_removal <#vdev-removal>`__
When to change                  do not change
Data Type                       ulong
Units                           bytes
Range                           0 to MAX_ULONG
Default                         1,073,741,824 (1 GiB)
Change                          Dynamic
Versions Affected               planned for v2
=============================== ================================

zfs_condense_min_mapping_bytes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After vdev removal, ``zfs_condense_min_mapping_bytes`` is the lower
limit for determining when to condense the in-memory obsolete space map.
The condensing process will not continue unless a minimum of
``zfs_condense_min_mapping_bytes`` of memory can be freed.

See also
`zfs_condense_indirect_vdevs_enable <#zfs-condense-indirect-vdevs-enable>`__

============================== ================================
zfs_condense_min_mapping_bytes Notes
============================== ================================
Tags                           `vdev_removal <#vdev-removal>`__
When to change                 do not change
Data Type                      ulong
Units                          bytes
Range                          0 to MAX_ULONG
Default                        128 KiB
Change                         Dynamic
Versions Affected              planned for v2
============================== ================================

zfs_vdev_initializing_max_active
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``zfs_vdev_initializing_max_active`` sets the maximum initializing I/Os
active to each device.

+----------------------------------+------------------------------------------------------+
| zfs_vdev_initializing_max_active | Notes                                                |
+==================================+======================================================+
| Tags                             | `vdev <#vdev>`__, `ZIO_scheduler <#zio-scheduler>`__ |
+----------------------------------+------------------------------------------------------+
| When to change                   | See ZFS I/O Scheduler                                |
+----------------------------------+------------------------------------------------------+
| Data Type                        | uint32                                               |
+----------------------------------+------------------------------------------------------+
| Units                            | I/O operations                                       |
+----------------------------------+------------------------------------------------------+
| Range                            | 1 to `zfs_vdev_max_active <#zfs-vdev-max-active>`__  |
+----------------------------------+------------------------------------------------------+
| Default                          | 1                                                    |
+----------------------------------+------------------------------------------------------+
| Change                           | Dynamic                                              |
+----------------------------------+------------------------------------------------------+
| Versions Affected                | planned for v2                                       |
+----------------------------------+------------------------------------------------------+

zfs_vdev_initializing_min_active
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``zfs_vdev_initializing_min_active`` sets the minimum initializing I/Os
active to each device.
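For an otherwise idle pool, a sketch of temporarily deepening the
initializing queue before starting the operation (the pool name is a
placeholder)::

   echo 4 > /sys/module/zfs/parameters/zfs_vdev_initializing_max_active
   zpool initialize tank

   # progress is reported per vdev in the pool status output
   zpool status tank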
+ ++----------------------------------+----------------------------------+ +| zfs_vdev_initializing_min_active | Notes | ++==================================+==================================+ +| Tags | `vdev <#vdev>`__, | +| | `Z | +| | IO_scheduler <#zio-scheduler>`__ | ++----------------------------------+----------------------------------+ +| When to change | See `ZFS I/O | +| | Sch | +| | eduler `__ | ++----------------------------------+----------------------------------+ +| Data Type | uint32 | ++----------------------------------+----------------------------------+ +| Units | I/O operations | ++----------------------------------+----------------------------------+ +| Range | 1 to | +| | `zfs_vde | +| | v_initializing_max_active <#zfs_ | +| | vdev_initializing_max_active>`__ | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_vdev_removal_max_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_removal_max_active`` sets the maximum top-level vdev removal +I/Os active to each device. + ++-----------------------------+---------------------------------------+ +| zfs_vdev_removal_max_active | Notes | ++=============================+=======================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-----------------------------+---------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-----------------------------+---------------------------------------+ +| Data Type | uint32 | ++-----------------------------+---------------------------------------+ +| Units | I/O operations | ++-----------------------------+---------------------------------------+ +| Range | 1 to | +| | `zfs_vdev | +| | _max_active <#zfs-vdev-max-active>`__ | ++-----------------------------+---------------------------------------+ +| Default | 2 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------------+---------------------------------------+ + +zfs_vdev_removal_min_active +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_removal_min_active`` sets the minimum top-level vdev removal +I/Os active to each device. 
+ ++-----------------------------+---------------------------------------+ +| zfs_vdev_removal_min_active | Notes | ++=============================+=======================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++-----------------------------+---------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++-----------------------------+---------------------------------------+ +| Data Type | uint32 | ++-----------------------------+---------------------------------------+ +| Units | I/O operations | ++-----------------------------+---------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_removal_max_act | +| | ive <#zfs-vdev-removal-max-active>`__ | ++-----------------------------+---------------------------------------+ +| Default | 1 | ++-----------------------------+---------------------------------------+ +| Change | Dynamic | ++-----------------------------+---------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------------+---------------------------------------+ + +zfs_vdev_trim_max_active +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_trim_max_active`` sets the maximum trim I/Os active to each +device. + ++--------------------------+------------------------------------------+ +| zfs_vdev_trim_max_active | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------+------------------------------------------+ +| Data Type | uint32 | ++--------------------------+------------------------------------------+ +| Units | I/O operations | ++--------------------------+------------------------------------------+ +| Range | 1 to | +| | `zfs_v | +| | dev_max_active <#zfs-vdev-max-active>`__ | ++--------------------------+------------------------------------------+ +| Default | 2 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_vdev_trim_min_active +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_trim_min_active`` sets the minimum trim I/Os active to each +device. 
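Similarly for trim, a sketch that allows deeper discard queues on
devices known to handle them well (the pool name is a placeholder)::

   echo 4 > /sys/module/zfs/parameters/zfs_vdev_trim_max_active
   zpool trim tank

   # -t reports TRIM progress per vdev
   zpool status -t tank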
+ ++--------------------------+------------------------------------------+ +| zfs_vdev_trim_min_active | Notes | ++==========================+==========================================+ +| Tags | `vdev <#vdev>`__, | +| | `ZIO_scheduler <#zio-scheduler>`__ | ++--------------------------+------------------------------------------+ +| When to change | See `ZFS I/O | +| | Scheduler `__ | ++--------------------------+------------------------------------------+ +| Data Type | uint32 | ++--------------------------+------------------------------------------+ +| Units | I/O operations | ++--------------------------+------------------------------------------+ +| Range | 1 to | +| | `zfs_vdev_trim_m | +| | ax_active <#zfs-vdev-trim-max-active>`__ | ++--------------------------+------------------------------------------+ +| Default | 1 | ++--------------------------+------------------------------------------+ +| Change | Dynamic | ++--------------------------+------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------------+------------------------------------------+ + +zfs_initialize_value +~~~~~~~~~~~~~~~~~~~~ + +When initializing a vdev, ZFS writes patterns of +``zfs_initialize_value`` bytes to the device. + ++----------------------+----------------------------------------------+ +| zfs_initialize_value | Notes | ++======================+==============================================+ +| Tags | `vdev_initialize <#vdev-initialize>`__ | ++----------------------+----------------------------------------------+ +| When to change | when debugging initialization code | ++----------------------+----------------------------------------------+ +| Data Type | uint32 or uint64 | ++----------------------+----------------------------------------------+ +| Default | 0xdeadbeef for 32-bit systems, | +| | 0xdeadbeefdeadbeee for 64-bit systems | ++----------------------+----------------------------------------------+ +| Change | prior to running ``zpool initialize`` | ++----------------------+----------------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------+----------------------------------------------+ + +zfs_lua_max_instrlimit +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_lua_max_instrlimit`` limits the maximum time for a ZFS channel +program to run. + ++------------------------+--------------------------------------------+ +| zfs_lua_max_instrlimit | Notes | ++========================+============================================+ +| Tags | `channel_programs <#channel-programs>`__ | ++------------------------+--------------------------------------------+ +| When to change | to enforce a CPU usage limit on ZFS | +| | channel programs | ++------------------------+--------------------------------------------+ +| Data Type | ulong | ++------------------------+--------------------------------------------+ +| Units | LUA instructions | ++------------------------+--------------------------------------------+ +| Range | 0 to MAX_ULONG | ++------------------------+--------------------------------------------+ +| Default | 100,000,000 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------+--------------------------------------------+ + +zfs_lua_max_memlimit +~~~~~~~~~~~~~~~~~~~~ + +'zfs_lua_max_memlimit' is the maximum memory limit for a ZFS channel +program. 
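Note that ``zfs program`` can also request smaller per-invocation limits
underneath these module-wide maximums; a sketch (the pool name and
script are placeholders)::

   # run a channel program limited to 10 million instructions and 10 MiB of memory
   zfs program -t 10000000 -m 10485760 tank ./cleanup_snapshots.lua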
+ +==================== ======================================== +zfs_lua_max_memlimit Notes +==================== ======================================== +Tags `channel_programs <#channel-programs>`__ +When to change +Data Type ulong +Units bytes +Range 0 to MAX_ULONG +Default 104,857,600 (100 MiB) +Change Dynamic +Versions Affected planned for v2 +==================== ======================================== + +zfs_max_dataset_nesting +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_max_dataset_nesting`` limits the depth of nested datasets. Deeply +nested datasets can overflow the stack. The maximum stack depth depends +on kernel compilation options, so it is impractical to predict the +possible limits. For kernels compiled with small stack sizes, +``zfs_max_dataset_nesting`` may require changes. + ++-------------------------+-------------------------------------------+ +| zfs_max_dataset_nesting | Notes | ++=========================+===========================================+ +| Tags | `dataset <#dataset>`__ | ++-------------------------+-------------------------------------------+ +| When to change | can be tuned temporarily to fix existing | +| | datasets that exceed the predefined limit | ++-------------------------+-------------------------------------------+ +| Data Type | int | ++-------------------------+-------------------------------------------+ +| Units | datasets | ++-------------------------+-------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------------+-------------------------------------------+ +| Default | 50 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic, though once on-disk the value | +| | for the pool is set | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +zfs_ddt_data_is_special +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_ddt_data_is_special`` enables the deduplication table (DDT) to +reside on a special top-level vdev. + ++-------------------------+-------------------------------------------+ +| zfs_ddt_data_is_special | Notes | ++=========================+===========================================+ +| Tags | `dedup <#dedup>`__, | +| | `special_vdev <#special-vdev>`__ | ++-------------------------+-------------------------------------------+ +| When to change | when using a special top-level vdev and | +| | no dedup top-level vdev and it is desired | +| | to store the DDT in the main pool | +| | top-level vdevs | ++-------------------------+-------------------------------------------+ +| Data Type | boolean | ++-------------------------+-------------------------------------------+ +| Range | 0=do not use special vdevs to store DDT, | +| | 1=store DDT in special vdevs | ++-------------------------+-------------------------------------------+ +| Default | 1 | ++-------------------------+-------------------------------------------+ +| Change | Dynamic | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +zfs_user_indirect_is_special +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If special vdevs are in use, ``zfs_user_indirect_is_special`` enables +user data indirect blocks (a form of metadata) to be written to the +special vdevs. 
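These settings only take effect on pools that contain a special
allocation class; a sketch of adding one and directing the DDT, but not
user data indirect blocks, to it (device names are placeholders)::

   zpool add tank special mirror sdf sdg

   echo 1 > /sys/module/zfs/parameters/zfs_ddt_data_is_special
   echo 0 > /sys/module/zfs/parameters/zfs_user_indirect_is_special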
+ ++------------------------------+--------------------------------------+ +| zfs_user_indirect_is_special | Notes | ++==============================+======================================+ +| Tags | `special_vdev <#special-vdev>`__ | ++------------------------------+--------------------------------------+ +| When to change | to force user data indirect blocks | +| | to remain in the main pool top-level | +| | vdevs | ++------------------------------+--------------------------------------+ +| Data Type | boolean | ++------------------------------+--------------------------------------+ +| Range | 0=do not write user indirect blocks | +| | to a special vdev, 1=write user | +| | indirect blocks to a special vdev | ++------------------------------+--------------------------------------+ +| Default | 1 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------------+--------------------------------------+ + +zfs_reconstruct_indirect_combinations_max +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After device removal, if an indirect split block contains more than +``zfs_reconstruct_indirect_combinations_max`` many possible unique +combinations when being reconstructed, it can be considered too +computationally expensive to check them all. Instead, at most +``zfs_reconstruct_indirect_combinations_max`` randomly-selected +combinations are attempted each time the block is accessed. This allows +all segment copies to participate fairly in the reconstruction when all +combinations cannot be checked and prevents repeated use of one bad +copy. + ++----------------------------------+----------------------------------+ +| zfs_recon | Notes | +| struct_indirect_combinations_max | | ++==================================+==================================+ +| Tags | `vdev_removal <#vdev-removal>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | int | ++----------------------------------+----------------------------------+ +| Units | attempts | ++----------------------------------+----------------------------------+ +| Range | 0=do not limit attempts, 1 to | +| | MAX_INT = limit for attempts | ++----------------------------------+----------------------------------+ +| Default | 4096 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_send_unmodified_spill_blocks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_send_unmodified_spill_blocks`` enables sending of unmodified spill +blocks in the send stream. Under certain circumstances, previous +versions of ZFS could incorrectly remove the spill block from an +existing object. Including unmodified copies of the spill blocks creates +a backwards compatible stream which will recreate a spill block if it +was incorrectly removed. 
+ ++----------------------------------+----------------------------------+ +| zfs_send_unmodified_spill_blocks | Notes | ++==================================+==================================+ +| Tags | `send <#send>`__ | ++----------------------------------+----------------------------------+ +| When to change | TBD | ++----------------------------------+----------------------------------+ +| Data Type | boolean | ++----------------------------------+----------------------------------+ +| Range | 0=do not send unmodified spill | +| | blocks, 1=send unmodified spill | +| | blocks | ++----------------------------------+----------------------------------+ +| Default | 1 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_spa_discard_memory_limit +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_spa_discard_memory_limit`` sets the limit for maximum memory used +for prefetching a pool's checkpoint space map on each vdev while +discarding a pool checkpoint. + +============================ ============================ +zfs_spa_discard_memory_limit Notes +============================ ============================ +Tags `checkpoint <#checkpoint>`__ +When to change TBD +Data Type int +Units bytes +Range 0 to MAX_INT +Default 16,777,216 (16 MiB) +Change Dynamic +Versions Affected planned for v2 +============================ ============================ + +zfs_special_class_metadata_reserve_pct +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_special_class_metadata_reserve_pct`` sets a threshold for space in +special vdevs to be reserved exclusively for metadata. This prevents +small data blocks from completely consuming a special vdev. + +====================================== ================================ +zfs_special_class_metadata_reserve_pct Notes +====================================== ================================ +Tags `special_vdev <#special-vdev>`__ +When to change TBD +Data Type int +Units percent +Range 0 to 100 +Default 25 +Change Dynamic +Versions Affected planned for v2 +====================================== ================================ + +zfs_trim_extent_bytes_max +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_trim_extent_bytes_max`` sets the maximum size of a trim (aka +discard, scsi unmap) command. Ranges larger than +``zfs_trim_extent_bytes_max`` are split in to chunks no larger than +``zfs_trim_extent_bytes_max`` bytes prior to being issued to the device. +Use ``zpool iostat -w`` to observe the latency of trim commands. 
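+
+For example, a sketch of how the effect of this tunable can be observed
+and adjusted (the pool name ``tank`` and the 64 MiB value are only
+examples, and the sysfs path assumes a standard Linux install)::
+
+   # watch trim latency histograms while a trim is running
+   zpool iostat -w tank
+   # cap individual trim commands at 64 MiB
+   echo 67108864 > /sys/module/zfs/parameters/zfs_trim_extent_bytes_max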
+ ++---------------------------+-----------------------------------------+ +| zfs_trim_extent_bytes_max | Notes | ++===========================+=========================================+ +| Tags | `trim <#trim>`__ | ++---------------------------+-----------------------------------------+ +| When to change | if the device can efficiently handle | +| | larger trim requests | ++---------------------------+-----------------------------------------+ +| Data Type | uint | ++---------------------------+-----------------------------------------+ +| Units | bytes | ++---------------------------+-----------------------------------------+ +| Range | `zfs_trim_extent_by | +| | tes_min <#zfs-trim-extent-bytes-min>`__ | +| | to MAX_UINT | ++---------------------------+-----------------------------------------+ +| Default | 134,217,728 (128 MiB) | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +zfs_trim_extent_bytes_min +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_trim_extent_bytes_min`` sets the minimum size of trim (aka +discard, scsi unmap) commands. Trim ranges smaller than +``zfs_trim_extent_bytes_min`` are skipped unless they're part of a +larger range which was broken in to chunks. Some devices have +performance degradation during trim operations, so using a larger +``zfs_trim_extent_bytes_min`` can reduce the total amount of space +trimmed. Use ``zpool iostat -w`` to observe the latency of trim +commands. + ++---------------------------+-----------------------------------------+ +| zfs_trim_extent_bytes_min | Notes | ++===========================+=========================================+ +| Tags | `trim <#trim>`__ | ++---------------------------+-----------------------------------------+ +| When to change | when trim is in use and device | +| | performance suffers from trimming small | +| | allocations | ++---------------------------+-----------------------------------------+ +| Data Type | uint | ++---------------------------+-----------------------------------------+ +| Units | bytes | ++---------------------------+-----------------------------------------+ +| Range | 0=trim all unallocated space, otherwise | +| | minimum physical block size to MAX\_ | ++---------------------------+-----------------------------------------+ +| Default | 32,768 (32 KiB) | ++---------------------------+-----------------------------------------+ +| Change | Dynamic | ++---------------------------+-----------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------------+-----------------------------------------+ + +zfs_trim_metaslab_skip +~~~~~~~~~~~~~~~~~~~~~~ + +| ``zfs_trim_metaslab_skip`` enables uninitialized metaslabs to be + skipped during the trim (aka discard, scsi unmap) process. + ``zfs_trim_metaslab_skip`` can be useful for pools constructed from + large thinly-provisioned devices where trim operations perform slowly. +| As a pool ages an increasing fraction of the pool's metaslabs are + initialized, progressively degrading the usefulness of this option. + This setting is stored when starting a manual trim and persists for + the duration of the requested trim. Use ``zpool iostat -w`` to observe + the latency of trim commands. 
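+
+A possible workflow, assuming a standard Linux install and using the
+hypothetical pool name ``tank``, is to set the skip flag before starting
+a manual trim and then watch its progress::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_trim_metaslab_skip
+   zpool trim tank
+   zpool status -t tank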
+ ++------------------------+--------------------------------------------+ +| zfs_trim_metaslab_skip | Notes | ++========================+============================================+ +| Tags | `trim <#trim>`__ | ++------------------------+--------------------------------------------+ +| When to change | | ++------------------------+--------------------------------------------+ +| Data Type | boolean | ++------------------------+--------------------------------------------+ +| Range | 0=do not skip uninitialized metaslabs | +| | during trim, 1=skip uninitialized | +| | metaslabs during trim | ++------------------------+--------------------------------------------+ +| Default | 0 | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------+--------------------------------------------+ + +zfs_trim_queue_limit +~~~~~~~~~~~~~~~~~~~~ + +``zfs_trim_queue_limit`` sets the maximum queue depth for leaf vdevs. +See also `zfs_vdev_trim_max_active <#zfs-vdev-trim-max-active>`__ and +`zfs_trim_extent_bytes_max <#zfs-trim-extent-bytes-max>`__ Use +``zpool iostat -q`` to observe trim queue depth. + ++----------------------+------------------------------------------------------+ +| zfs_trim_queue_limit | Notes | ++======================+======================================================+ +| Tags | `trim <#trim>`__ | ++----------------------+------------------------------------------------------+ +| When to change | to restrict the number of trim commands in the queue | ++----------------------+------------------------------------------------------+ +| Data Type | uint | ++----------------------+------------------------------------------------------+ +| Units | I/O operations | ++----------------------+------------------------------------------------------+ +| Range | 1 to MAX_UINT | ++----------------------+------------------------------------------------------+ +| Default | 10 | ++----------------------+------------------------------------------------------+ +| Change | Dynamic | ++----------------------+------------------------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------+------------------------------------------------------+ + +zfs_trim_txg_batch +~~~~~~~~~~~~~~~~~~ + +``zfs_trim_txg_batch`` sets the number of transaction groups worth of +frees which should be aggregated before trim (aka discard, scsi unmap) +commands are issued to a device. This setting represents a trade-off +between issuing larger, more efficient trim commands and the delay +before the recently trimmed space is available for use by the device. + +Increasing this value will allow frees to be aggregated for a longer +time. This will result is larger trim operations and potentially +increased memory usage. Decreasing this value will have the opposite +effect. The default value of 32 was empirically determined to be a +reasonable compromise. + +================== =================== +zfs_trim_txg_batch Notes +================== =================== +Tags `trim <#trim>`__ +When to change TBD +Data Type uint +Units metaslabs to stride +Range 1 to MAX_UINT +Default 32 +Change Dynamic +Versions Affected planned for v2 +================== =================== + +zfs_vdev_aggregate_trim +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_aggregate_trim`` allows trim I/Os to be aggregated. 
This is
+normally not helpful because the extents to be trimmed will already
+have been aggregated by the metaslab.
+
++-------------------------+-------------------------------------------+
+| zfs_vdev_aggregate_trim | Notes                                     |
++=========================+===========================================+
+| Tags                    | `trim <#trim>`__, `vdev <#vdev>`__,       |
+|                         | `ZIO_scheduler <#zio-scheduler>`__        |
++-------------------------+-------------------------------------------+
+| When to change          | when debugging trim code or trim          |
+|                         | performance issues                        |
++-------------------------+-------------------------------------------+
+| Data Type               | boolean                                   |
++-------------------------+-------------------------------------------+
+| Range                   | 0=do not attempt to aggregate trim        |
+|                         | commands, 1=attempt to aggregate trim     |
+|                         | commands                                  |
++-------------------------+-------------------------------------------+
+| Default                 | 0                                         |
++-------------------------+-------------------------------------------+
+| Change                  | Dynamic                                   |
++-------------------------+-------------------------------------------+
+| Versions Affected       | planned for v2                            |
++-------------------------+-------------------------------------------+
+
+zfs_vdev_aggregation_limit_non_rotating
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``zfs_vdev_aggregation_limit_non_rotating`` is the equivalent of
+`zfs_vdev_aggregation_limit <#zfs-vdev-aggregation-limit>`__ for devices
+which represent themselves as non-rotating to the Linux blkdev
+interfaces. Such devices have a value of 0 in
+``/sys/block/DEVICE/queue/rotational`` and are expected to be SSDs.
+
++----------------------------------+----------------------------------+
+| zfs_vde                          | Notes                            |
+| v_aggregation_limit_non_rotating |                                  |
++==================================+==================================+
+| Tags                             | `vdev <#vdev>`__,                |
+|                                  | `Z                               |
+|                                  | IO_scheduler <#zio-scheduler>`__ |
++----------------------------------+----------------------------------+
+| When to change                   | see                              |
+|                                  | `zfs_vdev_aggregation_limit      |
+|                                  | <#zfs-vdev-aggregation-limit>`__ |
++----------------------------------+----------------------------------+
+| Data Type                        | int                              |
++----------------------------------+----------------------------------+
+| Units                            | bytes                            |
++----------------------------------+----------------------------------+
+| Range                            | 0 to MAX_INT                     |
++----------------------------------+----------------------------------+
+| Default                          | 131,072 bytes (128 KiB)          |
++----------------------------------+----------------------------------+
+| Change                           | Dynamic                          |
++----------------------------------+----------------------------------+
+| Versions Affected                | planned for v2                   |
++----------------------------------+----------------------------------+
+
+zil_nocacheflush
+~~~~~~~~~~~~~~~~
+
+ZFS uses barriers (volatile cache flush commands) to ensure data is
+committed to permanent media by devices. This ensures consistent
+on-media state for devices where caches are volatile (e.g. HDDs).
+
+``zil_nocacheflush`` disables the cache flush commands that are normally
+sent to devices by the ZIL after a log write has completed.
+
+The difference between ``zil_nocacheflush`` and
+`zfs_nocacheflush <#zfs-nocacheflush>`__ is that ``zil_nocacheflush``
+applies to ZIL writes, while `zfs_nocacheflush <#zfs-nocacheflush>`__
+disables barrier writes to the pool devices at the end of transaction
+group syncs.
+
+WARNING: setting this can cause ZIL corruption on power loss if the
+device has a volatile write cache.
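+
+If, after weighing the warning above, the flush is to be disabled
+persistently, the usual mechanism is a module option (the file name and
+placement are distribution-dependent; this is a sketch, not a
+recommendation)::
+
+   # /etc/modprobe.d/zfs.conf -- only when every log/pool device has a
+   # nonvolatile (power-protected) write cache
+   options zfs zil_nocacheflush=1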
+ ++-------------------+-------------------------------------------------+ +| zil_nocacheflush | Notes | ++===================+=================================================+ +| Tags | `disks <#disks>`__, `ZIL <#ZIL>`__ | ++-------------------+-------------------------------------------------+ +| When to change | If the storage device has nonvolatile cache, | +| | then disabling cache flush can save the cost of | +| | occasional cache flush commands | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=send cache flush commands, 1=do not send | +| | cache flush commands | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------+-------------------------------------------------+ + +zio_deadman_log_all +~~~~~~~~~~~~~~~~~~~ + +``zio_deadman_log_all`` enables debugging messages for all ZFS I/Os, +rather than only for leaf ZFS I/Os for a vdev. This is meant to be used +by developers to gain diagnostic information for hang conditions which +don't involve a mutex or other locking primitive. Typically these are +conditions where a thread in the zio pipeline is looping indefinitely. + +See also `zfs_dbgmsg_enable <#zfs-dbgmsg-enable>`__ + ++---------------------+-----------------------------------------------+ +| zio_deadman_log_all | Notes | ++=====================+===============================================+ +| Tags | `debug <#debug>`__ | ++---------------------+-----------------------------------------------+ +| When to change | when debugging ZFS I/O pipeline | ++---------------------+-----------------------------------------------+ +| Data Type | boolean | ++---------------------+-----------------------------------------------+ +| Range | 0=do not log all deadman events, 1=log all | +| | deadman events | ++---------------------+-----------------------------------------------+ +| Default | 0 | ++---------------------+-----------------------------------------------+ +| Change | Dynamic | ++---------------------+-----------------------------------------------+ +| Versions Affected | planned for v2 | ++---------------------+-----------------------------------------------+ + +zio_decompress_fail_fraction +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If non-zero, ``zio_decompress_fail_fraction`` represents the denominator +of the probability that ZFS should induce a decompression failure. For +instance, for a 5% decompression failure rate, this value should be set +to 20. 
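+
+To make the arithmetic concrete, a hedged example of enabling and then
+disabling the fault injection at runtime (the sysfs path assumes a
+standard Linux install)::
+
+   # fail roughly 1 in 20 decompressions (5%)
+   echo 20 > /sys/module/zfs/parameters/zio_decompress_fail_fraction
+   # stop injecting failures
+   echo 0 > /sys/module/zfs/parameters/zio_decompress_fail_fraction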
+ ++------------------------------+--------------------------------------+ +| zio_decompress_fail_fraction | Notes | ++==============================+======================================+ +| Tags | `debug <#debug>`__ | ++------------------------------+--------------------------------------+ +| When to change | when debugging ZFS internal | +| | compressed buffer code | ++------------------------------+--------------------------------------+ +| Data Type | ulong | ++------------------------------+--------------------------------------+ +| Units | probability of induced decompression | +| | failure is | +| | 1/``zio_decompress_fail_fraction`` | ++------------------------------+--------------------------------------+ +| Range | 0 = do not induce failures, or 1 to | +| | MAX_ULONG | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------------+--------------------------------------+ + +zio_slow_io_ms +~~~~~~~~~~~~~~ + +An I/O operation taking more than ``zio_slow_io_ms`` milliseconds to +complete is marked as a slow I/O. Slow I/O counters can be observed with +``zpool status -s``. Each slow I/O causes a delay zevent, observable +using ``zpool events``. See also ``zfs-events(5)``. + ++-------------------+-------------------------------------------------+ +| zio_slow_io_ms | Notes | ++===================+=================================================+ +| Tags | `vdev <#vdev>`__, `zed <#zed>`__ | ++-------------------+-------------------------------------------------+ +| When to change | when debugging slow devices and the default | +| | value is inappropriate | ++-------------------+-------------------------------------------------+ +| Data Type | int | ++-------------------+-------------------------------------------------+ +| Units | milliseconds | ++-------------------+-------------------------------------------------+ +| Range | 0 to MAX_INT | ++-------------------+-------------------------------------------------+ +| Default | 30,000 (30 seconds) | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------+-------------------------------------------------+ + +vdev_validate_skip +~~~~~~~~~~~~~~~~~~ + +``vdev_validate_skip`` disables label validation steps during pool +import. Changing is not recommended unless you know what you are doing +and are recovering a damaged label. 
+ ++--------------------+------------------------------------------------+ +| vdev_validate_skip | Notes | ++====================+================================================+ +| Tags | `vdev <#vdev>`__ | ++--------------------+------------------------------------------------+ +| When to change | do not change | ++--------------------+------------------------------------------------+ +| Data Type | boolean | ++--------------------+------------------------------------------------+ +| Range | 0=validate labels during pool import, 1=do not | +| | validate vdev labels during pool import | ++--------------------+------------------------------------------------+ +| Default | 0 | ++--------------------+------------------------------------------------+ +| Change | prior to pool import | ++--------------------+------------------------------------------------+ +| Versions Affected | planned for v2 | ++--------------------+------------------------------------------------+ + +zfs_async_block_max_blocks +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_async_block_max_blocks`` limits the number of blocks freed in a +single transaction group commit. During deletes of large objects, such +as snapshots, the number of freed blocks can cause the DMU to extend txg +sync times well beyond `zfs_txg_timeout <#zfs-txg-timeout>`__. +``zfs_async_block_max_blocks`` is used to limit these effects. + +========================== ==================================== +zfs_async_block_max_blocks Notes +========================== ==================================== +Tags `delete <#delete>`__, `DMU <#DMU>`__ +When to change TBD +Data Type ulong +Units blocks +Range 1 to MAX_ULONG +Default MAX_ULONG (do not limit) +Change Dynamic +Versions Affected planned for v2 +========================== ==================================== + +zfs_checksum_events_per_second +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_checksum_events_per_second`` is a rate limit for checksum events. +Note that this should not be set below the ``zed`` thresholds (currently +10 checksums over 10 sec) or else ``zed`` may not trigger any action. + +============================== ============================= +zfs_checksum_events_per_second Notes +============================== ============================= +Tags `vdev <#vdev>`__ +When to change TBD +Data Type uint +Units checksum events +Range ``zed`` threshold to MAX_UINT +Default 20 +Change Dynamic +Versions Affected planned for v2 +============================== ============================= + +zfs_disable_ivset_guid_check +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_disable_ivset_guid_check`` disables requirement for IVset guids to +be present and match when doing a raw receive of encrypted datasets. +Intended for users whose pools were created with ZFS on Linux +pre-release versions and now have compatibility issues. + +For a ZFS raw receive, from a send stream created by ``zfs send --raw``, +the crypt_keydata nvlist includes a to_ivset_guid to be set on the new +snapshot. This value will override the value generated by the snapshot +code. However, this value may not be present, because older +implementations of the raw send code did not include this value. When +``zfs_disable_ivset_guid_check`` is enabled, the receive proceeds and a +newly-generated value is used. 
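+
+As an illustrative sketch only (dataset and pool names are made up), the
+check would typically be relaxed just for the duration of the affected
+raw receive::
+
+   echo 1 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check
+   zfs send --raw pool/encrypted@snap | zfs receive backup/encrypted
+   echo 0 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check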
+ ++------------------------------+--------------------------------------+ +| zfs_disable_ivset_guid_check | Notes | ++==============================+======================================+ +| Tags | `receive <#receive>`__ | ++------------------------------+--------------------------------------+ +| When to change | debugging pre-release ZFS raw sends | ++------------------------------+--------------------------------------+ +| Data Type | boolean | ++------------------------------+--------------------------------------+ +| Range | 0=check IVset guid, 1=do not check | +| | IVset guid | ++------------------------------+--------------------------------------+ +| Default | 0 | ++------------------------------+--------------------------------------+ +| Change | Dynamic | ++------------------------------+--------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------------+--------------------------------------+ + +zfs_obsolete_min_time_ms +~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_obsolete_min_time_ms`` is similar to +`zfs_free_min_time_ms <#zfs-free-min-time-ms>`__ and used for cleanup of +old indirection records for vdevs removed using the ``zpool remove`` +command. + +======================== ========================================== +zfs_obsolete_min_time_ms Notes +======================== ========================================== +Tags `delete <#delete>`__, `remove <#remove>`__ +When to change TBD +Data Type int +Units milliseconds +Range 0 to MAX_INT +Default 500 +Change Dynamic +Versions Affected planned for v2 +======================== ========================================== + +zfs_override_estimate_recordsize +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_override_estimate_recordsize`` overrides the default logic for +estimating block sizes when doing a zfs send. The default heuristic is +that the average block size will be the current recordsize. + ++----------------------------------+----------------------------------+ +| zfs_override_estimate_recordsize | Notes | ++==================================+==================================+ +| Tags | `send <#send>`__ | ++----------------------------------+----------------------------------+ +| When to change | if most data in your dataset is | +| | not of the current recordsize | +| | and you require accurate zfs | +| | send size estimates | ++----------------------------------+----------------------------------+ +| Data Type | ulong | ++----------------------------------+----------------------------------+ +| Units | bytes | ++----------------------------------+----------------------------------+ +| Range | 0=do not override, 1 to | +| | MAX_ULONG | ++----------------------------------+----------------------------------+ +| Default | 0 | ++----------------------------------+----------------------------------+ +| Change | Dynamic | ++----------------------------------+----------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------------+----------------------------------+ + +zfs_remove_max_segment +~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_remove_max_segment`` sets the largest contiguous segment that ZFS +attempts to allocate when removing a vdev. This can be no larger than +16MB. If there is a performance problem with attempting to allocate +large blocks, consider decreasing this. The value is rounded up to a +power-of-2. 
+ ++------------------------+--------------------------------------------+ +| zfs_remove_max_segment | Notes | ++========================+============================================+ +| Tags | `remove <#remove>`__ | ++------------------------+--------------------------------------------+ +| When to change | after removing a top-level vdev, consider | +| | decreasing if there is a performance | +| | degradation when attempting to allocate | +| | large blocks | ++------------------------+--------------------------------------------+ +| Data Type | int | ++------------------------+--------------------------------------------+ +| Units | bytes | ++------------------------+--------------------------------------------+ +| Range | maximum of the physical block size of all | +| | vdevs in the pool to 16,777,216 bytes (16 | +| | MiB) | ++------------------------+--------------------------------------------+ +| Default | 16,777,216 bytes (16 MiB) | ++------------------------+--------------------------------------------+ +| Change | Dynamic | ++------------------------+--------------------------------------------+ +| Versions Affected | planned for v2 | ++------------------------+--------------------------------------------+ + +zfs_resilver_disable_defer +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_resilver_disable_defer`` disables the ``resilver_defer`` pool +feature. The ``resilver_defer`` feature allows ZFS to postpone new +resilvers if an existing resilver is in progress. + ++----------------------------+----------------------------------------+ +| zfs_resilver_disable_defer | Notes | ++============================+========================================+ +| Tags | `resilver <#resilver>`__ | ++----------------------------+----------------------------------------+ +| When to change | if resilver postponement is not | +| | desired due to overall resilver time | +| | constraints | ++----------------------------+----------------------------------------+ +| Data Type | boolean | ++----------------------------+----------------------------------------+ +| Range | 0=allow ``resilver_defer`` to postpone | +| | new resilver operations, 1=immediately | +| | restart resilver when needed | ++----------------------------+----------------------------------------+ +| Default | 0 | ++----------------------------+----------------------------------------+ +| Change | Dynamic | ++----------------------------+----------------------------------------+ +| Versions Affected | planned for v2 | ++----------------------------+----------------------------------------+ + +zfs_scan_suspend_progress +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_scan_suspend_progress`` causes a scrub or resilver scan to freeze +without actually pausing. + +========================= ============================================ +zfs_scan_suspend_progress Notes +========================= ============================================ +Tags `resilver <#resilver>`__, `scrub <#scrub>`__ +When to change testing or debugging scan code +Data Type boolean +Range 0=do not freeze scans, 1=freeze scans +Default 0 +Change Dynamic +Versions Affected planned for v2 +========================= ============================================ + +zfs_scrub_min_time_ms +~~~~~~~~~~~~~~~~~~~~~ + +Scrubs are processed by the sync thread. While scrubbing at least +``zfs_scrub_min_time_ms`` time is spent working on a scrub between txg +syncs. 
+ +===================== ================================================= +zfs_scrub_min_time_ms Notes +===================== ================================================= +Tags `scrub <#scrub>`__ +When to change +Data Type int +Units milliseconds +Range 1 to (`zfs_txg_timeout <#zfs-txg-timeout>`__ - 1) +Default 1,000 +Change Dynamic +Versions Affected planned for v2 +===================== ================================================= + +zfs_slow_io_events_per_second +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_slow_io_events_per_second`` is a rate limit for slow I/O events. +Note that this should not be set below the ``zed`` thresholds (currently +10 checksums over 10 sec) or else ``zed`` may not trigger any action. + +============================= ============================= +zfs_slow_io_events_per_second Notes +============================= ============================= +Tags `vdev <#vdev>`__ +When to change TBD +Data Type uint +Units slow I/O events +Range ``zed`` threshold to MAX_UINT +Default 20 +Change Dynamic +Versions Affected planned for v2 +============================= ============================= + +zfs_vdev_min_ms_count +~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_min_ms_count`` is the minimum number of metaslabs to create +in a top-level vdev. + ++-----------------------+---------------------------------------------+ +| zfs_vdev_min_ms_count | Notes | ++=======================+=============================================+ +| Tags | `metaslab <#metaslab>`__, `vdev <#vdev>`__ | ++-----------------------+---------------------------------------------+ +| When to change | TBD | ++-----------------------+---------------------------------------------+ +| Data Type | int | ++-----------------------+---------------------------------------------+ +| Units | metaslabs | ++-----------------------+---------------------------------------------+ +| Range | 16 to | +| | `zfs_vdev_m | +| | s_count_limit <#zfs-vdev-ms-count-limit>`__ | ++-----------------------+---------------------------------------------+ +| Default | 16 | ++-----------------------+---------------------------------------------+ +| Change | prior to creating a pool or adding a | +| | top-level vdev | ++-----------------------+---------------------------------------------+ +| Versions Affected | planned for v2 | ++-----------------------+---------------------------------------------+ + +zfs_vdev_ms_count_limit +~~~~~~~~~~~~~~~~~~~~~~~ + +``zfs_vdev_ms_count_limit`` is the practical upper limit for the number +of metaslabs per top-level vdev. 
+ ++-------------------------+-------------------------------------------+ +| zfs_vdev_ms_count_limit | Notes | ++=========================+===========================================+ +| Tags | `metaslab <#metaslab>`__, | +| | `vdev <#vdev>`__ | ++-------------------------+-------------------------------------------+ +| When to change | TBD | ++-------------------------+-------------------------------------------+ +| Data Type | int | ++-------------------------+-------------------------------------------+ +| Units | metaslabs | ++-------------------------+-------------------------------------------+ +| Range | `zfs_vdev | +| | _min_ms_count <#zfs-vdev-min-ms-count>`__ | +| | to 131,072 | ++-------------------------+-------------------------------------------+ +| Default | 131,072 | ++-------------------------+-------------------------------------------+ +| Change | prior to creating a pool or adding a | +| | top-level vdev | ++-------------------------+-------------------------------------------+ +| Versions Affected | planned for v2 | ++-------------------------+-------------------------------------------+ + +spl_hostid +~~~~~~~~~~ + +| ``spl_hostid`` is a unique system id number. It originated in Sun's + products where most systems had a unique id assigned at the factory. + This assignment does not exist in modern hardware. +| In ZFS, the hostid is stored in the vdev label and can be used to + determine if another system had imported the pool. When set + ``spl_hostid`` can be used to uniquely identify a system. By default + this value is set to zero which indicates the hostid is disabled. It + can be explicitly enabled by placing a unique non-zero value in the + file shown in `spl_hostid_path <#spl-hostid-path>`__ + ++-------------------+-------------------------------------------------+ +| spl_hostid | Notes | ++===================+=================================================+ +| Tags | `hostid <#hostid>`__, `MMP <#MMP>`__ | ++-------------------+-------------------------------------------------+ +| Kernel module | spl | ++-------------------+-------------------------------------------------+ +| When to change | to uniquely identify a system when vdevs can be | +| | shared across multiple systems | ++-------------------+-------------------------------------------------+ +| Data Type | ulong | ++-------------------+-------------------------------------------------+ +| Range | 0=ignore hostid, 1 to 4,294,967,295 (32-bits or | +| | 0xffffffff) | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | prior to importing pool | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.6.1 | ++-------------------+-------------------------------------------------+ + +spl_hostid_path +~~~~~~~~~~~~~~~ + +``spl_hostid_path`` is the path name for a file that can contain a +unique hostid. For testing purposes, ``spl_hostid_path`` can be +overridden by the ZFS_HOSTID environment variable. 
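+
+As a brief aside, a sketch of how the hostid file is usually inspected
+and populated; ``zgenhostid`` ships with recent OpenZFS releases, and
+this is only one way to create the file::
+
+   # show the hostid currently in effect
+   hostid
+   # generate /etc/hostid if it does not already exist
+   zgenhostid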
+
++-------------------+-------------------------------------------------+
+| spl_hostid_path   | Notes                                           |
++===================+=================================================+
+| Tags              | `hostid <#hostid>`__, `MMP <#MMP>`__            |
++-------------------+-------------------------------------------------+
+| Kernel module     | spl                                             |
++-------------------+-------------------------------------------------+
+| When to change    | when creating a new ZFS distribution where the  |
+|                   | default value is inappropriate                  |
++-------------------+-------------------------------------------------+
+| Data Type         | string                                          |
++-------------------+-------------------------------------------------+
+| Default           | "/etc/hostid"                                   |
++-------------------+-------------------------------------------------+
+| Change            | read-only, can only be changed prior to spl     |
+|                   | module load                                     |
++-------------------+-------------------------------------------------+
+| Versions Affected | v0.6.1                                          |
++-------------------+-------------------------------------------------+
+
+spl_kmem_alloc_max
+~~~~~~~~~~~~~~~~~~
+
+Large ``kmem_alloc()`` allocations fail if they exceed KMALLOC_MAX_SIZE,
+as determined by the kernel source. Allocations which are marginally
+smaller than this limit may succeed, but should still be avoided due to
+the expense of locating a contiguous range of free pages. Therefore, a
+maximum kmem size with a reasonable safety margin of 4x is set.
+``kmem_alloc()`` allocations larger than this maximum will quickly fail.
+``vmem_alloc()`` allocations less than or equal to this value will use
+``kmalloc()``, but shift to ``vmalloc()`` when exceeding this value.
+
+================== ====================
+spl_kmem_alloc_max Notes
+================== ====================
+Tags               `memory <#memory>`__
+Kernel module      spl
+When to change     TBD
+Data Type          uint
+Units              bytes
+Range              TBD
+Default            KMALLOC_MAX_SIZE / 4
+Change             Dynamic
+Versions Affected  v0.7.0
+================== ====================
+
+spl_kmem_alloc_warn
+~~~~~~~~~~~~~~~~~~~
+
+As a general rule ``kmem_alloc()`` allocations should be small,
+preferably just a few pages, since they must be physically contiguous.
+Therefore, a rate-limited warning is printed to the console for any
+``kmem_alloc()`` which exceeds the threshold ``spl_kmem_alloc_warn``.
+
+The default warning threshold is set to eight pages but capped at 32K to
+accommodate systems using large pages. This value was selected to be
+small enough to ensure the largest allocations are quickly noticed and
+fixed, but large enough to avoid logging any warnings when an allocation
+size is larger than optimal but not a serious concern. Since this value
+is tunable, developers are encouraged to set it lower when testing so
+any new, large allocations are quickly caught. These warnings may be
+disabled by setting the threshold to zero.
+
++---------------------+-----------------------------------------------+
+| spl_kmem_alloc_warn | Notes                                         |
++=====================+===============================================+
+| Tags                | `memory <#memory>`__                          |
++---------------------+-----------------------------------------------+
+| Kernel module       | spl                                           |
++---------------------+-----------------------------------------------+
+| When to change      | developers are encouraged to set it lower     |
+|                     | when testing so any new, large allocations    |
+|                     | are quickly caught                            |
++---------------------+-----------------------------------------------+
+| Data Type           | uint                                          |
++---------------------+-----------------------------------------------+
+| Units               | bytes                                         |
++---------------------+-----------------------------------------------+
+| Range               | 0=disable the warnings, otherwise 1 to        |
+|                     | MAX_UINT                                      |
++---------------------+-----------------------------------------------+
+| Default             | 32,768 (32 KiB)                               |
++---------------------+-----------------------------------------------+
+| Change              | Dynamic                                       |
++---------------------+-----------------------------------------------+
+| Versions Affected   | v0.7.0                                        |
++---------------------+-----------------------------------------------+
+
+spl_kmem_cache_expire
+~~~~~~~~~~~~~~~~~~~~~
+
+Cache expiration is part of default illumos cache behavior. The idea is
+that objects in magazines which have not been recently accessed should
+be returned to the slabs periodically. This is known as cache aging and,
+when enabled, objects will typically be returned after 15 seconds.
+
+On the other hand, Linux slabs are designed to never move objects back
+to the slabs unless there is memory pressure. This is possible because
+under Linux the cache will be notified when memory is low and objects
+can be released.
+
+By default only the Linux method is enabled. It has been shown to
+improve responsiveness on low memory systems and not negatively impact
+the performance of systems with more memory. This policy may be changed
+by setting the ``spl_kmem_cache_expire`` bit mask as follows; both
+policies may be enabled concurrently.
+
+===================== =================================================
+spl_kmem_cache_expire Notes
+===================== =================================================
+Tags                  `memory <#memory>`__
+Kernel module         spl
+When to change        TBD
+Data Type             bitmask
+Range                 0x01 - Aging (illumos), 0x02 - Low memory (Linux)
+Default               0x02
+Change                Dynamic
+Versions Affected     v0.6.1 to v0.8.x
+===================== =================================================
+
+spl_kmem_cache_kmem_limit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Depending on the size of a memory cache object it may be backed by
+``kmalloc()`` or ``vmalloc()`` memory. This is because the size of the
+required allocation greatly impacts the best way to allocate the memory.
+
+When objects are small and only a small number of memory pages need to
+be allocated, ideally just one, then ``kmalloc()`` is very efficient.
+However, allocating multiple pages with ``kmalloc()`` gets increasingly
+expensive because the pages must be physically contiguous.
+
+For this reason we shift to ``vmalloc()`` for slabs of large objects,
+which removes the need for contiguous pages. ``vmalloc()`` cannot
+be used in all cases because there is significant locking overhead
+involved. This function takes a single global lock over the entire
+virtual address range which serializes all allocations. Using slightly
+different allocation functions for small and large objects allows us to
+handle a wide range of object sizes.
+
+The ``spl_kmem_cache_kmem_limit`` value is used to determine this cutoff
+size. One quarter of the kernel's compiled PAGE_SIZE is used as the
+default value because
+`spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ defaults
+to 16. With these default values, at most four contiguous pages are
+allocated.
+
+========================= ====================
+spl_kmem_cache_kmem_limit Notes
+========================= ====================
+Tags                      `memory <#memory>`__
+Kernel module             spl
+When to change            TBD
+Data Type                 uint
+Units                     pages
+Range                     TBD
+Default                   PAGE_SIZE / 4
+Change                    Dynamic
+Versions Affected         v0.7.0 to v0.8.x
+========================= ====================
+
+spl_kmem_cache_max_size
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_max_size`` is the maximum size of a kmem cache slab in
+MiB. This effectively limits the maximum cache object size to
+``spl_kmem_cache_max_size`` /
+`spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__. Kmem
+caches may not be created with objects sized larger than this limit.
+
+======================= =========================================
+spl_kmem_cache_max_size Notes
+======================= =========================================
+Tags                    `memory <#memory>`__
+Kernel module           spl
+When to change          TBD
+Data Type               uint
+Units                   MiB
+Range                   TBD
+Default                 4 for 32-bit kernel, 32 for 64-bit kernel
+Change                  Dynamic
+Versions Affected       v0.7.0
+======================= =========================================
+
+spl_kmem_cache_obj_per_slab
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_obj_per_slab`` is the preferred number of objects per
+slab in the kmem cache. In general, a larger value will increase the
+cache's memory footprint while decreasing the time required to perform
+an allocation. Conversely, a smaller value will minimize the footprint
+and improve cache reclaim time, but individual allocations may take
+longer.
+
+=========================== ====================
+spl_kmem_cache_obj_per_slab Notes
+=========================== ====================
+Tags                        `memory <#memory>`__
+Kernel module               spl
+When to change              TBD
+Data Type                   uint
+Units                       kmem cache objects
+Range                       TBD
+Default                     8
+Change                      Dynamic
+Versions Affected           v0.7.0 to v0.8.x
+=========================== ====================
+
+spl_kmem_cache_obj_per_slab_min
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_obj_per_slab_min`` is the minimum number of objects
+allowed per slab. Normally slabs will contain
+`spl_kmem_cache_obj_per_slab <#spl-kmem-cache-obj-per-slab>`__ objects
+but for caches that contain very large objects it's desirable to only
+have a few, or even just one, object per slab.
+
+=============================== ===============================
+spl_kmem_cache_obj_per_slab_min Notes
+=============================== ===============================
+Tags                            `memory <#memory>`__
+Kernel module                   spl
+When to change                  debugging kmem cache operations
+Data Type                       uint
+Units                           kmem cache objects
+Range                           TBD
+Default                         1
+Change                          Dynamic
+Versions Affected               v0.7.0
+=============================== ===============================
+
+spl_kmem_cache_reclaim
+~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_reclaim`` prevents Linux from being able to rapidly
+reclaim all the memory held by the kmem caches. This may be useful in
+circumstances where it's preferable that Linux reclaim memory from some
+other subsystem first. Setting ``spl_kmem_cache_reclaim`` increases the
+likelihood of out-of-memory events on a memory-constrained system.
+
++------------------------+--------------------------------------------+
+| spl_kmem_cache_reclaim | Notes                                      |
++========================+============================================+
+| Tags                   | `memory <#memory>`__                       |
++------------------------+--------------------------------------------+
+| Kernel module          | spl                                        |
++------------------------+--------------------------------------------+
+| When to change         | TBD                                        |
++------------------------+--------------------------------------------+
+| Data Type              | boolean                                    |
++------------------------+--------------------------------------------+
+| Range                  | 0=enable rapid memory reclaim from kmem    |
+|                        | caches, 1=disable rapid memory reclaim     |
+|                        | from kmem caches                           |
++------------------------+--------------------------------------------+
+| Default                | 0                                          |
++------------------------+--------------------------------------------+
+| Change                 | Dynamic                                    |
++------------------------+--------------------------------------------+
+| Versions Affected      | v0.7.0                                     |
++------------------------+--------------------------------------------+
+
+spl_kmem_cache_slab_limit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For small objects the Linux slab allocator should be used to make the
+most efficient use of the memory. However, large objects are not
+supported by the Linux slab allocator and therefore the SPL
+implementation is preferred. ``spl_kmem_cache_slab_limit`` is used to
+determine the cutoff between a small and large object.
+
+Objects of ``spl_kmem_cache_slab_limit`` or smaller will be allocated
+using the Linux slab allocator, while larger objects use the SPL
+allocator. A cutoff of 16 KiB was determined to be optimal for
+architectures using 4 KiB pages.
+
++---------------------------+-----------------------------------------+
+| spl_kmem_cache_slab_limit | Notes                                   |
++===========================+=========================================+
+| Tags                      | `memory <#memory>`__                    |
++---------------------------+-----------------------------------------+
+| Kernel module             | spl                                     |
++---------------------------+-----------------------------------------+
+| When to change            | TBD                                     |
++---------------------------+-----------------------------------------+
+| Data Type                 | uint                                    |
++---------------------------+-----------------------------------------+
+| Units                     | bytes                                   |
++---------------------------+-----------------------------------------+
+| Range                     | TBD                                     |
++---------------------------+-----------------------------------------+
+| Default                   | 16,384 (16 KiB) when kernel PAGE_SIZE = |
+|                           | 4KiB, 0 for other PAGE_SIZE values      |
++---------------------------+-----------------------------------------+
+| Change                    | Dynamic                                 |
++---------------------------+-----------------------------------------+
+| Versions Affected         | v0.7.0                                  |
++---------------------------+-----------------------------------------+
+
+spl_max_show_tasks
+~~~~~~~~~~~~~~~~~~
+
+``spl_max_show_tasks`` is the limit of tasks per pending list in each
+taskq shown in ``/proc/spl/taskq`` and ``/proc/spl/taskq-all``. Reading
+these procfs files walks the lists with the lock held, which could cause
+a lockup if the lists grow too large. If a list is larger than the
+limit, the string ``"(truncated)"`` is printed.
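+
+A short example of how this limit is typically encountered and raised
+(the new value shown is arbitrary; the sysfs path assumes a standard
+Linux install)::
+
+   # inspect the taskq listing that this limit applies to
+   cat /proc/spl/taskq
+   # allow more entries per pending list to be shown
+   echo 1024 > /sys/module/spl/parameters/spl_max_show_tasks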
+ +================== =================================== +spl_max_show_tasks Notes +================== =================================== +Tags `taskq <#taskq>`__ +Kernel module spl +When to change TBD +Data Type uint +Units tasks reported +Range 0 disables the limit, 1 to MAX_UINT +Default 512 +Change Dynamic +Versions Affected v0.7.0 +================== =================================== + +spl_panic_halt +~~~~~~~~~~~~~~ + +``spl_panic_halt`` enables kernel panic upon assertion failures. When +not enabled, the asserting thread is halted to facilitate further +debugging. + ++-------------------+-------------------------------------------------+ +| spl_panic_halt | Notes | ++===================+=================================================+ +| Tags | `debug <#debug>`__, `panic <#panic>`__ | ++-------------------+-------------------------------------------------+ +| Kernel module | spl | ++-------------------+-------------------------------------------------+ +| When to change | when debugging assertions and kernel core dumps | +| | are desired | ++-------------------+-------------------------------------------------+ +| Data Type | boolean | ++-------------------+-------------------------------------------------+ +| Range | 0=halt thread upon assertion, 1=panic kernel | +| | upon assertion | ++-------------------+-------------------------------------------------+ +| Default | 0 | ++-------------------+-------------------------------------------------+ +| Change | Dynamic | ++-------------------+-------------------------------------------------+ +| Versions Affected | v0.7.0 | ++-------------------+-------------------------------------------------+ + +spl_taskq_kick +~~~~~~~~~~~~~~ + +Upon writing a non-zero value to ``spl_taskq_kick``, all taskqs are +scanned. If any taskq has a pending task more than 5 seconds old, the +taskq spawns more threads. This can be useful in rare deadlock +situations caused by one or more taskqs not spawning a thread when it +should. + +================= ===================== +spl_taskq_kick Notes +================= ===================== +Tags `taskq <#taskq>`__ +Kernel module spl +When to change See description above +Data Type uint +Units N/A +Default 0 +Change Dynamic +Versions Affected v0.7.0 +================= ===================== + +spl_taskq_thread_bind +~~~~~~~~~~~~~~~~~~~~~ + +``spl_taskq_thread_bind`` enables binding taskq threads to specific +CPUs, distributed evenly over the available CPUs. By default, this +behavior is disabled to allow the Linux scheduler the maximum +flexibility to determine where a thread should run. 
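+
+Because this parameter is read prior to module load, it is normally set
+as a module option rather than through sysfs; a sketch (the file name is
+distribution-dependent)::
+
+   # /etc/modprobe.d/spl.conf -- takes effect the next time spl loads
+   options spl spl_taskq_thread_bind=1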
+
++-----------------------+---------------------------------------------+
+| spl_taskq_thread_bind | Notes                                       |
++=======================+=============================================+
+| Tags                  | `CPU <#CPU>`__, `taskq <#taskq>`__          |
++-----------------------+---------------------------------------------+
+| Kernel module         | spl                                         |
++-----------------------+---------------------------------------------+
+| When to change        | when debugging CPU scheduling options       |
++-----------------------+---------------------------------------------+
+| Data Type             | boolean                                     |
++-----------------------+---------------------------------------------+
+| Range                 | 0=taskqs are not bound to specific CPUs,    |
+|                       | 1=taskqs are bound to CPUs                  |
++-----------------------+---------------------------------------------+
+| Default               | 0                                           |
++-----------------------+---------------------------------------------+
+| Change                | prior to loading spl kernel module          |
++-----------------------+---------------------------------------------+
+| Versions Affected     | v0.7.0                                      |
++-----------------------+---------------------------------------------+
+
+spl_taskq_thread_dynamic
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_taskq_thread_dynamic`` enables dynamic taskqs. When enabled,
+taskqs which set the TASKQ_DYNAMIC flag will by default create only a
+single thread. New threads will be created on demand up to a maximum
+allowed number to facilitate the completion of outstanding tasks.
+Threads which are no longer needed are promptly destroyed. By default
+this behavior is enabled but it can be disabled.
+
+See also
+`zfs_zil_clean_taskq_nthr_pct <#zfs-zil-clean-taskq-nthr-pct>`__,
+`zio_taskq_batch_pct <#zio-taskq-batch-pct>`__
+
++--------------------------+------------------------------------------+
+| spl_taskq_thread_dynamic | Notes                                    |
++==========================+==========================================+
+| Tags                     | `taskq <#taskq>`__                       |
++--------------------------+------------------------------------------+
+| Kernel module            | spl                                      |
++--------------------------+------------------------------------------+
+| When to change           | disable for performance analysis or      |
+|                          | troubleshooting                          |
++--------------------------+------------------------------------------+
+| Data Type                | boolean                                  |
++--------------------------+------------------------------------------+
+| Range                    | 0=taskq threads are not dynamic, 1=taskq |
+|                          | threads are dynamically created and      |
+|                          | destroyed                                |
++--------------------------+------------------------------------------+
+| Default                  | 1                                        |
++--------------------------+------------------------------------------+
+| Change                   | prior to loading spl kernel module       |
++--------------------------+------------------------------------------+
+| Versions Affected        | v0.7.0                                   |
++--------------------------+------------------------------------------+
+
+spl_taskq_thread_priority
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+| ``spl_taskq_thread_priority`` allows newly created taskq threads to
+  set a non-default scheduler priority. When enabled, the priority
+  specified when a taskq is created will be applied to all threads
+  created by that taskq.
+| When disabled, all threads will use the default Linux kernel thread
+  priority.
+
++---------------------------+-----------------------------------------+
+| spl_taskq_thread_priority | Notes                                   |
++===========================+=========================================+
+| Tags                      | `CPU <#CPU>`__, `taskq <#taskq>`__      |
++---------------------------+-----------------------------------------+
+| Kernel module             | spl                                     |
++---------------------------+-----------------------------------------+
+| When to change            | when troubleshooting CPU                |
+|                           | scheduling-related performance issues   |
++---------------------------+-----------------------------------------+
+| Data Type                 | boolean                                 |
++---------------------------+-----------------------------------------+
+| Range                     | 0=taskq threads use the default Linux   |
+|                           | kernel thread priority, 1=taskq threads |
+|                           | use the priority specified when the     |
+|                           | taskq was created                       |
++---------------------------+-----------------------------------------+
+| Default                   | 1                                       |
++---------------------------+-----------------------------------------+
+| Change                    | prior to loading spl kernel module      |
++---------------------------+-----------------------------------------+
+| Versions Affected         | v0.7.0                                  |
++---------------------------+-----------------------------------------+
+
+spl_taskq_thread_sequential
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_taskq_thread_sequential`` is the number of items a taskq worker
+thread must handle without interruption before requesting a new worker
+thread be spawned. ``spl_taskq_thread_sequential`` controls how quickly
+taskqs ramp up the number of threads processing the queue. Because Linux
+thread creation and destruction are relatively inexpensive, a small
+default value has been selected. Thus threads are created aggressively,
+which is typically desirable. Increasing this value results in a slower
+thread creation rate which may be preferable for some configurations.
+
+=========================== ==================================
+spl_taskq_thread_sequential Notes
+=========================== ==================================
+Tags                        `CPU <#CPU>`__, `taskq <#taskq>`__
+Kernel module               spl
+When to change              TBD
+Data Type                   int
+Units                       taskq items
+Range                       1 to MAX_INT
+Default                     4
+Change                      Dynamic
+Versions Affected           v0.7.0
+=========================== ==================================
+
+spl_kmem_cache_kmem_threads
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_kmem_threads`` shows the current number of
+``spl_kmem_cache`` threads. This task queue is responsible for
+allocating new slabs for use by the kmem caches. For the majority of
+systems and workloads only a small number of threads are required.
+
++-----------------------------+---------------------------------------+
+| spl_kmem_cache_kmem_threads | Notes                                 |
++=============================+=======================================+
+| Tags                        | `CPU <#CPU>`__, `memory <#memory>`__  |
++-----------------------------+---------------------------------------+
+| Kernel module               | spl                                   |
++-----------------------------+---------------------------------------+
+| When to change              | read-only                             |
++-----------------------------+---------------------------------------+
+| Data Type                   | int                                   |
++-----------------------------+---------------------------------------+
+| Range                       | 1 to MAX_INT                          |
++-----------------------------+---------------------------------------+
+| Units                       | threads                               |
++-----------------------------+---------------------------------------+
+| Default                     | 4                                     |
++-----------------------------+---------------------------------------+
+| Change                      | read-only, can only be changed prior  |
+|                             | to spl module load                    |
++-----------------------------+---------------------------------------+
+| Versions Affected           | v0.7.0                                |
++-----------------------------+---------------------------------------+
+
+spl_kmem_cache_magazine_size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``spl_kmem_cache_magazine_size`` sets an upper bound on the size of kmem
+cache magazines. Cache magazines are an optimization designed to
+minimize the cost of allocating memory. They do this by keeping a
+per-cpu cache of recently freed objects, which can then be reallocated
+without taking a lock. This can improve performance on highly contended
+caches. However, because objects in magazines will prevent otherwise
+empty slabs from being immediately released, this may not be ideal for
+low memory machines.
+
+For this reason spl_kmem_cache_magazine_size can be used to set a
+maximum magazine size. When this value is set to 0 the magazine size
+will be automatically determined based on the object size. Otherwise
+magazines will be limited to 2-256 objects per magazine (per CPU).
+Magazines cannot be disabled entirely in this implementation.
+
++------------------------------+--------------------------------------+
+| spl_kmem_cache_magazine_size | Notes                                |
++==============================+======================================+
+| Tags                         | `CPU <#CPU>`__, `memory <#memory>`__ |
++------------------------------+--------------------------------------+
+| Kernel module                | spl                                  |
++------------------------------+--------------------------------------+
+| When to change               |                                      |
++------------------------------+--------------------------------------+
+| Data Type                    | int                                  |
++------------------------------+--------------------------------------+
+| Units                        | objects                              |
++------------------------------+--------------------------------------+
+| Range                        | 0=automatically scale magazine size, |
+|                              | otherwise 2 to 256                   |
++------------------------------+--------------------------------------+
+| Default                      | 0                                    |
++------------------------------+--------------------------------------+
+| Change                       | read-only, can only be changed prior |
+|                              | to spl module load                   |
++------------------------------+--------------------------------------+
+| Versions Affected            | v0.7.0                               |
++------------------------------+--------------------------------------+
diff --git a/_sources/Performance and Tuning/Workload Tuning.rst.txt b/_sources/Performance and Tuning/Workload Tuning.rst.txt
new file mode 100644
index 000000000..d86f61bb4
--- /dev/null
+++ b/_sources/Performance and Tuning/Workload Tuning.rst.txt
@@ -0,0 +1,789 @@
+Workload Tuning
+===============
+
+Below are tips for various workloads.
+
+.. contents:: Table of Contents
+   :local:
+
+.. _basic_concepts:
+
+Basic concepts
+--------------
+
+Descriptions of ZFS internals that have an effect on application
+performance follow.
+
+.. _adaptive_replacement_cache:
+
+Adaptive Replacement Cache
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For decades, operating systems have used RAM as a cache to avoid the
+necessity of waiting on disk IO, which is extremely slow. This concept
+is called page replacement. Until ZFS, virtually all filesystems used
+the Least Recently Used (LRU) page replacement algorithm in which the
+least recently used pages are the first to be replaced. Unfortunately,
+the LRU algorithm is vulnerable to cache flushes, where a brief change
+in workload that occurs occasionally removes all frequently used data
+from cache. The Adaptive Replacement Cache (ARC) algorithm was
+implemented in ZFS to replace LRU. It solves this problem by maintaining
+four lists:
+
+#. A list for recently cached entries.
+#. A list for recently cached entries that have been accessed more than
+   once.
+#. A list for entries evicted from #1.
+#. A list for entries evicted from #2.
+
+Data is evicted from the first list while an effort is made to keep data
+in the second list. In this way, ARC is able to outperform LRU by
+providing a superior hit rate.
+
+In addition, a dedicated cache device (typically an SSD) can be added to
+the pool, with
+``zpool add POOLNAME cache DEVICENAME``. The cache
+device is managed by the L2ARC, which scans entries that are next to be
+evicted and writes them to the cache device. The data stored in ARC and
+L2ARC can be controlled via the ``primarycache`` and ``secondarycache``
+zfs properties respectively, which can be set on both zvols and
+datasets. Possible settings are ``all``, ``none`` and ``metadata``. It
+is possible to improve performance when a zvol or dataset hosts an
+application that does its own caching by caching only metadata. One
+example would be a virtual machine using ZFS. 
Another would be a
+database system which manages its own cache (Oracle for instance).
+PostgreSQL, by contrast, depends on the OS-level file cache for the
+majority of its caching.
+
+.. _alignment_shift_ashift:
+
+Alignment Shift (ashift)
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Top-level vdevs contain an internal property called ashift, which stands
+for alignment shift. It is set at vdev creation and it is immutable. It
+can be read using the ``zdb`` command. It is calculated as the maximum
+base 2 logarithm of the physical sector size of any child vdev and it
+alters the disk format such that writes are always done according to it.
+This makes 2^ashift the smallest possible IO on a vdev. Configuring
+ashift correctly is important because partial sector writes incur a
+penalty where the sector must be read into a buffer before it can be
+written. ZFS makes the implicit assumption that the sector size reported
+by drives is correct and calculates ashift based on that.
+
+In an ideal world, physical sector size is always reported correctly and
+therefore this requires no attention. Unfortunately, this is not the
+case. The sector size on all storage devices was 512 bytes prior to the
+creation of flash-based solid state drives. Some operating systems, such
+as Windows XP, were written under this assumption and will not function
+when drives report a different sector size.
+
+Flash-based solid state drives came to market around 2007. These devices
+report 512-byte sectors, but the actual flash pages, which roughly
+correspond to sectors, are never 512 bytes. The early models used
+4096-byte pages while the newer models have moved to an 8192-byte page.
+In addition, "Advanced Format" hard drives have been created which also
+use a 4096-byte sector size. Partial page writes suffer from similar
+performance degradation as partial sector writes. In some cases, the
+design of NAND-flash makes the performance degradation even worse, but
+that is beyond the scope of this description.
+
+Reporting the correct sector sizes is the responsibility of the block
+device layer. This has unfortunately made proper handling of drives that
+misreport their sector size differ across platforms. The
+respective methods are as follows:
+
+- `sd.conf `__
+  on illumos
+- `gnop(8) `__
+  on FreeBSD; see for example `FreeBSD on 4K sector
+  drives `__
+  (2011-01-01)
+- `ashift= `__
+  on ZFS on Linux
+- -o ashift= also works with both MacZFS (pool version 8) and ZFS-OSX
+  (pool version 5000).
+
+-o ashift= is convenient, but it is flawed in that the creation of pools
+containing top level vdevs that have multiple optimal sector sizes
+requires the use of multiple commands. `A newer
+syntax `__
+that will rely on the actual sector sizes has been discussed as a cross
+platform replacement and will likely be implemented in the future.
+
+In addition, there is a `database of
+drives known to misreport sector
+sizes `__
+to the ZFS on Linux project. It is used to automatically adjust ashift
+without the assistance of the system administrator. This approach is
+unable to fully compensate for misreported sector sizes whenever drive
+identifiers are used ambiguously (e.g. virtual machines, iSCSI LUNs,
+some rare SSDs), but it does a great amount of good. The format is
+roughly compatible with illumos' sd.conf and it is expected that other
+implementations will integrate the database in future releases. 
Strictly +speaking, this database does not belong in ZFS, but the difficulty of +patching the Linux kernel (especially older ones) necessitated that this +be implemented in ZFS itself for Linux. The same is true for MacZFS. +However, FreeBSD and illumos are both able to implement this in the +correct layer. + +Compression +~~~~~~~~~~~ + +Internally, ZFS allocates data using multiples of the device's sector +size, typically either 512 bytes or 4KB (see above). When compression is +enabled, a smaller number of sectors can be allocated for each block. +The uncompressed block size is set by the ``recordsize`` (defaults to +128KB) or ``volblocksize`` (defaults to 16KB since v2.2) property (for filesystems +vs volumes). + +The following compression algorithms are available: + +- LZ4 + + - New algorithm added after feature flags were created. It is + significantly superior to LZJB in all metrics tested. It is `new + default compression algorithm `__ + (compression=on) in OpenZFS. + It is available on all platforms as of 2020. + +- LZJB + + - Original default compression algorithm (compression=on) for ZFS. + It was created to satisfy the desire for a compression algorithm + suitable for use in filesystems. Specifically, that it provides + fair compression, has a high compression speed, has a high + decompression speed and detects incompressible data + quickly. + +- GZIP (1 through 9) + + - Classic Lempel-Ziv implementation. It provides high compression, + but it often makes IO CPU-bound. + +- ZLE (Zero Length Encoding) + + - A very simple algorithm that only compresses zeroes. + +- ZSTD (Zstandard) + + - Zstandard is a modern, high performance, general compression + algorithm which provides similar or better compression levels to + GZIP, but with much better performance. Zstandard offers a very + wide range of performance/compression trade-off, and is backed by + an extremely fast decoder. + It is available from `OpenZFS 2.0 version `__. + +If you want to use compression and are uncertain which to use, use LZ4. +It averages a 2.1:1 compression ratio while gzip-1 averages 2.7:1, but +gzip is much slower. Both figures are obtained from `testing by the LZ4 +project `__ on the Silesia corpus. The +greater compression ratio of gzip is usually only worthwhile for rarely +accessed data. + +.. _raid_z_stripe_width: + +RAID-Z stripe width +~~~~~~~~~~~~~~~~~~~ + +Choose a RAID-Z stripe width based on your IOPS needs and the amount of +space you are willing to devote to parity information. If you need more +IOPS, use fewer disks per stripe. If you need more usable space, use +more disks per stripe. Trying to optimize your RAID-Z stripe width based +on exact numbers is irrelevant in nearly all cases. See this `blog +post `__ +for more details. + +.. _dataset_recordsize: + +Dataset recordsize +~~~~~~~~~~~~~~~~~~ + +ZFS datasets use an internal recordsize of 128KB by default. The dataset +recordsize is the basic unit of data used for internal copy-on-write on +files. Partial record writes require that data be read from either ARC +(cheap) or disk (expensive). recordsize can be set to any power of 2 +from 512 bytes to 1 megabyte. Software that writes in fixed record +sizes (e.g. databases) will benefit from the use of a matching +recordsize. + +Changing the recordsize on a dataset will only take effect for new +files. If you change the recordsize because your application should +perform better with a different one, you will need to recreate its +files. A cp followed by a mv on each file is sufficient. 
Alternatively, +send/recv should recreate the files with the correct recordsize when a +full receive is done. + +.. _larger_record_sizes: + +Larger record sizes +^^^^^^^^^^^^^^^^^^^ + +Record sizes of up to 16M are supported with the large_blocks pool +feature, which is enabled by default on new pools on systems that +support it. + +Record sizes larger than 1M were disabled by default +before openZFS v2.2, +unless the zfs_max_recordsize kernel module parameter was set to allow +sizes higher than 1M. + +\`zfs send\` operations must specify -L +to ensure that larger than 128KB blocks are sent and the receiving pools +must support the large_blocks feature. + +.. _zvol_volblocksize: + +zvol volblocksize +~~~~~~~~~~~~~~~~~ + +Zvols have a ``volblocksize`` property that is analogous to ``recordsize``. +Current default (16KB since v2.2) balances the metadata overhead, compression +opportunities and decent space efficiency on majority of pool configurations +due to 4KB disk physical block rounding (especially on RAIDZ and DRAID), +while incurring some write amplification on guest FSes that run with smaller +block sizes [#VOLBLOCKSIZE]_. + +Users are advised to test their scenarios and see whether the ``volblocksize`` +needs to be changed to favor one or the other: + +- sector alignment of guest FS is crucial +- most of guest FSes use default block size of 4-8KB, so: + + - Larger ``volblocksize`` can help with mostly sequential workloads and + will gain a compression efficiency + + - Smaller ``volblocksize`` can help with random workloads and minimize + IO amplification, but will use more metadata + (e.g. more small IOs will be generated by ZFS) and may have worse + space efficiency (especially on RAIDZ and DRAID) + + - It's meaningless to set ``volblocksize`` less than guest FS's block size + or :ref:`ashift ` + + - See :ref:`Dataset recordsize ` + for additional information + +Deduplication +~~~~~~~~~~~~~ + +Deduplication uses an on-disk hash table, using `extensible +hashing `__ as +implemented in the ZAP (ZFS Attribute Processor). Each cached entry uses +slightly more than 320 bytes of memory. The DDT code relies on ARC for +caching the DDT entries, such that there is no double caching or +internal fragmentation from the kernel memory allocator. Each pool has a +global deduplication table shared across all datasets and zvols on which +deduplication is enabled. Each entry in the hash table is a record of a +unique block in the pool. (Where the block size is set by the +``recordsize`` or ``volblocksize`` properties.) + +The hash table (also known as the DDT or DeDup Table) must be accessed +for every dedup-able block that is written or freed (regardless of +whether it has multiple references). If there is insufficient memory for +the DDT to be cached in memory, each cache miss will require reading a +random block from disk, resulting in poor performance. For example, if +operating on a single 7200RPM drive that can do 100 io/s, uncached DDT +reads would limit overall write throughput to 100 blocks per second, or +400KB/s with 4KB blocks. + +The consequence is that sufficient memory to store deduplication data is +required for good performance. The deduplication data is considered +metadata and therefore can be cached if the ``primarycache`` or +``secondarycache`` properties are set to ``metadata``. In addition, the +deduplication table will compete with other metadata for metadata +storage, which can have a negative effect on performance. 
Simulation of
+the number of deduplication table entries needed for a given pool can be
+done using the -D option to zdb. Then a simple multiplication by
+320 bytes can be done to get the approximate memory requirements.
+Alternatively, you can estimate an upper bound on the number of unique
+blocks by dividing the amount of storage you plan to use on each dataset
+(taking into account that partial records each count as a full
+recordsize for the purposes of deduplication) by the recordsize and each
+zvol by the volblocksize, summing and then multiplying by 320 bytes.
+
+.. _metaslab_allocator:
+
+Metaslab Allocator
+~~~~~~~~~~~~~~~~~~
+
+ZFS top level vdevs are divided into metaslabs from which blocks can be
+independently allocated, so as to allow concurrent IOs to perform
+allocations without blocking one another. At present, `there is a
+regression `__ on the
+Linux and Mac OS X ports that causes serialization to occur.
+
+By default, the selection of a metaslab is biased toward lower LBAs to
+improve performance of spinning disks, but this does not make sense on
+solid state media. This behavior can be adjusted globally by setting the
+ZFS module's global metaslab_lba_weighting_enabled tunable to 0. Setting
+this tunable is only advisable on systems that only use solid state
+media for pools.
+
+The metaslab allocator will allocate blocks on a first-fit basis when a
+metaslab has more than or equal to 4 percent free space and a best-fit
+basis when a metaslab has less than 4 percent free space. The former is
+much faster than the latter, but it is not possible to tell when this
+behavior occurs from the pool's free space. However, the command ``zdb
+-mmm $POOLNAME`` will provide this information.
+
+.. _pool_geometry:
+
+Pool Geometry
+~~~~~~~~~~~~~
+
+If small random IOPS are of primary importance, mirrored vdevs will
+outperform raidz vdevs. Read IOPS on mirrors will scale with the number
+of drives in each mirror while raidz vdevs will each be limited to the
+IOPS of the slowest drive.
+
+If sequential writes are of primary importance, raidz will outperform
+mirrored vdevs. Sequential write throughput increases linearly with the
+number of data disks in raidz while writes are limited to the slowest
+drive in mirrored vdevs. Sequential read performance should be roughly
+the same on each.
+
+Both IOPS and throughput will increase by the respective sums of the
+IOPS and throughput of each top level vdev, regardless of whether they
+are raidz or mirrors.
+
+.. _whole_disks_versus_partitions:
+
+Whole Disks versus Partitions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ZFS will behave differently on different platforms when given a whole
+disk.
+
+On illumos, ZFS attempts to enable the write cache on a whole disk. The
+illumos UFS driver cannot ensure integrity with the write cache enabled,
+so by default Sun/Solaris systems using the UFS file system for boot were
+shipped with the drive write cache disabled (long ago, when Sun was still
+an independent company). For safety on illumos, if ZFS is not given the
+whole disk, it could be shared with UFS and thus it is not appropriate
+for ZFS to enable write cache. In this case, the write cache setting is
+not changed and will remain as-is. Today, most vendors ship drives with
+write cache enabled by default.
+
+On Linux, the Linux IO elevator is largely redundant given that ZFS has
+its own IO elevator.
+
+ZFS will also create a GPT partition table and its own partitions when
+given a whole disk under illumos on x86/amd64 and on Linux. 
This is mainly to
+make booting through UEFI possible because UEFI requires a small FAT
+partition to be able to boot the system. The ZFS driver is able to
+tell whether the pool had been given the entire disk or not via the
+whole_disk field in the label.
+
+This is not done on FreeBSD. Pools created by FreeBSD will always have
+the whole_disk field set to true, such that a pool imported on another
+platform that was created on FreeBSD will always be treated as if the
+whole disks were given to ZFS.
+
+.. _OS_specific:
+
+OS/distro-specific recommendations
+----------------------------------
+
+.. _linux_specific:
+
+Linux
+~~~~~
+
+init_on_alloc
+^^^^^^^^^^^^^
+Some Linux distributions (at least Debian, Ubuntu) enable the
+``init_on_alloc`` option as a security precaution by default.
+This option can help to [#init_on_alloc]_:
+
+   prevent possible information leaks and
+   make control-flow bugs that depend on uninitialized values more
+   deterministic.
+
+Unfortunately, it can lower ARC throughput considerably
+(see `bug `__).
+
+If you're ready to cope with these security risks [#init_on_alloc]_,
+you may disable it
+by setting ``init_on_alloc=0`` in the GRUB kernel boot parameters.
+
+.. _general_recommendations:
+
+General recommendations
+-----------------------
+
+.. _alignment_shift:
+
+Alignment shift
+~~~~~~~~~~~~~~~
+
+Make sure that you create your pools such that the vdevs have the
+correct alignment shift for your storage devices' sector size. If
+dealing with flash media, this is going to be either 12 (4K sectors) or
+13 (8K sectors). For SSD ephemeral storage on Amazon EC2, the proper
+setting is 12.
+
+.. _atime_updates:
+
+Atime Updates
+~~~~~~~~~~~~~
+
+Set either relatime=on or atime=off to minimize IOs used to update
+access time stamps. For backward compatibility with the small percentage
+of software that relies on access times, relatime is preferred when
+available and should be set on your entire pool. atime=off should be
+used more selectively.
+
+.. _free_space:
+
+Free Space
+~~~~~~~~~~
+
+Keep pool free space above 10% to keep many metaslabs from reaching the
+4% free space threshold at which they switch from first-fit to best-fit
+allocation strategies. When the threshold is hit, the :ref:`metaslab_allocator` becomes very CPU
+intensive in an attempt to protect itself from fragmentation. This
+reduces IOPS, especially as more metaslabs reach the 4% threshold.
+
+The recommendation is 10% rather than 5% because metaslab selection
+considers both location and free space unless the global
+metaslab_lba_weighting_enabled tunable is set to 0. When that tunable is
+0, ZFS will consider only free space, so the expense of the best-fit
+allocator can be avoided by keeping free space above 5%. That setting
+should only be used on systems with pools that consist of solid state
+drives because it will reduce sequential IO performance on mechanical
+disks.
+
+.. _lz4_compression:
+
+LZ4 compression
+~~~~~~~~~~~~~~~
+
+Set compression=lz4 on your pools' root datasets so that all datasets
+inherit it unless you have a reason not to enable it. Userland tests of
+LZ4 compression of incompressible data in a single thread have shown that
+it can process 10GB/sec, so it is unlikely to be a bottleneck even on
+incompressible data. Furthermore, incompressible data will be stored
+without compression such that reads of incompressible data with
+compression enabled will not be subject to decompression. 
Writes are so
+fast that incompressible data is unlikely to see a performance penalty
+from the use of LZ4 compression. The reduction in IO from LZ4 will
+typically be a performance win.
+
+Note that larger record sizes will increase compression ratios on
+compressible data by allowing compression algorithms to process more
+data at a time.
+
+.. _nvme_low_level_formatting_link:
+
+NVMe low level formatting
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+See :ref:`nvme_low_level_formatting`.
+
+.. _pool_geometry_1:
+
+Pool Geometry
+~~~~~~~~~~~~~
+
+Do not put more than ~16 disks in raidz. The rebuild times on mechanical
+disks will be excessive when the pool is full.
+
+.. _synchronous_io:
+
+Synchronous I/O
+~~~~~~~~~~~~~~~
+
+If your workload involves fsync or O_SYNC and your pool is backed by
+mechanical storage, consider adding one or more SLOG devices. Pools that
+have multiple SLOG devices will distribute ZIL operations across them.
+The best choice for SLOG device(s) is likely Optane / 3D XPoint SSDs.
+See :ref:`optane_3d_xpoint_ssds`
+for a description of them. If an Optane / 3D XPoint SSD is an option,
+the rest of this section on synchronous I/O need not be read. If Optane
+/ 3D XPoint SSDs are not an option, see
+:ref:`nand_flash_ssds` for suggestions
+for NAND flash SSDs and also read the information below.
+
+To ensure maximum ZIL performance on NAND flash SSD-based SLOG devices,
+you should also overprovision spare area to increase
+IOPS [#ssd_iops]_. Only
+about 4GB is needed, so the rest can be left as overprovisioned storage.
+The choice of 4GB is somewhat arbitrary. Most systems do not write
+anything close to 4GB to ZIL between transaction group commits, so
+overprovisioning all storage beyond the 4GB partition should be alright.
+If a workload needs more, then make it no more than the maximum ARC
+size. Even under extreme workloads, ZFS will not benefit from more SLOG
+storage than the maximum ARC size. That is half of system memory on
+Linux and 3/4 of system memory on illumos.
+
+.. _overprovisioning_by_secure_erase_and_partition_table_trick:
+
+Overprovisioning by secure erase and partition table trick
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can do this with a mix of a secure erase and a partition table
+trick, such as the following:
+
+#. Run a secure erase on the NAND-flash SSD.
+#. Create a partition table on the NAND-flash SSD.
+#. Create a 4GB partition.
+#. Give the partition to ZFS to use as a log device.
+
+If using the secure erase and partition table trick, do *not* use the
+unpartitioned space for other things, even temporarily. That will reduce
+or eliminate the overprovisioning by marking pages as dirty.
+
+Alternatively, some devices allow you to change the sizes that they
+report. This would also work, although a secure erase should be done
+prior to changing the reported size to ensure that the SSD recognizes
+the additional spare area. Changing the reported size can be done on
+drives that support it with ``hdparm -N`` on systems that have
+laptop-mode-tools.
+
+.. _nvme_overprovisioning:
+
+NVMe overprovisioning
+^^^^^^^^^^^^^^^^^^^^^
+
+On NVMe, you can use namespaces to achieve overprovisioning:
+
+#. Do a sanitize command as a precaution to ensure the device is
+   completely clean.
+#. Delete the default namespace.
+#. Create a new namespace of size 4GB.
+#. Give the namespace to ZFS to use as a log device, e.g. zpool add tank
+   log /dev/nvme1n1
+
+.. 
_whole_disks: + +Whole disks +~~~~~~~~~~~ + +Whole disks should be given to ZFS rather than partitions. If you must +use a partition, make certain that the partition is properly aligned to +avoid read-modify-write overhead. See the section on +:ref:`Alignment Shift (ashift) ` +for a description of proper alignment. Also, see the section on +:ref:`Whole Disks versus Partitions ` +for a description of changes in ZFS behavior when operating on a +partition. + +Single disk RAID 0 arrays from RAID controllers are not equivalent to +whole disks. The :ref:`hardware_raid_controllers` page +explains in detail. + +.. _bit_torrent: + +Bit Torrent +----------- + +Bit torrent performs 16KB random reads/writes. The 16KB writes cause +read-modify-write overhead. The read-modify-write overhead can reduce +performance by a factor of 16 with 128KB record sizes when the amount of +data written exceeds system memory. This can be avoided by using a +dedicated dataset for bit torrent downloads with recordsize=16KB. + +When the files are read sequentially through a HTTP server, the random +nature in which the files were generated creates fragmentation that has +been observed to reduce sequential read performance by a factor of two +on 7200RPM hard disks. If performance is a problem, fragmentation can be +eliminated by rewriting the files sequentially in either of two ways: + +The first method is to configure your client to download the files to a +temporary directory and then copy them into their final location when +the downloads are finished, provided that your client supports this. + +The second method is to use send/recv to recreate a dataset +sequentially. + +In practice, defragmenting files obtained through bit torrent should +only improve performance when the files are stored on magnetic storage +and are subject to significant sequential read workloads after creation. + +.. _database_workloads: + +Database workloads +------------------ + +Setting ``redundant_metadata=most`` can increase IOPS by at least a few +percentage points by eliminating redundant metadata at the lowest level +of the indirect block tree. This comes with the caveat that data loss +will occur if a metadata block pointing to data blocks is corrupted and +there are no duplicate copies, but this is generally not a problem in +production on mirrored or raidz vdevs. + +MySQL +~~~~~ + +InnoDB +^^^^^^ + +Make separate datasets for InnoDB's data files and log files. Set +``recordsize=16K`` on InnoDB's data files to avoid expensive partial record +writes and leave recordsize=128K on the log files. Set +``primarycache=metadata`` on both to prefer InnoDB's +caching [#mysql_basic]_. +Set ``logbias=throughput`` on the data to stop ZIL from writing twice. + +Set ``skip-innodb_doublewrite`` in my.cnf to prevent innodb from writing +twice. The double writes are a data integrity feature meant to protect +against corruption from partially-written records, but those are not +possible on ZFS. It should be noted that `Percona’s +blog had advocated `__ +using an ext4 configuration where double writes were +turned off for a performance gain, but later recanted it because it +caused data corruption. Following a well timed power failure, an in +place filesystem such as ext4 can have half of a 8KB record be old while +the other half would be new. This would be the corruption that caused +Percona to recant its advice. However, ZFS’ copy on write design would +cause it to return the old correct data following a power failure (no +matter what the timing is). 
That prevents the corruption that the double
+write feature is intended to prevent from ever happening. The double
+write feature is therefore unnecessary on ZFS and can be safely turned
+off for better performance.
+
+On Linux, the driver's AIO implementation is a compatibility shim that
+just barely passes the POSIX standard. InnoDB performance suffers when
+using its default AIO codepath. Set ``innodb_use_native_aio=0`` and
+``innodb_use_atomic_writes=0`` in my.cnf to disable AIO. Both of these
+settings must be disabled to disable AIO.
+
+PostgreSQL
+~~~~~~~~~~
+
+Make separate datasets for PostgreSQL's data and WAL. Set
+``compression=lz4`` and ``recordsize=32K`` (64K also works well, as
+does the 128K default) on both. Configure ``full_page_writes = off``
+for PostgreSQL, as ZFS will never commit a partial write. For a database
+with large updates, experiment with ``logbias=throughput`` on
+PostgreSQL's data to avoid writing twice, but be aware that with this
+setting smaller updates can cause severe fragmentation.
+
+SQLite
+~~~~~~
+
+Make a separate dataset for the database. Set the recordsize to 64K. Set
+the SQLite page size to 65536
+bytes [#sqlite_ps]_.
+
+Note that SQLite databases typically are not exercised enough to merit
+special tuning, but this will provide it. Note the side effect on cache
+size mentioned at
+SQLite.org [#sqlite_ps_change]_.
+
+.. _file_servers:
+
+File servers
+------------
+
+Create a dedicated dataset for files being served.
+
+See
+:ref:`Sequential workloads `
+for configuration recommendations.
+
+Samba
+~~~~~
+Windows/DOS clients don't support case-sensitive file names.
+If your main workload won't need case sensitivity for other supported clients,
+create the dataset with ``zfs create -o casesensitivity=insensitive``
+so Samba may search filenames faster in the future [#FS_CASEFOLD_FL]_.
+
+See the ``case sensitive`` option in
+`smb.conf(5) `__.
+
+.. _sequential_workloads:
+
+Sequential workloads
+--------------------
+
+Set ``recordsize=1M`` on datasets that are subject to sequential workloads.
+Read
+:ref:`Larger record sizes `
+for documentation on things that should be known before setting 1M
+record sizes.
+
+Set ``compression=lz4`` as per the general recommendation for :ref:`LZ4
+compression `.
+
+.. _video_games_directories:
+
+Video games directories
+-----------------------
+
+Create a dedicated dataset, use chown to make it user accessible (or
+create a directory under it and use chown on that) and then configure
+the game download application to place games there. Specific information
+on how to configure various ones is below.
+
+See
+:ref:`Sequential workloads `
+for configuration recommendations before installing games.
+
+Note that the performance gains from this tuning are likely to be small
+and limited to load times. However, the combination of 1M records and
+LZ4 will allow more games to be stored, which is why this tuning is
+documented despite the performance gains being limited. A Steam library
+of 300 games (mostly from Humble Bundle) that had these tweaks applied
+to it saw 20% space savings. Both faster load times and significant
+space savings are possible on compressible games when this tuning has
+been done. Games whose assets are already compressed will see little to
+no benefit.
+
+Lutris
+~~~~~~
+
+Open the context menu by left-clicking on the triple bar icon in the
+upper right. Go to "Preferences" and then the "System options" tab.
+Change the default installation directory and click save. 
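+
+As a sketch of the dedicated dataset described at the top of this
+section (the pool, dataset, mountpoint and user names are illustrative),
+which Lutris or Steam can then be pointed at:
+
+::
+
+   zfs create -o recordsize=1M -o compression=lz4 \
+       -o mountpoint=/home/gamer/games tank/games
+   chown gamer:gamer /home/gamer/games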
+ +Steam +~~~~~ + +Go to "Settings" -> "Downloads" -> "Steam Library Folders" and use "Add +Library Folder" to set the directory for steam to use to store games. +Make sure to set it to the default by right clicking on it and clicking +"Make Default Folder" before closing the dialogue. + +If you'll use Proton to run non-native games, +create dataset with ``zfs create -o casesensitivity=insensitive`` +so Wine may search filenames faster in future [#FS_CASEFOLD_FL]_. + +.. _wine: + +Wine +---- + +Windows file systems' standard behavior is to be case-insensitive. +Create dataset with ``zfs create -o casesensitivity=insensitive`` +so Wine may search filenames faster in future [#FS_CASEFOLD_FL]_. + +.. _virtual_machines: + +Virtual machines +---------------- + +Virtual machine images on ZFS should be stored using either zvols or raw +files to avoid unnecessary overhead. The recordsize/volblocksize and +guest filesystem may be configured to match to avoid overhead from +partial record modification, see :ref:`zvol volblocksize `. +If raw files are used, a separate dataset should be used to make it easy to configure +recordsize independently of other things stored on ZFS. + +.. _qemu_kvm_xen: + +QEMU / KVM / Xen +~~~~~~~~~~~~~~~~ + +AIO should be used to maximize IOPS when using files for guest storage. + +.. rubric:: Footnotes + +.. [#ssd_iops] +.. [#mysql_basic] +.. [#sqlite_ps] +.. [#sqlite_ps_change] +.. [#FS_CASEFOLD_FL] +.. [#init_on_alloc] +.. [#VOLBLOCKSIZE] diff --git a/_sources/Performance and Tuning/ZFS Transaction Delay.rst.txt b/_sources/Performance and Tuning/ZFS Transaction Delay.rst.txt new file mode 100644 index 000000000..1ee539cc7 --- /dev/null +++ b/_sources/Performance and Tuning/ZFS Transaction Delay.rst.txt @@ -0,0 +1,105 @@ +ZFS Transaction Delay +===================== + +ZFS write operations are delayed when the backend storage isn't able to +accommodate the rate of incoming writes. This delay process is known as +the ZFS write throttle. + +If there is already a write transaction waiting, the delay is relative +to when that transaction will finish waiting. Thus the calculated delay +time is independent of the number of threads concurrently executing +transactions. + +If there is only one waiter, the delay is relative to when the +transaction started, rather than the current time. This credits the +transaction for "time already served." For example, if a write +transaction requires reading indirect blocks first, then the delay is +counted at the start of the transaction, just prior to the indirect +block reads. + +The minimum time for a transaction to take is calculated as: + +:: + + min_time = zfs_delay_scale * (dirty - min) / (max - dirty) + min_time is then capped at 100 milliseconds + +The delay has two degrees of freedom that can be adjusted via tunables: + +1. The percentage of dirty data at which we start to delay is defined by + zfs_delay_min_dirty_percent. This is typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so delays occur after + writing at full speed has failed to keep up with the incoming write + rate. +2. The scale of the curve is defined by zfs_delay_scale. Roughly + speaking, this variable determines the amount of delay at the + midpoint of the curve. 
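+
+On Linux, the current values of these two tunables, and the dirty data
+limit they are expressed against, can be inspected through sysfs (a
+sketch, assuming the zfs kernel module is loaded):
+
+::
+
+   cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
+   cat /sys/module/zfs/parameters/zfs_delay_scale
+   cat /sys/module/zfs/parameters/zfs_dirty_data_max
+
+The curve below shows how the delay grows as the amount of dirty data
+approaches zfs_dirty_data_max.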
+ +:: + + delay + 10ms +-------------------------------------------------------------*+ + | *| + 9ms + *+ + | *| + 8ms + *+ + | * | + 7ms + * + + | * | + 6ms + * + + | * | + 5ms + * + + | * | + 4ms + * + + | * | + 3ms + * + + | * | + 2ms + (midpoint) * + + | | ** | + 1ms + v *** + + | zfs_delay_scale ----------> ******** | + 0 +-------------------------------------*********----------------+ + 0% <- zfs_dirty_data_max -> 100% + +Note that since the delay is added to the outstanding time remaining on +the most recent transaction, the delay is effectively the inverse of +IOPS. Here the midpoint of 500 microseconds translates to 2000 IOPS. The +shape of the curve was chosen such that small changes in the amount of +accumulated dirty data in the first 3/4 of the curve yield relatively +small differences in the amount of delay. + +The effects can be easier to understand when the amount of delay is +represented on a log scale: + +:: + + delay + 100ms +-------------------------------------------------------------++ + + + + | | + + *+ + 10ms + *+ + + ** + + | (midpoint) ** | + + | ** + + 1ms + v **** + + + zfs_delay_scale ----------> ***** + + | **** | + + **** + + 100us + ** + + + * + + | * | + + * + + 10us + * + + + + + | | + + + + +--------------------------------------------------------------+ + 0% <- zfs_dirty_data_max -> 100% + +Note here that only as the amount of dirty data approaches its limit +does the delay start to increase rapidly. The goal of a properly tuned +system should be to keep the amount of dirty data out of that range by +first ensuring that the appropriate limits are set for the I/O scheduler +to reach optimal throughput on the backend storage, and then by changing +the value of zfs_delay_scale to increase the steepness of the curve. diff --git a/_sources/Performance and Tuning/ZIO Scheduler.rst.txt b/_sources/Performance and Tuning/ZIO Scheduler.rst.txt new file mode 100644 index 000000000..53551bf56 --- /dev/null +++ b/_sources/Performance and Tuning/ZIO Scheduler.rst.txt @@ -0,0 +1,93 @@ +ZFS I/O (ZIO) Scheduler +======================= + +ZFS issues I/O operations to leaf vdevs (usually devices) to satisfy and +complete I/Os. The ZIO scheduler determines when and in what order those +operations are issued. Operations are divided into five I/O classes +prioritized in the following order: + ++----------+-------------+-------------------------------------------+ +| Priority | I/O Class | Description | ++==========+=============+===========================================+ +| highest | sync read | most reads | ++----------+-------------+-------------------------------------------+ +| | sync write | as defined by application or via 'zfs' | +| | | 'sync' property | ++----------+-------------+-------------------------------------------+ +| | async read | prefetch reads | ++----------+-------------+-------------------------------------------+ +| | async write | most writes | ++----------+-------------+-------------------------------------------+ +| lowest | scrub read | scan read: includes both scrub and | +| | | resilver | ++----------+-------------+-------------------------------------------+ + +Each queue defines the minimum and maximum number of concurrent +operations issued to the device. In addition, the device has an +aggregate maximum, zfs_vdev_max_active. Note that the sum of the +per-queue minimums must not exceed the aggregate maximum. 
If the sum of +the per-queue maximums exceeds the aggregate maximum, then the number of +active I/Os may reach zfs_vdev_max_active, in which case no further I/Os +are issued regardless of whether all per-queue minimums have been met. + ++-------------+------------------------------------+------------------------------------+ +| I/O Class | Min Active Parameter | Max Active Parameter | ++=============+====================================+====================================+ +| sync read | ``zfs_vdev_sync_read_min_active`` | ``zfs_vdev_sync_read_max_active`` | ++-------------+------------------------------------+------------------------------------+ +| sync write | ``zfs_vdev_sync_write_min_active`` | ``zfs_vdev_sync_write_max_active`` | ++-------------+------------------------------------+------------------------------------+ +| async read | ``zfs_vdev_async_read_min_active`` | ``zfs_vdev_async_read_max_active`` | ++-------------+------------------------------------+------------------------------------+ +| async write | ``zfs_vdev_async_write_min_active``| ``zfs_vdev_async_write_max_active``| ++-------------+------------------------------------+------------------------------------+ +| scrub read | ``zfs_vdev_scrub_min_active`` | ``zfs_vdev_scrub_max_active`` | ++-------------+------------------------------------+------------------------------------+ + +For many physical devices, throughput increases with the number of +concurrent operations, but latency typically suffers. Further, physical +devices typically have a limit at which more concurrent operations have +no effect on throughput or can cause the disk performance to +decrease. + +The ZIO scheduler selects the next operation to issue by first looking +for an I/O class whose minimum has not been satisfied. Once all are +satisfied and the aggregate maximum has not been hit, the scheduler +looks for classes whose maximum has not been satisfied. Iteration +through the I/O classes is done in the order specified above. No further +operations are issued if the aggregate maximum number of concurrent +operations has been hit or if there are no operations queued for an I/O +class that has not hit its maximum. Every time an I/O is queued or an +operation completes, the I/O scheduler looks for new operations to +issue. + +In general, smaller max_active's will lead to lower latency of +synchronous operations. Larger max_active's may lead to higher overall +throughput, depending on underlying storage and the I/O mix. + +The ratio of the queues' max_actives determines the balance of +performance between reads, writes, and scrubs. For example, when there +is contention, increasing zfs_vdev_scrub_max_active will cause the scrub +or resilver to complete more quickly, but reads and writes to have +higher latency and lower throughput. + +All I/O classes have a fixed maximum number of outstanding operations +except for the async write class. Asynchronous writes represent the data +that is committed to stable storage during the syncing stage for +transaction groups (txgs). Transaction groups enter the syncing state +periodically so the number of queued async writes quickly bursts up and +then reduce down to zero. The zfs_txg_timeout tunable (default=5 +seconds) sets the target interval for txg sync. Thus a burst of async +writes every 5 seconds is a normal ZFS I/O pattern. + +Rather than servicing I/Os as quickly as possible, the ZIO scheduler +changes the maximum number of active async write I/Os according to the +amount of dirty data in the pool. 
Since both throughput and latency +typically increase as the number of concurrent operations issued to +physical devices, reducing the burstiness in the number of concurrent +operations also stabilizes the response time of operations from other +queues. This is particularly important for the sync read and write queues, +where the periodic async write bursts of the txg sync can lead to +device-level contention. In broad strokes, the ZIO scheduler issues more +concurrent operations from the async write queue as there's more dirty +data in the pool. diff --git a/_sources/Performance and Tuning/index.rst.txt b/_sources/Performance and Tuning/index.rst.txt new file mode 100644 index 000000000..1d1479b73 --- /dev/null +++ b/_sources/Performance and Tuning/index.rst.txt @@ -0,0 +1,9 @@ +Performance and Tuning +====================== + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + * diff --git a/_sources/Project and Community/Admin Documentation.rst.txt b/_sources/Project and Community/Admin Documentation.rst.txt new file mode 100644 index 000000000..6385f192d --- /dev/null +++ b/_sources/Project and Community/Admin Documentation.rst.txt @@ -0,0 +1,9 @@ +Admin Documentation +=================== + +- `Aaron Toponce's ZFS on Linux User + Guide `__ +- `OpenZFS System + Administration `__ +- `Oracle Solaris ZFS Administration + Guide `__ diff --git a/_sources/Project and Community/FAQ hole birth.rst.txt b/_sources/Project and Community/FAQ hole birth.rst.txt new file mode 100644 index 000000000..52411d674 --- /dev/null +++ b/_sources/Project and Community/FAQ hole birth.rst.txt @@ -0,0 +1,67 @@ +:orphan: + +FAQ Hole birth +============== + +Short explanation +~~~~~~~~~~~~~~~~~ + +The hole_birth feature has/had bugs, the result of which is that, if you +do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected +dataset, the receiver will not see any checksum or other errors, but the +resulting destination snapshot will not match the source. + +ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the +faulty metadata which causes this issue *on the sender side*. + +FAQ +~~~ + +I have a pool with hole_birth enabled, how do I know if I am affected? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +It is technically possible to calculate whether you have any affected +files, but it requires scraping zdb output for each file in each +snapshot in each dataset, which is a combinatoric nightmare. (If you +really want it, there is a proof of concept +`here `__. + +Is there any less painful way to fix this if we have already received an affected snapshot? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +No, the data you need was simply not present in the send stream, +unfortunately, and cannot feasibly be rewritten in place. + +Long explanation +~~~~~~~~~~~~~~~~ + +hole_birth is a feature to speed up ZFS send -i - in particular, ZFS +used to not store metadata on when "holes" (sparse regions) in files +were created, so every zfs send -i needed to include every hole. + +hole_birth, as the name implies, added tracking for the txg (transaction +group) when a hole was created, so that zfs send -i could only send +holes that had a birth_time between (starting snapshot txg) and (ending +snapshot txg), and life was wonderful. 
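+
+For reference, the incremental send this optimization applies to is the
+ordinary one (the pool, dataset and snapshot names are illustrative):
+
+::
+
+   zfs send -i tank/fs@snap1 tank/fs@snap2 | zfs receive backup/fs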
+ +Unfortunately, hole_birth had a number of edge cases where it could +"forget" to set the birth_time of holes in some cases, causing it to +record the birth_time as 0 (the value used prior to hole_birth, and +essentially equivalent to "since file creation"). + +This meant that, when you did a zfs send -i, since zfs send does not +have any knowledge of the surrounding snapshots when sending a given +snapshot, it would see the creation txg as 0, conclude "oh, it is 0, I +must have already sent this before", and not include it. + +This means that, on the receiving side, it does not know those holes +should exist, and does not create them. This leads to differences +between the source and the destination. + +ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this +metadata and always sending holes with birth_time 0, configurable using +the tunable known as ``ignore_hole_birth`` or +``send_holes_without_birth_time``. The latter is what OpenZFS +standardized on. ZoL version 0.6.5.8 only has the former, but for any +ZoL version with ``send_holes_without_birth_time``, they point to the +same value, so changing either will work. diff --git a/_sources/Project and Community/FAQ.rst.txt b/_sources/Project and Community/FAQ.rst.txt new file mode 100644 index 000000000..155a8d091 --- /dev/null +++ b/_sources/Project and Community/FAQ.rst.txt @@ -0,0 +1,694 @@ +FAQ +=== + +.. contents:: Table of Contents + :local: + +What is OpenZFS +--------------- + +OpenZFS is an outstanding storage platform that +encompasses the functionality of traditional filesystems, volume +managers, and more, with consistent reliability, functionality and +performance across all distributions. Additional information about +OpenZFS can be found in the `OpenZFS wikipedia +article `__. + +Hardware Requirements +--------------------- + +Because ZFS was originally designed for Sun Solaris it was long +considered a filesystem for large servers and for companies that could +afford the best and most powerful hardware available. But since the +porting of ZFS to numerous OpenSource platforms (The BSDs, Illumos and +Linux - under the umbrella organization "OpenZFS"), these requirements +have been lowered. + +The suggested hardware requirements are: + +- ECC memory. This isn't really a requirement, but it's highly + recommended. +- 8GB+ of memory for the best performance. It's perfectly possible to + run with 2GB or less (and people do), but you'll need more if using + deduplication. + +Do I have to use ECC memory for ZFS? +------------------------------------ + +Using ECC memory for OpenZFS is strongly recommended for enterprise +environments where the strongest data integrity guarantees are required. +Without ECC memory rare random bit flips caused by cosmic rays or by +faulty memory can go undetected. If this were to occur OpenZFS (or any +other filesystem) will write the damaged data to disk and be unable to +automatically detect the corruption. + +Unfortunately, ECC memory is not always supported by consumer grade +hardware. And even when it is, ECC memory will be more expensive. For +home users the additional safety brought by ECC memory might not justify +the cost. It's up to you to determine what level of protection your data +requires. + +Installation +------------ + +OpenZFS is available for FreeBSD and all major Linux distributions. Refer to +the :doc:`getting started <../Getting Started/index>` section of the wiki for +links to installations instructions. 
If your distribution/OS isn't +listed you can always build OpenZFS from the latest official +`tarball `__. + +Supported Architectures +----------------------- + +OpenZFS is regularly compiled for the following architectures: +aarch64, arm, ppc, ppc64, x86, x86_64. + +Supported Linux Kernels +----------------------- + +The `notes `__ for a given +OpenZFS release will include a range of supported kernels. Point +releases will be tagged as needed in order to support the *stable* +kernel available from `kernel.org `__. The +oldest supported kernel is 2.6.32 due to its prominence in Enterprise +Linux distributions. + +.. _32-bit-vs-64-bit-systems: + +32-bit vs 64-bit Systems +------------------------ + +You are **strongly** encouraged to use a 64-bit kernel. OpenZFS +will build for 32-bit systems but you may encounter stability problems. + +ZFS was originally developed for the Solaris kernel which differs from +some OpenZFS platforms in several significant ways. Perhaps most importantly +for ZFS it is common practice in the Solaris kernel to make heavy use of +the virtual address space. However, use of the virtual address space is +strongly discouraged in the Linux kernel. This is particularly true on +32-bit architectures where the virtual address space is limited to 100M +by default. Using the virtual address space on 64-bit Linux kernels is +also discouraged but the address space is so much larger than physical +memory that it is less of an issue. + +If you are bumping up against the virtual memory limit on a 32-bit +system you will see the following message in your system logs. You can +increase the virtual address size with the boot option ``vmalloc=512M``. + +:: + + vmap allocation for size 4198400 failed: use vmalloc= to increase size. + +However, even after making this change your system will likely not be +entirely stable. Proper support for 32-bit systems is contingent upon +the OpenZFS code being weaned off its dependence on virtual memory. This +will take some time to do correctly but it is planned for OpenZFS. This +change is also expected to improve how efficiently OpenZFS manages the +ARC cache and allow for tighter integration with the standard Linux page +cache. + +Booting from ZFS +---------------- + +Booting from ZFS on Linux is possible and many people do it. There are +excellent walk throughs available for +:doc:`Debian <../Getting Started/Debian/index>`, +:doc:`Ubuntu <../Getting Started/Ubuntu/index>`, and +`Gentoo `__. + +On FreeBSD 13+ booting from ZFS is supported out of the box. + +Selecting /dev/ names when creating a pool (Linux) +-------------------------------------------------- + +There are different /dev/ names that can be used when creating a ZFS +pool. Each option has advantages and drawbacks, the right choice for +your ZFS pool really depends on your requirements. For development and +testing using /dev/sdX naming is quick and easy. A typical home server +might prefer /dev/disk/by-id/ naming for simplicity and readability. +While very large configurations with multiple controllers, enclosures, +and switches will likely prefer /dev/disk/by-vdev naming for maximum +control. But in the end, how you choose to identify your disks is up to +you. + +- **/dev/sdX, /dev/hdX:** Best for development/test pools + + - Summary: The top level /dev/ names are the default for consistency + with other ZFS implementations. They are available under all Linux + distributions and are commonly used. 
However, because they are not + persistent they should only be used with ZFS for development/test + pools. + - Benefits: This method is easy for a quick test, the names are + short, and they will be available on all Linux distributions. + - Drawbacks: The names are not persistent and will change depending + on what order the disks are detected in. Adding or removing + hardware for your system can easily cause the names to change. You + would then need to remove the zpool.cache file and re-import the + pool using the new names. + - Example: ``zpool create tank sda sdb`` + +- **/dev/disk/by-id/:** Best for small pools (less than 10 disks) + + - Summary: This directory contains disk identifiers with more human + readable names. The disk identifier usually consists of the + interface type, vendor name, model number, device serial number, + and partition number. This approach is more user friendly because + it simplifies identifying a specific disk. + - Benefits: Nice for small systems with a single disk controller. + Because the names are persistent and guaranteed not to change, it + doesn't matter how the disks are attached to the system. You can + take them all out, randomly mix them up on the desk, put them + back anywhere in the system and your pool will still be + automatically imported correctly. + - Drawbacks: Configuring redundancy groups based on physical + location becomes difficult and error prone. Unreliable on many + personal virtual machine setups because the software does not + generate persistent unique names by default. + - Example: + ``zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP`` + +- **/dev/disk/by-path/:** Good for large pools (greater than 10 disks) + + - Summary: This approach is to use device names which include the + physical cable layout in the system, which means that a particular + disk is tied to a specific location. The name describes the PCI + bus number, as well as enclosure names and port numbers. This + allows the most control when configuring a large pool. + - Benefits: Encoding the storage topology in the name is not only + helpful for locating a disk in large installations. But it also + allows you to explicitly layout your redundancy groups over + multiple adapters or enclosures. + - Drawbacks: These names are long, cumbersome, and difficult for a + human to manage. + - Example: + ``zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0`` + +- **/dev/disk/by-vdev/:** Best for large pools (greater than 10 disks) + + - Summary: This approach provides administrative control over device + naming using the configuration file /etc/zfs/vdev_id.conf. Names + for disks in JBODs can be generated automatically to reflect their + physical location by enclosure IDs and slot numbers. The names can + also be manually assigned based on existing udev device links, + including those in /dev/disk/by-path or /dev/disk/by-id. This + allows you to pick your own unique meaningful names for the disks. + These names will be displayed by all the zfs utilities so it can + be used to clarify the administration of a large complex pool. See + the vdev_id and vdev_id.conf man pages for further details. + - Benefits: The main benefit of this approach is that it allows you + to choose meaningful human-readable names. Beyond that, the + benefits depend on the naming method employed. If the names are + derived from the physical path the benefits of /dev/disk/by-path + are realized. 
On the other hand, aliasing the names based on drive + identifiers or WWNs has the same benefits as using + /dev/disk/by-id. + - Drawbacks: This method relies on having a /etc/zfs/vdev_id.conf + file properly configured for your system. To configure this file + please refer to section `Setting up the /etc/zfs/vdev_id.conf + file <#setting-up-the-etc-zfs-vdev-id-conf-file>`__. As with + benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path + may apply depending on the naming method employed. + - Example: ``zpool create tank mirror A1 B1 mirror A2 B2`` + +- **/dev/disk/by-uuid/:** Not a great option + + - Summary: One might think from the use of "UUID" that this would + be an ideal option - however, in practice, this ends up listing + one device per **pool** ID, which is not very useful for importing + pools with multiple disks. + +- **/dev/disk/by-partuuid/**/**by-partlabel:** Works only for existing partitions + + - Summary: partition UUID is generated on it's creation, so usage is limited + - Drawbacks: you can't refer to a partition unique ID on + an unpartitioned disk for ``zpool replace``/``add``/``attach``, + and you can't find failed disk easily without a mapping written + down ahead of time. + +Setting up the /etc/zfs/vdev_id.conf file +----------------------------------------- + +In order to use /dev/disk/by-vdev/ naming the ``/etc/zfs/vdev_id.conf`` +must be configured. The format of this file is described in the +vdev_id.conf man page. Several examples follow. + +A non-multipath configuration with direct-attached SAS enclosures and an +arbitrary slot re-mapping. + +:: + + multipath no + topology sas_direct + phys_per_port 4 + + # PCI_SLOT HBA PORT CHANNEL NAME + channel 85:00.0 1 A + channel 85:00.0 0 B + + # Linux Mapped + # Slot Slot + slot 0 2 + slot 1 6 + slot 2 0 + slot 3 3 + slot 4 5 + slot 5 7 + slot 6 4 + slot 7 1 + +A SAS-switch topology. Note that the channel keyword takes only two +arguments in this example. + +:: + + topology sas_switch + + # SWITCH PORT CHANNEL NAME + channel 1 A + channel 2 B + channel 3 C + channel 4 D + +A multipath configuration. Note that channel names have multiple +definitions - one per physical path. + +:: + + multipath yes + + # PCI_SLOT HBA PORT CHANNEL NAME + channel 85:00.0 1 A + channel 85:00.0 0 B + channel 86:00.0 1 A + channel 86:00.0 0 B + +A configuration using device link aliases. + +:: + + # by-vdev + # name fully qualified or base name of device link + alias d1 /dev/disk/by-id/wwn-0x5000c5002de3b9ca + alias d2 wwn-0x5000c5002def789e + +After defining the new disk names run ``udevadm trigger`` to prompt udev +to parse the configuration file. This will result in a new +/dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX +names. 
+Following the first example above, you could then create the new
+pool of mirrors with the following command:
+
+::
+
+   $ zpool create tank \
+       mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
+       mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
+
+   $ zpool status
+     pool: tank
+    state: ONLINE
+     scan: none requested
+   config:
+
+       NAME        STATE     READ WRITE CKSUM
+       tank        ONLINE       0     0     0
+         mirror-0  ONLINE       0     0     0
+           A0      ONLINE       0     0     0
+           B0      ONLINE       0     0     0
+         mirror-1  ONLINE       0     0     0
+           A1      ONLINE       0     0     0
+           B1      ONLINE       0     0     0
+         mirror-2  ONLINE       0     0     0
+           A2      ONLINE       0     0     0
+           B2      ONLINE       0     0     0
+         mirror-3  ONLINE       0     0     0
+           A3      ONLINE       0     0     0
+           B3      ONLINE       0     0     0
+         mirror-4  ONLINE       0     0     0
+           A4      ONLINE       0     0     0
+           B4      ONLINE       0     0     0
+         mirror-5  ONLINE       0     0     0
+           A5      ONLINE       0     0     0
+           B5      ONLINE       0     0     0
+         mirror-6  ONLINE       0     0     0
+           A6      ONLINE       0     0     0
+           B6      ONLINE       0     0     0
+         mirror-7  ONLINE       0     0     0
+           A7      ONLINE       0     0     0
+           B7      ONLINE       0     0     0
+
+   errors: No known data errors
+
+Changing /dev/ names on an existing pool
+----------------------------------------
+
+Changing the /dev/ names on an existing pool can be done by simply
+exporting the pool and re-importing it with the -d option to specify
+which new names should be used. For example, to use the custom names in
+/dev/disk/by-vdev:
+
+::
+
+   $ zpool export tank
+   $ zpool import -d /dev/disk/by-vdev tank
+
+.. _the-etczfszpoolcache-file:
+
+The /etc/zfs/zpool.cache file
+-----------------------------
+
+Whenever a pool is imported on the system, it will be added to the
+``/etc/zfs/zpool.cache`` file. This file stores pool configuration
+information, such as the device names and pool state. If this file
+exists when running the ``zpool import`` command, it will be used to
+determine the list of pools available for import. When a pool is not
+listed in the cache file, it will need to be detected and imported using
+the ``zpool import -d /dev/disk/by-id`` command.
+
+.. _generating-a-new-etczfszpoolcache-file:
+
+Generating a new /etc/zfs/zpool.cache file
+------------------------------------------
+
+The ``/etc/zfs/zpool.cache`` file will be automatically updated when
+your pool configuration is changed. However, if for some reason it
+becomes stale, you can force the generation of a new
+``/etc/zfs/zpool.cache`` file by setting the cachefile property on the
+pool.
+
+::
+
+   $ zpool set cachefile=/etc/zfs/zpool.cache tank
+
+Conversely, the cache file can be disabled by setting ``cachefile=none``.
+This is useful for failover configurations where the pool should always
+be explicitly imported by the failover software.
+
+::
+
+   $ zpool set cachefile=none tank
+
+Sending and Receiving Streams
+-----------------------------
+
+hole_birth Bugs
+~~~~~~~~~~~~~~~
+
+The hole_birth feature has/had bugs, the result of which is that, if you
+do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected
+dataset, the receiver *will not see any checksum or other errors, but
+will not match the source*.
+
+ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the
+faulty metadata which causes this issue *on the sender side*.
+
+For more details, see the :doc:`hole_birth FAQ <./FAQ hole birth>`.
+
+Sending Large Blocks
+~~~~~~~~~~~~~~~~~~~~
+
+When sending incremental streams that contain large blocks (>128K), the
+``--large-block`` flag must be specified. Inconsistent use of the flag
+between incremental sends can result in files being incorrectly zeroed
+when they are received. Raw encrypted send/recvs automatically imply the
+``--large-block`` flag and are therefore unaffected.
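+
+For example, a minimal sketch (the pool, dataset, snapshot, and host
+names are hypothetical) of keeping the flag consistent across the
+initial send and every later incremental send:
+
+::
+
+   # Initial full send with large block support enabled
+   $ zfs send --large-block tank/data@snap1 | \
+       ssh backuphost zfs receive backup/data
+
+   # Incremental sends must also pass --large-block
+   $ zfs send --large-block -i tank/data@snap1 tank/data@snap2 | \
+       ssh backuphost zfs receive backup/data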
+
+For more details, see `issue 6224 `__.
+
+CEPH/ZFS
+--------
+
+There is a lot of tuning that can be done depending on the workload
+being put on CEPH/ZFS, as well as some general guidelines. Some of
+these follow.
+
+ZFS Configuration
+~~~~~~~~~~~~~~~~~
+
+The CEPH filestore back-end relies heavily on xattrs; for optimal
+performance, all CEPH workloads will benefit from the following ZFS
+dataset parameters:
+
+- ``xattr=sa``
+- ``dnodesize=auto``
+
+Beyond that, rbd/cephfs-focused workloads typically benefit from a
+small recordsize (16K-128K), while objectstore/s3/rados-focused
+workloads benefit from a large recordsize (128K-1M).
+
+.. _ceph-configuration-cephconf:
+
+CEPH Configuration (ceph.conf)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Additionally, CEPH sets various values internally for handling xattrs
+based on the underlying filesystem. As CEPH only officially
+supports/detects XFS and BTRFS, for all other filesystems it falls back
+to rather `limited "safe" values `__.
+On newer releases, the need for larger xattrs will prevent OSDs from
+even starting.
+
+The officially recommended workaround (`see here `__)
+has some severe downsides, and more specifically is geared toward
+filesystems with "limited" xattr support such as ext4.
+
+ZFS has no internal limit on xattr length, so we can treat it similarly
+to how CEPH treats XFS. We can override three internal values to match
+those used with XFS (`see here `__ and `here `__)
+and allow it to be used without the severe limitations of the
+"official" workaround.
+
+::
+
+   [osd]
+   filestore_max_inline_xattrs = 10
+   filestore_max_inline_xattr_size = 65536
+   filestore_max_xattr_value_size = 65536
+
+Other General Guidelines
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Use a separate journal device. Do not colocate the CEPH journal on a
+  ZFS dataset if at all possible; this will quickly lead to terrible
+  fragmentation, not to mention terrible performance up front even
+  before fragmentation (the CEPH journal does a dsync for every write).
+- Use a SLOG device, even with a separate CEPH journal device. For some
+  workloads, skipping SLOG and setting ``logbias=throughput`` may be
+  acceptable.
+- Use a high-quality SLOG/CEPH journal device. A consumer-grade SSD or
+  even NVMe device WILL NOT DO (Samsung 830, 840, 850, etc.) for a
+  variety of reasons. CEPH will kill them quickly, on top of the
+  performance being quite low in this use. Generally recommended
+  devices are [Intel DC S3610, S3700, S3710, P3600, P3700], or
+  [Samsung SM853, SM863], or better.
+- If using a high-quality SSD or NVMe device (as mentioned above), you
+  CAN share the SLOG and CEPH journal on a single device with good
+  results. A ratio of 4 HDDs to 1 SSD (Intel DC S3710 200GB), with each
+  SSD partitioned (remember to align!) to 4x10GB (for ZIL/SLOG) +
+  4x20GB (for CEPH journal) has been reported to work well.
+
+Again: CEPH + ZFS will KILL a consumer-grade SSD VERY quickly. Even
+ignoring the lack of power-loss protection and the endurance ratings,
+you will be very disappointed with the performance of a consumer-grade
+SSD under such a workload.
+
+Performance Considerations
+--------------------------
+
+To achieve good performance with your pool, there are some easy best
+practices you should follow.
+
+- **Evenly balance your disks across controllers:** Often the limiting
+  factor for performance is not the disks but the controller. By
+  balancing your disks evenly across controllers you can often improve
+  throughput.
+
+- **Create your pool using whole disks:** When running zpool create,
+  use whole disk names. This will allow ZFS to automatically partition
+  the disk to ensure correct alignment. It will also improve
+  interoperability with other OpenZFS implementations which honor the
+  wholedisk property.
+- **Have enough memory:** A minimum of 2GB of memory is recommended
+  for ZFS. Additional memory is strongly recommended when the
+  compression and deduplication features are enabled.
+- **Improve performance by setting ashift=12:** You may be able to
+  improve performance for some workloads by setting ``ashift=12``.
+  This tuning can only be set when block devices are first added to a
+  pool, such as when the pool is first created or when a new vdev is
+  added to the pool. This tuning parameter can result in a decrease in
+  capacity for RAIDZ configurations.
+
+Advanced Format Disks
+---------------------
+
+Advanced Format (AF) is a disk format which natively uses a 4,096 byte
+sector size instead of the traditional 512 byte sector size. To
+maintain compatibility with legacy systems, many AF disks emulate a
+sector size of 512 bytes. By default, ZFS will automatically detect the
+sector size of the drive, so on a drive that reports the emulated
+512 byte sector size this can result in poorly aligned disk accesses
+which will greatly degrade the pool performance.
+
+Therefore, the ability to set the ashift property has been added to the
+zpool command. This allows users to explicitly assign the sector size
+when devices are first added to a pool (typically at pool creation time
+or when adding a vdev to the pool). The ashift values range from 9 to
+16, with the default value 0 meaning that ZFS should auto-detect the
+sector size. This value is actually a bit shift value, so an ashift
+value for 512 bytes is 9 (2^9 = 512) while the ashift value for 4,096
+bytes is 12 (2^12 = 4,096).
+
+To force the pool to use 4,096 byte sectors at pool creation time, you
+may run:
+
+::
+
+   $ zpool create -o ashift=12 tank mirror sda sdb
+
+To force the pool to use 4,096 byte sectors when adding a vdev to a
+pool, you may run:
+
+::
+
+   $ zpool add -o ashift=12 tank mirror sdc sdd
+
+ZVOL used space larger than expected
+------------------------------------
+
+| Depending on the filesystem used on the zvol (e.g. ext4) and the
+  usage (e.g. deletion and creation of many files), the ``used`` and
+  ``referenced`` properties reported by the zvol may be larger than the
+  "actual" space that is being used as reported by the consumer.
+| This can happen due to the way some filesystems work, in which they
+  prefer to allocate files in new untouched blocks rather than the
+  fragmented used blocks marked as free. This forces ZFS to reference
+  all blocks that the underlying filesystem has ever touched.
+| This is in itself not much of a problem, as when the ``used`` property
+  reaches the configured ``volsize`` the underlying filesystem will
+  start reusing blocks. A problem arises, however, if you want to
+  snapshot the zvol, as the space referenced by the snapshots will
+  contain the unused blocks.
+
+| This issue can be prevented by issuing a trim (for example, with the
+  ``fstrim`` command on Linux) to allow the kernel to tell ZFS which
+  blocks are unused.
+| Issuing a trim before a snapshot is taken will ensure a minimum
+  snapshot size.
+| On Linux, adding the ``discard`` option for the mounted zvol in
+  ``/etc/fstab`` effectively enables the kernel to issue the trim
+  commands continuously, without the need to execute fstrim on demand.
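+
+For example, a minimal sketch (assuming a hypothetical zvol
+``tank/vol`` whose filesystem is mounted at ``/mnt/vol``) of trimming
+right before taking a snapshot:
+
+::
+
+   # Tell ZFS which blocks the filesystem no longer uses
+   $ fstrim -v /mnt/vol
+
+   # The snapshot now references only blocks still in use
+   $ zfs snapshot tank/vol@after-trim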
+
+Using a zvol for a swap device on Linux
+---------------------------------------
+
+You may use a zvol as a swap device, but you'll need to configure it
+appropriately.
+
+**CAUTION:** for now, swap on a zvol may lead to deadlock; in this case
+please send your logs `here `__.
+
+- Set the volume block size to match your system's page size. This
+  tuning prevents ZFS from having to perform read-modify-write
+  operations on a larger block while the system is already low on
+  memory.
+- Set the ``logbias=throughput`` and ``sync=always`` properties. Data
+  written to the volume will be flushed immediately to disk, freeing up
+  memory as quickly as possible.
+- Set ``primarycache=metadata`` to avoid keeping swap data in RAM via
+  the ARC.
+- Disable automatic snapshots of the swap device.
+
+::
+
+   $ zfs create -V 4G -b $(getconf PAGESIZE) \
+       -o logbias=throughput -o sync=always \
+       -o primarycache=metadata \
+       -o com.sun:auto-snapshot=false rpool/swap
+
+Using ZFS on Xen Hypervisor or Xen Dom0 (Linux)
+-----------------------------------------------
+
+It is usually recommended to keep virtual machine storage and
+hypervisor pools quite separate, although a few people have managed to
+successfully deploy and run OpenZFS using the same machine configured
+as Dom0. There are a few caveats:
+
+- Dedicate a fair amount of memory to Dom0 in grub.conf.
+
+  - dom0_mem=16384M,max:16384M
+
+- Allocate no more than 30-40% of Dom0's memory to ZFS in
+  ``/etc/modprobe.d/zfs.conf``.
+
+  - options zfs zfs_arc_max=6442450944
+
+- Disable Xen's auto-ballooning in ``/etc/xen/xl.conf``.
+- Watch out for any Xen bugs, such as `this one `__ related
+  to ballooning.
+
+udisks2 creating /dev/mapper/ entries for zvol (Linux)
+------------------------------------------------------
+
+To prevent udisks2 from creating /dev/mapper entries that must be
+manually removed or maintained during zvol remove / rename, create a
+udev rule such as ``/etc/udev/rules.d/80-udisks2-ignore-zfs.rules`` with
+the following contents:
+
+::
+
+   ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_FS_TYPE}=="zfs_member", ENV{ID_PART_ENTRY_TYPE}=="6a898cc3-1dd2-11b2-99a6-080020736631", ENV{UDISKS_IGNORE}="1"
+
+Licensing
+---------
+
+License information can be found `here `__.
+
+Reporting a problem
+-------------------
+
+You can open a new issue and search existing issues using the public
+`issue tracker `__. The issue tracker is used to organize
+outstanding bug reports, feature requests, and other development tasks.
+Anyone may post comments after signing up for a GitHub account.
+
+Please make sure that what you're actually seeing is a bug and not a
+support issue. If in doubt, please ask on the mailing list first, and if
+you're then asked to file an issue, do so.
+
+When opening a new issue, include this information at the top of the
+issue:
+
+- What distribution you're using and the version.
+- What spl/zfs packages you're using and the version.
+- Describe the problem you're observing.
+- Describe how to reproduce the problem.
+- Include any warnings/errors/backtraces from the system logs.
+
+When a new issue is opened it's not uncommon for a developer to request
+additional information about the problem. In general, the more detail
+you share about a problem, the quicker a developer can resolve it. For
+example, providing a simple test case is always exceptionally helpful.
+Be prepared to work with the developer looking into your bug in order
+to get it resolved.
They may ask for information like: + +- Your pool configuration as reported by ``zdb`` or ``zpool status``. +- Your hardware configuration, such as + + - Number of CPUs. + - Amount of memory. + - Whether your system has ECC memory. + - Whether it is running under a VMM/Hypervisor. + - Kernel version. + - Values of the spl/zfs module parameters. + +- Stack traces which may be logged to ``dmesg``. + +Does OpenZFS have a Code of Conduct? +------------------------------------ + +Yes, the OpenZFS community has a code of conduct. See the `Code of +Conduct `__ for details. diff --git a/_sources/Project and Community/Mailing Lists.rst.txt b/_sources/Project and Community/Mailing Lists.rst.txt new file mode 100644 index 000000000..8aba7e735 --- /dev/null +++ b/_sources/Project and Community/Mailing Lists.rst.txt @@ -0,0 +1,36 @@ +.. _mailing_lists: + +Mailing Lists +============= + ++----------------------+----------------------+----------------------+ +|              | Description | List Archive | +|             List     | | | +|                      | | | ++======================+======================+======================+ +| `zfs-announce\ | A low-traffic list | `archive | +| @list.zfsonlinux.\ | for announcements | `__ | +| ups/zfs-announce>`__ | | | ++----------------------+----------------------+----------------------+ +| `zfs-discuss\ | A user discussion | `archive | +| @list.zfsonlinux\ | list for issues | `__ | +| oups/zfs-discuss>`__ | usability | | ++----------------------+----------------------+----------------------+ +| `zfs-\ | A development list | `archive | +| devel@list.zfsonlin\ | for developers to | `__ | +| groups/zfs-devel>`__ | | | ++----------------------+----------------------+----------------------+ +| `devel\ | A | `archive `__ | +| iki/Mailing_list>`__ | developers to review | | +| | ZFS code and | | +| | architecture changes | | +| | from all platforms | | ++----------------------+----------------------+----------------------+ diff --git a/_sources/Project and Community/Signing Keys.rst.txt b/_sources/Project and Community/Signing Keys.rst.txt new file mode 100644 index 000000000..b25a08c35 --- /dev/null +++ b/_sources/Project and Community/Signing Keys.rst.txt @@ -0,0 +1,64 @@ +Signing Keys +============ + +All tagged ZFS on Linux +`releases `__ are signed by +the official maintainer for that branch. These signatures are +automatically verified by GitHub and can be checked locally by +downloading the maintainers public key. + +Maintainers +----------- + +Release branch (spl/zfs-\*-release) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +| **Maintainer:** `Ned Bass `__ +| **Download:** + `pgp.mit.edu `__ +| **Key ID:** C77B9667 +| **Fingerprint:** 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667 + +| **Maintainer:** `Tony Hutter `__ +| **Download:** + `pgp.mit.edu `__ +| **Key ID:** D4598027 +| **Fingerprint:** 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027 + +Master branch (master) +~~~~~~~~~~~~~~~~~~~~~~ + +| **Maintainer:** `Brian Behlendorf `__ +| **Download:** + `pgp.mit.edu `__ +| **Key ID:** C6AF658B +| **Fingerprint:** C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B + +Checking the Signature of a Git Tag +----------------------------------- + +First import the public key listed above in to your key ring. 
+ +:: + + $ gpg --keyserver pgp.mit.edu --recv C6AF658B + gpg: requesting key C6AF658B from hkp server pgp.mit.edu + gpg: key C6AF658B: "Brian Behlendorf " not changed + gpg: Total number processed: 1 + gpg: unchanged: 1 + +After the public key is imported the signature of a git tag can be +verified as shown. + +:: + + $ git tag --verify zfs-0.6.5 + object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe + type commit + tag zfs-0.6.5 + tagger Brian Behlendorf 1441996302 -0700 + + ZFS Version 0.6.5 + gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B + gpg: Good signature from "Brian Behlendorf " + gpg: aka "Brian Behlendorf (LLNL) " diff --git a/_sources/Project and Community/index.rst.txt b/_sources/Project and Community/index.rst.txt new file mode 100644 index 000000000..4ed8122e3 --- /dev/null +++ b/_sources/Project and Community/index.rst.txt @@ -0,0 +1,31 @@ +Project and Community +===================== + +OpenZFS is storage software which combines the functionality of +traditional filesystems, volume manager, and more. OpenZFS includes +protection against data corruption, support for high storage capacities, +efficient data compression, snapshots and copy-on-write clones, +continuous integrity checking and automatic repair, remote replication +with ZFS send and receive, and RAID-Z. + +OpenZFS brings together developers from the illumos, Linux, FreeBSD and +OS X platforms, and a wide range of companies -- both online and at the +annual OpenZFS Developer Summit. High-level goals of the project include +raising awareness of the quality, utility and availability of +open-source implementations of ZFS, encouraging open communication about +ongoing efforts toward improving open-source variants of ZFS, and +ensuring consistent reliability, functionality and performance of all +distributions of ZFS. + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + Admin Documentation + FAQ + Mailing Lists + Signing Keys + Issue Tracker + Releases + Roadmap diff --git a/_sources/_TableOfContents.rst.txt b/_sources/_TableOfContents.rst.txt new file mode 100644 index 000000000..3502e12d9 --- /dev/null +++ b/_sources/_TableOfContents.rst.txt @@ -0,0 +1,12 @@ +.. toctree:: + :maxdepth: 2 + :glob: + + Getting Started/index + Project and Community/index + Developer Resources/index + Performance and Tuning/index + Basic Concepts/index + man/index + msg/index + License diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt new file mode 100644 index 000000000..3b694ccbe --- /dev/null +++ b/_sources/index.rst.txt @@ -0,0 +1,24 @@ +OpenZFS Documentation +===================== + +Welcome to the OpenZFS Documentation. This resource provides documentation for +users and developers working with (or contributing to) the OpenZFS +project. New users or system administrators should refer to the +documentation for their favorite platform to get started. + ++----------------------+----------------------+----------------------+ +| :doc:`Getting Started| :doc:`Project and | :doc:`Developer | +| <./Getting | Community <./Project | Resources ` | and Community/index>`| Resources/index>` | ++======================+======================+======================+ +| How to get started | About the project | Technical | +| with OpenZFS on your | and how to | documentation | +| favorite platform | contribute | discussing the | +| | | OpenZFS | +| | | implementation | ++----------------------+----------------------+----------------------+ + + +Table of Contents: +------------------ +.. 
include:: _TableOfContents.rst diff --git a/_sources/man/index.rst.txt b/_sources/man/index.rst.txt new file mode 100644 index 000000000..e555d5d9b --- /dev/null +++ b/_sources/man/index.rst.txt @@ -0,0 +1,15 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +Man Pages +========= +.. toctree:: + :maxdepth: 1 + :glob: + + master/index + v2.2/index + v2.1/index + v2.0/index + v0.8/index + v0.7/index + v0.6/index diff --git a/_sources/man/master/1/arcstat.1.rst.txt b/_sources/man/master/1/arcstat.1.rst.txt new file mode 100644 index 000000000..74cae1a17 --- /dev/null +++ b/_sources/man/master/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/cstyle.1.rst.txt b/_sources/man/master/1/cstyle.1.rst.txt new file mode 100644 index 000000000..2d7beadc0 --- /dev/null +++ b/_sources/man/master/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/index.rst.txt b/_sources/man/master/1/index.rst.txt new file mode 100644 index 000000000..6981144fb --- /dev/null +++ b/_sources/man/master/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/1/raidz_test.1.rst.txt b/_sources/man/master/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..08c042614 --- /dev/null +++ b/_sources/man/master/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/test-runner.1.rst.txt b/_sources/man/master/1/test-runner.1.rst.txt new file mode 100644 index 000000000..3b1b16ed1 --- /dev/null +++ b/_sources/man/master/1/test-runner.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/test-runner.1 + +test-runner.1 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/test-runner.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/zhack.1.rst.txt b/_sources/man/master/1/zhack.1.rst.txt new file mode 100644 index 000000000..93c530d91 --- /dev/null +++ b/_sources/man/master/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/ztest.1.rst.txt b/_sources/man/master/1/ztest.1.rst.txt new file mode 100644 index 000000000..9438f4f80 --- /dev/null +++ b/_sources/man/master/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/1/zvol_wait.1.rst.txt b/_sources/man/master/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..4d77975f3 --- /dev/null +++ b/_sources/man/master/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/4/index.rst.txt b/_sources/man/master/4/index.rst.txt new file mode 100644 index 000000000..10e6950ab --- /dev/null +++ b/_sources/man/master/4/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man4/ + +Devices and Special Files (4) +============================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/4/spl.4.rst.txt b/_sources/man/master/4/spl.4.rst.txt new file mode 100644 index 000000000..de76f2f77 --- /dev/null +++ b/_sources/man/master/4/spl.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man4/spl.4 + +spl.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man4/spl.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/4/zfs.4.rst.txt b/_sources/man/master/4/zfs.4.rst.txt new file mode 100644 index 000000000..ca6f3c963 --- /dev/null +++ b/_sources/man/master/4/zfs.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man4/zfs.4 + +zfs.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man4/zfs.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/5/index.rst.txt b/_sources/man/master/5/index.rst.txt new file mode 100644 index 000000000..ec202a199 --- /dev/null +++ b/_sources/man/master/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/5/vdev_id.conf.5.rst.txt b/_sources/man/master/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..ce71e2bef --- /dev/null +++ b/_sources/man/master/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/dracut.zfs.7.rst.txt b/_sources/man/master/7/dracut.zfs.7.rst.txt new file mode 100644 index 000000000..ab81fda2a --- /dev/null +++ b/_sources/man/master/7/dracut.zfs.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/dracut.zfs.7 + +dracut.zfs.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/dracut.zfs.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/index.rst.txt b/_sources/man/master/7/index.rst.txt new file mode 100644 index 000000000..08a08f746 --- /dev/null +++ b/_sources/man/master/7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/ + +Miscellaneous (7) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/7/vdevprops.7.rst.txt b/_sources/man/master/7/vdevprops.7.rst.txt new file mode 100644 index 000000000..00279c4d0 --- /dev/null +++ b/_sources/man/master/7/vdevprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/vdevprops.7 + +vdevprops.7 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/vdevprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zfsconcepts.7.rst.txt b/_sources/man/master/7/zfsconcepts.7.rst.txt new file mode 100644 index 000000000..360b75f42 --- /dev/null +++ b/_sources/man/master/7/zfsconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zfsconcepts.7 + +zfsconcepts.7 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zfsconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zfsprops.7.rst.txt b/_sources/man/master/7/zfsprops.7.rst.txt new file mode 100644 index 000000000..32f0bedc1 --- /dev/null +++ b/_sources/man/master/7/zfsprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zfsprops.7 + +zfsprops.7 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zfsprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zpool-features.7.rst.txt b/_sources/man/master/7/zpool-features.7.rst.txt new file mode 100644 index 000000000..e7d8f1122 --- /dev/null +++ b/_sources/man/master/7/zpool-features.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zpool-features.7 + +zpool-features.7 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zpool-features.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zpoolconcepts.7.rst.txt b/_sources/man/master/7/zpoolconcepts.7.rst.txt new file mode 100644 index 000000000..e812be284 --- /dev/null +++ b/_sources/man/master/7/zpoolconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zpoolconcepts.7 + +zpoolconcepts.7 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zpoolconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/7/zpoolprops.7.rst.txt b/_sources/man/master/7/zpoolprops.7.rst.txt new file mode 100644 index 000000000..e871927e7 --- /dev/null +++ b/_sources/man/master/7/zpoolprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man7/zpoolprops.7 + +zpoolprops.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man7/zpoolprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/fsck.zfs.8.rst.txt b/_sources/man/master/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..4e701e018 --- /dev/null +++ b/_sources/man/master/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/index.rst.txt b/_sources/man/master/8/index.rst.txt new file mode 100644 index 000000000..99184bac4 --- /dev/null +++ b/_sources/man/master/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/master/8/mount.zfs.8.rst.txt b/_sources/man/master/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..b721f264e --- /dev/null +++ b/_sources/man/master/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/vdev_id.8.rst.txt b/_sources/man/master/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..c3693e44f --- /dev/null +++ b/_sources/man/master/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zdb.8.rst.txt b/_sources/man/master/8/zdb.8.rst.txt new file mode 100644 index 000000000..e9730d2d6 --- /dev/null +++ b/_sources/man/master/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zed.8.rst.txt b/_sources/man/master/8/zed.8.rst.txt new file mode 100644 index 000000000..db0622099 --- /dev/null +++ b/_sources/man/master/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-allow.8.rst.txt b/_sources/man/master/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..4b440b402 --- /dev/null +++ b/_sources/man/master/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-bookmark.8.rst.txt b/_sources/man/master/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..2016899db --- /dev/null +++ b/_sources/man/master/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-change-key.8.rst.txt b/_sources/man/master/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..1e65ca4f7 --- /dev/null +++ b/_sources/man/master/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-clone.8.rst.txt b/_sources/man/master/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..73ae2cfab --- /dev/null +++ b/_sources/man/master/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-create.8.rst.txt b/_sources/man/master/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..91d05c297 --- /dev/null +++ b/_sources/man/master/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-destroy.8.rst.txt b/_sources/man/master/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..880923e14 --- /dev/null +++ b/_sources/man/master/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-diff.8.rst.txt b/_sources/man/master/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..2537e6776 --- /dev/null +++ b/_sources/man/master/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-get.8.rst.txt b/_sources/man/master/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..145395060 --- /dev/null +++ b/_sources/man/master/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-groupspace.8.rst.txt b/_sources/man/master/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..3eedf7648 --- /dev/null +++ b/_sources/man/master/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-hold.8.rst.txt b/_sources/man/master/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..3b7737f2f --- /dev/null +++ b/_sources/man/master/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-inherit.8.rst.txt b/_sources/man/master/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..24b85f8bb --- /dev/null +++ b/_sources/man/master/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-jail.8.rst.txt b/_sources/man/master/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..3652ae8d4 --- /dev/null +++ b/_sources/man/master/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-list.8.rst.txt b/_sources/man/master/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..091e258d8 --- /dev/null +++ b/_sources/man/master/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-load-key.8.rst.txt b/_sources/man/master/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..6c5caea32 --- /dev/null +++ b/_sources/man/master/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-mount-generator.8.rst.txt b/_sources/man/master/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..af5ccf97c --- /dev/null +++ b/_sources/man/master/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-mount.8.rst.txt b/_sources/man/master/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..de1233778 --- /dev/null +++ b/_sources/man/master/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-program.8.rst.txt b/_sources/man/master/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..833776b2b --- /dev/null +++ b/_sources/man/master/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-project.8.rst.txt b/_sources/man/master/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..9c161e768 --- /dev/null +++ b/_sources/man/master/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-projectspace.8.rst.txt b/_sources/man/master/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..9ffefb346 --- /dev/null +++ b/_sources/man/master/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-promote.8.rst.txt b/_sources/man/master/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..09eeb9b5a --- /dev/null +++ b/_sources/man/master/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-receive.8.rst.txt b/_sources/man/master/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..2c9a0852f --- /dev/null +++ b/_sources/man/master/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-recv.8.rst.txt b/_sources/man/master/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..5ee87738d --- /dev/null +++ b/_sources/man/master/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-redact.8.rst.txt b/_sources/man/master/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..347080ac0 --- /dev/null +++ b/_sources/man/master/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-release.8.rst.txt b/_sources/man/master/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..fd651c8e0 --- /dev/null +++ b/_sources/man/master/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-rename.8.rst.txt b/_sources/man/master/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..215da65db --- /dev/null +++ b/_sources/man/master/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-rollback.8.rst.txt b/_sources/man/master/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..75b9e8829 --- /dev/null +++ b/_sources/man/master/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-send.8.rst.txt b/_sources/man/master/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..301546001 --- /dev/null +++ b/_sources/man/master/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-set.8.rst.txt b/_sources/man/master/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..563f752ef --- /dev/null +++ b/_sources/man/master/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-share.8.rst.txt b/_sources/man/master/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..a25d386fc --- /dev/null +++ b/_sources/man/master/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-snapshot.8.rst.txt b/_sources/man/master/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..a32c3c7a5 --- /dev/null +++ b/_sources/man/master/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unallow.8.rst.txt b/_sources/man/master/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..27a710afb --- /dev/null +++ b/_sources/man/master/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unjail.8.rst.txt b/_sources/man/master/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..d3d709c19 --- /dev/null +++ b/_sources/man/master/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unload-key.8.rst.txt b/_sources/man/master/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..d6f24dfe7 --- /dev/null +++ b/_sources/man/master/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unmount.8.rst.txt b/_sources/man/master/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..f5aa20432 --- /dev/null +++ b/_sources/man/master/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-unzone.8.rst.txt b/_sources/man/master/8/zfs-unzone.8.rst.txt new file mode 100644 index 000000000..b05a9cced --- /dev/null +++ b/_sources/man/master/8/zfs-unzone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-unzone.8 + +zfs-unzone.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-unzone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-upgrade.8.rst.txt b/_sources/man/master/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..697bf7bfb --- /dev/null +++ b/_sources/man/master/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-userspace.8.rst.txt b/_sources/man/master/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..2898f9f8c --- /dev/null +++ b/_sources/man/master/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-wait.8.rst.txt b/_sources/man/master/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..d2f1ad899 --- /dev/null +++ b/_sources/man/master/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs-zone.8.rst.txt b/_sources/man/master/8/zfs-zone.8.rst.txt new file mode 100644 index 000000000..d03395c04 --- /dev/null +++ b/_sources/man/master/8/zfs-zone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-zone.8 + +zfs-zone.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs-zone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs.8.rst.txt b/_sources/man/master/8/zfs.8.rst.txt new file mode 100644 index 000000000..99132cd10 --- /dev/null +++ b/_sources/man/master/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs_ids_to_path.8.rst.txt b/_sources/man/master/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..c5339446c --- /dev/null +++ b/_sources/man/master/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zfs_prepare_disk.8.rst.txt b/_sources/man/master/8/zfs_prepare_disk.8.rst.txt new file mode 100644 index 000000000..4510a8abe --- /dev/null +++ b/_sources/man/master/8/zfs_prepare_disk.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zfs_prepare_disk.8 + +zfs_prepare_disk.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zfs_prepare_disk.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zgenhostid.8.rst.txt b/_sources/man/master/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..ad0d76c44 --- /dev/null +++ b/_sources/man/master/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zinject.8.rst.txt b/_sources/man/master/8/zinject.8.rst.txt new file mode 100644 index 000000000..d52d5f68b --- /dev/null +++ b/_sources/man/master/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-add.8.rst.txt b/_sources/man/master/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..1f315adaf --- /dev/null +++ b/_sources/man/master/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-attach.8.rst.txt b/_sources/man/master/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..06af83321 --- /dev/null +++ b/_sources/man/master/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-checkpoint.8.rst.txt b/_sources/man/master/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..0f763841b --- /dev/null +++ b/_sources/man/master/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-clear.8.rst.txt b/_sources/man/master/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..15b49e26c --- /dev/null +++ b/_sources/man/master/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-create.8.rst.txt b/_sources/man/master/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..9f12988ec --- /dev/null +++ b/_sources/man/master/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-destroy.8.rst.txt b/_sources/man/master/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..bfa476bdc --- /dev/null +++ b/_sources/man/master/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-detach.8.rst.txt b/_sources/man/master/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..628ec1477 --- /dev/null +++ b/_sources/man/master/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-events.8.rst.txt b/_sources/man/master/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..15bb149e8 --- /dev/null +++ b/_sources/man/master/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-export.8.rst.txt b/_sources/man/master/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..9a5a59a7c --- /dev/null +++ b/_sources/man/master/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-get.8.rst.txt b/_sources/man/master/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..1205db06e --- /dev/null +++ b/_sources/man/master/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-history.8.rst.txt b/_sources/man/master/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..a34b58617 --- /dev/null +++ b/_sources/man/master/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-import.8.rst.txt b/_sources/man/master/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..8d30383bc --- /dev/null +++ b/_sources/man/master/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-initialize.8.rst.txt b/_sources/man/master/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..c09465f21 --- /dev/null +++ b/_sources/man/master/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-iostat.8.rst.txt b/_sources/man/master/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..fe923dbc6 --- /dev/null +++ b/_sources/man/master/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-labelclear.8.rst.txt b/_sources/man/master/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..0586d539d --- /dev/null +++ b/_sources/man/master/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-list.8.rst.txt b/_sources/man/master/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..da8884f8c --- /dev/null +++ b/_sources/man/master/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-offline.8.rst.txt b/_sources/man/master/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..c9dc13cad --- /dev/null +++ b/_sources/man/master/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-online.8.rst.txt b/_sources/man/master/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..6873779d1 --- /dev/null +++ b/_sources/man/master/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-reguid.8.rst.txt b/_sources/man/master/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..735913796 --- /dev/null +++ b/_sources/man/master/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-remove.8.rst.txt b/_sources/man/master/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..f532317b8 --- /dev/null +++ b/_sources/man/master/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-reopen.8.rst.txt b/_sources/man/master/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..4ab383016 --- /dev/null +++ b/_sources/man/master/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-replace.8.rst.txt b/_sources/man/master/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..2bb16d3bd --- /dev/null +++ b/_sources/man/master/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-resilver.8.rst.txt b/_sources/man/master/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..e491136c1 --- /dev/null +++ b/_sources/man/master/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-scrub.8.rst.txt b/_sources/man/master/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..8835c31ed --- /dev/null +++ b/_sources/man/master/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-set.8.rst.txt b/_sources/man/master/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..c566b9bc6 --- /dev/null +++ b/_sources/man/master/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-split.8.rst.txt b/_sources/man/master/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..6a3f01321 --- /dev/null +++ b/_sources/man/master/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-status.8.rst.txt b/_sources/man/master/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..54eeb645c --- /dev/null +++ b/_sources/man/master/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-sync.8.rst.txt b/_sources/man/master/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..d82a72b7c --- /dev/null +++ b/_sources/man/master/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-trim.8.rst.txt b/_sources/man/master/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..48018ac21 --- /dev/null +++ b/_sources/man/master/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-upgrade.8.rst.txt b/_sources/man/master/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..83980bcf6 --- /dev/null +++ b/_sources/man/master/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool-wait.8.rst.txt b/_sources/man/master/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..cef33250f --- /dev/null +++ b/_sources/man/master/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool.8.rst.txt b/_sources/man/master/8/zpool.8.rst.txt new file mode 100644 index 000000000..0ef799edb --- /dev/null +++ b/_sources/man/master/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zpool_influxdb.8.rst.txt b/_sources/man/master/8/zpool_influxdb.8.rst.txt new file mode 100644 index 000000000..c4bca6e1a --- /dev/null +++ b/_sources/man/master/8/zpool_influxdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zpool_influxdb.8 + +zpool_influxdb.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zpool_influxdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zstream.8.rst.txt b/_sources/man/master/8/zstream.8.rst.txt new file mode 100644 index 000000000..ed8ac3b58 --- /dev/null +++ b/_sources/man/master/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/8/zstreamdump.8.rst.txt b/_sources/man/master/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..dd4e94a68 --- /dev/null +++ b/_sources/man/master/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/master/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/master/index.rst.txt b/_sources/man/master/index.rst.txt new file mode 100644 index 000000000..4cfb92b15 --- /dev/null +++ b/_sources/man/master/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/master/man/ + +master +====== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v0.6/1/cstyle.1.rst.txt b/_sources/man/v0.6/1/cstyle.1.rst.txt new file mode 100644 index 000000000..068acdb77 --- /dev/null +++ b/_sources/man/v0.6/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/1/index.rst.txt b/_sources/man/v0.6/1/index.rst.txt new file mode 100644 index 000000000..ba7af7efb --- /dev/null +++ b/_sources/man/v0.6/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.6/1/zhack.1.rst.txt b/_sources/man/v0.6/1/zhack.1.rst.txt new file mode 100644 index 000000000..330094d93 --- /dev/null +++ b/_sources/man/v0.6/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/1/zpios.1.rst.txt b/_sources/man/v0.6/1/zpios.1.rst.txt new file mode 100644 index 000000000..36f617243 --- /dev/null +++ b/_sources/man/v0.6/1/zpios.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/zpios.1 + +zpios.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/zpios.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/1/ztest.1.rst.txt b/_sources/man/v0.6/1/ztest.1.rst.txt new file mode 100644 index 000000000..71112a7af --- /dev/null +++ b/_sources/man/v0.6/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/index.rst.txt b/_sources/man/v0.6/5/index.rst.txt new file mode 100644 index 000000000..56a6ae520 --- /dev/null +++ b/_sources/man/v0.6/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.6/5/vdev_id.conf.5.rst.txt b/_sources/man/v0.6/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..9ff7cb8d5 --- /dev/null +++ b/_sources/man/v0.6/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/zfs-events.5.rst.txt b/_sources/man/v0.6/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..cd78b4652 --- /dev/null +++ b/_sources/man/v0.6/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/zfs-module-parameters.5.rst.txt b/_sources/man/v0.6/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..18b1baa7f --- /dev/null +++ b/_sources/man/v0.6/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/5/zpool-features.5.rst.txt b/_sources/man/v0.6/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..428c31ca4 --- /dev/null +++ b/_sources/man/v0.6/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/fsck.zfs.8.rst.txt b/_sources/man/v0.6/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..e1f9dda12 --- /dev/null +++ b/_sources/man/v0.6/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/index.rst.txt b/_sources/man/v0.6/8/index.rst.txt new file mode 100644 index 000000000..b32eab8cd --- /dev/null +++ b/_sources/man/v0.6/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.6/8/mount.zfs.8.rst.txt b/_sources/man/v0.6/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..5a9fbc6e4 --- /dev/null +++ b/_sources/man/v0.6/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/vdev_id.8.rst.txt b/_sources/man/v0.6/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..9afaad856 --- /dev/null +++ b/_sources/man/v0.6/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zdb.8.rst.txt b/_sources/man/v0.6/8/zdb.8.rst.txt new file mode 100644 index 000000000..90bfa4830 --- /dev/null +++ b/_sources/man/v0.6/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zed.8.rst.txt b/_sources/man/v0.6/8/zed.8.rst.txt new file mode 100644 index 000000000..09bfc47c6 --- /dev/null +++ b/_sources/man/v0.6/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zfs.8.rst.txt b/_sources/man/v0.6/8/zfs.8.rst.txt new file mode 100644 index 000000000..d7ac33c27 --- /dev/null +++ b/_sources/man/v0.6/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zinject.8.rst.txt b/_sources/man/v0.6/8/zinject.8.rst.txt new file mode 100644 index 000000000..361329272 --- /dev/null +++ b/_sources/man/v0.6/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zpool.8.rst.txt b/_sources/man/v0.6/8/zpool.8.rst.txt new file mode 100644 index 000000000..c856f79a4 --- /dev/null +++ b/_sources/man/v0.6/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/8/zstreamdump.8.rst.txt b/_sources/man/v0.6/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..2a33ac124 --- /dev/null +++ b/_sources/man/v0.6/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.6/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.6/index.rst.txt b/_sources/man/v0.6/index.rst.txt new file mode 100644 index 000000000..58e744cac --- /dev/null +++ b/_sources/man/v0.6/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.6.5.11/man/ + +v0.6 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v0.7/1/cstyle.1.rst.txt b/_sources/man/v0.7/1/cstyle.1.rst.txt new file mode 100644 index 000000000..e9d88519d --- /dev/null +++ b/_sources/man/v0.7/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/index.rst.txt b/_sources/man/v0.7/1/index.rst.txt new file mode 100644 index 000000000..6e18a7641 --- /dev/null +++ b/_sources/man/v0.7/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.7/1/raidz_test.1.rst.txt b/_sources/man/v0.7/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..4c834061d --- /dev/null +++ b/_sources/man/v0.7/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/zhack.1.rst.txt b/_sources/man/v0.7/1/zhack.1.rst.txt new file mode 100644 index 000000000..a9e774fc3 --- /dev/null +++ b/_sources/man/v0.7/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/zpios.1.rst.txt b/_sources/man/v0.7/1/zpios.1.rst.txt new file mode 100644 index 000000000..a04f4a4ad --- /dev/null +++ b/_sources/man/v0.7/1/zpios.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/zpios.1 + +zpios.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/zpios.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/1/ztest.1.rst.txt b/_sources/man/v0.7/1/ztest.1.rst.txt new file mode 100644 index 000000000..19f25c5ae --- /dev/null +++ b/_sources/man/v0.7/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/index.rst.txt b/_sources/man/v0.7/5/index.rst.txt new file mode 100644 index 000000000..e62c984bd --- /dev/null +++ b/_sources/man/v0.7/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.7/5/vdev_id.conf.5.rst.txt b/_sources/man/v0.7/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..69fa91cac --- /dev/null +++ b/_sources/man/v0.7/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/zfs-events.5.rst.txt b/_sources/man/v0.7/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..a0c1c0cda --- /dev/null +++ b/_sources/man/v0.7/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/zfs-module-parameters.5.rst.txt b/_sources/man/v0.7/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..3759beff1 --- /dev/null +++ b/_sources/man/v0.7/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/5/zpool-features.5.rst.txt b/_sources/man/v0.7/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..1be5db5fa --- /dev/null +++ b/_sources/man/v0.7/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/fsck.zfs.8.rst.txt b/_sources/man/v0.7/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..ceece6b38 --- /dev/null +++ b/_sources/man/v0.7/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/index.rst.txt b/_sources/man/v0.7/8/index.rst.txt new file mode 100644 index 000000000..d45c02924 --- /dev/null +++ b/_sources/man/v0.7/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.7/8/mount.zfs.8.rst.txt b/_sources/man/v0.7/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..f47fc79de --- /dev/null +++ b/_sources/man/v0.7/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/vdev_id.8.rst.txt b/_sources/man/v0.7/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..4738f3265 --- /dev/null +++ b/_sources/man/v0.7/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zdb.8.rst.txt b/_sources/man/v0.7/8/zdb.8.rst.txt new file mode 100644 index 000000000..a6c71f3c2 --- /dev/null +++ b/_sources/man/v0.7/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zed.8.rst.txt b/_sources/man/v0.7/8/zed.8.rst.txt new file mode 100644 index 000000000..db4a8cd1a --- /dev/null +++ b/_sources/man/v0.7/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zfs.8.rst.txt b/_sources/man/v0.7/8/zfs.8.rst.txt new file mode 100644 index 000000000..31f7cf27a --- /dev/null +++ b/_sources/man/v0.7/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zgenhostid.8.rst.txt b/_sources/man/v0.7/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..daeef3bbc --- /dev/null +++ b/_sources/man/v0.7/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zinject.8.rst.txt b/_sources/man/v0.7/8/zinject.8.rst.txt new file mode 100644 index 000000000..77394e6a8 --- /dev/null +++ b/_sources/man/v0.7/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zpool.8.rst.txt b/_sources/man/v0.7/8/zpool.8.rst.txt new file mode 100644 index 000000000..6669995e9 --- /dev/null +++ b/_sources/man/v0.7/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/8/zstreamdump.8.rst.txt b/_sources/man/v0.7/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..b00520de6 --- /dev/null +++ b/_sources/man/v0.7/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.7/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.7/index.rst.txt b/_sources/man/v0.7/index.rst.txt new file mode 100644 index 000000000..f7348cf6c --- /dev/null +++ b/_sources/man/v0.7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.7.13/man/ + +v0.7 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v0.8/1/cstyle.1.rst.txt b/_sources/man/v0.8/1/cstyle.1.rst.txt new file mode 100644 index 000000000..38753099d --- /dev/null +++ b/_sources/man/v0.8/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/index.rst.txt b/_sources/man/v0.8/1/index.rst.txt new file mode 100644 index 000000000..f39f7cf34 --- /dev/null +++ b/_sources/man/v0.8/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.8/1/raidz_test.1.rst.txt b/_sources/man/v0.8/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..350d2930f --- /dev/null +++ b/_sources/man/v0.8/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/zhack.1.rst.txt b/_sources/man/v0.8/1/zhack.1.rst.txt new file mode 100644 index 000000000..b8304b530 --- /dev/null +++ b/_sources/man/v0.8/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/ztest.1.rst.txt b/_sources/man/v0.8/1/ztest.1.rst.txt new file mode 100644 index 000000000..d14313e10 --- /dev/null +++ b/_sources/man/v0.8/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/1/zvol_wait.1.rst.txt b/_sources/man/v0.8/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..1eed1316c --- /dev/null +++ b/_sources/man/v0.8/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/index.rst.txt b/_sources/man/v0.8/5/index.rst.txt new file mode 100644 index 000000000..67e29b9fd --- /dev/null +++ b/_sources/man/v0.8/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.8/5/spl-module-parameters.5.rst.txt b/_sources/man/v0.8/5/spl-module-parameters.5.rst.txt new file mode 100644 index 000000000..1096b7b01 --- /dev/null +++ b/_sources/man/v0.8/5/spl-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/spl-module-parameters.5 + +spl-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/spl-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/vdev_id.conf.5.rst.txt b/_sources/man/v0.8/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..f548cb6c6 --- /dev/null +++ b/_sources/man/v0.8/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/zfs-events.5.rst.txt b/_sources/man/v0.8/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..ab3ff7edc --- /dev/null +++ b/_sources/man/v0.8/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/zfs-module-parameters.5.rst.txt b/_sources/man/v0.8/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..2e4049079 --- /dev/null +++ b/_sources/man/v0.8/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/5/zpool-features.5.rst.txt b/_sources/man/v0.8/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..50afa8811 --- /dev/null +++ b/_sources/man/v0.8/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/fsck.zfs.8.rst.txt b/_sources/man/v0.8/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..7c1d3f261 --- /dev/null +++ b/_sources/man/v0.8/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/index.rst.txt b/_sources/man/v0.8/8/index.rst.txt new file mode 100644 index 000000000..3ba1e232d --- /dev/null +++ b/_sources/man/v0.8/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v0.8/8/mount.zfs.8.rst.txt b/_sources/man/v0.8/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..43dfaeb7c --- /dev/null +++ b/_sources/man/v0.8/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/vdev_id.8.rst.txt b/_sources/man/v0.8/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..2037d14b7 --- /dev/null +++ b/_sources/man/v0.8/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zdb.8.rst.txt b/_sources/man/v0.8/8/zdb.8.rst.txt new file mode 100644 index 000000000..36bcb8a73 --- /dev/null +++ b/_sources/man/v0.8/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zed.8.rst.txt b/_sources/man/v0.8/8/zed.8.rst.txt new file mode 100644 index 000000000..15c0c41c2 --- /dev/null +++ b/_sources/man/v0.8/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfs-mount-generator.8.rst.txt b/_sources/man/v0.8/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..3cf59bea0 --- /dev/null +++ b/_sources/man/v0.8/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfs-program.8.rst.txt b/_sources/man/v0.8/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..1299e1e38 --- /dev/null +++ b/_sources/man/v0.8/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfs.8.rst.txt b/_sources/man/v0.8/8/zfs.8.rst.txt new file mode 100644 index 000000000..347e69182 --- /dev/null +++ b/_sources/man/v0.8/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zfsprops.8.rst.txt b/_sources/man/v0.8/8/zfsprops.8.rst.txt new file mode 100644 index 000000000..fb51f65d2 --- /dev/null +++ b/_sources/man/v0.8/8/zfsprops.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zfsprops.8 + +zfsprops.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zfsprops.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zgenhostid.8.rst.txt b/_sources/man/v0.8/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..e175327bf --- /dev/null +++ b/_sources/man/v0.8/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zinject.8.rst.txt b/_sources/man/v0.8/8/zinject.8.rst.txt new file mode 100644 index 000000000..8db555875 --- /dev/null +++ b/_sources/man/v0.8/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zpool.8.rst.txt b/_sources/man/v0.8/8/zpool.8.rst.txt new file mode 100644 index 000000000..e771ed419 --- /dev/null +++ b/_sources/man/v0.8/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/8/zstreamdump.8.rst.txt b/_sources/man/v0.8/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..e363c99f5 --- /dev/null +++ b/_sources/man/v0.8/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v0.8/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v0.8/index.rst.txt b/_sources/man/v0.8/index.rst.txt new file mode 100644 index 000000000..5b12af500 --- /dev/null +++ b/_sources/man/v0.8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-0.8.6/man/ + +v0.8 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v2.0/1/arcstat.1.rst.txt b/_sources/man/v2.0/1/arcstat.1.rst.txt new file mode 100644 index 000000000..c33120fe1 --- /dev/null +++ b/_sources/man/v2.0/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/cstyle.1.rst.txt b/_sources/man/v2.0/1/cstyle.1.rst.txt new file mode 100644 index 000000000..2ea60fd16 --- /dev/null +++ b/_sources/man/v2.0/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/index.rst.txt b/_sources/man/v2.0/1/index.rst.txt new file mode 100644 index 000000000..0eef9b1c0 --- /dev/null +++ b/_sources/man/v2.0/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.0/1/raidz_test.1.rst.txt b/_sources/man/v2.0/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..5c1f34a70 --- /dev/null +++ b/_sources/man/v2.0/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/zhack.1.rst.txt b/_sources/man/v2.0/1/zhack.1.rst.txt new file mode 100644 index 000000000..30cfe73ee --- /dev/null +++ b/_sources/man/v2.0/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/ztest.1.rst.txt b/_sources/man/v2.0/1/ztest.1.rst.txt new file mode 100644 index 000000000..4f8fda834 --- /dev/null +++ b/_sources/man/v2.0/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/1/zvol_wait.1.rst.txt b/_sources/man/v2.0/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..5a0450a98 --- /dev/null +++ b/_sources/man/v2.0/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/index.rst.txt b/_sources/man/v2.0/5/index.rst.txt new file mode 100644 index 000000000..1af97ff34 --- /dev/null +++ b/_sources/man/v2.0/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.0/5/spl-module-parameters.5.rst.txt b/_sources/man/v2.0/5/spl-module-parameters.5.rst.txt new file mode 100644 index 000000000..d99aca40e --- /dev/null +++ b/_sources/man/v2.0/5/spl-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/spl-module-parameters.5 + +spl-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/spl-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/vdev_id.conf.5.rst.txt b/_sources/man/v2.0/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..feea5a1f4 --- /dev/null +++ b/_sources/man/v2.0/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/zfs-events.5.rst.txt b/_sources/man/v2.0/5/zfs-events.5.rst.txt new file mode 100644 index 000000000..c28504730 --- /dev/null +++ b/_sources/man/v2.0/5/zfs-events.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/zfs-events.5 + +zfs-events.5 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/zfs-events.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/zfs-module-parameters.5.rst.txt b/_sources/man/v2.0/5/zfs-module-parameters.5.rst.txt new file mode 100644 index 000000000..3218ee4df --- /dev/null +++ b/_sources/man/v2.0/5/zfs-module-parameters.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/zfs-module-parameters.5 + +zfs-module-parameters.5 +======================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/zfs-module-parameters.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/5/zpool-features.5.rst.txt b/_sources/man/v2.0/5/zpool-features.5.rst.txt new file mode 100644 index 000000000..0da76ae71 --- /dev/null +++ b/_sources/man/v2.0/5/zpool-features.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man5/zpool-features.5 + +zpool-features.5 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man5/zpool-features.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/fsck.zfs.8.rst.txt b/_sources/man/v2.0/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..14e2b9e09 --- /dev/null +++ b/_sources/man/v2.0/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/index.rst.txt b/_sources/man/v2.0/8/index.rst.txt new file mode 100644 index 000000000..3a752f36d --- /dev/null +++ b/_sources/man/v2.0/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.0/8/mount.zfs.8.rst.txt b/_sources/man/v2.0/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..e086ad705 --- /dev/null +++ b/_sources/man/v2.0/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/vdev_id.8.rst.txt b/_sources/man/v2.0/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..557ccc0dd --- /dev/null +++ b/_sources/man/v2.0/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zdb.8.rst.txt b/_sources/man/v2.0/8/zdb.8.rst.txt new file mode 100644 index 000000000..c660f12a8 --- /dev/null +++ b/_sources/man/v2.0/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zed.8.rst.txt b/_sources/man/v2.0/8/zed.8.rst.txt new file mode 100644 index 000000000..8b88ddc27 --- /dev/null +++ b/_sources/man/v2.0/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-allow.8.rst.txt b/_sources/man/v2.0/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..443e18a9d --- /dev/null +++ b/_sources/man/v2.0/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-bookmark.8.rst.txt b/_sources/man/v2.0/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..4fef4e902 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-change-key.8.rst.txt b/_sources/man/v2.0/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..eb5a47e95 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-clone.8.rst.txt b/_sources/man/v2.0/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..e428c95a6 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-create.8.rst.txt b/_sources/man/v2.0/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..82de8cadb --- /dev/null +++ b/_sources/man/v2.0/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-destroy.8.rst.txt b/_sources/man/v2.0/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..d5ed2f355 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-diff.8.rst.txt b/_sources/man/v2.0/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..798fbac13 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-get.8.rst.txt b/_sources/man/v2.0/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..4ca0901bb --- /dev/null +++ b/_sources/man/v2.0/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-groupspace.8.rst.txt b/_sources/man/v2.0/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..634a0d254 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-hold.8.rst.txt b/_sources/man/v2.0/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..0d0ec6050 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-inherit.8.rst.txt b/_sources/man/v2.0/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..4c3925b47 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-jail.8.rst.txt b/_sources/man/v2.0/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..c65e72094 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-list.8.rst.txt b/_sources/man/v2.0/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..10e7fa040 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-load-key.8.rst.txt b/_sources/man/v2.0/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..1d2e8902f --- /dev/null +++ b/_sources/man/v2.0/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-mount-generator.8.rst.txt b/_sources/man/v2.0/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..6c7d16c20 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-mount.8.rst.txt b/_sources/man/v2.0/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..6aa66de70 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-program.8.rst.txt b/_sources/man/v2.0/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..3f9a12013 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-project.8.rst.txt b/_sources/man/v2.0/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..6c90e1830 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-projectspace.8.rst.txt b/_sources/man/v2.0/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..574b2be7f --- /dev/null +++ b/_sources/man/v2.0/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-promote.8.rst.txt b/_sources/man/v2.0/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..95edd0be3 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-receive.8.rst.txt b/_sources/man/v2.0/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..45569d4d1 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-recv.8.rst.txt b/_sources/man/v2.0/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..c06bb510d --- /dev/null +++ b/_sources/man/v2.0/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-redact.8.rst.txt b/_sources/man/v2.0/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..546660ebd --- /dev/null +++ b/_sources/man/v2.0/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-release.8.rst.txt b/_sources/man/v2.0/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..d2eb4b4d9 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-rename.8.rst.txt b/_sources/man/v2.0/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..7063d1bef --- /dev/null +++ b/_sources/man/v2.0/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-rollback.8.rst.txt b/_sources/man/v2.0/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..80fe00dfb --- /dev/null +++ b/_sources/man/v2.0/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-send.8.rst.txt b/_sources/man/v2.0/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..ec5c3e502 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-set.8.rst.txt b/_sources/man/v2.0/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..9020e6166 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-share.8.rst.txt b/_sources/man/v2.0/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..20a44cf1f --- /dev/null +++ b/_sources/man/v2.0/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-snapshot.8.rst.txt b/_sources/man/v2.0/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..6a22e3219 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unallow.8.rst.txt b/_sources/man/v2.0/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..2a401cd37 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unjail.8.rst.txt b/_sources/man/v2.0/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..75350d2cd --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unload-key.8.rst.txt b/_sources/man/v2.0/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..bc117f140 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-unmount.8.rst.txt b/_sources/man/v2.0/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..4e5ca890d --- /dev/null +++ b/_sources/man/v2.0/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-upgrade.8.rst.txt b/_sources/man/v2.0/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..2e807a486 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-userspace.8.rst.txt b/_sources/man/v2.0/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..1b3e4f4b4 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs-wait.8.rst.txt b/_sources/man/v2.0/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..e0d78dfd0 --- /dev/null +++ b/_sources/man/v2.0/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs.8.rst.txt b/_sources/man/v2.0/8/zfs.8.rst.txt new file mode 100644 index 000000000..5ca7a38ce --- /dev/null +++ b/_sources/man/v2.0/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfs_ids_to_path.8.rst.txt b/_sources/man/v2.0/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..98c3a7c1f --- /dev/null +++ b/_sources/man/v2.0/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfsconcepts.8.rst.txt b/_sources/man/v2.0/8/zfsconcepts.8.rst.txt new file mode 100644 index 000000000..e620f8c45 --- /dev/null +++ b/_sources/man/v2.0/8/zfsconcepts.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfsconcepts.8 + +zfsconcepts.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfsconcepts.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zfsprops.8.rst.txt b/_sources/man/v2.0/8/zfsprops.8.rst.txt new file mode 100644 index 000000000..1fb9978b9 --- /dev/null +++ b/_sources/man/v2.0/8/zfsprops.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zfsprops.8 + +zfsprops.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zfsprops.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zgenhostid.8.rst.txt b/_sources/man/v2.0/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..68b3cfd64 --- /dev/null +++ b/_sources/man/v2.0/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zinject.8.rst.txt b/_sources/man/v2.0/8/zinject.8.rst.txt new file mode 100644 index 000000000..49ef330f2 --- /dev/null +++ b/_sources/man/v2.0/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-add.8.rst.txt b/_sources/man/v2.0/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..a137128d7 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-attach.8.rst.txt b/_sources/man/v2.0/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..cb989a1ee --- /dev/null +++ b/_sources/man/v2.0/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-checkpoint.8.rst.txt b/_sources/man/v2.0/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..75045c947 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-clear.8.rst.txt b/_sources/man/v2.0/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..f17298df9 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-create.8.rst.txt b/_sources/man/v2.0/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..74f14c7c4 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-destroy.8.rst.txt b/_sources/man/v2.0/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..335c29979 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-detach.8.rst.txt b/_sources/man/v2.0/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..caa2e4f19 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-events.8.rst.txt b/_sources/man/v2.0/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..34fa98343 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-export.8.rst.txt b/_sources/man/v2.0/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..24d8954ed --- /dev/null +++ b/_sources/man/v2.0/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-get.8.rst.txt b/_sources/man/v2.0/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..e9d165d89 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-history.8.rst.txt b/_sources/man/v2.0/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..fb1196837 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-import.8.rst.txt b/_sources/man/v2.0/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..4fefc6366 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-initialize.8.rst.txt b/_sources/man/v2.0/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..a6049ba3b --- /dev/null +++ b/_sources/man/v2.0/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-iostat.8.rst.txt b/_sources/man/v2.0/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..4224e46d6 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-labelclear.8.rst.txt b/_sources/man/v2.0/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..453dcf106 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-list.8.rst.txt b/_sources/man/v2.0/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..a981d4ed0 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-offline.8.rst.txt b/_sources/man/v2.0/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..1735d904f --- /dev/null +++ b/_sources/man/v2.0/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-online.8.rst.txt b/_sources/man/v2.0/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..b4e74c54a --- /dev/null +++ b/_sources/man/v2.0/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-reguid.8.rst.txt b/_sources/man/v2.0/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..141a4380c --- /dev/null +++ b/_sources/man/v2.0/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-remove.8.rst.txt b/_sources/man/v2.0/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..db4667f68 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-reopen.8.rst.txt b/_sources/man/v2.0/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..150a48494 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-replace.8.rst.txt b/_sources/man/v2.0/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..bc73d5415 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-resilver.8.rst.txt b/_sources/man/v2.0/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..8e75103da --- /dev/null +++ b/_sources/man/v2.0/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-scrub.8.rst.txt b/_sources/man/v2.0/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..bccc8b22e --- /dev/null +++ b/_sources/man/v2.0/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-set.8.rst.txt b/_sources/man/v2.0/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..0e218ceb2 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-split.8.rst.txt b/_sources/man/v2.0/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..73de77ae1 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-status.8.rst.txt b/_sources/man/v2.0/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..bacfd18e2 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-sync.8.rst.txt b/_sources/man/v2.0/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..531d00e22 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-trim.8.rst.txt b/_sources/man/v2.0/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..ea73cde18 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-upgrade.8.rst.txt b/_sources/man/v2.0/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..1429c3192 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool-wait.8.rst.txt b/_sources/man/v2.0/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..1365cca74 --- /dev/null +++ b/_sources/man/v2.0/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpool.8.rst.txt b/_sources/man/v2.0/8/zpool.8.rst.txt new file mode 100644 index 000000000..c3c951048 --- /dev/null +++ b/_sources/man/v2.0/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpoolconcepts.8.rst.txt b/_sources/man/v2.0/8/zpoolconcepts.8.rst.txt new file mode 100644 index 000000000..0d35da910 --- /dev/null +++ b/_sources/man/v2.0/8/zpoolconcepts.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpoolconcepts.8 + +zpoolconcepts.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpoolconcepts.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zpoolprops.8.rst.txt b/_sources/man/v2.0/8/zpoolprops.8.rst.txt new file mode 100644 index 000000000..cf3be631e --- /dev/null +++ b/_sources/man/v2.0/8/zpoolprops.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zpoolprops.8 + +zpoolprops.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zpoolprops.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zstream.8.rst.txt b/_sources/man/v2.0/8/zstream.8.rst.txt new file mode 100644 index 000000000..1177cf86e --- /dev/null +++ b/_sources/man/v2.0/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/8/zstreamdump.8.rst.txt b/_sources/man/v2.0/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..4ea673e21 --- /dev/null +++ b/_sources/man/v2.0/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.0/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.0/index.rst.txt b/_sources/man/v2.0/index.rst.txt new file mode 100644 index 000000000..65e27de1a --- /dev/null +++ b/_sources/man/v2.0/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.0.7/man/ + +v2.0 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v2.1/1/arcstat.1.rst.txt b/_sources/man/v2.1/1/arcstat.1.rst.txt new file mode 100644 index 000000000..b53d6419f --- /dev/null +++ b/_sources/man/v2.1/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/cstyle.1.rst.txt b/_sources/man/v2.1/1/cstyle.1.rst.txt new file mode 100644 index 000000000..aab7572d9 --- /dev/null +++ b/_sources/man/v2.1/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/index.rst.txt b/_sources/man/v2.1/1/index.rst.txt new file mode 100644 index 000000000..0287ff088 --- /dev/null +++ b/_sources/man/v2.1/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/1/raidz_test.1.rst.txt b/_sources/man/v2.1/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..b0b413448 --- /dev/null +++ b/_sources/man/v2.1/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/zhack.1.rst.txt b/_sources/man/v2.1/1/zhack.1.rst.txt new file mode 100644 index 000000000..fe21eb793 --- /dev/null +++ b/_sources/man/v2.1/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/ztest.1.rst.txt b/_sources/man/v2.1/1/ztest.1.rst.txt new file mode 100644 index 000000000..866ffb1c9 --- /dev/null +++ b/_sources/man/v2.1/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/1/zvol_wait.1.rst.txt b/_sources/man/v2.1/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..cb5e58cfe --- /dev/null +++ b/_sources/man/v2.1/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/4/index.rst.txt b/_sources/man/v2.1/4/index.rst.txt new file mode 100644 index 000000000..9a32874ea --- /dev/null +++ b/_sources/man/v2.1/4/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man4/ + +Devices and Special Files (4) +============================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/4/spl.4.rst.txt b/_sources/man/v2.1/4/spl.4.rst.txt new file mode 100644 index 000000000..78d0d29c8 --- /dev/null +++ b/_sources/man/v2.1/4/spl.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man4/spl.4 + +spl.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man4/spl.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/4/zfs.4.rst.txt b/_sources/man/v2.1/4/zfs.4.rst.txt new file mode 100644 index 000000000..fccd92541 --- /dev/null +++ b/_sources/man/v2.1/4/zfs.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man4/zfs.4 + +zfs.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man4/zfs.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/5/index.rst.txt b/_sources/man/v2.1/5/index.rst.txt new file mode 100644 index 000000000..a1cc75207 --- /dev/null +++ b/_sources/man/v2.1/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/5/vdev_id.conf.5.rst.txt b/_sources/man/v2.1/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..384414513 --- /dev/null +++ b/_sources/man/v2.1/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/dracut.zfs.7.rst.txt b/_sources/man/v2.1/7/dracut.zfs.7.rst.txt new file mode 100644 index 000000000..2135119cb --- /dev/null +++ b/_sources/man/v2.1/7/dracut.zfs.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/dracut.zfs.7 + +dracut.zfs.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/dracut.zfs.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/index.rst.txt b/_sources/man/v2.1/7/index.rst.txt new file mode 100644 index 000000000..5f377f715 --- /dev/null +++ b/_sources/man/v2.1/7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/ + +Miscellaneous (7) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/7/zfsconcepts.7.rst.txt b/_sources/man/v2.1/7/zfsconcepts.7.rst.txt new file mode 100644 index 000000000..905933ce2 --- /dev/null +++ b/_sources/man/v2.1/7/zfsconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/zfsconcepts.7 + +zfsconcepts.7 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zfsconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zfsprops.7.rst.txt b/_sources/man/v2.1/7/zfsprops.7.rst.txt new file mode 100644 index 000000000..207cc78fe --- /dev/null +++ b/_sources/man/v2.1/7/zfsprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/zfsprops.7 + +zfsprops.7 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zfsprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zpool-features.7.rst.txt b/_sources/man/v2.1/7/zpool-features.7.rst.txt new file mode 100644 index 000000000..2b93994b6 --- /dev/null +++ b/_sources/man/v2.1/7/zpool-features.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/zpool-features.7 + +zpool-features.7 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zpool-features.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zpoolconcepts.7.rst.txt b/_sources/man/v2.1/7/zpoolconcepts.7.rst.txt new file mode 100644 index 000000000..3b009e7aa --- /dev/null +++ b/_sources/man/v2.1/7/zpoolconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/zpoolconcepts.7 + +zpoolconcepts.7 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zpoolconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/7/zpoolprops.7.rst.txt b/_sources/man/v2.1/7/zpoolprops.7.rst.txt new file mode 100644 index 000000000..48f9fbc03 --- /dev/null +++ b/_sources/man/v2.1/7/zpoolprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man7/zpoolprops.7 + +zpoolprops.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man7/zpoolprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/fsck.zfs.8.rst.txt b/_sources/man/v2.1/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..56a09660d --- /dev/null +++ b/_sources/man/v2.1/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/index.rst.txt b/_sources/man/v2.1/8/index.rst.txt new file mode 100644 index 000000000..7bddf7cac --- /dev/null +++ b/_sources/man/v2.1/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.1/8/mount.zfs.8.rst.txt b/_sources/man/v2.1/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..673af47bc --- /dev/null +++ b/_sources/man/v2.1/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/vdev_id.8.rst.txt b/_sources/man/v2.1/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..809a022ed --- /dev/null +++ b/_sources/man/v2.1/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zdb.8.rst.txt b/_sources/man/v2.1/8/zdb.8.rst.txt new file mode 100644 index 000000000..daf13670f --- /dev/null +++ b/_sources/man/v2.1/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zed.8.rst.txt b/_sources/man/v2.1/8/zed.8.rst.txt new file mode 100644 index 000000000..ccde03a6b --- /dev/null +++ b/_sources/man/v2.1/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-allow.8.rst.txt b/_sources/man/v2.1/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..9464791f0 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-bookmark.8.rst.txt b/_sources/man/v2.1/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..e22a53f9d --- /dev/null +++ b/_sources/man/v2.1/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-change-key.8.rst.txt b/_sources/man/v2.1/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..08b6a85b8 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-clone.8.rst.txt b/_sources/man/v2.1/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..faee7bda7 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-create.8.rst.txt b/_sources/man/v2.1/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..2ab7c808f --- /dev/null +++ b/_sources/man/v2.1/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-destroy.8.rst.txt b/_sources/man/v2.1/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..6a038b5fb --- /dev/null +++ b/_sources/man/v2.1/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-diff.8.rst.txt b/_sources/man/v2.1/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..bff6b735a --- /dev/null +++ b/_sources/man/v2.1/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-get.8.rst.txt b/_sources/man/v2.1/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..f42a13108 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-groupspace.8.rst.txt b/_sources/man/v2.1/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..99e4f2e29 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-hold.8.rst.txt b/_sources/man/v2.1/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..2b3424e2b --- /dev/null +++ b/_sources/man/v2.1/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-inherit.8.rst.txt b/_sources/man/v2.1/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..be7977654 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-jail.8.rst.txt b/_sources/man/v2.1/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..49f3b3d0d --- /dev/null +++ b/_sources/man/v2.1/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-list.8.rst.txt b/_sources/man/v2.1/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..f13940947 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-load-key.8.rst.txt b/_sources/man/v2.1/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..65467feb9 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-mount-generator.8.rst.txt b/_sources/man/v2.1/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..d94f64991 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-mount.8.rst.txt b/_sources/man/v2.1/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..5e457588e --- /dev/null +++ b/_sources/man/v2.1/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-program.8.rst.txt b/_sources/man/v2.1/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..d9cf7bd3b --- /dev/null +++ b/_sources/man/v2.1/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-project.8.rst.txt b/_sources/man/v2.1/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..baa3ab464 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-projectspace.8.rst.txt b/_sources/man/v2.1/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..3886f90a4 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-promote.8.rst.txt b/_sources/man/v2.1/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..f618b8bd6 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-receive.8.rst.txt b/_sources/man/v2.1/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..63e74080a --- /dev/null +++ b/_sources/man/v2.1/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-recv.8.rst.txt b/_sources/man/v2.1/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..7f4476831 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-redact.8.rst.txt b/_sources/man/v2.1/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..edfab6e85 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-release.8.rst.txt b/_sources/man/v2.1/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..8b896f858 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-rename.8.rst.txt b/_sources/man/v2.1/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..ab5450101 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-rollback.8.rst.txt b/_sources/man/v2.1/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..116424843 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-send.8.rst.txt b/_sources/man/v2.1/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..03477ca34 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-set.8.rst.txt b/_sources/man/v2.1/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..28d2e6e88 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-share.8.rst.txt b/_sources/man/v2.1/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..530310e26 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-snapshot.8.rst.txt b/_sources/man/v2.1/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..2bf53bea9 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unallow.8.rst.txt b/_sources/man/v2.1/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..28784e372 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unjail.8.rst.txt b/_sources/man/v2.1/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..214fe4a88 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unload-key.8.rst.txt b/_sources/man/v2.1/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..89a57178e --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-unmount.8.rst.txt b/_sources/man/v2.1/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..2fa2a59bc --- /dev/null +++ b/_sources/man/v2.1/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-upgrade.8.rst.txt b/_sources/man/v2.1/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..e4a958fc1 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-userspace.8.rst.txt b/_sources/man/v2.1/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..0e9186e71 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs-wait.8.rst.txt b/_sources/man/v2.1/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..517f0e8c3 --- /dev/null +++ b/_sources/man/v2.1/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs.8.rst.txt b/_sources/man/v2.1/8/zfs.8.rst.txt new file mode 100644 index 000000000..48c92bf0c --- /dev/null +++ b/_sources/man/v2.1/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs_ids_to_path.8.rst.txt b/_sources/man/v2.1/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..f0d134091 --- /dev/null +++ b/_sources/man/v2.1/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zfs_prepare_disk.8.rst.txt b/_sources/man/v2.1/8/zfs_prepare_disk.8.rst.txt new file mode 100644 index 000000000..c8881b000 --- /dev/null +++ b/_sources/man/v2.1/8/zfs_prepare_disk.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zfs_prepare_disk.8 + +zfs_prepare_disk.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zfs_prepare_disk.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zgenhostid.8.rst.txt b/_sources/man/v2.1/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..b402aba53 --- /dev/null +++ b/_sources/man/v2.1/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zinject.8.rst.txt b/_sources/man/v2.1/8/zinject.8.rst.txt new file mode 100644 index 000000000..e4565bc64 --- /dev/null +++ b/_sources/man/v2.1/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-add.8.rst.txt b/_sources/man/v2.1/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..263d64075 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-attach.8.rst.txt b/_sources/man/v2.1/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..4d9681c2d --- /dev/null +++ b/_sources/man/v2.1/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-checkpoint.8.rst.txt b/_sources/man/v2.1/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..c683b6a7b --- /dev/null +++ b/_sources/man/v2.1/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-clear.8.rst.txt b/_sources/man/v2.1/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..19f4b489d --- /dev/null +++ b/_sources/man/v2.1/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-create.8.rst.txt b/_sources/man/v2.1/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..76d494a0c --- /dev/null +++ b/_sources/man/v2.1/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-destroy.8.rst.txt b/_sources/man/v2.1/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..77ce5701a --- /dev/null +++ b/_sources/man/v2.1/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-detach.8.rst.txt b/_sources/man/v2.1/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..28e445cea --- /dev/null +++ b/_sources/man/v2.1/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-events.8.rst.txt b/_sources/man/v2.1/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..aee08b72a --- /dev/null +++ b/_sources/man/v2.1/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-export.8.rst.txt b/_sources/man/v2.1/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..fb412f0d5 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-get.8.rst.txt b/_sources/man/v2.1/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..7fcd45713 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-history.8.rst.txt b/_sources/man/v2.1/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..125d2663f --- /dev/null +++ b/_sources/man/v2.1/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-import.8.rst.txt b/_sources/man/v2.1/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..80e8d396c --- /dev/null +++ b/_sources/man/v2.1/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-initialize.8.rst.txt b/_sources/man/v2.1/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..5319d82d9 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-iostat.8.rst.txt b/_sources/man/v2.1/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..55f1739d3 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-labelclear.8.rst.txt b/_sources/man/v2.1/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..0c2e57e3c --- /dev/null +++ b/_sources/man/v2.1/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-list.8.rst.txt b/_sources/man/v2.1/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..dfb948400 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-offline.8.rst.txt b/_sources/man/v2.1/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..315d75e3a --- /dev/null +++ b/_sources/man/v2.1/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-online.8.rst.txt b/_sources/man/v2.1/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..1be5c7174 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-reguid.8.rst.txt b/_sources/man/v2.1/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..390c05c62 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-remove.8.rst.txt b/_sources/man/v2.1/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..e2facc34e --- /dev/null +++ b/_sources/man/v2.1/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-reopen.8.rst.txt b/_sources/man/v2.1/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..ba3ffc3ef --- /dev/null +++ b/_sources/man/v2.1/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-replace.8.rst.txt b/_sources/man/v2.1/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..1bb354d78 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-resilver.8.rst.txt b/_sources/man/v2.1/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..7f0db2982 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-scrub.8.rst.txt b/_sources/man/v2.1/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..ae785931d --- /dev/null +++ b/_sources/man/v2.1/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-set.8.rst.txt b/_sources/man/v2.1/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..928882084 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-split.8.rst.txt b/_sources/man/v2.1/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..89eaa03f7 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-status.8.rst.txt b/_sources/man/v2.1/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..245f9a635 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-sync.8.rst.txt b/_sources/man/v2.1/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..eb4ea57d9 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-trim.8.rst.txt b/_sources/man/v2.1/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..a7ba519bc --- /dev/null +++ b/_sources/man/v2.1/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-upgrade.8.rst.txt b/_sources/man/v2.1/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..14ab0a03a --- /dev/null +++ b/_sources/man/v2.1/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool-wait.8.rst.txt b/_sources/man/v2.1/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..c47a29832 --- /dev/null +++ b/_sources/man/v2.1/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool.8.rst.txt b/_sources/man/v2.1/8/zpool.8.rst.txt new file mode 100644 index 000000000..654ba6018 --- /dev/null +++ b/_sources/man/v2.1/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zpool_influxdb.8.rst.txt b/_sources/man/v2.1/8/zpool_influxdb.8.rst.txt new file mode 100644 index 000000000..6c1fcf7e3 --- /dev/null +++ b/_sources/man/v2.1/8/zpool_influxdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zpool_influxdb.8 + +zpool_influxdb.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zpool_influxdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zstream.8.rst.txt b/_sources/man/v2.1/8/zstream.8.rst.txt new file mode 100644 index 000000000..f9d19dc81 --- /dev/null +++ b/_sources/man/v2.1/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/8/zstreamdump.8.rst.txt b/_sources/man/v2.1/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..bebfcd150 --- /dev/null +++ b/_sources/man/v2.1/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.1/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.1/index.rst.txt b/_sources/man/v2.1/index.rst.txt new file mode 100644 index 000000000..f54bccc84 --- /dev/null +++ b/_sources/man/v2.1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.1.15/man/ + +v2.1 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/man/v2.2/1/arcstat.1.rst.txt b/_sources/man/v2.2/1/arcstat.1.rst.txt new file mode 100644 index 000000000..de6a52e65 --- /dev/null +++ b/_sources/man/v2.2/1/arcstat.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/arcstat.1 + +arcstat.1 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/arcstat.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/cstyle.1.rst.txt b/_sources/man/v2.2/1/cstyle.1.rst.txt new file mode 100644 index 000000000..bd026f0bf --- /dev/null +++ b/_sources/man/v2.2/1/cstyle.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/cstyle.1 + +cstyle.1 +======== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/cstyle.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/index.rst.txt b/_sources/man/v2.2/1/index.rst.txt new file mode 100644 index 000000000..4bab3c370 --- /dev/null +++ b/_sources/man/v2.2/1/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/ + +User Commands (1) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/1/raidz_test.1.rst.txt b/_sources/man/v2.2/1/raidz_test.1.rst.txt new file mode 100644 index 000000000..80d658965 --- /dev/null +++ b/_sources/man/v2.2/1/raidz_test.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/raidz_test.1 + +raidz_test.1 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/raidz_test.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/test-runner.1.rst.txt b/_sources/man/v2.2/1/test-runner.1.rst.txt new file mode 100644 index 000000000..672dd11e8 --- /dev/null +++ b/_sources/man/v2.2/1/test-runner.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/test-runner.1 + +test-runner.1 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/test-runner.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/zhack.1.rst.txt b/_sources/man/v2.2/1/zhack.1.rst.txt new file mode 100644 index 000000000..7f4fc06e9 --- /dev/null +++ b/_sources/man/v2.2/1/zhack.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/zhack.1 + +zhack.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/zhack.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/ztest.1.rst.txt b/_sources/man/v2.2/1/ztest.1.rst.txt new file mode 100644 index 000000000..94d646df3 --- /dev/null +++ b/_sources/man/v2.2/1/ztest.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/ztest.1 + +ztest.1 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/ztest.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/1/zvol_wait.1.rst.txt b/_sources/man/v2.2/1/zvol_wait.1.rst.txt new file mode 100644 index 000000000..bb12a9e0f --- /dev/null +++ b/_sources/man/v2.2/1/zvol_wait.1.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man1/zvol_wait.1 + +zvol_wait.1 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man1/zvol_wait.1.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/4/index.rst.txt b/_sources/man/v2.2/4/index.rst.txt new file mode 100644 index 000000000..784aa640d --- /dev/null +++ b/_sources/man/v2.2/4/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man4/ + +Devices and Special Files (4) +============================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/4/spl.4.rst.txt b/_sources/man/v2.2/4/spl.4.rst.txt new file mode 100644 index 000000000..b80759cbb --- /dev/null +++ b/_sources/man/v2.2/4/spl.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man4/spl.4 + +spl.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man4/spl.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/4/zfs.4.rst.txt b/_sources/man/v2.2/4/zfs.4.rst.txt new file mode 100644 index 000000000..e0e148375 --- /dev/null +++ b/_sources/man/v2.2/4/zfs.4.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man4/zfs.4 + +zfs.4 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man4/zfs.4.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/5/index.rst.txt b/_sources/man/v2.2/5/index.rst.txt new file mode 100644 index 000000000..602541853 --- /dev/null +++ b/_sources/man/v2.2/5/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man5/ + +File Formats and Conventions (5) +================================ +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/5/vdev_id.conf.5.rst.txt b/_sources/man/v2.2/5/vdev_id.conf.5.rst.txt new file mode 100644 index 000000000..274141309 --- /dev/null +++ b/_sources/man/v2.2/5/vdev_id.conf.5.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man5/vdev_id.conf.5 + +vdev_id.conf.5 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man5/vdev_id.conf.5.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/dracut.zfs.7.rst.txt b/_sources/man/v2.2/7/dracut.zfs.7.rst.txt new file mode 100644 index 000000000..5c3195f01 --- /dev/null +++ b/_sources/man/v2.2/7/dracut.zfs.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/dracut.zfs.7 + +dracut.zfs.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/dracut.zfs.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/index.rst.txt b/_sources/man/v2.2/7/index.rst.txt new file mode 100644 index 000000000..df650fc37 --- /dev/null +++ b/_sources/man/v2.2/7/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/ + +Miscellaneous (7) +================= +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/7/vdevprops.7.rst.txt b/_sources/man/v2.2/7/vdevprops.7.rst.txt new file mode 100644 index 000000000..d9748acf4 --- /dev/null +++ b/_sources/man/v2.2/7/vdevprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/vdevprops.7 + +vdevprops.7 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/vdevprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zfsconcepts.7.rst.txt b/_sources/man/v2.2/7/zfsconcepts.7.rst.txt new file mode 100644 index 000000000..906e38d79 --- /dev/null +++ b/_sources/man/v2.2/7/zfsconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/zfsconcepts.7 + +zfsconcepts.7 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zfsconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zfsprops.7.rst.txt b/_sources/man/v2.2/7/zfsprops.7.rst.txt new file mode 100644 index 000000000..06f803761 --- /dev/null +++ b/_sources/man/v2.2/7/zfsprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/zfsprops.7 + +zfsprops.7 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zfsprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zpool-features.7.rst.txt b/_sources/man/v2.2/7/zpool-features.7.rst.txt new file mode 100644 index 000000000..d60153555 --- /dev/null +++ b/_sources/man/v2.2/7/zpool-features.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/zpool-features.7 + +zpool-features.7 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zpool-features.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zpoolconcepts.7.rst.txt b/_sources/man/v2.2/7/zpoolconcepts.7.rst.txt new file mode 100644 index 000000000..17800c84a --- /dev/null +++ b/_sources/man/v2.2/7/zpoolconcepts.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/zpoolconcepts.7 + +zpoolconcepts.7 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zpoolconcepts.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/7/zpoolprops.7.rst.txt b/_sources/man/v2.2/7/zpoolprops.7.rst.txt new file mode 100644 index 000000000..87610e716 --- /dev/null +++ b/_sources/man/v2.2/7/zpoolprops.7.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man7/zpoolprops.7 + +zpoolprops.7 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man7/zpoolprops.7.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/fsck.zfs.8.rst.txt b/_sources/man/v2.2/8/fsck.zfs.8.rst.txt new file mode 100644 index 000000000..54235cfef --- /dev/null +++ b/_sources/man/v2.2/8/fsck.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/fsck.zfs.8 + +fsck.zfs.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/fsck.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/index.rst.txt b/_sources/man/v2.2/8/index.rst.txt new file mode 100644 index 000000000..0acfac796 --- /dev/null +++ b/_sources/man/v2.2/8/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/ + +System Administration Commands (8) +================================== +.. toctree:: + :maxdepth: 1 + :glob: + + * + \ No newline at end of file diff --git a/_sources/man/v2.2/8/mount.zfs.8.rst.txt b/_sources/man/v2.2/8/mount.zfs.8.rst.txt new file mode 100644 index 000000000..8c9e7577a --- /dev/null +++ b/_sources/man/v2.2/8/mount.zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/mount.zfs.8 + +mount.zfs.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/mount.zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/vdev_id.8.rst.txt b/_sources/man/v2.2/8/vdev_id.8.rst.txt new file mode 100644 index 000000000..8a51edc7d --- /dev/null +++ b/_sources/man/v2.2/8/vdev_id.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/vdev_id.8 + +vdev_id.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/vdev_id.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zdb.8.rst.txt b/_sources/man/v2.2/8/zdb.8.rst.txt new file mode 100644 index 000000000..3e502c88b --- /dev/null +++ b/_sources/man/v2.2/8/zdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zdb.8 + +zdb.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zed.8.rst.txt b/_sources/man/v2.2/8/zed.8.rst.txt new file mode 100644 index 000000000..81e37f5c9 --- /dev/null +++ b/_sources/man/v2.2/8/zed.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zed.8 + +zed.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zed.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-allow.8.rst.txt b/_sources/man/v2.2/8/zfs-allow.8.rst.txt new file mode 100644 index 000000000..9d5c74d1e --- /dev/null +++ b/_sources/man/v2.2/8/zfs-allow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-allow.8 + +zfs-allow.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-allow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-bookmark.8.rst.txt b/_sources/man/v2.2/8/zfs-bookmark.8.rst.txt new file mode 100644 index 000000000..a2a4d68eb --- /dev/null +++ b/_sources/man/v2.2/8/zfs-bookmark.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-bookmark.8 + +zfs-bookmark.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-bookmark.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-change-key.8.rst.txt b/_sources/man/v2.2/8/zfs-change-key.8.rst.txt new file mode 100644 index 000000000..64430b9a2 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-change-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-change-key.8 + +zfs-change-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-change-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-clone.8.rst.txt b/_sources/man/v2.2/8/zfs-clone.8.rst.txt new file mode 100644 index 000000000..2226a736d --- /dev/null +++ b/_sources/man/v2.2/8/zfs-clone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-clone.8 + +zfs-clone.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-clone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-create.8.rst.txt b/_sources/man/v2.2/8/zfs-create.8.rst.txt new file mode 100644 index 000000000..4d79eac42 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-create.8 + +zfs-create.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-destroy.8.rst.txt b/_sources/man/v2.2/8/zfs-destroy.8.rst.txt new file mode 100644 index 000000000..2773c119d --- /dev/null +++ b/_sources/man/v2.2/8/zfs-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-destroy.8 + +zfs-destroy.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-diff.8.rst.txt b/_sources/man/v2.2/8/zfs-diff.8.rst.txt new file mode 100644 index 000000000..1179dbe30 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-diff.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-diff.8 + +zfs-diff.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-diff.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-get.8.rst.txt b/_sources/man/v2.2/8/zfs-get.8.rst.txt new file mode 100644 index 000000000..86ce9383c --- /dev/null +++ b/_sources/man/v2.2/8/zfs-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-get.8 + +zfs-get.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-groupspace.8.rst.txt b/_sources/man/v2.2/8/zfs-groupspace.8.rst.txt new file mode 100644 index 000000000..c86ef578f --- /dev/null +++ b/_sources/man/v2.2/8/zfs-groupspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-groupspace.8 + +zfs-groupspace.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-groupspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-hold.8.rst.txt b/_sources/man/v2.2/8/zfs-hold.8.rst.txt new file mode 100644 index 000000000..fee4b9f01 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-hold.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-hold.8 + +zfs-hold.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-hold.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-inherit.8.rst.txt b/_sources/man/v2.2/8/zfs-inherit.8.rst.txt new file mode 100644 index 000000000..10aa6efb7 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-inherit.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-inherit.8 + +zfs-inherit.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-inherit.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-jail.8.rst.txt b/_sources/man/v2.2/8/zfs-jail.8.rst.txt new file mode 100644 index 000000000..f6403261c --- /dev/null +++ b/_sources/man/v2.2/8/zfs-jail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-jail.8 + +zfs-jail.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-jail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-list.8.rst.txt b/_sources/man/v2.2/8/zfs-list.8.rst.txt new file mode 100644 index 000000000..690fce469 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-list.8 + +zfs-list.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-load-key.8.rst.txt b/_sources/man/v2.2/8/zfs-load-key.8.rst.txt new file mode 100644 index 000000000..255dad2ce --- /dev/null +++ b/_sources/man/v2.2/8/zfs-load-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-load-key.8 + +zfs-load-key.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-load-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-mount-generator.8.rst.txt b/_sources/man/v2.2/8/zfs-mount-generator.8.rst.txt new file mode 100644 index 000000000..f3ed7445f --- /dev/null +++ b/_sources/man/v2.2/8/zfs-mount-generator.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-mount-generator.8 + +zfs-mount-generator.8 +===================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-mount-generator.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-mount.8.rst.txt b/_sources/man/v2.2/8/zfs-mount.8.rst.txt new file mode 100644 index 000000000..fc0f9edb4 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-mount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-mount.8 + +zfs-mount.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-mount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-program.8.rst.txt b/_sources/man/v2.2/8/zfs-program.8.rst.txt new file mode 100644 index 000000000..4a2ad1e62 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-program.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-program.8 + +zfs-program.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-program.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-project.8.rst.txt b/_sources/man/v2.2/8/zfs-project.8.rst.txt new file mode 100644 index 000000000..e07d04a27 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-project.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-project.8 + +zfs-project.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-project.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-projectspace.8.rst.txt b/_sources/man/v2.2/8/zfs-projectspace.8.rst.txt new file mode 100644 index 000000000..fdd021db5 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-projectspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-projectspace.8 + +zfs-projectspace.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-projectspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-promote.8.rst.txt b/_sources/man/v2.2/8/zfs-promote.8.rst.txt new file mode 100644 index 000000000..bdca6a981 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-promote.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-promote.8 + +zfs-promote.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-promote.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-receive.8.rst.txt b/_sources/man/v2.2/8/zfs-receive.8.rst.txt new file mode 100644 index 000000000..f4ffdc6d1 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-receive.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-receive.8 + +zfs-receive.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-receive.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-recv.8.rst.txt b/_sources/man/v2.2/8/zfs-recv.8.rst.txt new file mode 100644 index 000000000..1fdda0ff3 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-recv.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-recv.8 + +zfs-recv.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-recv.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-redact.8.rst.txt b/_sources/man/v2.2/8/zfs-redact.8.rst.txt new file mode 100644 index 000000000..a28e20fee --- /dev/null +++ b/_sources/man/v2.2/8/zfs-redact.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-redact.8 + +zfs-redact.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-redact.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-release.8.rst.txt b/_sources/man/v2.2/8/zfs-release.8.rst.txt new file mode 100644 index 000000000..ef2a35f7a --- /dev/null +++ b/_sources/man/v2.2/8/zfs-release.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-release.8 + +zfs-release.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-release.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-rename.8.rst.txt b/_sources/man/v2.2/8/zfs-rename.8.rst.txt new file mode 100644 index 000000000..0f9dd94c8 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-rename.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-rename.8 + +zfs-rename.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-rename.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-rollback.8.rst.txt b/_sources/man/v2.2/8/zfs-rollback.8.rst.txt new file mode 100644 index 000000000..4ec89e48b --- /dev/null +++ b/_sources/man/v2.2/8/zfs-rollback.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-rollback.8 + +zfs-rollback.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-rollback.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-send.8.rst.txt b/_sources/man/v2.2/8/zfs-send.8.rst.txt new file mode 100644 index 000000000..01ce73b15 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-send.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-send.8 + +zfs-send.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-send.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-set.8.rst.txt b/_sources/man/v2.2/8/zfs-set.8.rst.txt new file mode 100644 index 000000000..3b3fccbe7 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-set.8 + +zfs-set.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-share.8.rst.txt b/_sources/man/v2.2/8/zfs-share.8.rst.txt new file mode 100644 index 000000000..8fb648353 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-share.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-share.8 + +zfs-share.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-share.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-snapshot.8.rst.txt b/_sources/man/v2.2/8/zfs-snapshot.8.rst.txt new file mode 100644 index 000000000..aa8933fd7 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-snapshot.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-snapshot.8 + +zfs-snapshot.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-snapshot.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unallow.8.rst.txt b/_sources/man/v2.2/8/zfs-unallow.8.rst.txt new file mode 100644 index 000000000..7f3d62c6c --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unallow.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-unallow.8 + +zfs-unallow.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unallow.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unjail.8.rst.txt b/_sources/man/v2.2/8/zfs-unjail.8.rst.txt new file mode 100644 index 000000000..fc0a858a1 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unjail.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-unjail.8 + +zfs-unjail.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unjail.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unload-key.8.rst.txt b/_sources/man/v2.2/8/zfs-unload-key.8.rst.txt new file mode 100644 index 000000000..48ff792a1 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unload-key.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-unload-key.8 + +zfs-unload-key.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unload-key.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unmount.8.rst.txt b/_sources/man/v2.2/8/zfs-unmount.8.rst.txt new file mode 100644 index 000000000..9fed52fa6 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unmount.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-unmount.8 + +zfs-unmount.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unmount.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-unzone.8.rst.txt b/_sources/man/v2.2/8/zfs-unzone.8.rst.txt new file mode 100644 index 000000000..30c8914eb --- /dev/null +++ b/_sources/man/v2.2/8/zfs-unzone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-unzone.8 + +zfs-unzone.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-unzone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-upgrade.8.rst.txt b/_sources/man/v2.2/8/zfs-upgrade.8.rst.txt new file mode 100644 index 000000000..1781c7012 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-upgrade.8 + +zfs-upgrade.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-userspace.8.rst.txt b/_sources/man/v2.2/8/zfs-userspace.8.rst.txt new file mode 100644 index 000000000..2df954db0 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-userspace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-userspace.8 + +zfs-userspace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-userspace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-wait.8.rst.txt b/_sources/man/v2.2/8/zfs-wait.8.rst.txt new file mode 100644 index 000000000..0a5f4ba49 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-wait.8 + +zfs-wait.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs-zone.8.rst.txt b/_sources/man/v2.2/8/zfs-zone.8.rst.txt new file mode 100644 index 000000000..fb1ef4927 --- /dev/null +++ b/_sources/man/v2.2/8/zfs-zone.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs-zone.8 + +zfs-zone.8 +========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs-zone.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs.8.rst.txt b/_sources/man/v2.2/8/zfs.8.rst.txt new file mode 100644 index 000000000..4a178655e --- /dev/null +++ b/_sources/man/v2.2/8/zfs.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs.8 + +zfs.8 +===== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs_ids_to_path.8.rst.txt b/_sources/man/v2.2/8/zfs_ids_to_path.8.rst.txt new file mode 100644 index 000000000..7a8a787f8 --- /dev/null +++ b/_sources/man/v2.2/8/zfs_ids_to_path.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs_ids_to_path.8 + +zfs_ids_to_path.8 +================= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs_ids_to_path.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zfs_prepare_disk.8.rst.txt b/_sources/man/v2.2/8/zfs_prepare_disk.8.rst.txt new file mode 100644 index 000000000..95a4e0aa6 --- /dev/null +++ b/_sources/man/v2.2/8/zfs_prepare_disk.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zfs_prepare_disk.8 + +zfs_prepare_disk.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zfs_prepare_disk.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zgenhostid.8.rst.txt b/_sources/man/v2.2/8/zgenhostid.8.rst.txt new file mode 100644 index 000000000..d1c8e761c --- /dev/null +++ b/_sources/man/v2.2/8/zgenhostid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zgenhostid.8 + +zgenhostid.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zgenhostid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zinject.8.rst.txt b/_sources/man/v2.2/8/zinject.8.rst.txt new file mode 100644 index 000000000..dd16c4ab5 --- /dev/null +++ b/_sources/man/v2.2/8/zinject.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zinject.8 + +zinject.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zinject.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-add.8.rst.txt b/_sources/man/v2.2/8/zpool-add.8.rst.txt new file mode 100644 index 000000000..0d1dd9850 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-add.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-add.8 + +zpool-add.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-add.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-attach.8.rst.txt b/_sources/man/v2.2/8/zpool-attach.8.rst.txt new file mode 100644 index 000000000..219eee42d --- /dev/null +++ b/_sources/man/v2.2/8/zpool-attach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-attach.8 + +zpool-attach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-attach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-checkpoint.8.rst.txt b/_sources/man/v2.2/8/zpool-checkpoint.8.rst.txt new file mode 100644 index 000000000..7ffffba39 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-checkpoint.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-checkpoint.8 + +zpool-checkpoint.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-checkpoint.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-clear.8.rst.txt b/_sources/man/v2.2/8/zpool-clear.8.rst.txt new file mode 100644 index 000000000..83b77a819 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-clear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-clear.8 + +zpool-clear.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-clear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-create.8.rst.txt b/_sources/man/v2.2/8/zpool-create.8.rst.txt new file mode 100644 index 000000000..e7ade8a63 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-create.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-create.8 + +zpool-create.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-create.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-destroy.8.rst.txt b/_sources/man/v2.2/8/zpool-destroy.8.rst.txt new file mode 100644 index 000000000..6415829b7 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-destroy.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-destroy.8 + +zpool-destroy.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-destroy.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-detach.8.rst.txt b/_sources/man/v2.2/8/zpool-detach.8.rst.txt new file mode 100644 index 000000000..54d04a017 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-detach.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-detach.8 + +zpool-detach.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-detach.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-events.8.rst.txt b/_sources/man/v2.2/8/zpool-events.8.rst.txt new file mode 100644 index 000000000..b8326585a --- /dev/null +++ b/_sources/man/v2.2/8/zpool-events.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-events.8 + +zpool-events.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-events.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-export.8.rst.txt b/_sources/man/v2.2/8/zpool-export.8.rst.txt new file mode 100644 index 000000000..a3399280b --- /dev/null +++ b/_sources/man/v2.2/8/zpool-export.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-export.8 + +zpool-export.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-export.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-get.8.rst.txt b/_sources/man/v2.2/8/zpool-get.8.rst.txt new file mode 100644 index 000000000..51b9bd9be --- /dev/null +++ b/_sources/man/v2.2/8/zpool-get.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-get.8 + +zpool-get.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-get.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-history.8.rst.txt b/_sources/man/v2.2/8/zpool-history.8.rst.txt new file mode 100644 index 000000000..c51a0e6a8 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-history.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-history.8 + +zpool-history.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-history.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-import.8.rst.txt b/_sources/man/v2.2/8/zpool-import.8.rst.txt new file mode 100644 index 000000000..d1dc039bd --- /dev/null +++ b/_sources/man/v2.2/8/zpool-import.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-import.8 + +zpool-import.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-import.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-initialize.8.rst.txt b/_sources/man/v2.2/8/zpool-initialize.8.rst.txt new file mode 100644 index 000000000..dedfedd5c --- /dev/null +++ b/_sources/man/v2.2/8/zpool-initialize.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-initialize.8 + +zpool-initialize.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-initialize.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-iostat.8.rst.txt b/_sources/man/v2.2/8/zpool-iostat.8.rst.txt new file mode 100644 index 000000000..7e09f597c --- /dev/null +++ b/_sources/man/v2.2/8/zpool-iostat.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-iostat.8 + +zpool-iostat.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-iostat.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-labelclear.8.rst.txt b/_sources/man/v2.2/8/zpool-labelclear.8.rst.txt new file mode 100644 index 000000000..8cd38d80a --- /dev/null +++ b/_sources/man/v2.2/8/zpool-labelclear.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-labelclear.8 + +zpool-labelclear.8 +================== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-labelclear.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-list.8.rst.txt b/_sources/man/v2.2/8/zpool-list.8.rst.txt new file mode 100644 index 000000000..5c89776da --- /dev/null +++ b/_sources/man/v2.2/8/zpool-list.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-list.8 + +zpool-list.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-list.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-offline.8.rst.txt b/_sources/man/v2.2/8/zpool-offline.8.rst.txt new file mode 100644 index 000000000..d06ec26f8 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-offline.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-offline.8 + +zpool-offline.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-offline.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-online.8.rst.txt b/_sources/man/v2.2/8/zpool-online.8.rst.txt new file mode 100644 index 000000000..267483483 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-online.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-online.8 + +zpool-online.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-online.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-reguid.8.rst.txt b/_sources/man/v2.2/8/zpool-reguid.8.rst.txt new file mode 100644 index 000000000..506a013d3 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-reguid.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-reguid.8 + +zpool-reguid.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-reguid.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-remove.8.rst.txt b/_sources/man/v2.2/8/zpool-remove.8.rst.txt new file mode 100644 index 000000000..ea91b1cbf --- /dev/null +++ b/_sources/man/v2.2/8/zpool-remove.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-remove.8 + +zpool-remove.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-remove.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-reopen.8.rst.txt b/_sources/man/v2.2/8/zpool-reopen.8.rst.txt new file mode 100644 index 000000000..41dbd8dfc --- /dev/null +++ b/_sources/man/v2.2/8/zpool-reopen.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-reopen.8 + +zpool-reopen.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-reopen.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-replace.8.rst.txt b/_sources/man/v2.2/8/zpool-replace.8.rst.txt new file mode 100644 index 000000000..1043e92fc --- /dev/null +++ b/_sources/man/v2.2/8/zpool-replace.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-replace.8 + +zpool-replace.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-replace.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-resilver.8.rst.txt b/_sources/man/v2.2/8/zpool-resilver.8.rst.txt new file mode 100644 index 000000000..f8d011712 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-resilver.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-resilver.8 + +zpool-resilver.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-resilver.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-scrub.8.rst.txt b/_sources/man/v2.2/8/zpool-scrub.8.rst.txt new file mode 100644 index 000000000..c0c5eb9ea --- /dev/null +++ b/_sources/man/v2.2/8/zpool-scrub.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-scrub.8 + +zpool-scrub.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-scrub.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-set.8.rst.txt b/_sources/man/v2.2/8/zpool-set.8.rst.txt new file mode 100644 index 000000000..3ba2b5497 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-set.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-set.8 + +zpool-set.8 +=========== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-set.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-split.8.rst.txt b/_sources/man/v2.2/8/zpool-split.8.rst.txt new file mode 100644 index 000000000..d791cc92d --- /dev/null +++ b/_sources/man/v2.2/8/zpool-split.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-split.8 + +zpool-split.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-split.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-status.8.rst.txt b/_sources/man/v2.2/8/zpool-status.8.rst.txt new file mode 100644 index 000000000..8a6494910 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-status.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-status.8 + +zpool-status.8 +============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-status.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-sync.8.rst.txt b/_sources/man/v2.2/8/zpool-sync.8.rst.txt new file mode 100644 index 000000000..e1856df82 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-sync.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-sync.8 + +zpool-sync.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-sync.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-trim.8.rst.txt b/_sources/man/v2.2/8/zpool-trim.8.rst.txt new file mode 100644 index 000000000..fc932fb90 --- /dev/null +++ b/_sources/man/v2.2/8/zpool-trim.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-trim.8 + +zpool-trim.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-trim.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-upgrade.8.rst.txt b/_sources/man/v2.2/8/zpool-upgrade.8.rst.txt new file mode 100644 index 000000000..5de2cb51b --- /dev/null +++ b/_sources/man/v2.2/8/zpool-upgrade.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-upgrade.8 + +zpool-upgrade.8 +=============== +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-upgrade.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool-wait.8.rst.txt b/_sources/man/v2.2/8/zpool-wait.8.rst.txt new file mode 100644 index 000000000..9c26ebbfd --- /dev/null +++ b/_sources/man/v2.2/8/zpool-wait.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool-wait.8 + +zpool-wait.8 +============ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool-wait.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool.8.rst.txt b/_sources/man/v2.2/8/zpool.8.rst.txt new file mode 100644 index 000000000..1cffd8f23 --- /dev/null +++ b/_sources/man/v2.2/8/zpool.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool.8 + +zpool.8 +======= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zpool_influxdb.8.rst.txt b/_sources/man/v2.2/8/zpool_influxdb.8.rst.txt new file mode 100644 index 000000000..e098deca0 --- /dev/null +++ b/_sources/man/v2.2/8/zpool_influxdb.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zpool_influxdb.8 + +zpool_influxdb.8 +================ +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zpool_influxdb.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zstream.8.rst.txt b/_sources/man/v2.2/8/zstream.8.rst.txt new file mode 100644 index 000000000..f0d62cd0e --- /dev/null +++ b/_sources/man/v2.2/8/zstream.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zstream.8 + +zstream.8 +========= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zstream.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/8/zstreamdump.8.rst.txt b/_sources/man/v2.2/8/zstreamdump.8.rst.txt new file mode 100644 index 000000000..3415f60ae --- /dev/null +++ b/_sources/man/v2.2/8/zstreamdump.8.rst.txt @@ -0,0 +1,17 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/man8/zstreamdump.8 + +zstreamdump.8 +============= +.. raw:: html + +
+ +.. raw:: html + :file: ../../../_build/man/v2.2/man8/zstreamdump.8.html + +.. raw:: html + +
+ \ No newline at end of file diff --git a/_sources/man/v2.2/index.rst.txt b/_sources/man/v2.2/index.rst.txt new file mode 100644 index 000000000..fdd7cf2fa --- /dev/null +++ b/_sources/man/v2.2/index.rst.txt @@ -0,0 +1,12 @@ +.. THIS FILE IS AUTOGENERATED, DO NOT EDIT! + +:github_url: https://github.com/openzfs/zfs/blob/zfs-2.2.3/man/ + +v2.2 +==== +.. toctree:: + :maxdepth: 1 + :glob: + + */index + \ No newline at end of file diff --git a/_sources/msg/ZFS-8000-14/index.rst.txt b/_sources/msg/ZFS-8000-14/index.rst.txt new file mode 100644 index 000000000..5084bfcd8 --- /dev/null +++ b/_sources/msg/ZFS-8000-14/index.rst.txt @@ -0,0 +1,82 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-14 +======================= + +Corrupt ZFS cache +----------------- + ++-------------------------+--------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------+ +| **Description:** | The ZFS cache file is corrupted. | ++-------------------------+--------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------+ +| **Impact:** | ZFS filesystems are not available. | ++-------------------------+--------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +ZFS keeps a list of active pools on the filesystem to avoid having to +scan all devices when the system is booted. If this file is corrupted, +then normally active pools will not be automatically opened. The pools +can be recovered using the ``zpool import`` command: + +:: + + # zpool import + pool: test + id: 12743384782310107047 + state: ONLINE + action: The pool can be imported using its name or numeric identifier. + config: + + test ONLINE + sda9 ONLINE + +This will automatically scan ``/dev`` for any devices part of a pool. +If devices have been made available in an alternate location, use the +``-d`` option to ``zpool import`` to search for devices in a different +directory. + +Once you have determined which pools are available for import, you +can import the pool explicitly by specifying the name or numeric +identifier: + +:: + + # zpool import test + +Alternately, you can import all available pools by specifying the ``-a`` +option. Once a pool has been imported, the ZFS cache will be repaired +so that the pool will appear normally in the future. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-14`` indicates a corrupted ZFS cache file. +Take the documented action to resolve the problem. 
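As a minimal sketch of the two import options mentioned above (the ``/dev/disk/by-id`` path and the use of the example pool are only illustrative placeholders, not output from a real system), searching an alternate device directory and importing every pool that is found might look like:

::

    # zpool import -d /dev/disk/by-id    # example directory; substitute where your devices live
    # zpool import -a                    # import all discovered pools instead of naming one

Either form repairs the ZFS cache for the pools it imports, so they will again be opened automatically on subsequent boots.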
diff --git a/_sources/msg/ZFS-8000-2Q/index.rst.txt b/_sources/msg/ZFS-8000-2Q/index.rst.txt new file mode 100644 index 000000000..3eac49fa6 --- /dev/null +++ b/_sources/msg/ZFS-8000-2Q/index.rst.txt @@ -0,0 +1,134 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-2Q +======================= + +Missing device in replicated configuration +------------------------------------------ + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device in a replicated configuration could not | +| | be opened. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | A hot spare will be activated if available. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer providing the configured | +| | level of replication. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please +see the section below. Otherwise, run ``zpool status -x`` to determine +which pool has experienced a failure: + +:: + + # zpool status -x + pool: test + state: DEGRADED + status: One or more devices could not be opened. Sufficient replicas exist for + the pool to continue functioning in a degraded state. + action: Attach the missing device and online it using 'zpool online'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test DEGRADED 0 0 0 + mirror DEGRADED 0 0 0 + c0t0d0 ONLINE 0 0 0 + c0t0d1 FAULTED 0 0 0 cannot open + + errors: No known data errors + +Determine which device failed to open by looking for a FAULTED device +with an additional 'cannot open' message. If this device has been +inadvertently removed from the system, attach the device and bring it +online with ``zpool online``: + +:: + + # zpool online test c0t0d1 + +If the device is no longer available, the device can be replaced +using the ``zpool replace`` command: + +:: + + # zpool replace test c0t0d1 c0t0d2 + +If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +``zpool replace`` command: + +:: + + # zpool replace test c0t0d1 + +Existing data will be resilvered to the new device. 
Once the +resilvering completes, the device will be removed from the pool. + +.. rubric:: For an exported pool: + +If this error is encountered during a ``zpool import``, it means that +one of the devices is not attached to the system: + +:: + + # zpool import + pool: test + id: 10121266328238932306 + state: DEGRADED + status: One or more devices are missing from the system. + action: The pool can be imported despite missing or damaged devices. The + fault tolerance of the pool may be compromised if imported. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q + config: + + test DEGRADED + mirror DEGRADED + c0t0d0 ONLINE + c0t0d1 FAULTED cannot open + +Unlike when the pool is active on the system, the device cannot be +replaced while the pool is exported. If the device can be attached to +the system, attach the device and run ``zpool import`` again. + +Alternatively, the pool can be imported as-is, though it will be +placed in the DEGRADED state due to a missing device. The device will +be marked as UNAVAIL. Once the pool has been imported, the missing +device can be replaced as described above. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-2Q`` indicates a device which was unable +to be opened by the ZFS subsystem. diff --git a/_sources/msg/ZFS-8000-3C/index.rst.txt b/_sources/msg/ZFS-8000-3C/index.rst.txt new file mode 100644 index 000000000..fcdb0ccd9 --- /dev/null +++ b/_sources/msg/ZFS-8000-3C/index.rst.txt @@ -0,0 +1,110 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-3C +======================= + +Missing device in non-replicated configuration +---------------------------------------------- + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device could not be opened and no replicas are | +| | available. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer available. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please +see the section below. 
Otherwise, run ``zpool status -x`` to determine +which pool has experienced a failure: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: One or more devices could not be opened. There are insufficient + replicas for the pool to continue functioning. + action: Attach the missing device and online it using 'zpool online'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 insufficient replicas + c0t0d0 ONLINE 0 0 0 + c0t0d1 FAULTED 0 0 0 cannot open + + errors: No known data errors + +If the device has been temporarily detached from the system, attach +the device to the system and run ``zpool status`` again. The pool +should automatically detect the newly attached device and resume +functioning. You may have to mount the filesystems in the pool +explicitly using ``zfs mount -a``. + +If the device is no longer available and cannot be reattached to the +system, then the pool must be destroyed and re-created from a backup +source. + +.. rubric:: For an exported pool: + +If this error is encountered during a ``zpool import``, it means that +one of the devices is not attached to the system: + +:: + + # zpool import + pool: test + id: 10121266328238932306 + state: FAULTED + status: One or more devices are missing from the system. + action: The pool cannot be imported. Attach the missing devices and try again. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C + config: + + test FAULTED insufficient replicas + c0t0d0 ONLINE + c0t0d1 FAULTED cannot open + +The pool cannot be imported until the missing device is attached to +the system. If the device has been made available in an alternate +location, use the ``-d`` option to ``zpool import`` to search for devices +in a different directory. If the missing device is unavailable, then +the pool cannot be imported. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-3C`` indicates a device which was unable +to be opened by the ZFS subsystem. diff --git a/_sources/msg/ZFS-8000-4J/index.rst.txt b/_sources/msg/ZFS-8000-4J/index.rst.txt new file mode 100644 index 000000000..cab39c293 --- /dev/null +++ b/_sources/msg/ZFS-8000-4J/index.rst.txt @@ -0,0 +1,133 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. 
highlight:: none + +Message ID: ZFS-8000-4J +======================= + +Corrupted device label in a replicated configuration +---------------------------------------------------- + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device could not be opened due to a missing or | +| | invalid device label. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | A hot spare will be activated if available. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer providing the configured | +| | level of replication. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please +see the section below. Otherwise, run ``zpool status -x`` to determine +which pool has experienced a failure: + +:: + + # zpool status -x + pool: test + state: DEGRADED + status: One or more devices could not be used because the label is missing or + invalid. Sufficient replicas exist for the pool to continue + functioning in a degraded state. + action: Replace the device using 'zpool replace'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test DEGRADED 0 0 0 + mirror DEGRADED 0 0 0 + c0t0d0 ONLINE 0 0 0 + c0t0d1 FAULTED 0 0 0 corrupted data + + errors: No known data errors + +If the device has been temporarily detached from the system, attach +the device to the system and run ``zpool status`` again. The pool +should automatically detect the newly attached device and resume +functioning. + +If the device is no longer available, it can be replaced using ``zpool +replace``: + +:: + + # zpool replace test c0t0d1 c0t0d2 + +If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +``zpool replace`` command: + +:: + + # zpool replace test c0t0d1 + +ZFS will begin migrating data to the new device as soon as the +replace is issued. Once the resilvering completes, the original +device (if different from the replacement) will be removed, and the +pool will be restored to the ONLINE state. + +.. rubric:: For an exported pool: + +If this error is encountered while running ``zpool import``, the pool +can be still be imported despite the failure: + +:: + + # zpool import + pool: test + id: 5187963178597328409 + state: DEGRADED + status: One or more devices contains corrupted data. The fault tolerance of + the pool may be compromised if imported. + action: The pool can be imported using its name or numeric identifier. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J + config: + + test DEGRADED + mirror DEGRADED + c0t0d0 ONLINE + c0t0d1 FAULTED corrupted data + +To import the pool, run ``zpool import``: + +:: + + # zpool import test + +Once the pool has been imported, the damaged device can be replaced +according to the above procedure. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-4J`` indicates a device which was unable +to be opened by the ZFS subsystem. 
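+
+Putting the suggested actions above together, a minimal end-to-end
+sketch of the recovery might look like the following (the pool name
+``test`` and the device names ``c0t0d1``/``c0t0d2`` are the
+illustrative names used in the examples above, not real devices):
+
+::
+
+    # zpool import test                  # import the degraded pool
+    # zpool replace test c0t0d1 c0t0d2   # replace the device with the corrupt label
+    # zpool status test                  # watch the resilver until the pool returns to ONLINE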
diff --git a/_sources/msg/ZFS-8000-5E/index.rst.txt b/_sources/msg/ZFS-8000-5E/index.rst.txt new file mode 100644 index 000000000..0b895153f --- /dev/null +++ b/_sources/msg/ZFS-8000-5E/index.rst.txt @@ -0,0 +1,88 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-5E +======================= + +Corrupted device label in non-replicated configuration +------------------------------------------------------ + ++-------------------------+--------------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------------------+ +| **Description:** | A device could not be opened due to a missing or | +| | invalid device label and no replicas are | +| | available. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | The pool is no longer available. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +.. rubric:: For an active pool: + +If this error was encountered while running ``zpool import``, please see the +section below. Otherwise, run ``zpool status -x`` to determine which pool has +experienced a failure: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: One or more devices could not be used because the the label is missing + or invalid. There are insufficient replicas for the pool to continue + functioning. + action: Destroy and re-create the pool from a backup source. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 insufficient replicas + c0t0d0 FAULTED 0 0 0 corrupted data + c0t0d1 ONLINE 0 0 0 + + errors: No known data errors + +The device listed as FAULTED with 'corrupted data' cannot be opened due to a +corrupt label. ZFS will be unable to use the pool, and all data within the +pool is irrevocably lost. The pool must be destroyed and recreated from an +appropriate backup source. Using replicated configurations will prevent this +from happening in the future. + +.. rubric:: For an exported pool: + +If this error is encountered during ``zpool import``, the action is the same. +The pool cannot be imported - all data is lost and must be restored from an +appropriate backup source. + +.. 
rubric:: Details + +The Message ID: ``ZFS-8000-5E`` indicates a device which was unable to be +opened by the ZFS subsystem. diff --git a/_sources/msg/ZFS-8000-6X/index.rst.txt b/_sources/msg/ZFS-8000-6X/index.rst.txt new file mode 100644 index 000000000..b6702eb2e --- /dev/null +++ b/_sources/msg/ZFS-8000-6X/index.rst.txt @@ -0,0 +1,80 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-6X +======================= + +Missing top level device +------------------------ + ++-------------------------+--------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+--------------------------------------------+ +| **Description:** | One or more top level devices are missing. | ++-------------------------+--------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------+ +| **Impact:** | The pool cannot be imported. | ++-------------------------+--------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Run ``zpool import`` to list which pool cannot be imported: + +:: + + # zpool import + pool: test + id: 13783646421373024673 + state: FAULTED + status: One or more devices are missing from the system. + action: The pool cannot be imported. Attach the missing devices and try again. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-6X + config: + + test FAULTED missing device + c0t0d0 ONLINE + + Additional devices are known to be part of this pool, though their + exact configuration cannot be determined. + +ZFS attempts to store enough configuration data on the devices such +that the configuration is recoverable from any subset of devices. In +some cases, particularly when an entire toplevel virtual device is +not attached to the system, ZFS will be unable to determine the +complete configuration. It will always detect that these devices are +missing, even if it cannot identify all of the devices. + +The pool cannot be imported until the unknown missing device is +attached to the system. If the device has been made available in an +alternate location, use the ``-d`` option to ``zpool import`` to search +for devices in a different directory. If the missing device is +unavailable, then the pool cannot be imported. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-6X`` indicates one or more top level +devices are missing from the configuration. 
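+
+As a hedged illustration of the ``-d`` option mentioned above (the
+directory shown is an assumption; use whichever directory actually
+contains the device nodes on your system):
+
+::
+
+    # zpool import -d /dev/disk/by-id        # list pools found in that directory
+    # zpool import -d /dev/disk/by-id test   # import the pool using devices from that directory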
diff --git a/_sources/msg/ZFS-8000-72/index.rst.txt b/_sources/msg/ZFS-8000-72/index.rst.txt new file mode 100644 index 000000000..e302ea24e --- /dev/null +++ b/_sources/msg/ZFS-8000-72/index.rst.txt @@ -0,0 +1,112 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-72 +======================= + +Corrupted pool metadata +----------------------- + ++-------------------------+-------------------------------------------+ +| **Type:** | Error | ++-------------------------+-------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+-------------------------------------------+ +| **Description:** | The metadata required to open the pool is | +| | corrupt. | ++-------------------------+-------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+-------------------------------------------+ +| **Impact:** | The pool is no longer available. | ++-------------------------+-------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Even though all the devices are available, the on-disk data has been +corrupted such that the pool cannot be opened. If a recovery action +is presented, the pool can be returned to a usable state. Otherwise, +all data within the pool is lost, and the pool must be destroyed and +restored from an appropriate backup source. ZFS includes built-in +metadata replication to prevent this from happening even for +unreplicated pools, but running in a replicated configuration will +decrease the chances of this happening in the future. + +If this error is encountered during ``zpool import``, see the section +below. Otherwise, run ``zpool status -x`` to determine which pool is +faulted and if a recovery option is available: + +:: + + # zpool status -x + pool: test + id: 13783646421373024673 + state: FAULTED + status: The pool metadata is corrupted and cannot be opened. + action: Recovery is possible, but will result in some data loss. + Returning the pool to its state as of Mon Sep 28 10:24:39 2009 + should correct the problem. Approximately 59 seconds of data + will have to be discarded, irreversibly. Recovery can be + attempted by executing 'zpool clear -F test'. A scrub of the pool + is strongly recommended following a successful recovery. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72 + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 2 corrupted data + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 2 + +If recovery is unavailable, the recommended action will be: + +:: + + action: Destroy the pool and restore from backup. 
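+
+When a recovery action is offered, a cautious sequence is to check the
+dry run first, then recover and scrub (a sketch using the illustrative
+pool name ``test`` from the output above; the ``-n`` dry-run option is
+described in more detail below):
+
+::
+
+    # zpool clear -n -F test   # dry run: report whether rewind recovery would succeed
+    # zpool clear -F test      # perform the recovery, discarding the most recent changes
+    # zpool scrub test         # a scrub is strongly recommended after a successful recovery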
+ +If this error is encountered during ``zpool import``, and if no recovery option +is mentioned, the pool is unrecoverable and cannot be imported. The pool must +be restored from an appropriate backup source. If a recovery option is +available, the output from ``zpool import`` will look something like the +following: + +:: + + # zpool import share + cannot import 'share': I/O error + Recovery is possible, but will result in some data loss. + Returning the pool to its state as of Sun Sep 27 12:31:07 2009 + should correct the problem. Approximately 53 seconds of data + will have to be discarded, irreversibly. Recovery can be + attempted by executing 'zpool import -F share'. A scrub of the pool + is strongly recommended following a successful recovery. + +Recovery actions are requested with the -F option to either ``zpool +clear`` or ``zpool import``. Recovery will result in some data loss, +because it reverts the pool to an earlier state. A dry-run recovery +check can be performed by adding the ``-n`` option, affirming if recovery +is possible without actually reverting the pool to its earlier state. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-72`` indicates a pool was unable to be +opened due to a detected corruption in the pool metadata. diff --git a/_sources/msg/ZFS-8000-8A/index.rst.txt b/_sources/msg/ZFS-8000-8A/index.rst.txt new file mode 100644 index 000000000..a854e839d --- /dev/null +++ b/_sources/msg/ZFS-8000-8A/index.rst.txt @@ -0,0 +1,111 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-8A +======================= + +Corrupted data +-------------- + ++-------------------------+----------------------------------------------+ +| **Type:** | Error | ++-------------------------+----------------------------------------------+ +| **Severity:** | Critical | ++-------------------------+----------------------------------------------+ +| **Description:** | A file or directory could not be read due to | +| | corrupt data. | ++-------------------------+----------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------------+ +| **Impact:** | The file or directory is unavailable. | ++-------------------------+----------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Run ``zpool status -x`` to determine which pool is damaged: + +:: + + # zpool status -x + pool: test + state: ONLINE + status: One or more devices has experienced an error and no valid replicas + are available. Some filesystem data is corrupt, and applications + may have been affected. 
+ action: Destroy the pool and restore from backup. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 2 + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 0 + + errors: 1 data errors, use '-v' for a list + +Unfortunately, the data cannot be repaired, and the only choice to +repair the data is to restore the pool from backup. Applications +attempting to access the corrupted data will get an error (EIO), and +data may be permanently lost. + +The list of affected files can be retrieved by using the ``-v`` option to +``zpool status``: + +:: + + # zpool status -xv + pool: test + state: ONLINE + status: One or more devices has experienced an error and no valid replicas + are available. Some filesystem data is corrupt, and applications + may have been affected. + action: Destroy the pool and restore from backup. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 2 + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 0 + + errors: Permanent errors have been detected in the following files: + + /export/example/foo + +Damaged files may or may not be able to be removed depending on the +type of corruption. If the corruption is within the plain data, the +file should be removable. If the corruption is in the file metadata, +then the file cannot be removed, though it can be moved to an +alternate location. In either case, the data should be restored from +a backup source. It is also possible for the corruption to be within +pool-wide metadata, resulting in entire datasets being unavailable. +If this is the case, the only option is to destroy the pool and +re-create the datasets from backup. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-8A`` indicates corrupted data exists in +the current pool. diff --git a/_sources/msg/ZFS-8000-9P/index.rst.txt b/_sources/msg/ZFS-8000-9P/index.rst.txt new file mode 100644 index 000000000..e49b099a4 --- /dev/null +++ b/_sources/msg/ZFS-8000-9P/index.rst.txt @@ -0,0 +1,157 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-9P +======================= + +Failing device in replicated configuration +------------------------------------------ + ++-------------------------+----------------------------------------------------+ +| **Type:** | Error | ++-------------------------+----------------------------------------------------+ +| **Severity:** | Minor | ++-------------------------+----------------------------------------------------+ +| **Description:** | A device has experienced uncorrectable errors in a | +| | replicated configuration. 
| ++-------------------------+----------------------------------------------------+ +| **Automated Response:** | ZFS has attempted to repair the affected data. | ++-------------------------+----------------------------------------------------+ +| **Impact:** | The system is unaffected, though errors may | +| | indicate future failure. Future errors may cause | +| | ZFS to automatically fault the device. | ++-------------------------+----------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Run ``zpool status -x`` to determine which pool has experienced errors: + +:: + + # zpool status + pool: test + state: ONLINE + status: One or more devices has experienced an unrecoverable error. An + attempt was made to correct the error. Applications are unaffected. + action: Determine if the device needs to be replaced, and clear the errors + using 'zpool online' or replace the device with 'zpool replace'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + mirror ONLINE 0 0 0 + c0t0d0 ONLINE 0 0 2 + c0t0d1 ONLINE 0 0 0 + + errors: No known data errors + +Find the device with a non-zero error count for READ, WRITE, or +CKSUM. This indicates that the device has experienced a read I/O +error, write I/O error, or checksum validation error. Because the +device is part of a mirror or RAID-Z device, ZFS was able to recover +from the error and subsequently repair the damaged data. + +If these errors persist over a period of time, ZFS may determine the +device is faulty and mark it as such. However, these error counts may +or may not indicate that the device is unusable. It depends on how +the errors were caused, which the administrator can determine in +advance of any ZFS diagnosis. For example, the following cases will +all produce errors that do not indicate potential device failure: + +- A network attached device lost connectivity but has now + recovered +- A device suffered from a bit flip, an expected event over long + periods of time +- An administrator accidentally wrote over a portion of the disk + using another program + +In these cases, the presence of errors does not indicate that the +device is likely to fail in the future, and therefore does not need +to be replaced. If this is the case, then the device errors should be +cleared using ``zpool clear``: + +:: + + # zpool clear test c0t0d0 + +On the other hand, errors may very well indicate that the device has +failed or is about to fail. If there are continual I/O errors to a +device that is otherwise attached and functioning on the system, it +most likely needs to be replaced. The administrator should check the +system log for any driver messages that may indicate hardware +failure. If it is determined that the device needs to be replaced, +then the ``zpool replace`` command should be used: + +:: + + # zpool replace test c0t0d0 c0t0d2 + +This will attach the new device to the pool and begin resilvering +data to it. Once the resilvering process is complete, the old device +will automatically be removed from the pool, at which point it can +safely be removed from the system. 
If the device needs to be replaced +in-place (because there are no available spare devices), the original +device can be removed and replaced with a new device, at which point +a different form of ``zpool replace`` can be used: + +:: + + # zpool replace test c0t0d0 + +This assumes that the original device at 'c0t0d0' has been replaced +with a new device under the same path, and will be replaced +appropriately. + +You can monitor the progress of the resilvering operation by using +the ``zpool status -x`` command: + +:: + + # zpool status -x + pool: test + state: DEGRADED + status: One or more devices is currently being replaced. The pool may not be + providing the necessary level of replication. + action: Wait for the resilvering operation to complete + scrub: resilver in progress, 0.14% done, 0h0m to go + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + mirror ONLINE 0 0 0 + replacing ONLINE 0 0 0 + c0t0d0 ONLINE 0 0 3 + c0t0d2 ONLINE 0 0 0 58.5K resilvered + c0t0d1 ONLINE 0 0 0 + + errors: No known data errors + +.. rubric:: Details + +The Message ID: ``ZFS-8000-9P`` indicates a device has exceeded the +acceptable limit of errors allowed by the system. See document +`203768 `__ +for additional information. diff --git a/_sources/msg/ZFS-8000-A5/index.rst.txt b/_sources/msg/ZFS-8000-A5/index.rst.txt new file mode 100644 index 000000000..b58c12974 --- /dev/null +++ b/_sources/msg/ZFS-8000-A5/index.rst.txt @@ -0,0 +1,83 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-A5 +======================= + +Incompatible version +-------------------- + ++-------------------------+------------------------------------------------+ +| **Type:** | Error | ++-------------------------+------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+------------------------------------------------+ +| **Description:** | The on-disk version is not compatible with the | +| | running system. | ++-------------------------+------------------------------------------------+ +| **Automated Response:** | No automated response will occur. | ++-------------------------+------------------------------------------------+ +| **Impact:** | The pool is unavailable. | ++-------------------------+------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +If this error is seen during ``zpool import``, see the section below. +Otherwise, run ``zpool status -x`` to determine which pool is faulted: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: The ZFS version for the pool is incompatible with the software running + on this system. + action: Destroy and re-create the pool. 
+ scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 incompatible version + mirror ONLINE 0 0 0 + sda9 ONLINE 0 0 0 + sdb9 ONLINE 0 0 0 + + errors: No known errors + +The pool cannot be used on this system. Either move the storage to +the system where the pool was originally created, upgrade the current +system software to a more recent version, or destroy the pool and +re-create it from backup. + +If this error is seen during import, the pool cannot be imported on +the current system. The disks must be attached to the system which +originally created the pool, and imported there. + +The list of currently supported versions can be displayed using +``zpool upgrade -v``. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-A5`` indicates a version mismatch exists +between the running system and the on-disk data. diff --git a/_sources/msg/ZFS-8000-ER/index.rst.txt b/_sources/msg/ZFS-8000-ER/index.rst.txt new file mode 100644 index 000000000..b890abc27 --- /dev/null +++ b/_sources/msg/ZFS-8000-ER/index.rst.txt @@ -0,0 +1,320 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-ER +======================= + +ZFS Errata #1 +------------- + ++-------------------------+--------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+--------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+--------------------------------------------------+ +| **Description:** | The ZFS pool contains an on-disk format | +| | incompatibility. | ++-------------------------+--------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------------+ +| **Impact:** | Until the pool is scrubbed using OpenZFS version | +| | 0.6.3 or newer the pool may not be imported by | +| | older versions of OpenZFS or other ZFS | +| | implementations. | ++-------------------------+--------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool contains an on-disk format incompatibility. Affected pools +must be imported and scrubbed using the current version of ZFS. This +will return the pool to a state in which it may be imported by other +implementations. This errata only impacts compatibility between ZFS +versions, no user data is at risk as result of this erratum. + +:: + + # zpool status -x + pool: test + state: ONLINE + status: Errata #1 detected. + action: To correct the issue run 'zpool scrub'. 
+ see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + scan: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + raidz1-0 ONLINE 0 0 0 + vdev0 ONLINE 0 0 0 + vdev1 ONLINE 0 0 0 + vdev2 ONLINE 0 0 0 + vdev3 ONLINE 0 0 0 + + errors: No known data errors + + # zpool scrub test + + # zpool status -x + all pools are healthy + + +ZFS Errata #2 +------------- + ++-------------------------+---------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+---------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+---------------------------------------------------+ +| **Description:** | The ZFS packages were updated while an | +| | asynchronous destroy was in progress and the pool | +| | contains an on-disk format incompatibility. | ++-------------------------+---------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+---------------------------------------------------+ +| **Impact:** | The pool cannot be imported until the issue is | +| | corrected. | ++-------------------------+---------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +Affected pools must be reverted to the previous ZFS version where +they can be correctly imported. Once imported, all asynchronous +destroy operations must be allowed to complete. The ZFS packages may +then be updated and the pool can be imported cleanly by the newer +software. + +:: + + # zpool import + pool: test + id: 1165955789558693437 + state: ONLINE + status: Errata #2 detected. + action: The pool cannot be imported with this version of ZFS due to + an active asynchronous destroy. Revert to an earlier version + and allow the destroy to complete before updating. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + config: + + test ONLINE + raidz1-0 ONLINE + vdev0 ONLINE + vdev1 ONLINE + vdev2 ONLINE + vdev3 ONLINE + +Revert to previous ZFS version, import the pool, then wait for the +``freeing`` property to drop to zero. This indicates that all +outstanding asynchronous destroys have completed. + +:: + + # zpool get freeing + NAME PROPERTY VALUE SOURCE + test freeing 0 default + +The ZFS packages may be now be updated and the pool imported. The +on-disk format incompatibility can now be corrected online as +described in `Errata #1 <#1>`__. + + +ZFS Errata #3 +------------- + ++-------------------------+----------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+----------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+----------------------------------------------------+ +| **Description:** | An encrypted dataset contains an on-disk format | +| | incompatibility. | ++-------------------------+----------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------------------+ +| **Impact:** | Encrypted datasets created before the ZFS packages | +| | were updated cannot be mounted or opened for | +| | write. The errata impacts the ability of ZFS to | +| | correctly perform raw sends, so this functionality | +| | has been disabled for these datasets. | ++-------------------------+----------------------------------------------------+ + +.. 
rubric:: Suggested Action for System Administrator + +System administrators with affected pools will need to recreate any +encrypted datasets created before the new version of ZFS was used. +This can be accomplished by using ``zfs send`` and ``zfs receive``. +Note, however, that backups can NOT be done with a raw ``zfs send -w``, +since this would preserve the on-disk incompatibility. +Alternatively, system administrators can use conventional tools to +back up data to new encrypted datasets. The new version of ZFS will +prevent new data from being written to the impacted datasets, but +they can still be mounted read-only. + +:: + + # zpool status + pool: test + id: 1165955789558693437 + state: ONLINE + status: Errata #3 detected. + action: To correct the issue backup existing encrypted datasets to new + encrypted datasets and destroy the old ones. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + config: + + test ONLINE + raidz1-0 ONLINE + vdev0 ONLINE + vdev1 ONLINE + vdev2 ONLINE + vdev3 ONLINE + +Import the pool and backup any existing encrypted datasets to new +datasets. To ensure the new datasets are re-encrypted, be sure to +receive them below an encryption root or use ``zfs receive -o +encryption=on``, then destroy the source dataset. + +:: + + # zfs send test/crypt1@snap1 | zfs receive -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile test/newcrypt1 + # zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1 + # zfs destroy -R test/crypt1 + +New datasets can be mounted read-write and used normally. The errata +will be cleared upon reimporting the pool and the alert will only be +shown again if another dataset is found with the errata. To ensure +that all datasets are on the new version reimport the pool, load all +keys, mount all encrypted datasets, and check ``zpool status``. + +:: + + # zpool export test + # zpool import test + # zfs load-key -a + Enter passphrase for 'test/crypt1': + 1 / 1 key(s) successfully loaded + # zfs mount -a + # zpool status -x + all pools are healthy + + +ZFS Errata #4 +------------- + ++-------------------------+----------------------------------------------------+ +| **Type:** | Compatibility | ++-------------------------+----------------------------------------------------+ +| **Severity:** | Moderate | ++-------------------------+----------------------------------------------------+ +| **Description:** | An encrypted dataset contains an on-disk format | +| | incompatibility. | ++-------------------------+----------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------------------+ +| **Impact:** | Encrypted datasets created before the ZFS packages | +| | were updated cannot be backed up via a raw send to | +| | an updated system. These datasets also cannot | +| | receive additional snapshots. New encrypted | +| | datasets cannot be created until the | +| | ``bookmark_v2`` feature has been enabled. | ++-------------------------+----------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +First, system administrators with affected pools will need to enable +the ``bookmark_v2`` feature on their pools. Enabling this feature +will prevent this pool from being imported by previous versions of +the ZFS software after any new bookmarks are created (including +read-only imports). 
If the pool contains no encrypted datasets, this +is the only step required. If there are existing encrypted datasets, +administrators will then need to back these datasets up. This can be +done in several ways. Non-raw ``zfs send`` and ``zfs receive`` can be +used as per usual, as can traditional backup tools. Raw receives of +existing encrypted datasets and raw receives into existing encrypted +datasets are currently disabled because ZFS is not able to guarantee +that the stream and the existing dataset came from a consistent +source. This check can be disabled which will allow ZFS to receive +these streams anyway. Note that this can result in datasets with data +that cannot be accessed due to authentication errors if raw and +non-raw receives are mixed over the course of several incremental +backups. To disable this restriction, set the +``zfs_disable_ivset_guid_check`` module parameter to 1. Streams +received this way (as well as any received before the upgrade) will +need to be manually checked by reading the data to ensure they are +not corrupted. Note that ``zpool scrub`` cannot be used for this +purpose because the scrub does not check the cryptographic +authentication codes. For more information on this issue, please +refer to the zfs man page section on ``zfs receive`` which describes +the restrictions on raw sends. + +:: + + # zpool status + pool: test + state: ONLINE + status: Errata #4 detected. + Existing encrypted datasets contain an on-disk incompatibility + which needs to be corrected. + action: To correct the issue enable the bookmark_v2 feature and backup + any existing encrypted datasets to new encrypted datasets and + destroy the old ones. If this pool does not contain any + encrypted datasets, simply enable the bookmark_v2 feature. + see: http://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER + scan: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + /root/vdev0 ONLINE 0 0 0 + + errors: No known data errors + +Import the pool and enable the ``bookmark_v2`` feature. Then backup +any existing encrypted datasets to new datasets. This can be done +with traditional tools or via ``zfs send``. Raw sends will require +that the ``zfs_disable_ivset_guid_check`` is set to 1 on the receive +side. Once this is done, the original datasets should be destroyed. + +:: + + # zpool set feature@bookmark_v2=enabled test + # echo 1 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check + # zfs send -Rw test/crypt1@snap1 | zfs receive test/newcrypt1 + # zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1 + # zfs destroy -R test/crypt1 + # echo 0 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check + +The errata will be cleared upon reimporting the pool and the alert +will only be shown again if another dataset is found with the errata. +To check that all datasets are fixed, perform a ``zfs list -t all``, +and check ``zpool status`` once it is completed. + +:: + + # zpool export test + # zpool import test + # zpool scrub # wait for completion + # zpool status -x + all pools are healthy diff --git a/_sources/msg/ZFS-8000-EY/index.rst.txt b/_sources/msg/ZFS-8000-EY/index.rst.txt new file mode 100644 index 000000000..0bd466e8a --- /dev/null +++ b/_sources/msg/ZFS-8000-EY/index.rst.txt @@ -0,0 +1,79 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. 
+ + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-EY +======================= + +ZFS label hostid mismatch +------------------------- + ++-------------------------+---------------------------------------------------+ +| **Type:** | Error | ++-------------------------+---------------------------------------------------+ +| **Severity:** | Major | ++-------------------------+---------------------------------------------------+ +| **Description:** | The ZFS pool was last accessed by another system. | ++-------------------------+---------------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+---------------------------------------------------+ +| **Impact:** | ZFS filesystems are not available. | ++-------------------------+---------------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool has been written to from another host, and was not cleanly +exported from the other system. Actively importing a pool on multiple +systems will corrupt the pool and leave it in an unrecoverable state. +To determine which system last accessed the pool, run the ``zpool +import`` command: + +:: + + # zpool import + pool: test + id: 14702934086626715962 + state: ONLINE + status: The pool was last accessed by another system. + action: The pool can be imported using its name or numeric identifier and + the '-f' flag. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY + config: + + test ONLINE + c0t0d0 ONLINE + + # zpool import test + cannot import 'test': pool may be in use from other system, it was last + accessed by 'tank' (hostid: 0x1435718c) on Fri Mar 9 15:42:47 2007 + use '-f' to import anyway + +If you are certain that the pool is not being actively accessed by +another system, then you can use the ``-f`` option to ``zpool import`` to +forcibly import the pool. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-EY`` indicates that the pool cannot be +imported as it was last accessed by another system. Take the +documented action to resolve the problem. diff --git a/_sources/msg/ZFS-8000-HC/index.rst.txt b/_sources/msg/ZFS-8000-HC/index.rst.txt new file mode 100644 index 000000000..1a94d7e44 --- /dev/null +++ b/_sources/msg/ZFS-8000-HC/index.rst.txt @@ -0,0 +1,85 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+ If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. highlight:: none + +Message ID: ZFS-8000-HC +======================= + +ZFS pool I/O failures +--------------------- + ++-------------------------+-----------------------------------------+ +| **Type:** | Error | ++-------------------------+-----------------------------------------+ +| **Severity:** | Major | ++-------------------------+-----------------------------------------+ +| **Description:** | The ZFS pool has experienced currently | +| | unrecoverable I/O failures. | ++-------------------------+-----------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+-----------------------------------------+ +| **Impact:** | Read and write I/Os cannot be serviced. | ++-------------------------+-----------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool has experienced I/O failures. Since the ZFS pool property +``failmode`` is set to 'wait', all I/Os (reads and writes) are blocked. +See the zpoolprops(8) manpage for more information on the ``failmode`` +property. Manual intervention is required for I/Os to be serviced. + +You can see which devices are affected by running ``zpool status -x``: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: There are I/O failures. + action: Make sure the affected devices are connected, then run 'zpool clear'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 13 0 insufficient replicas + c0t0d0 FAULTED 0 7 0 experienced I/O failures + c0t1d0 ONLINE 0 0 0 + + errors: 1 data errors, use '-v' for a list + +After you have made sure the affected devices are connected, run ``zpool +clear`` to allow I/O to the pool again: + +:: + + # zpool clear test + +If I/O failures continue to happen, then applications and commands for the pool +may hang. At this point, a reboot may be necessary to allow I/O to the pool +again. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-HC`` indicates that the pool has experienced I/O +failures. Take the documented action to resolve the problem. diff --git a/_sources/msg/ZFS-8000-JQ/index.rst.txt b/_sources/msg/ZFS-8000-JQ/index.rst.txt new file mode 100644 index 000000000..6ffcc2fcc --- /dev/null +++ b/_sources/msg/ZFS-8000-JQ/index.rst.txt @@ -0,0 +1,86 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. 
highlight:: none + +Message ID: ZFS-8000-JQ +======================= + +ZFS pool I/O failures +--------------------- + ++-------------------------+----------------------------------------+ +| **Type:** | Error | ++-------------------------+----------------------------------------+ +| **Severity:** | Major | ++-------------------------+----------------------------------------+ +| **Description:** | The ZFS pool has experienced currently | +| | unrecoverable I/O failures. | ++-------------------------+----------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+----------------------------------------+ +| **Impact:** | Write I/Os cannot be serviced. | ++-------------------------+----------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +The pool has experienced I/O failures. Since the ZFS pool property +``failmode`` is set to 'continue', read I/Os will continue to be +serviced, but write I/Os are blocked. See the zpoolprops(8) manpage for +more information on the ``failmode`` property. Manual intervention is +required for write I/Os to be serviced. You can see which devices are +affected by running ``zpool status -x``: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: There are I/O failures. + action: Make sure the affected devices are connected, then run 'zpool clear'. + see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 13 0 insufficient replicas + sda9 FAULTED 0 7 0 experienced I/O failures + sdb9 ONLINE 0 0 0 + + errors: 1 data errors, use '-v' for a list + +After you have made sure the affected devices are connected, run +``zpool clear`` to allow write I/O to the pool again: + +:: + + # zpool clear test + +If I/O failures continue to happen, then applications and commands +for the pool may hang. At this point, a reboot may be necessary to +allow I/O to the pool again. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-JQ`` indicates that the pool has +experienced I/O failures. Take the documented action to resolve the +problem. diff --git a/_sources/msg/ZFS-8000-K4/index.rst.txt b/_sources/msg/ZFS-8000-K4/index.rst.txt new file mode 100644 index 000000000..c8963d801 --- /dev/null +++ b/_sources/msg/ZFS-8000-K4/index.rst.txt @@ -0,0 +1,132 @@ +.. + CDDL HEADER START + + The contents of this file are subject to the terms of the + Common Development and Distribution License (the "License"). + You may not use this file except in compliance with the License. + + You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE + or http://www.opensolaris.org/os/licensing. + See the License for the specific language governing permissions + and limitations under the License. + + When distributing Covered Code, include this CDDL HEADER in each + file and include the License file at usr/src/OPENSOLARIS.LICENSE. + If applicable, add the following below this CDDL HEADER, with the + fields enclosed by brackets "[]" replaced with your own identifying + information: Portions Copyright [yyyy] [name of copyright owner] + + CDDL HEADER END + + Portions Copyright 2007 Sun Microsystems, Inc. + +.. 
highlight:: none + +Message ID: ZFS-8000-K4 +======================= + +ZFS intent log read failure +--------------------------- + ++-------------------------+--------------------------------------------+ +| **Type:** | Error | ++-------------------------+--------------------------------------------+ +| **Severity:** | Major | ++-------------------------+--------------------------------------------+ +| **Description:** | A ZFS intent log device could not be read. | ++-------------------------+--------------------------------------------+ +| **Automated Response:** | No automated response will be taken. | ++-------------------------+--------------------------------------------+ +| **Impact:** | The intent log(s) cannot be replayed. | ++-------------------------+--------------------------------------------+ + +.. rubric:: Suggested Action for System Administrator + +A ZFS intent log record could not be read due to an error. This may +be due to a missing or broken log device, or a device within the pool +may be experiencing I/O errors. The pool itself is not corrupt but is +missing some pool changes that happened shortly before a power loss +or system failure. These are pool changes that applications had +requested to be written synchronously but had not been committed in +the pool. This transaction group commit currently occurs every five +seconds, and so typically at most five seconds worth of synchronous +writes have been lost. ZFS itself cannot determine if the pool +changes lost are critical to those applications running at the time +of the system failure. This is a decision the administrator must +make. You may want to consider mirroring log devices. First determine +which pool is in error: + +:: + + # zpool status -x + pool: test + state: FAULTED + status: One or more of the intent logs could not be read. + Waiting for adminstrator intervention to fix the faulted pool. + action: Either restore the affected device(s) and run 'zpool online', + or ignore the intent log records by running 'zpool clear'. + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test FAULTED 0 0 0 bad intent log + c3t2d0 ONLINE 0 0 0 + logs FAULTED 0 0 0 bad intent log + c5t3d0 UNAVAIL 0 0 0 cannot open + +There are two courses of action to resolve this problem. +If the validity of the pool from an application perspective requires +the pool changes then the log devices must be recovered. Make sure +power and cables are connected and that the affected device is +online. Then run ``zpool online`` and then ``zpool clear``: + +:: + + # zpool online test c5t3d0 + # zpool clear test + # zpool status test + pool: test + state: ONLINE + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test ONLINE 0 0 0 + c3t2d0 ONLINE 0 0 0 + logs ONLINE 0 0 0 + c5t3d0 ONLINE 0 0 0 + + errors: No known data errors + +The second alternative action is to ignore the most recent pool +changes that could not be read. To do this run ``zpool clear``: + +:: + + # zpool clear test + # zpool status test + pool: test + state: DEGRADED + status: One or more devices could not be opened. Sufficient replicas exist for + the pool to continue functioning in a degraded state. + action: Attach the missing device and online it using 'zpool online'. 
+ see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q + scrub: none requested + config: + + NAME STATE READ WRITE CKSUM + test DEGRADED 0 0 0 + c3t2d0 ONLINE 0 0 0 + logs DEGRADED 0 0 0 + c5t3d0 UNAVAIL 0 0 0 cannot open + + errors: No known data errors + +Future log records will not use a failed log device but will be +written to the main pool. You should fix or replace any failed log +devices. + +.. rubric:: Details + +The Message ID: ``ZFS-8000-K4`` indicates that a log device is +missing or cannot be read. diff --git a/_sources/msg/index.rst.txt b/_sources/msg/index.rst.txt new file mode 100644 index 000000000..cbcbcdd3e --- /dev/null +++ b/_sources/msg/index.rst.txt @@ -0,0 +1,9 @@ +ZFS Messages +============ + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + :glob: + + ZFS-*/index diff --git a/_static/_sphinx_javascript_frameworks_compat.js b/_static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 000000000..81415803e --- /dev/null +++ b/_static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,123 @@ +/* Compatability shim for jQuery and underscores.js. + * + * Copyright Sphinx contributors + * Released under the two clause BSD licence + */ + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. 
+ */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. + */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/_static/basic.css b/_static/basic.css new file mode 100644 index 000000000..cfc60b86c --- /dev/null +++ b/_static/basic.css @@ -0,0 +1,921 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + 
+div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + 
border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; 
+ word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: 
-1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/_static/css/badge_only.css b/_static/css/badge_only.css new file mode 100644 index 000000000..c718cee44 --- /dev/null +++ b/_static/css/badge_only.css @@ -0,0 +1 @@ +.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd 
a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}} \ No newline at end of file diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff b/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 000000000..6cb600001 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff2 b/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 000000000..7059e2314 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff b/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 000000000..f815f63f9 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff2 b/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 000000000..f2c76e5bd Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/_static/css/fonts/fontawesome-webfont.eot b/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 000000000..e9f60ca95 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/_static/css/fonts/fontawesome-webfont.svg b/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 000000000..855c845e5 --- /dev/null +++ b/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/_static/css/fonts/fontawesome-webfont.ttf b/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 000000000..35acda2fa Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.ttf differ diff --git a/_static/css/fonts/fontawesome-webfont.woff b/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 000000000..400014a4b Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/_static/css/fonts/fontawesome-webfont.woff2 b/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 000000000..4d13fc604 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/_static/css/fonts/lato-bold-italic.woff b/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 000000000..88ad05b9f Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff differ diff --git a/_static/css/fonts/lato-bold-italic.woff2 b/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 000000000..c4e3d804b Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/_static/css/fonts/lato-bold.woff b/_static/css/fonts/lato-bold.woff new file mode 100644 index 000000000..c6dff51f0 Binary files /dev/null and b/_static/css/fonts/lato-bold.woff differ diff --git a/_static/css/fonts/lato-bold.woff2 b/_static/css/fonts/lato-bold.woff2 new file mode 100644 index 000000000..bb195043c Binary files /dev/null and b/_static/css/fonts/lato-bold.woff2 differ diff --git a/_static/css/fonts/lato-normal-italic.woff b/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 000000000..76114bc03 Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff differ diff --git a/_static/css/fonts/lato-normal-italic.woff2 b/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 000000000..3404f37e2 Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff2 differ diff --git a/_static/css/fonts/lato-normal.woff b/_static/css/fonts/lato-normal.woff new file mode 100644 index 000000000..ae1307ff5 Binary files 
/dev/null and b/_static/css/fonts/lato-normal.woff differ diff --git a/_static/css/fonts/lato-normal.woff2 b/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 000000000..3bf984332 Binary files /dev/null and b/_static/css/fonts/lato-normal.woff2 differ diff --git a/_static/css/mandoc.css b/_static/css/mandoc.css new file mode 100644 index 000000000..8cf11fcc0 --- /dev/null +++ b/_static/css/mandoc.css @@ -0,0 +1,262 @@ +/* $Id: mandoc.css,v 1.46 2019/06/02 16:57:13 schwarze Exp $ */ +/* + * Standard style sheet for mandoc(1) -Thtml and man.cgi(8). + * + * Written by Ingo Schwarze . + * I place this file into the public domain. + * Permission to use, copy, modify, and distribute it for any purpose + * with or without fee is hereby granted, without any conditions. + */ + +/* + * Edited by George Melikov + * to be integrated with sphinx RTD theme. + */ + +/* override */ +.man_container code { + overflow-x: initial; + background: none; + border: none; + font-size: 100%; +} + +/* OpenZFS styles */ +.man_container .head { + max-width: 640px; + width: 100%; +} +.man_container .head .head-vol { + text-align: center; +} +.man_container .head .head-rtitle { + text-align: right; +} +.man_container .foot td { + padding: 1em; +} + +/* Fix for Chrome */ +.man_container dl dt { + display: initial !important; + color: black !important; +} + +/* Next CSS rules come from upstream file as is, only with needed changes */ + +/* Sections and paragraphs. */ + +.manual-text { + margin-left: 0em; } +.Nd { } +section.Sh { } +h1.Sh { margin-top: 1.2em; + margin-bottom: 0.6em; } +section.Ss { } +h2.Ss { margin-top: 1.2em; + margin-bottom: 0.6em; + font-size: 105%; } +.Pp { margin: 0.6em 0em; } +.Sx { } +.Xr { } + +/* Displays and lists. */ + +.Bd { } +.Bd-indent { margin-left: 3.8em; } + +.Bl-bullet { list-style-type: disc; + padding-left: 1em; } +.Bl-bullet > li { } +.Bl-dash { list-style-type: none; + padding-left: 0em; } +.Bl-dash > li:before { + content: "\2014 "; } +.Bl-item { list-style-type: none; + padding-left: 0em; } +.Bl-item > li { } +.Bl-compact > li { + margin-top: 0em; } + +.Bl-enum { padding-left: 2em; } +.Bl-enum > li { } +.Bl-compact > li { + margin-top: 0em; } + +.Bl-diag { } +.Bl-diag > dt { + font-style: normal; + font-weight: bold; } +.Bl-diag > dd { + margin-left: 0em; } +.Bl-hang { } +.Bl-hang > dt { } +.Bl-hang > dd { + margin-left: 5.5em; } +.Bl-inset { } +.Bl-inset > dt { } +.Bl-inset > dd { + margin-left: 0em; } +.Bl-ohang { } +.Bl-ohang > dt { } +.Bl-ohang > dd { + margin-left: 0em; } +.Bl-tag { margin-top: 0.6em; + margin-left: 5.5em; } +.Bl-tag > dt { + float: left; + margin-top: 0em; + margin-left: -5.5em; + padding-right: 0.5em; + vertical-align: top; } +.Bl-tag > dd { + clear: right; + width: 100%; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0.6em; + vertical-align: top; + overflow: auto; } +.Bl-compact { margin-top: 0em; } +.Bl-compact > dd { + margin-bottom: 0em; } +.Bl-compact > dt { + margin-top: 0em; } + +.Bl-column { } +.Bl-column > tbody > tr { } +.Bl-column > tbody > tr > td { + margin-top: 1em; } +.Bl-compact > tbody > tr > td { + margin-top: 0em; } + +.Rs { font-style: normal; + font-weight: normal; } +.RsA { } +.RsB { font-style: italic; + font-weight: normal; } +.RsC { } +.RsD { } +.RsI { font-style: italic; + font-weight: normal; } +.RsJ { font-style: italic; + font-weight: normal; } +.RsN { } +.RsO { } +.RsP { } +.RsQ { } +.RsR { } +.RsT { text-decoration: underline; } +.RsU { } +.RsV { } + +.eqn { } +.tbl td { vertical-align: middle; } + +.HP { 
margin-left: 3.8em; + text-indent: -3.8em; } + +/* Semantic markup for command line utilities. */ + +table.Nm { } +code.Nm { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Fl { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Cm { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ar { font-style: italic; + font-weight: normal; } +.Op { display: inline; } +.Ic { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ev { font-style: normal; + font-weight: normal; + font-family: monospace; } +.Pa { font-style: italic; + font-weight: normal; } + +/* Semantic markup for function libraries. */ + +.Lb { } +code.In { font-style: normal; + font-weight: bold; + font-family: inherit; } +a.In { } +.Fd { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ft { font-style: italic; + font-weight: normal; } +.Fn { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Fa { font-style: italic; + font-weight: normal; } +.Vt { font-style: italic; + font-weight: normal; } +.Va { font-style: italic; + font-weight: normal; } +.Dv { font-style: normal; + font-weight: normal; + font-family: monospace; } +.Er { font-style: normal; + font-weight: normal; + font-family: monospace; } + +/* Various semantic markup. */ + +.An { } +.Lk { } +.Mt { } +.Cd { font-style: normal; + font-weight: bold; + font-family: inherit; } +.Ad { font-style: italic; + font-weight: normal; } +.Ms { font-style: normal; + font-weight: bold; } +.St { } +.Ux { } + +/* Physical markup. */ + +.Bf { display: inline; } +.No { font-style: normal; + font-weight: normal; } +.Em { font-style: italic; + font-weight: normal; } +.Sy { font-style: normal; + font-weight: bold; } +.Li { font-style: normal; + font-weight: normal; + font-family: monospace; } + +/* Tooltip support. */ + +h1.Sh, h2.Ss { position: relative; } +.An, .Ar, .Cd, .Cm, .Dv, .Em, .Er, .Ev, .Fa, .Fd, .Fl, .Fn, .Ft, +.Ic, code.In, .Lb, .Lk, .Ms, .Mt, .Nd, code.Nm, .Pa, .Rs, +.St, .Sx, .Sy, .Va, .Vt, .Xr { + display: inline-block; + position: relative; }? + +/* Overrides to avoid excessive margins on small devices. 
*/ + +@media (max-width: 37.5em) { +.manual-text { + margin-left: 0.5em; } +h1.Sh, h2.Ss { margin-left: 0em; } +.Bd-indent { margin-left: 2em; } +.Bl-hang > dd { + margin-left: 2em; } +.Bl-tag { margin-left: 2em; } +.Bl-tag > dt { + margin-left: -2em; } +.HP { margin-left: 2em; + text-indent: -2em; } +} diff --git a/_static/css/theme.css b/_static/css/theme.css new file mode 100644 index 000000000..19a446a0e --- /dev/null +++ b/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content 
.admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 
FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt 
.pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning 
.wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magic:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content 
.footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 
.rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content 
.wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content 
code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/_static/css/theme_overrides.css b/_static/css/theme_overrides.css new file mode 100644 index 000000000..730b6fe94 --- /dev/null +++ b/_static/css/theme_overrides.css @@ -0,0 +1,17 @@ +/* override table width restrictions */ +@media screen and (min-width: 767px) { + + .wy-table-responsive table td { + /* !important prevents the common CSS stylesheets from overriding + this as on RTD they are loaded after this stylesheet */ + white-space: normal !important; + } + + .wy-table-responsive { + overflow: visible !important; + } + + .wy-nav-content { + max-width: 1500px !important; + } + } diff --git a/_static/doctools.js b/_static/doctools.js new file mode 100644 index 000000000..d06a71d75 --- /dev/null +++ b/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. 
+ * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); 
+ if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/_static/documentation_options.js b/_static/documentation_options.js new file mode 100644 index 000000000..b57ae3b83 --- /dev/null +++ b/_static/documentation_options.js @@ -0,0 +1,14 @@ +var DOCUMENTATION_OPTIONS = { + URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), + VERSION: '', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/_static/favicon.ico b/_static/favicon.ico new file mode 100644 index 000000000..35ad3d5c1 Binary files /dev/null and b/_static/favicon.ico differ diff --git a/_static/file.png b/_static/file.png new file mode 100644 index 000000000..a858a410e Binary files /dev/null and b/_static/file.png differ diff --git a/_static/img/draid-resilver-hours.png b/_static/img/draid-resilver-hours.png new file mode 100644 index 000000000..41899d28f Binary files /dev/null and b/_static/img/draid-resilver-hours.png differ diff --git a/_static/img/favicon.ico b/_static/img/favicon.ico new file mode 100644 index 000000000..35ad3d5c1 Binary files /dev/null and b/_static/img/favicon.ico differ diff --git a/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png b/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png new file mode 100644 index 000000000..4338899a4 Binary files /dev/null and b/_static/img/logo/320px-Open-ZFS-Secondary-Logo-Colour-halfsize.png differ diff --git a/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png b/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png new file mode 100644 index 000000000..af853062f Binary files /dev/null and b/_static/img/logo/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png differ diff --git a/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png b/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png new file mode 100644 index 000000000..32fd3e21e Binary files /dev/null and b/_static/img/logo/800px-Open-ZFS-Secondary-Logo-Colour-halfsize.png differ diff --git a/_static/img/logo/logo_main.png b/_static/img/logo/logo_main.png new file mode 100644 index 000000000..cc86e84e7 Binary files /dev/null and b/_static/img/logo/logo_main.png differ diff --git a/_static/img/logo/zof-logo.png b/_static/img/logo/zof-logo.png new file mode 100644 index 000000000..0612f6056 Binary files /dev/null and b/_static/img/logo/zof-logo.png differ diff --git a/_static/img/raidz_draid.png b/_static/img/raidz_draid.png new file mode 100644 index 000000000..b5617cd14 Binary files /dev/null and b/_static/img/raidz_draid.png differ diff --git a/_static/jquery.js b/_static/jquery.js new file mode 100644 index 000000000..c4c6022f2 --- /dev/null +++ b/_static/jquery.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return 
H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var 
t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof 
e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return 
this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=y.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=y.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),y.elements=c+" "+a,j(b)}function f(a){var b=x[a[v]];return b||(b={},w++,a[v]=w,x[w]=b),b}function g(a,c,d){if(c||(c=b),q)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():u.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||t.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),q)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return y.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(y,b.frag)}function j(a){a||(a=b);var d=f(a);return!y.shivCSS||p||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),q||i(a,d),a}function k(a){for(var b,c=a.getElementsByTagName("*"),e=c.length,f=RegExp("^(?:"+d().join("|")+")$","i"),g=[];e--;)b=c[e],f.test(b.nodeName)&&g.push(b.applyElement(l(b)));return g}function l(a){for(var b,c=a.attributes,d=c.length,e=a.ownerDocument.createElement(A+":"+a.nodeName);d--;)b=c[d],b.specified&&e.setAttribute(b.nodeName,b.nodeValue);return e.style.cssText=a.style.cssText,e}function m(a){for(var b,c=a.split("{"),e=c.length,f=RegExp("(^|[\\s,>+~])("+d().join("|")+")(?=[[\\s,>+~#.:]|$)","gi"),g="$1"+A+"\\:$2";e--;)b=c[e]=c[e].split("}"),b[b.length-1]=b[b.length-1].replace(f,g),c[e]=b.join("}");return c.join("{")}function n(a){for(var b=a.length;b--;)a[b].removeNode()}function o(a){function b(){clearTimeout(g._removeSheetTimer),d&&d.removeNode(!0),d=null}var d,e,g=f(a),h=a.namespaces,i=a.parentWindow;return!B||a.printShived?a:("undefined"==typeof h[A]&&h.add(A),i.attachEvent("onbeforeprint",function(){b();for(var f,g,h,i=a.styleSheets,j=[],l=i.length,n=Array(l);l--;)n[l]=i[l];for(;h=n.pop();)if(!h.disabled&&z.test(h.media)){try{f=h.imports,g=f.length}catch(o){g=0}for(l=0;g>l;l++)n.push(f[l]);try{j.push(h.cssText)}catch(o){}}j=m(j.reverse().join("")),e=k(a),d=c(a,j)}),i.attachEvent("onafterprint",function(){n(e),clearTimeout(g._removeSheetTimer),g._removeSheetTimer=setTimeout(b,500)}),a.printShived=!0,a)}var p,q,r="3.7.3",s=a.html5||{},t=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,u=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,v="_html5shiv",w=0,x={};!function(){try{var a=b.createElement("a");a.innerHTML="",p="hidden"in a,q=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof 
a.createElement}()}catch(c){p=!0,q=!0}}();var y={elements:s.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:r,shivCSS:s.shivCSS!==!1,supportsUnknownElements:q,shivMethods:s.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=y,j(b);var z=/^$|\b(?:all|print)\b/,A="html5shiv",B=!q&&function(){var c=b.documentElement;return!("undefined"==typeof b.namespaces||"undefined"==typeof b.parentWindow||"undefined"==typeof c.applyElement||"undefined"==typeof c.removeNode||"undefined"==typeof a.attachEvent)}();y.type+=" print",y.shivPrint=o,o(b),"object"==typeof module&&module.exports&&(module.exports=y)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/html5shiv.min.js b/_static/js/html5shiv.min.js new file mode 100644 index 000000000..cd1c674f5 --- /dev/null +++ b/_static/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var k,l,m="3.7.3-pre",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time 
video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/theme.js b/_static/js/theme.js new file mode 100644 index 000000000..1fddb6ee4 --- /dev/null +++ b/_static/js/theme.js @@ -0,0 +1 @@ +!function(n){var e={};function t(i){if(e[i])return e[i].exports;var o=e[i]={i:i,l:!1,exports:{}};return n[i].call(o.exports,o,o.exports,t),o.l=!0,o.exports}t.m=n,t.c=e,t.d=function(n,e,i){t.o(n,e)||Object.defineProperty(n,e,{enumerable:!0,get:i})},t.r=function(n){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(n,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(n,"__esModule",{value:!0})},t.t=function(n,e){if(1&e&&(n=t(n)),8&e)return n;if(4&e&&"object"==typeof n&&n&&n.__esModule)return n;var i=Object.create(null);if(t.r(i),Object.defineProperty(i,"default",{enumerable:!0,value:n}),2&e&&"string"!=typeof n)for(var o in n)t.d(i,o,function(e){return n[e]}.bind(null,o));return i},t.n=function(n){var e=n&&n.__esModule?function(){return n.default}:function(){return n};return t.d(e,"a",e),e},t.o=function(n,e){return Object.prototype.hasOwnProperty.call(n,e)},t.p="",t(t.s=0)}([function(n,e,t){t(1),n.exports=t(3)},function(n,e,t){(function(){var e="undefined"!=typeof window?window.jQuery:t(2);n.exports.ThemeNav={navBar:null,win:null,winScroll:!1,winResize:!1,linkScroll:!1,winPosition:0,winHeight:null,docHeight:null,isRunning:!1,enable:function(n){var t=this;void 0===n&&(n=!0),t.isRunning||(t.isRunning=!0,e((function(e){t.init(e),t.reset(),t.win.on("hashchange",t.reset),n&&t.win.on("scroll",(function(){t.linkScroll||t.winScroll||(t.winScroll=!0,requestAnimationFrame((function(){t.onScroll()})))})),t.win.on("resize",(function(){t.winResize||(t.winResize=!0,requestAnimationFrame((function(){t.onResize()})))})),t.onResize()})))},enableSticky:function(){this.enable(!0)},init:function(n){n(document);var e=this;this.navBar=n("div.wy-side-scroll:first"),this.win=n(window),n(document).on("click","[data-toggle='wy-nav-top']",(function(){n("[data-toggle='wy-nav-shift']").toggleClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift")})).on("click",".wy-menu-vertical .current ul li a",(function(){var t=n(this);n("[data-toggle='wy-nav-shift']").removeClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift"),e.toggleCurrent(t),e.hashChange()})).on("click","[data-toggle='rst-current-version']",(function(){n("[data-toggle='rst-versions']").toggleClass("shift-up")})),n("table.docutils:not(.field-list,.footnote,.citation)").wrap("
"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/_static/logo_main.png b/_static/logo_main.png new file mode 100644 index 000000000..cc86e84e7 Binary files /dev/null and b/_static/logo_main.png differ diff --git a/_static/minus.png b/_static/minus.png new file mode 100644 index 000000000..d96755fda Binary files /dev/null and b/_static/minus.png differ diff --git a/_static/plus.png b/_static/plus.png new file mode 100644 index 000000000..7107cec93 Binary files /dev/null and b/_static/plus.png differ diff --git a/_static/pygments.css b/_static/pygments.css new file mode 100644 index 000000000..84ab3030a --- /dev/null +++ b/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos 
.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f8f8f8; } +.highlight .c { color: #3D7B7B; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #008000; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ +.highlight .ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #9C6500 } /* Comment.Preproc */ +.highlight .cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #E40000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #008400 } /* Generic.Inserted */ +.highlight .go { color: #717171 } /* Generic.Output */ +.highlight .gp { color: #000080; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #008000; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #008000 } /* Keyword.Pseudo */ +.highlight .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #B00040 } /* Keyword.Type */ +.highlight .m { color: #666666 } /* Literal.Number */ +.highlight .s { color: #BA2121 } /* Literal.String */ +.highlight .na { color: #687822 } /* Name.Attribute */ +.highlight .nb { color: #008000 } /* Name.Builtin */ +.highlight .nc { color: #0000FF; font-weight: bold } /* Name.Class */ +.highlight .no { color: #880000 } /* Name.Constant */ +.highlight .nd { color: #AA22FF } /* Name.Decorator */ +.highlight .ni { color: #717171; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #CB3F38; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #0000FF } /* Name.Function */ +.highlight .nl { color: #767600 } /* Name.Label */ +.highlight .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #008000; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #19177C } /* Name.Variable */ +.highlight .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #666666 } /* Literal.Number.Bin */ +.highlight .mf { color: #666666 } /* Literal.Number.Float */ +.highlight .mh { color: #666666 } /* Literal.Number.Hex */ +.highlight .mi { color: #666666 } /* Literal.Number.Integer */ +.highlight .mo { color: #666666 } /* Literal.Number.Oct */ +.highlight .sa { color: #BA2121 } /* Literal.String.Affix */ +.highlight .sb { color: #BA2121 } /* Literal.String.Backtick */ +.highlight .sc { 
color: #BA2121 } /* Literal.String.Char */ +.highlight .dl { color: #BA2121 } /* Literal.String.Delimiter */ +.highlight .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #BA2121 } /* Literal.String.Double */ +.highlight .se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #BA2121 } /* Literal.String.Heredoc */ +.highlight .si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */ +.highlight .sx { color: #008000 } /* Literal.String.Other */ +.highlight .sr { color: #A45A77 } /* Literal.String.Regex */ +.highlight .s1 { color: #BA2121 } /* Literal.String.Single */ +.highlight .ss { color: #19177C } /* Literal.String.Symbol */ +.highlight .bp { color: #008000 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #0000FF } /* Name.Function.Magic */ +.highlight .vc { color: #19177C } /* Name.Variable.Class */ +.highlight .vg { color: #19177C } /* Name.Variable.Global */ +.highlight .vi { color: #19177C } /* Name.Variable.Instance */ +.highlight .vm { color: #19177C } /* Name.Variable.Magic */ +.highlight .il { color: #666666 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/_static/searchtools.js b/_static/searchtools.js new file mode 100644 index 000000000..97d56a74d --- /dev/null +++ b/_static/searchtools.js @@ -0,0 +1,566 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = docUrlRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = docUrlRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms) + ); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + `Search finished, found ${resultCount} page(s) matching the search query.` + ); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. + * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. 
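+ * For example (an illustrative note, following the default implementation
+ * below): splitQuery("zfs set checksum=sha256") yields
+ * ["zfs", "set", "checksum", "sha256"], since "=" and whitespace are neither
+ * letters, numbers, underscores, nor emoji characters.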
+ */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent !== undefined) return docContent.textContent; + console.warn( + "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." + ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the browser was quick! 
+ if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + /** + * execute search (requires search index to be loaded) + */ + query: (query) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + // array of [docname, title, anchor, descr, score, filename] + let results = []; + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + results.push([ + docNames[file], + titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id] of foundEntries) { + let score = Math.round(100 * queryLower.length / entry.length) + results.push([ + docNames[file], + titles[file], + id ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // lookup as object + objectTerms.forEach((term) => + results.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); + + // now sort the results by score (in opposite order of appearance, since the + // display function below uses pop() to retrieve items) and then + // alphabetically + results.sort((a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 
1 : -1; + }); + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + results = results.reverse(); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord) && !terms[word]) + arr.push({ 
files: terms[term], score: Scorer.partialTerm }); + }); + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord) && !titleTerms[word]) + arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); + }); + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) + fileMap.get(file).push(word); + else fileMap.set(file, [word]); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. + const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords) => { + const text = Search.htmlToText(htmlText); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js new file mode 100644 index 000000000..aae669d7e --- /dev/null +++ b/_static/sphinx_highlight.js @@ -0,0 +1,144 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. 
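+ * For example (an illustrative note, based on how highlightSearchWords below
+ * uses this helper): _highlightText(document.querySelector("div.body"), "zfs",
+ * "highlighted") wraps each case-insensitive occurrence of "zfs" in an element
+ * carrying the "highlighted" class.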
+ */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + parent.insertBefore( + span, + parent.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. + */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? 
divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(SphinxHighlight.highlightSearchWords); +_ready(SphinxHighlight.initEscapeListener); diff --git a/genindex.html b/genindex.html new file mode 100644 index 000000000..b362119d2 --- /dev/null +++ b/genindex.html @@ -0,0 +1,117 @@ + + + + + + Index — OpenZFS documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + +

Index

+ +
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 000000000..fa77491e9 --- /dev/null +++ b/index.html @@ -0,0 +1,236 @@ + + + + + + + OpenZFS Documentation — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

OpenZFS Documentation

+

Welcome to the OpenZFS Documentation. This resource provides documentation for +users and developers working with (or contributing to) the OpenZFS +project. New users or system administrators should refer to the +documentation for their favorite platform to get started.

+ + + + + + + + + + + + + +

Getting Started

Project and +Community

Developer +Resources

How to get started +with OpenZFS on your +favorite platform

About the project +and how to +contribute

Technical +documentation +discussing the +OpenZFS +implementation

+
+

Table of Contents:

+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/1/arcstat.1.html b/man/1/arcstat.1.html new file mode 100644 index 000000000..2539b53e2 --- /dev/null +++ b/man/1/arcstat.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/cstyle.1.html b/man/1/cstyle.1.html new file mode 100644 index 000000000..f0acf936c --- /dev/null +++ b/man/1/cstyle.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/index.html b/man/1/index.html new file mode 100644 index 000000000..9154e3af3 --- /dev/null +++ b/man/1/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/raidz_test.1.html b/man/1/raidz_test.1.html new file mode 100644 index 000000000..b2cb6d59a --- /dev/null +++ b/man/1/raidz_test.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/test-runner.1.html b/man/1/test-runner.1.html new file mode 100644 index 000000000..57e7fbf37 --- /dev/null +++ b/man/1/test-runner.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/zhack.1.html b/man/1/zhack.1.html new file mode 100644 index 000000000..184102ada --- /dev/null +++ b/man/1/zhack.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/ztest.1.html b/man/1/ztest.1.html new file mode 100644 index 000000000..ae0758377 --- /dev/null +++ b/man/1/ztest.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/1/zvol_wait.1.html b/man/1/zvol_wait.1.html new file mode 100644 index 000000000..490e97d5a --- /dev/null +++ b/man/1/zvol_wait.1.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/4/index.html b/man/4/index.html new file mode 100644 index 000000000..9c72daa96 --- /dev/null +++ b/man/4/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/4/spl.4.html b/man/4/spl.4.html new file mode 100644 index 000000000..f939a3465 --- /dev/null +++ b/man/4/spl.4.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/4/zfs.4.html b/man/4/zfs.4.html new file mode 100644 index 000000000..225979924 --- /dev/null +++ b/man/4/zfs.4.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/5/index.html b/man/5/index.html new file mode 100644 index 000000000..d885643bc --- /dev/null +++ b/man/5/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/5/vdev_id.conf.5.html b/man/5/vdev_id.conf.5.html new file mode 100644 index 000000000..4fc70b8cc --- /dev/null +++ b/man/5/vdev_id.conf.5.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/dracut.zfs.7.html b/man/7/dracut.zfs.7.html new file mode 100644 index 000000000..13c1d2c7e --- /dev/null +++ b/man/7/dracut.zfs.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/index.html b/man/7/index.html new file mode 100644 index 000000000..87c0d7102 --- /dev/null +++ b/man/7/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/vdevprops.7.html b/man/7/vdevprops.7.html new file mode 100644 index 000000000..8a273c9e0 --- /dev/null +++ b/man/7/vdevprops.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zfsconcepts.7.html b/man/7/zfsconcepts.7.html new file mode 100644 index 000000000..e88177394 --- /dev/null +++ b/man/7/zfsconcepts.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zfsprops.7.html b/man/7/zfsprops.7.html new file mode 100644 index 000000000..cd36490a2 --- /dev/null +++ b/man/7/zfsprops.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zpool-features.7.html b/man/7/zpool-features.7.html new file mode 100644 index 000000000..02540b17d --- /dev/null +++ b/man/7/zpool-features.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zpoolconcepts.7.html b/man/7/zpoolconcepts.7.html new file mode 100644 index 000000000..937d64a76 --- /dev/null +++ b/man/7/zpoolconcepts.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/7/zpoolprops.7.html b/man/7/zpoolprops.7.html new file mode 100644 index 000000000..a2e6861c4 --- /dev/null +++ b/man/7/zpoolprops.7.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/fsck.zfs.8.html b/man/8/fsck.zfs.8.html new file mode 100644 index 000000000..8e959547d --- /dev/null +++ b/man/8/fsck.zfs.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/index.html b/man/8/index.html new file mode 100644 index 000000000..fc203fc62 --- /dev/null +++ b/man/8/index.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/mount.zfs.8.html b/man/8/mount.zfs.8.html new file mode 100644 index 000000000..7aeb8c5a4 --- /dev/null +++ b/man/8/mount.zfs.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/vdev_id.8.html b/man/8/vdev_id.8.html new file mode 100644 index 000000000..25fccebb8 --- /dev/null +++ b/man/8/vdev_id.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zdb.8.html b/man/8/zdb.8.html new file mode 100644 index 000000000..54567512b --- /dev/null +++ b/man/8/zdb.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zed.8.html b/man/8/zed.8.html new file mode 100644 index 000000000..e8cc4f05e --- /dev/null +++ b/man/8/zed.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-allow.8.html b/man/8/zfs-allow.8.html new file mode 100644 index 000000000..35b403186 --- /dev/null +++ b/man/8/zfs-allow.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-bookmark.8.html b/man/8/zfs-bookmark.8.html new file mode 100644 index 000000000..80642d0e6 --- /dev/null +++ b/man/8/zfs-bookmark.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-change-key.8.html b/man/8/zfs-change-key.8.html new file mode 100644 index 000000000..5ef1b1842 --- /dev/null +++ b/man/8/zfs-change-key.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-clone.8.html b/man/8/zfs-clone.8.html new file mode 100644 index 000000000..7136da371 --- /dev/null +++ b/man/8/zfs-clone.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-create.8.html b/man/8/zfs-create.8.html new file mode 100644 index 000000000..4c3f4c2ee --- /dev/null +++ b/man/8/zfs-create.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-destroy.8.html b/man/8/zfs-destroy.8.html new file mode 100644 index 000000000..f18354ab7 --- /dev/null +++ b/man/8/zfs-destroy.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-diff.8.html b/man/8/zfs-diff.8.html new file mode 100644 index 000000000..21f4ad6ab --- /dev/null +++ b/man/8/zfs-diff.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-get.8.html b/man/8/zfs-get.8.html new file mode 100644 index 000000000..7655d9f3d --- /dev/null +++ b/man/8/zfs-get.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-groupspace.8.html b/man/8/zfs-groupspace.8.html new file mode 100644 index 000000000..17ae56091 --- /dev/null +++ b/man/8/zfs-groupspace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-hold.8.html b/man/8/zfs-hold.8.html new file mode 100644 index 000000000..ff1d4c571 --- /dev/null +++ b/man/8/zfs-hold.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-inherit.8.html b/man/8/zfs-inherit.8.html new file mode 100644 index 000000000..1ac647217 --- /dev/null +++ b/man/8/zfs-inherit.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-jail.8.html b/man/8/zfs-jail.8.html new file mode 100644 index 000000000..b627aac63 --- /dev/null +++ b/man/8/zfs-jail.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-list.8.html b/man/8/zfs-list.8.html new file mode 100644 index 000000000..1d7eefab3 --- /dev/null +++ b/man/8/zfs-list.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-load-key.8.html b/man/8/zfs-load-key.8.html new file mode 100644 index 000000000..3e2606394 --- /dev/null +++ b/man/8/zfs-load-key.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-mount-generator.8.html b/man/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..a5c87620a --- /dev/null +++ b/man/8/zfs-mount-generator.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-mount.8.html b/man/8/zfs-mount.8.html new file mode 100644 index 000000000..cbb5a6cf6 --- /dev/null +++ b/man/8/zfs-mount.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-program.8.html b/man/8/zfs-program.8.html new file mode 100644 index 000000000..46d561274 --- /dev/null +++ b/man/8/zfs-program.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-project.8.html b/man/8/zfs-project.8.html new file mode 100644 index 000000000..7c51c4b38 --- /dev/null +++ b/man/8/zfs-project.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-projectspace.8.html b/man/8/zfs-projectspace.8.html new file mode 100644 index 000000000..4c8edeb1e --- /dev/null +++ b/man/8/zfs-projectspace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-promote.8.html b/man/8/zfs-promote.8.html new file mode 100644 index 000000000..e2319cd4d --- /dev/null +++ b/man/8/zfs-promote.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-receive.8.html b/man/8/zfs-receive.8.html new file mode 100644 index 000000000..48062c2a1 --- /dev/null +++ b/man/8/zfs-receive.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-recv.8.html b/man/8/zfs-recv.8.html new file mode 100644 index 000000000..f17ac89ea --- /dev/null +++ b/man/8/zfs-recv.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-redact.8.html b/man/8/zfs-redact.8.html new file mode 100644 index 000000000..a56a1c890 --- /dev/null +++ b/man/8/zfs-redact.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-release.8.html b/man/8/zfs-release.8.html new file mode 100644 index 000000000..be788f5c7 --- /dev/null +++ b/man/8/zfs-release.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-rename.8.html b/man/8/zfs-rename.8.html new file mode 100644 index 000000000..f34a981f9 --- /dev/null +++ b/man/8/zfs-rename.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-rollback.8.html b/man/8/zfs-rollback.8.html new file mode 100644 index 000000000..f3c24e6a5 --- /dev/null +++ b/man/8/zfs-rollback.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-send.8.html b/man/8/zfs-send.8.html new file mode 100644 index 000000000..b9b019a3f --- /dev/null +++ b/man/8/zfs-send.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-set.8.html b/man/8/zfs-set.8.html new file mode 100644 index 000000000..d57666e11 --- /dev/null +++ b/man/8/zfs-set.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-share.8.html b/man/8/zfs-share.8.html new file mode 100644 index 000000000..8fa24ff49 --- /dev/null +++ b/man/8/zfs-share.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-snapshot.8.html b/man/8/zfs-snapshot.8.html new file mode 100644 index 000000000..1007b2c19 --- /dev/null +++ b/man/8/zfs-snapshot.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unallow.8.html b/man/8/zfs-unallow.8.html new file mode 100644 index 000000000..732097b36 --- /dev/null +++ b/man/8/zfs-unallow.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unjail.8.html b/man/8/zfs-unjail.8.html new file mode 100644 index 000000000..9f4351cfd --- /dev/null +++ b/man/8/zfs-unjail.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unload-key.8.html b/man/8/zfs-unload-key.8.html new file mode 100644 index 000000000..05094fa35 --- /dev/null +++ b/man/8/zfs-unload-key.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unmount.8.html b/man/8/zfs-unmount.8.html new file mode 100644 index 000000000..2dad6d881 --- /dev/null +++ b/man/8/zfs-unmount.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-unzone.8.html b/man/8/zfs-unzone.8.html new file mode 100644 index 000000000..fbbc20766 --- /dev/null +++ b/man/8/zfs-unzone.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-upgrade.8.html b/man/8/zfs-upgrade.8.html new file mode 100644 index 000000000..844cd7e83 --- /dev/null +++ b/man/8/zfs-upgrade.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-userspace.8.html b/man/8/zfs-userspace.8.html new file mode 100644 index 000000000..dfef04ee1 --- /dev/null +++ b/man/8/zfs-userspace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-wait.8.html b/man/8/zfs-wait.8.html new file mode 100644 index 000000000..5403c8afc --- /dev/null +++ b/man/8/zfs-wait.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs-zone.8.html b/man/8/zfs-zone.8.html new file mode 100644 index 000000000..f96d064e5 --- /dev/null +++ b/man/8/zfs-zone.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs.8.html b/man/8/zfs.8.html new file mode 100644 index 000000000..a2a1af8e8 --- /dev/null +++ b/man/8/zfs.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs_ids_to_path.8.html b/man/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..5de973af5 --- /dev/null +++ b/man/8/zfs_ids_to_path.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zfs_prepare_disk.8.html b/man/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..a7d9658b9 --- /dev/null +++ b/man/8/zfs_prepare_disk.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zgenhostid.8.html b/man/8/zgenhostid.8.html new file mode 100644 index 000000000..bbf3b5f65 --- /dev/null +++ b/man/8/zgenhostid.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zinject.8.html b/man/8/zinject.8.html new file mode 100644 index 000000000..5a8acfda3 --- /dev/null +++ b/man/8/zinject.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-add.8.html b/man/8/zpool-add.8.html new file mode 100644 index 000000000..e6263a5fb --- /dev/null +++ b/man/8/zpool-add.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-attach.8.html b/man/8/zpool-attach.8.html new file mode 100644 index 000000000..ae8cb885f --- /dev/null +++ b/man/8/zpool-attach.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-checkpoint.8.html b/man/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..ad9da4b3b --- /dev/null +++ b/man/8/zpool-checkpoint.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-clear.8.html b/man/8/zpool-clear.8.html new file mode 100644 index 000000000..ba74ee942 --- /dev/null +++ b/man/8/zpool-clear.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-create.8.html b/man/8/zpool-create.8.html new file mode 100644 index 000000000..48fc5062c --- /dev/null +++ b/man/8/zpool-create.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-destroy.8.html b/man/8/zpool-destroy.8.html new file mode 100644 index 000000000..b8ae44e25 --- /dev/null +++ b/man/8/zpool-destroy.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-detach.8.html b/man/8/zpool-detach.8.html new file mode 100644 index 000000000..01eb37fad --- /dev/null +++ b/man/8/zpool-detach.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-events.8.html b/man/8/zpool-events.8.html new file mode 100644 index 000000000..2d2019c77 --- /dev/null +++ b/man/8/zpool-events.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-export.8.html b/man/8/zpool-export.8.html new file mode 100644 index 000000000..8c905dad3 --- /dev/null +++ b/man/8/zpool-export.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-get.8.html b/man/8/zpool-get.8.html new file mode 100644 index 000000000..d88445783 --- /dev/null +++ b/man/8/zpool-get.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-history.8.html b/man/8/zpool-history.8.html new file mode 100644 index 000000000..a9ac60933 --- /dev/null +++ b/man/8/zpool-history.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-import.8.html b/man/8/zpool-import.8.html new file mode 100644 index 000000000..d8d7b6341 --- /dev/null +++ b/man/8/zpool-import.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-initialize.8.html b/man/8/zpool-initialize.8.html new file mode 100644 index 000000000..069c58d04 --- /dev/null +++ b/man/8/zpool-initialize.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-iostat.8.html b/man/8/zpool-iostat.8.html new file mode 100644 index 000000000..fc0369b59 --- /dev/null +++ b/man/8/zpool-iostat.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-labelclear.8.html b/man/8/zpool-labelclear.8.html new file mode 100644 index 000000000..8f70028c2 --- /dev/null +++ b/man/8/zpool-labelclear.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-list.8.html b/man/8/zpool-list.8.html new file mode 100644 index 000000000..7fa9bc2d2 --- /dev/null +++ b/man/8/zpool-list.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-offline.8.html b/man/8/zpool-offline.8.html new file mode 100644 index 000000000..2af57e581 --- /dev/null +++ b/man/8/zpool-offline.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-online.8.html b/man/8/zpool-online.8.html new file mode 100644 index 000000000..18c7f787f --- /dev/null +++ b/man/8/zpool-online.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-reguid.8.html b/man/8/zpool-reguid.8.html new file mode 100644 index 000000000..c1afa1145 --- /dev/null +++ b/man/8/zpool-reguid.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-remove.8.html b/man/8/zpool-remove.8.html new file mode 100644 index 000000000..fb7abab3d --- /dev/null +++ b/man/8/zpool-remove.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-reopen.8.html b/man/8/zpool-reopen.8.html new file mode 100644 index 000000000..0a70ecf71 --- /dev/null +++ b/man/8/zpool-reopen.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-replace.8.html b/man/8/zpool-replace.8.html new file mode 100644 index 000000000..ef59f0fc8 --- /dev/null +++ b/man/8/zpool-replace.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-resilver.8.html b/man/8/zpool-resilver.8.html new file mode 100644 index 000000000..bc4b40297 --- /dev/null +++ b/man/8/zpool-resilver.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-scrub.8.html b/man/8/zpool-scrub.8.html new file mode 100644 index 000000000..7cb99faf9 --- /dev/null +++ b/man/8/zpool-scrub.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-set.8.html b/man/8/zpool-set.8.html new file mode 100644 index 000000000..677b54388 --- /dev/null +++ b/man/8/zpool-set.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-split.8.html b/man/8/zpool-split.8.html new file mode 100644 index 000000000..716ea93ee --- /dev/null +++ b/man/8/zpool-split.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-status.8.html b/man/8/zpool-status.8.html new file mode 100644 index 000000000..1d6fd2346 --- /dev/null +++ b/man/8/zpool-status.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-sync.8.html b/man/8/zpool-sync.8.html new file mode 100644 index 000000000..e17241671 --- /dev/null +++ b/man/8/zpool-sync.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-trim.8.html b/man/8/zpool-trim.8.html new file mode 100644 index 000000000..0ba154699 --- /dev/null +++ b/man/8/zpool-trim.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-upgrade.8.html b/man/8/zpool-upgrade.8.html new file mode 100644 index 000000000..480605f57 --- /dev/null +++ b/man/8/zpool-upgrade.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool-wait.8.html b/man/8/zpool-wait.8.html new file mode 100644 index 000000000..0e0cde4ed --- /dev/null +++ b/man/8/zpool-wait.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool.8.html b/man/8/zpool.8.html new file mode 100644 index 000000000..3dd2fcad6 --- /dev/null +++ b/man/8/zpool.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zpool_influxdb.8.html b/man/8/zpool_influxdb.8.html new file mode 100644 index 000000000..c1fe7c6c3 --- /dev/null +++ b/man/8/zpool_influxdb.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zstream.8.html b/man/8/zstream.8.html new file mode 100644 index 000000000..6ee702beb --- /dev/null +++ b/man/8/zstream.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/8/zstreamdump.8.html b/man/8/zstreamdump.8.html new file mode 100644 index 000000000..7002d45c9 --- /dev/null +++ b/man/8/zstreamdump.8.html @@ -0,0 +1,15 @@ + + + + + + + +

You should have been redirected.

+ If not, click here to continue. + + diff --git a/man/index.html b/man/index.html new file mode 100644 index 000000000..4431bf420 --- /dev/null +++ b/man/index.html @@ -0,0 +1,140 @@ + + + + + + + Man Pages — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Man Pages

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/arcstat.1.html b/man/master/1/arcstat.1.html new file mode 100644 index 000000000..1d4726301 --- /dev/null +++ b/man/master/1/arcstat.1.html @@ -0,0 +1,411 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstatreport + ZFS ARC and L2ARC statistics

+
+
+

+ + + + + +
arcstat[-havxp] [-f + field[,field…]] + [-o file] + [-s string] + [interval] [count]
+
+
+

+

arcstat prints various ZFS ARC and L2ARC + statistics in vmstat-like fashion:

+
+
+
+
ARC target size
+
+
Demand hit percentage
+
+
Demand I/O hit percentage
+
+
Demand miss percentage
+
+
Demand data hit percentage
+
+
Demand data I/O hit percentage
+
+
Demand data miss percentage
+
+
Demand metadata hit percentage
+
+
Demand metadata I/O hit percentage
+
+
Demand metadata miss percentage
+
+
MFU list hits per second
+
+
Metadata hit percentage
+
+
Metadata I/O hit percentage
+
+
Metadata miss percentage
+
+
MRU list hits per second
+
+
Prefetch hits percentage
+
+
Prefetch I/O hits percentage
+
+
Prefetch miss percentage
+
+
Prefetch data hits percentage
+
+
Prefetch data I/O hits percentage
+
+
Prefetch data miss percentage
+
+
Prefetch metadata hits percentage
+
+
Prefetch metadata I/O hits percentage
+
+
Prefetch metadata miss percentage
+
+
Demand hits per second
+
+
Demand I/O hits per second
+
+
Demand misses per second
+
+
Demand data hits per second
+
+
Demand data I/O hits per second
+
+
Demand data misses per second
+
+
Demand metadata hits per second
+
+
Demand metadata I/O hits per second
+
+
Demand metadata misses per second
+
+
ARC hit percentage
+
+
ARC hits per second
+
+
ARC I/O hits percentage
+
+
ARC I/O hits per second
+
+
MFU ghost list hits per second
+
+
Metadata hits per second
+
+
Metadata I/O hits per second
+
+
ARC misses per second
+
+
Metadata misses per second
+
+
MRU ghost list hits per second
+
+
Prefetch hits per second
+
+
Prefetch I/O hits per second
+
+
Prefetch misses per second
+
+
Prefetch data hits per second
+
+
Prefetch data I/O hits per second
+
+
Prefetch data misses per second
+
+
Prefetch metadata hits per second
+
+
Prefetch metadata I/O hits per second
+
+
Prefetch metadata misses per second
+
+
Total ARC accesses per second
+
+
Current time
+
+
ARC size
+
+
Alias for size
+
+
Uncached list hits per second
+
+
Demand accesses per second
+
+
Demand data accesses per second
+
+
Demand metadata accesses per second
+
+
evict_skip per second
+
+
ARC miss percentage
+
+
Metadata accesses per second
+
+
Prefetch accesses per second
+
+
Prefetch data accesses per second
+
+
Prefetch metadata accesses per second
+
+
L2ARC access hit percentage
+
+
L2ARC hits per second
+
+
L2ARC misses per second
+
+
Total L2ARC accesses per second
+
+
L2ARC prefetch allocated size per second
+
+
L2ARC prefetch allocated size percentage
+
+
L2ARC MFU allocated size per second
+
+
L2ARC MFU allocated size percentage
+
+
L2ARC MRU allocated size per second
+
+
L2ARC MRU allocated size percentage
+
+
L2ARC data (buf content) allocated size per second
+
+
L2ARC data (buf content) allocated size percentage
+
+
L2ARC metadata (buf content) allocated size per second
+
+
L2ARC metadata (buf content) allocated size percentage
+
+
Size of the L2ARC
+
+
mutex_miss per second
+
+
Bytes read per second from the L2ARC
+
+
L2ARC access miss percentage
+
+
Actual (compressed) size of the L2ARC
+
+
ARC grow disabled
+
+
ARC reclaim needed
+
+
The ARC's idea of how much free memory there is, which includes evictable + memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually + more instructive to observe than free.
+
+
The ARC's idea of how much free memory is available to it, which is a bit + less than free. May temporarily be negative, in which + case the ARC will reduce the target size c.
+
+
+
+
+

+
+
+
Print all possible stats.
+
+
Display only specific fields. See + DESCRIPTION for supported + statistics.
+
+
Display help message.
+
+
Report statistics to a file instead of the standard output.
+
+
Disable auto-scaling of numerical fields (for raw, machine-parsable + values).
+
+
Display data with a specified separator (default: 2 spaces).
+
+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+
+
Show field headers and definitions
+
+
+
+

+

The following operands are supported:

+
+
+
interval
+
Specify the sampling interval in seconds.
+
count
+
Display only count reports.
+
+
+
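As a minimal sketch of how the options and operands above combine (the field selection, log path, and sampling values here are illustrative assumptions, not recommendations):
# arcstat 1 10
# arcstat -f time,read,dread,pread,mtxmis -o /tmp/arcstat.log 5
The first command prints the default fields once per second for ten reports; the second samples every 5 seconds until interrupted, limits output to fields drawn from the -x set listed above, and writes the report to a file via -o.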
+
+ + + + + +
December 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/cstyle.1.html b/man/master/1/cstyle.1.html new file mode 100644 index 000000000..6cb26ce02 --- /dev/null +++ b/man/master/1/cstyle.1.html @@ -0,0 +1,293 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstylecheck for + some common stylistic errors in C source files

+
+
+

+ + + + + +
cstyle[-chpvCP] + [file]…
+
+
+

+

cstyle inspects C source files (*.c and + *.h) for common stylistic errors. It attempts to check for the cstyle + documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. + Note that there is much in that document that + + be checked for; just because your code is + cstyle-clean does not mean that you've followed + Sun's C style. + .

+
+
+

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented + + four spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see + , below.
+
+
Performs some of the more picky checks. Includes ANSI + + and + + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current + continuation block.
+
+
Check for use of non-POSIX types. Historically, types like + + and + + were used, but they are now deprecated in favor of the POSIX types + , + , + etc. This detects any use of the deprecated types. Used as part of the + putback checks.
+
+
Also print GitHub-Actions-style ::error + output.
+
+
+
+
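As a rough usage sketch (the file paths are hypothetical), the checks above might be combined as:
% cstyle -Pv module/zfs/arc.c
% cstyle -c lib/libzfs/*.c
The first invocation adds the non-POSIX-type check with verbose output; the second enables the continuation-line checks described below.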

+
+
+
If set and nonempty, equivalent to -g.
+
+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parentheses, etc. + over multiple lines. It does have some limitations:

+
    +
  1. Preprocessor macros which cause unmatched parentheses will + confuse the checker for that line. To fix this, you'll need to make sure + that each branch of the + statement has + balanced parentheses.
  2. +
  3. Some cpp(1) macros do not require + ;s after them. Any such macros + be ALL_CAPS; + any lower case letters will cause bad output. +

    The bad output will generally be corrected after the + next ;, + , + or + .

    +
  4. +
+Some continuation error messages deserve some additional explanation: +
+
+
A multi-line statement which is not broken at statement boundaries. For + example: +
+
if (this_is_a_long_variable == another_variable) a =
+    b + c;
+
+

Will trigger this error. Instead, do:

+
+
if (this_is_a_long_variable == another_variable)
+    a = b + c;
+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example: +
+
while (do_something(&x) == 0);
+
+

Will trigger this error. Instead, do:

+
+
while (do_something(&x) == 0)
+    ;
+
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/index.html b/man/master/1/index.html new file mode 100644 index 000000000..0a56beefb --- /dev/null +++ b/man/master/1/index.html @@ -0,0 +1,159 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/raidz_test.1.html b/man/master/1/raidz_test.1.html new file mode 100644 index 000000000..b051a0406 --- /dev/null +++ b/man/master/1/raidz_test.1.html @@ -0,0 +1,254 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+
+

+

raidz_testraidz + implementation verification and benchmarking tool

+
+
+

+ + + + + +
raidz_test[-StBevTD] [-a + ashift] [-o + zio_off_shift] [-d + raidz_data_disks] [-s + zio_size_shift] [-r + reflow_offset]
+
+
+

+

The purpose of this tool is to run all supported raidz + implementations and verify the results of all methods. It also contains a + parameter sweep option where all parameters affecting a RAID-Z block are + verified (like ashift size, data offset, data size, etc.). The tool also + supports a benchmarking mode using the -B + option.

+
+
+

+
+
+
Print a help summary.
+
+ ashift (default: + )
+
Ashift value.
+
+ zio_off_shift (default: + )
+
ZIO offset for each raidz block. The offset's value is + .
+
+ raidz_data_disks (default: + )
+
Number of raidz data disks to use. Additional disks will be used for + parity.
+
+ zio_size_shift (default: + )
+
Size of data for raidz block. The real size is + .
+
+ reflow_offset (default: + )
+
Set raidz expansion offset. The expanded raidz map allocation function + will produce different map configurations depending on this value.
+
(weep)
+
Sweep parameter space while verifying the raidz implementations. This + option will exhaust almost all of the valid values for the + -aods options. Runtime using this option will be + long.
+
(imeout)
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
(enchmark)
+
All implementations are benchmarked using increasing per disk data size. + Results are given as throughput per disk, measured in MiB/s.
+
(xpansion)
+
Use expanded raidz map allocation function.
+
(erbose)
+
Increase verbosity.
+
(est + the test)
+
Debugging option: fail all tests. This is to check if tests would properly + verify bit-exactness.
+
(ebug)
+
Debugging option: attach gdb(1) when + + or + + are received.
+
+
+
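For illustration only (the run lengths are arbitrary assumptions), the modes described above could be exercised as:
# raidz_test -B
# raidz_test -S -t 600 -v
The first command benchmarks the supported raidz implementations; the second sweeps the parameter space for roughly 600 seconds of wall time with verbose progress output.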
+

+

ztest(1)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/test-runner.1.html b/man/master/1/test-runner.1.html new file mode 100644 index 000000000..034e3b18d --- /dev/null +++ b/man/master/1/test-runner.1.html @@ -0,0 +1,437 @@ + + + + + + + test-runner.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

test-runner.1

+
+ + + + + +
RUN(1)General Commands ManualRUN(1)
+
+
+

+

runfind, + execute, and log the results of tests

+
+
+

+ + + + + +
run[-dgq] [-o + outputdir] [-pP + script] [-t + -seconds] [-uxX + username] + pathname
+

+
+ + + + + +
run-w runfile + [-gq] [-o + outputdir] [-pP + script] [-t + -seconds] [-uxX + username] + pathname
+

+
+ + + + + +
run-c runfile + [-dq]
+

+
+ + + + + +
run[-h]
+
+
+

+

run command has three basic modes of + operation. With neither -c nor + -w, run processes the + arguments provided on the command line, adding them to the list for this + run. If a specified pathname is an executable file, it + is added as a test. If a specified pathname is a + directory, the behavior depends upon the presence of + -g. If -g is specified, the + directory is treated as a test group. See the section on + below. Without -g, + run simply descends into the directory looking for + executable files. The tests are then executed, and the results are + logged.

+

With -w, run finds + tests in the manner described above. Rather than executing the tests and + logging the results, the test configuration is stored in a + runfile, which can be used in future invocations, or + edited to modify which tests are executed and which options are applied. + Options included on the command line with -w become + defaults in the runfile.

+

With -c, run + parses a runfile, which can specify a series of tests + and test groups to be executed. The tests are then executed, and the results + are logged.

+
+

+

A test group is composed of a set of executable files, all of + which exist in one directory. The options specified on the command line or + in a runfile apply to individual tests in the group. + The exception is options pertaining to pre and post scripts, which act on + all tests as a group. Rather than running before and after each test, these + scripts are run only once each at the start and end of the test group.

+
+
+

+

The specified tests run serially, and are typically assigned + results according to exit values. Tests that exit zero and non-zero are + marked + and + , + respectively. When a pre script fails for a test group, only the post script + is executed, and the remaining tests are marked + . + Any test that exceeds its timeout is terminated, and + marked + .

+

By default, tests are executed with the credentials of the + run script. Executing tests with other credentials + is done via sudo(1m), which must be configured to allow + execution without prompting for a password. Environment variables from the + calling shell are available to individual tests. During test execution, the + working directory is changed to outputdir.

+
+
+

+

By default, run will print one line on + standard output at the conclusion of each test indicating the test name, + result and elapsed time. Additionally, for each invocation of + run, a directory is created using the ISO 8601 date + format. Within this directory is a file named + + containing all the test output with timestamps, and a directory for each + test. Within the test directories, there is one file each for standard + output, standard error and merged output. The default location for the + outputdir is + /var/tmp/test_results.

+
+
+

+

The runfile is an INI-style configuration + file that describes a test run. The file has one section named + , + which contains configuration option names and their values in + + = value format. The values in + this section apply to all the subsequent sections, unless they are also + specified there, in which case the default is overridden. The remaining + section names are the absolute pathnames of files and directories, + describing tests and test groups respectively. The legal option names + are:

+
+
+ = pathname
+
The name of the directory that holds test logs.
+
+ = script
+
Run script prior to the test or test group.
+
+ = username
+
Execute the pre script as username.
+
+ = script
+
Run script after the test or test group.
+
+ = username
+
Execute the post script as username.
+
+ = + True|
+
If True, only the results summary is printed to standard + out.
+
+ = ['filename', + ]
+
Specify a list of filenames for this test group. + Only the basename of the absolute path is required. This option is only + valid for test groups, and each filename must be + single quoted.
+
+ = n
+
A timeout value of n seconds.
+
+ = username
+
Execute the test or test group as username.
+
+
+
+
+

+
+
+ runfile
+
Specify a runfile to be consumed by the run + command.
+
+
Dry run mode. Execute no tests, but print a description of each test that + would have been run.
+
+
Enable kmemleak reporting (Linux only)
+
+
Create test groups from any directories found while searching for + tests.
+
+ outputdir
+
Specify the directory in which to write test results.
+
+ script
+
Run script prior to any test or test group.
+
+ script
+
Run script after any test or test group.
+
+
Print only the results summary to the standard output.
+
+ script
+
Run script as a failsafe after any test is + killed.
+
+ username
+
Execute the failsafe script as username.
+
+ n
+
Specify a timeout value of n seconds per test.
+
+ username
+
Execute tests or test groups as username.
+
+ runfile
+
Specify the name of the runfile to create.
+
+ username
+
Execute the pre script as username.
+
+ username
+
Execute the post script as username.
+
+
+
+

+
+
: Running ad-hoc tests.
+
This example demonstrates the simplest invocation of + run. +
+
% run my-tests
+Test: /home/jkennedy/my-tests/test-01                    [00:02] [PASS]
+Test: /home/jkennedy/my-tests/test-02                    [00:04] [PASS]
+Test: /home/jkennedy/my-tests/test-03                    [00:01] [PASS]
+
+Results Summary
+PASS       3
+
+Running Time:   00:00:07
+Percent passed: 100.0%
+Log directory:  /var/tmp/test_results/20120923T180654
+
+
+
: Creating a runfile + for future use.
+
This example demonstrates creating a runfile with + non-default options. +
+
% run -p setup -x root -g -w new-tests.run new-tests
+% cat new-tests.run
+[DEFAULT]
+pre = setup
+post_user =
+quiet = False
+user =
+timeout = 60
+post =
+pre_user = root
+outputdir = /var/tmp/test_results
+
+[/home/jkennedy/new-tests]
+tests = ['test-01', 'test-02', 'test-03']
+
+
+
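A hypothetical follow-up (assuming the runfile created in the example above) would then execute exactly the tests it lists:
% run -c new-tests.run
% run -q -c new-tests.run
The -q variant prints only the results summary to the standard output.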
+
+
+

+

sudo(1m)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/zhack.1.html b/man/master/1/zhack.1.html new file mode 100644 index 000000000..2c769f166 --- /dev/null +++ b/man/master/1/zhack.1.html @@ -0,0 +1,297 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+
+

+

zhacklibzpool + debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+
+
+ + + + + +
zhackfeature stat pool
+
+
List feature flags.
+
+ + + + + +
zhackfeature enable [-d + description] [-r] + pool guid
+
+
Add a new feature to pool that is uniquely + identified by guid, which is specified in the same + form as a zfs(8) user property. +

The description is a short human + readable explanation of the new feature.

+

The -r flag indicates that + pool can be safely opened in read-only mode by a + system that does not understand the guid + feature.

+
+
+ + + + + +
zhackfeature ref + [-d|-m] + pool guid
+
+
Increment the reference count of the guid feature in + pool. +

The -d flag decrements the reference + count of the guid feature in + pool instead.

+

The -m flag indicates that the + guid feature is now required to read the pool + MOS.

+
+
+ + + + + +
zhacklabel repair [-cu] + device
+
+
Repair labels of a specified device according to + options. +

Flags may be combined to do their functions + simultaneously.

+

The -c flag repairs corrupted label + checksums

+

The -u flag restores the label on a + detached device

+

Example:

+
+ + + + + +
zhack label repair + -cu device +
+ Fix checksums and undetach a device
+
+
+
+
+

+

The following can be passed to all zhack + invocations before any subcommand:

+
+
+ cachefile
+
Read pool configuration from the + cachefile, which is + /etc/zfs/zpool.cache by default.
+
+ dir
+
Search for pool members in + dir. Can be specified more than once.
+
+
+
+

+
+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
+# zhack feature enable -d 'Predict future disk failures.' tank com.example:clairvoyance
+# zhack feature ref tank com.example:clairvoyance
+
+
+
+

+

ztest(1), zpool-features(7), + zfs(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/ztest.1.html b/man/master/1/ztest.1.html new file mode 100644 index 000000000..8fefbdd61 --- /dev/null +++ b/man/master/1/ztest.1.html @@ -0,0 +1,402 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+
+

+

ztestwas + written by the ZFS Developers as a ZFS unit test

+
+
+

+ + + + + +
ztest[-VEG] [-v + vdevs] [-s + size_of_each_vdev] [-a + alignment_shift] [-m + mirror_copies] [-r + raidz_disks/draid_disks] [-R + raid_parity] [-K + raid_kind] [-D + draid_data] [-S + draid_spares] [-C + vdev_class_state] [-d + datasets] [-t + threads] [-g + gang_block_threshold] [-i + initialize_pool_i_times] [-k + kill_percentage] [-p + pool_name] [-T + time] [-z + zil_failure_rate]
+
+ + + + + +
ztest-X [-VG] + [-s size_of_each_vdev] + [-a alignment_shift] + [-r raidz_disks] + [-R raid_parity] + [-d datasets] + [-t threads]
+
+
+

+

ztest was written by the ZFS Developers as + a ZFS unit test. The tool was developed in tandem with the ZFS functionality + and was executed nightly as one of the many regression tests against the + daily build. As features were added to ZFS, unit tests were also added to + ztest. In addition, a separate test development team + wrote and executed more functional and stress tests.

+

By default ztest runs for ten minutes and + uses block files (stored in /tmp) to create pools + rather than using physical disks. Block files afford + ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you + have a small tmp directory.

+

By default, ztest is non-verbose. This is why entering the command above + will result in ztest quietly executing for 5 + minutes. The -V option can be used to increase the + verbosity of the tool. Adding multiple -V options is + allowed and the more you add the more chatty ztest + becomes.

+

After the ztest run completes, you should + notice many ztest.* files lying around. Once the run + completes you can safely remove these files. Note that you shouldn't remove + these files during a run. You can re-use these files in your next + ztest run by using the -E + option.

+
+
+

+
+
, + -?, --help
+
Print a help summary.
+
, + --vdevs= (default: + )
+
Number of vdevs.
+
, + --vdev-size= (default: + )
+
Size of each vdev.
+
, + --alignment-shift= (default: + ) + (use + + for random)
+
Alignment shift used in test.
+
, + --mirror-copies= (default: + )
+
Number of mirror copies.
+
, + --raid-disks= (default: 4 + for + raidz/ + for draid)
+
Number of raidz/draid disks.
+
, + --raid-parity= (default: 1)
+
Raid parity (raidz & draid).
+
, + --raid-kind=|||random + (default: random)
+
The kind of RAID config to use. With random the kind + alternates between raidz, eraidz (expandable raidz) and draid.
+
, + --draid-data= (default: 4)
+
Number of data disks in a dRAID redundancy group.
+
, + --draid-spares= (default: 1)
+
Number of dRAID distributed spare disks.
+
, + --datasets= (default: + )
+
Number of datasets.
+
, + --threads= (default: + )
+
Number of threads.
+
, + --gang-block-threshold= (default: + 32K)
+
Gang block threshold.
+
, + --init-count= (default: 1)
+
Number of pool initializations.
+
, + --kill-percentage= (default: + )
+
Kill percentage.
+
, + --pool-name= (default: + )
+
Pool name.
+
, + --vdev-file-directory= (default: + /tmp)
+
File directory for vdev files.
+
, + --multi-host
+
Multi-host; simulate pool imported on remote host.
+
, + --use-existing-pool
+
Use existing pool (use existing pool instead of creating new one).
+
, + --run-time= (default: + s)
+
Total test run time.
+
, + --pass-time= (default: + s)
+
Time per pass.
+
, + --freeze-loops= (default: + )
+
Max loops in + ().
+
, + --alt-ztest=
+
Path to alternate ("older") ztest to + drive, which will be used to initialise the pool, and, a stochastic half + the time, to run the tests. The parallel lib + directory is prepended to LD_LIBRARY_PATH; i.e. + given -B + ./chroots/lenny/usr/bin/ztest, + ./chroots/lenny/usr/lib will be loaded.
+
, + --vdev-class-state=||random + (default: random)
+
The vdev allocation class state.
+
, + --option=variable=value
+
Set global variable to an unsigned 32-bit integer + value (little-endian only).
+
, + --dump-debug
+
Dump zfs_dbgmsg buffer before exiting due to an error.
+
, + --verbose
+
Verbose (use multiple times for ever more verbosity).
+
, + --raidz-expansion
+
Perform a dedicated raidz expansion test.
+
+
+
+

+

To override /tmp as your location for + block files, you can use the -f option:

+
# ztest -f /
+

To get an idea of what ztest is actually + testing try this:

+
# ztest -f / -VVV
+

Maybe you'd like to run ztest for longer? + To do so simply use the -T option and specify the + runlength in seconds like so:

+
# ztest -f / -V -T 120
+
+
+

+
+
=id
+
Use id instead of the SPL hostid to identify this host. + Intended for use with ztest, but this environment + variable will affect any utility which uses libzpool, including + zpool(8). Since the kernel is unaware of this setting, + results with utilities other than ztest are undefined.
+
=stacksize
+
Limit the default stack size to stacksize bytes for the + purpose of detecting and debugging kernel stack overflows. This value + defaults to 32K which is double the default + Linux + kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to + .

+
+
+
+
+

+

zdb(1), zfs(1), + zpool(1), spl(4)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/1/zvol_wait.1.html b/man/master/1/zvol_wait.1.html new file mode 100644 index 000000000..45089dce4 --- /dev/null +++ b/man/master/1/zvol_wait.1.html @@ -0,0 +1,191 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands ManualZVOL_WAIT(1)
+
+
+

+

zvol_waitwait + for ZFS volume links to appear in /dev

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, the volumes within it will appear as + block devices. As they're registered, udev(7) + asynchronously creates symlinks under /dev/zvol + using the volumes' names. zvol_wait will wait for + all those symlinks to be created before exiting.
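A minimal usage sketch (the pool name is hypothetical): run zvol_wait after an import, before anything that expects the device links to exist:
# zpool import tank
# zvol_wait
# ls /dev/zvol/tank/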

+
+
+

+

udev(7)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/4/index.html b/man/master/4/index.html new file mode 100644 index 000000000..9c3813b19 --- /dev/null +++ b/man/master/4/index.html @@ -0,0 +1,149 @@ + + + + + + + Devices and Special Files (4) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Devices and Special Files (4)

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/4/spl.4.html b/man/master/4/spl.4.html new file mode 100644 index 000000000..0f82f8503 --- /dev/null +++ b/man/master/4/spl.4.html @@ -0,0 +1,317 @@ + + + + + + + spl.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

spl.4

+
+ + + + + +
SPL(4)Device Drivers ManualSPL(4)
+
+
+

+

splparameters + of the SPL kernel module

+
+
+

+
+
=4 + (uint)
+
The number of threads created for the spl_kmem_cache task queue. This task + queue is responsible for allocating new slabs for use by the kmem caches. + For the majority of systems and workloads only a small number of threads + are required.
+
= + (uint)
+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time, but individual allocations may take longer.
+
= + (64-bit) or 4 (32-bit) (uint)
+
The maximum size of a kmem cache slab in MiB. This effectively limits the + maximum cache object size to + spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab. +

Caches may not be created with objects sized larger than this limit.

+
+
= + (uint)
+
For small objects the Linux slab allocator should be used to make the most + efficient use of the memory. However, large objects are not supported by + the Linux slab and therefore the SPL implementation is preferred. This + value is used to determine the cutoff between a small and large object. +

Objects of size spl_kmem_cache_slab_limit or + smaller will be allocated using the Linux slab allocator, large objects + use the SPL allocator. A cutoff of 16K was determined to be optimal for + architectures using 4K pages.

+
+
= + (uint)
+
As a general rule, () allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate-limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+
+
=KMALLOC_MAX_SIZE/4 + (uint)
+
Large () allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. () allocations less than or equal to this value will use (), but shift to () when exceeding this value.
+
=0 + (uint)
+
Cache magazines are an optimization designed to minimize the cost of + allocating memory. They do this by keeping a per-cpu cache of recently + freed objects, which can then be reallocated without taking a lock. This + can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being + immediately released this may not be ideal for low memory machines. +

For this reason, spl_kmem_cache_magazine_size can be used to set a maximum magazine size. When this value is set to 0, the magazine size will be automatically determined based on the object size. Otherwise, magazines will be limited to 2-256 objects per magazine (i.e. per CPU). Magazines may never be entirely disabled in this implementation.

+
+
=0 + (ulong)
+
The system hostid; when set, this can be used to uniquely identify a system. By default this value is set to zero, which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in /etc/hostid.
+
=/etc/hostid + (charp)
+
The expected path to locate the system hostid when specified. This value + may be overridden for non-standard configurations.
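As a sketch (the hostid value is arbitrary), a persistent hostid is typically written with zgenhostid(8), and the value the module is currently using can then be checked through sysfs (it is read from /etc/hostid at module load):
# zgenhostid 0x00bab10c
# cat /sys/module/spl/parameters/spl_hostid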
+
=0 + (uint)
+
Cause a kernel panic on assertion failures. When not enabled, the thread + is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+
+
=0 + (uint)
+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it + will scan all the taskqs. If any of them have a pending task more than 5 + seconds old, it will kick it to spawn more threads. This can be used if + you find a rare deadlock occurs because one or more taskqs didn't spawn a + thread when it should.
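For example, assuming the usual sysfs layout on Linux, a stuck taskq can be kicked at runtime with:
# echo 1 > /sys/module/spl/parameters/spl_taskq_kick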
+
=0 + (int)
+
Bind taskq threads to specific CPUs. When enabled all taskq threads will + be distributed evenly across the available CPUs. By default, this behavior + is disabled to allow the Linux scheduler the maximum flexibility to + determine where a thread should run.
+
=1 + (int)
+
Allow dynamic taskqs. When enabled taskqs which set the + + flag will by default create only a single thread. New threads will be + created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will + be promptly destroyed. By default this behavior is enabled but it can be + disabled to aid performance analysis or troubleshooting.
+
=1 + (int)
+
Allow newly created taskq threads to set a non-default scheduler priority. + When enabled, the priority specified when a taskq is created will be + applied to all threads created by that taskq. When disabled all threads + will use the default Linux kernel thread priority. By default, this + behavior is enabled.
+
=4 + (int)
+
The number of items a taskq worker thread must handle without interruption + before requesting a new worker thread be spawned. This is used to control + how quickly taskqs ramp up the number of threads processing the queue. + Because Linux thread creation and destruction are relatively inexpensive a + small default value has been selected. This means that normally threads + will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for + some configurations.
+
= + (uint)
+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/taskq{,-all}. Write 0 to turn off the limit. The proc file will walk the lists with a lock held, so reading it could cause a lock-up if the list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.
+
= + (uint)
+
Minimum idle threads exit interval for dynamic taskqs. Smaller values allow idle threads to exit more often and potentially be respawned again on demand, causing more churn.
+
+
+
+ + + + + +
August 24, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/4/zfs.4.html b/man/master/4/zfs.4.html new file mode 100644 index 000000000..7a0eb36bc --- /dev/null +++ b/man/master/4/zfs.4.html @@ -0,0 +1,2750 @@ + + + + + + + zfs.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.4

+
+ + + + + +
ZFS(4)Device Drivers ManualZFS(4)
+
+
+

+

zfstuning of + the ZFS kernel module

+
+
+

+

The ZFS module supports these parameters:

+
+
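On Linux these parameters are normally exposed under /sys/module/zfs/parameters/ and, where writable, can be changed at runtime; persistent values are usually set in a modprobe configuration file. A brief sketch (the file name and value are only examples):
# ls /sys/module/zfs/parameters/
# echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf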
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the dbuf cache. The target size is determined by + the MIN versus + 1/2^dbuf_cache_shift (1/32nd) of + the target ARC size. The behavior of the dbuf cache and its associated + settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the metadata dbuf cache. The target size is + determined by the MIN versus + 1/2^dbuf_metadata_cache_shift + (1/64th) of the target ARC size. The behavior of the metadata dbuf cache + and its associated settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=10% + (uint)
+
The percentage over dbuf_cache_max_bytes when dbufs must + be evicted directly.
+
=10% + (uint)
+
The percentage below dbuf_cache_max_bytes when the evict + thread stops evicting dbufs.
+
=5 + (uint)
+
Set the size of the dbuf cache (dbuf_cache_max_bytes) to + a log2 fraction of the target ARC size.
+
= + (uint)
+
Set the size of the dbuf metadata cache + (dbuf_metadata_cache_max_bytes) to a log2 fraction of + the target ARC size.
+
=0 + (uint)
+
Set the size of the mutex array for the dbuf cache. When set to + 0 the array is dynamically sized based on total system + memory.
+
=7 + (128) (uint)
+
dnode slots allocated in a single operation as a power of 2. The default + value minimizes lock contention for the bulk operation performed.
+
=134217728B + (128 MiB) (uint)
+
Limit the amount we can prefetch with one call to this amount in bytes. + This helps to limit the amount of memory that can be used by + prefetching.
+
+ (int)
+
Alias for send_holes_without_birth_time.
+
=1|0 + (int)
+
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set + as fast as possible.
+
=200 + (u64)
+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only + applicable in related situations.
+
=1 + (u64)
+
Seconds between L2ARC writing.
+
=8 + (u64)
+
How far through the ARC lists to search for L2ARC cacheable content, + expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by + setting this parameter to 0, allowing the full length of + ARC lists to be searched for cacheable content.
+
=200% + (u64)
+
Scales l2arc_headroom by this percentage when L2ARC + contents are being successfully compressed before writing. A value of + 100 disables this feature.
+
=0|1 + (int)
+
Controls whether buffers present on special vdevs are eligible for caching + into L2ARC. If set to 1, exclude dbufs on special vdevs from being cached + to L2ARC.
+
=0|1 + (int)
+
Controls whether only MFU metadata and data are cached from ARC into + L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. +

The default is off, meaning both MRU and MFU data and metadata + are cached. When turning off this feature, some MRU buffers will still + be present in ARC and eventually cached on L2ARC. + If + l2arc_noprefetch=0, some prefetched + buffers will be cached to L2ARC, and those might later transition to + MRU, in which case the l2arc_mru_asize + arcstat will not be 0.

+

Regardless of l2arc_noprefetch, some MFU + buffers might be evicted from ARC, accessed later on as prefetches and + transition to MRU as prefetches. If accessed again they are counted as + MRU and the l2arc_mru_asize arcstat + will not be 0.

+

The ARC status of L2ARC buffers when they + were first cached in L2ARC can be seen in the + l2arc_mru_asize, + , + and + + arcstats when importing the pool or onlining a cache device if + persistent L2ARC is enabled.

+

The + + arcstat does not take into account if this option is enabled as the + information provided by the + + arcstats can be used to decide if toggling this option is appropriate + for the current workload.

+
+
=% + (uint)
+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers + are not evicted on memory pressure, too many headers on a system with an + irrationally large L2ARC can render it slow or unusable. This parameter + limits L2ARC writes and rebuilds to achieve the target.
+
=0% + (u64)
+
Trims ahead of the current write size (l2arc_write_max) + on L2ARC devices by this percentage of write size if we have filled the + device. If set to 100 we TRIM twice the space required + to accommodate upcoming writes. A minimum of 64 MiB will + be trimmed. It also enables TRIM of the whole L2ARC device upon creation + or addition to an existing pool or if the header of the device is invalid + upon importing a pool or onlining a cache device. A value of + 0 disables TRIM on L2ARC altogether and is the default + as it can put significant stress on the underlying storage devices. This + will vary depending of how well the specific device handles these + commands.
+
=1|0 + (int)
+
Do not write buffers to L2ARC if they were prefetched but not used by applications. In case there are prefetched buffers in L2ARC and this option is later set, we do not read the prefetched buffers from L2ARC. Unsetting this option is useful for caching sequential reads from the disks to L2ARC and serving those reads from L2ARC later on. This may be beneficial in case the L2ARC device is significantly faster in sequential reads than the disks of the pool.

Use 1 to disable and 0 to + enable caching/reading prefetches to/from L2ARC.

+
+
=0|1 + (int)
+
No reads during writes.
+
=33554432B + (32 MiB) (u64)
+
Cold L2ARC devices will have l2arc_write_max increased + by this amount while they remain cold.
+
=33554432B + (32 MiB) (u64)
+
Max write bytes per interval.
+
=1|0 + (int)
+
Rebuild the L2ARC when importing a pool (persistent L2ARC). This can be + disabled if there are problems importing a pool or attaching an L2ARC + device (e.g. the L2ARC device is slow in reading stored log metadata, or + the metadata has become somehow fragmented/unusable).
+
=1073741824B + (1 GiB) (u64)
+
Minimum size of an L2ARC device required in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC.

For L2ARC devices less than 1 GiB, the amount + of data + () + evicts is significant compared to the amount of restored L2ARC data. In + this case, do not write log blocks in L2ARC in order not to waste + space.

+
+
=1048576B + (1 MiB) (u64)
+
Metaslab granularity, in bytes. This is roughly similar to what would be + referred to as the "stripe size" in traditional RAID arrays. In + normal operation, ZFS will try to write this amount of data to each disk + before moving on to the next top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group biasing based on their vdevs' over- or + under-utilization relative to the pool.
+
=B + (16 MiB + 1 B) (u64)
+
Make some blocks above a certain size be gang blocks. This option is used + by the test suite to facilitate testing.
+
=3% + (uint)
+
For blocks that could be forced to be a gang block (due to + metaslab_force_ganging), force this many of them to be + gang blocks.
+
=1|0 + (int)
+
Controls prefetching BRT records for blocks which are going to be + cloned.
+
=12 + (4 KiB) (int)
+
Default BRT ZAP data block size as a power of 2. Note that changing this + after creating a BRT on the pool will not affect existing BRTs, only newly + created ones.
+
=12 + (4 KiB) (int)
+
Default BRT ZAP indirect block size as a power of 2. Note that changing + this after creating a BRT on the pool will not affect existing BRTs, only + newly created ones.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP data block size as a power of 2. Note that changing this + after creating a DDT on the pool will not affect existing DDTs, only newly + created ones.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP indirect block size as a power of 2. Note that changing + this after creating a DDT on the pool will not affect existing DDTs, only + newly created ones.
+
=9 + (512 B) (int)
+
Default dnode block size as a power of 2.
+
= + (128 KiB) (int)
+
Default dnode indirect block size as a power of 2.
+
=1048576B + (1 MiB) (u64)
+
When attempting to log an output nvlist of an ioctl in the on-disk + history, the output will not be stored if it is larger than this size (in + bytes). This must be less than + + (64 MiB). This applies primarily to + () + (cf. zfs-program(8)).
+
=0|1 + (int)
+
Prevent log spacemaps from being destroyed during pool exports and + destroys.
+
=1|0 + (int)
+
Enable/disable segment-based metaslab selection.
+
=2 + (int)
+
When using segment-based metaslab selection, continue allocating from the + active metaslab until this option's worth of buckets have been + exhausted.
+
=0|1 + (int)
+
Load all metaslabs during pool import.
+
=0|1 + (int)
+
Prevent metaslabs from being unloaded.
+
=1|0 + (int)
+
Enable use of the fragmentation metric in computing metaslab weights.
+ +
Maximum distance to search forward from the last offset. Without this + limit, fragmented pools can see + + iterations and + () + becomes the performance limiting factor on high-performance storage. +

With the default setting of 16 + MiB, we typically see less than 500 iterations, + even with very fragmented ashift=9 + pools. The maximum number of iterations possible is + metaslab_df_max_search / 2^(ashift+1). With the + default setting of 16 MiB this is + (with + ashift=9) or + + (with ashift=12).

+
+
=0|1 + (int)
+
If not searching forward (due to metaslab_df_max_search, + , + or + ), + this tunable controls which segment is used. If set, we will use the + largest free segment. If unset, we will use a segment of at least the + requested size.
+
=s + (1 hour) (u64)
+
When we unload a metaslab, we cache the size of the largest free chunk. We + use that cached size to determine whether or not to load a metaslab for a + given allocation. As more frees accumulate in that metaslab while it's + unloaded, the cached max size becomes less and less accurate. After a + number of seconds controlled by this tunable, we stop considering the + cached max size and start considering only the histogram instead.
+
=25% + (uint)
+
When we are loading a new metaslab, we check the amount of memory being + used to store metaslab range trees. If it is over a threshold, we attempt + to unload the least recently used metaslab to prevent the system from + clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold.
+
=0|1 + (int)
+
+
    +
  • If unset, we will first try normal allocation.
  • +
  • If that fails then we will do a gang allocation.
  • +
  • If that fails then we will do a "try hard" gang + allocation.
  • +
  • If that fails then we will have a multi-layer gang block.
  • +
+

+
    +
  • If set, we will first try normal allocation.
  • +
  • If that fails then we will do a "try hard" allocation.
  • +
  • If that fails we will do a gang allocation.
  • +
  • If that fails we will do a "try hard" gang allocation.
  • +
  • If that fails then we will have a multi-layer gang block.
  • +
+
+
=100 + (uint)
+
When not trying hard, we only consider this number of the best metaslabs. + This improves performance, especially when there are many metaslabs per + vdev and the allocation can't actually be satisfied (so we would otherwise + iterate all metaslabs).
+
=200 + (uint)
+
When a vdev is added, target this number of metaslabs per top-level + vdev.
+
= + (512 MiB) (uint)
+
Default lower limit for metaslab size.
+
= + (16 GiB) (uint)
+
Default upper limit for metaslab size.
+
= + (uint)
+
Maximum ashift used when optimizing for logical → physical sector + size on new top-level vdevs. May be increased up to + + (16), but this may negatively impact pool space efficiency.
+
= + (9) (uint)
+
Minimum ashift used when creating new top-level vdevs.
+
=16 + (uint)
+
Minimum number of metaslabs to create in a top-level vdev.
+
=0|1 + (int)
+
Skip label validation steps during pool import. Changing is not + recommended unless you know what you're doing and are recovering a damaged + label.
+
=131072 + (128k) (uint)
+
Practical upper limit of total metaslabs per top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group preloading.
+
=10 + (uint)
+
Maximum number of metaslabs per group to preload
+
=50 + (uint)
+
Percentage of CPUs to run a metaslab preload taskq
+
=1|0 + (int)
+
Give more weight to metaslabs with lower LBAs, assuming they have greater + bandwidth, as is typically the case on a modern constant angular velocity + disk drive.
+
=32 + (uint)
+
After a metaslab is used, we keep it loaded for this many TXGs, to attempt + to reduce unnecessary reloading. Note that both this many TXGs and + metaslab_unload_delay_ms milliseconds must pass before + unloading will occur.
+
=600000ms + (10 min) (uint)
+
After a metaslab is used, we keep it loaded for this many milliseconds, to attempt to reduce unnecessary reloading. Note that both this many milliseconds and metaslab_unload_delay TXGs must pass before unloading will occur.
+
=3 + (uint)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
= + (ulong)
+
Max amount of memory to use for RAID-Z expansion I/O. This limits how much + I/O can be outstanding at once.
+
=0 + (ulong)
+
For testing, pause RAID-Z expansion when reflow amount reaches this + value.
+
=4 + (ulong)
+
For expanded RAID-Z, aggregate reads that have more rows than this.
+
=3 + (int)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
=0|1 + (int)
+
Track reference holders to + + objects (debug builds only).
+
=1|0 + (int)
+
When set, the hole_birth optimization will not be used, + and all holes will always be sent during a zfs + send. This is useful if you suspect your datasets + are affected by a bug in hole_birth.
+
=/etc/zfs/zpool.cache + (charp)
+
SPA config file.
+
= + (uint)
+
Multiplication factor used to estimate actual disk consumption from the + size of data being written. The default value is a worst case estimate, + but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits.
+
=0|1 + (int)
+
Whether to print the vdev tree in the debugging message buffer during pool + import.
+
=1|0 + (int)
+
Whether to traverse data blocks during an "extreme rewind" + (-X) import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal skips non-metadata blocks. It can be toggled once the import + has started to stop or start the traversal of non-metadata blocks.

+
+
=1|0 + (int)
+
Whether to traverse blocks during an "extreme rewind" + (-X) pool import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal is not performed. It can be toggled once the import has + started to stop or start the traversal.

+
+
=4 + (1/16th) (uint)
+
Sets the maximum number of bytes to consume during pool import to the log2 + fraction of the target ARC size.
+
=5 + (1/32nd) (int)
+
Normally, we don't allow the last + + () + of space in the pool to be consumed. This ensures that we don't run the + pool completely out of space, due to unaccounted changes (e.g. to the + MOS). It also limits the worst-case time to allocate space. If we have + less than this amount of free space, most ZPL operations (e.g. write, + create) will return + .
+
=4 + (int)
+
Determines the number of block allocators to use per spa instance. Capped by the number of actual CPUs in the system.

Note that setting this value too high could result in performance degradation and/or excess fragmentation.

+
+
=0 + (uint)
+
Limits the number of on-disk error log entries that will be converted to + the new format when enabling the + + feature. The default is to convert all log entries.
+
=32768B + (32 KiB) (uint)
+
During top-level vdev removal, chunks of data are copied from the vdev + which may include free space in order to trade bandwidth for IOPS. This + parameter determines the maximum span of free space, in bytes, which will + be included as "unnecessary" data in a chunk of copied data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept + when doing regular reads (but there's no reason it has to be the + same).

+
+
=9 + (512 B) (u64)
+
Logical ashift for file-based devices.
+
=9 + (512 B) (u64)
+
Physical ashift for file-based devices.
+
=1|0 + (int)
+
If set, when we start iterating over a ZAP object, prefetch the entire + object (all leaf blocks). However, this is limited by + dmu_prefetch_max.
+
=131072B + (128 KiB) (int)
+
Maximum micro ZAP size. A micro ZAP is upgraded to a fat ZAP, once it + grows beyond the specified size.
+
=4194304B + (4 MiB) (uint)
+
Min bytes to prefetch per stream. Prefetch distance starts from the demand + access size and quickly grows to this value, doubling on each hit. After + that it may grow further by 1/8 per hit, but only if some prefetch since + last time haven't completed in time to satisfy demand request, i.e. + prefetch depth didn't cover the read latency or the pool got + saturated.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch per stream.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch indirects for per stream.
+
=8 + (uint)
+
Max number of streams per zfetch (prefetch streams per file).
+
=1 + (uint)
+
Min time before inactive prefetch stream can be reclaimed
+
=2 + (uint)
+
Max time before inactive prefetch stream can be deleted
+
=1|0 + (int)
+
Controls whether the ARC may use scatter/gather (page-based) lists; when disabled, all ARC allocations are forced to be linear in kernel memory. Disabling can improve performance in some code paths, at the expense of fragmented kernel memory.
+
=MAX_ORDER-1 + (uint)
+
Maximum number of consecutive memory pages allocated in a single block for + scatter/gather lists. +

The value of MAX_ORDER depends on kernel + configuration.

+
+
=B + (1.5 KiB) (uint)
+
This is the minimum allocation size that will use scatter (page-based) + ABDs. Smaller allocations will use linear ABDs.
+
=0B + (u64)
+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling to the amount of dnode metadata, and defaults to 0, which indicates that the limit is instead derived from zfs_arc_dnode_limit_percent of the ARC meta buffers that may be used for dnodes.
+
=10% + (u64)
+
Percentage that can be consumed by dnodes of ARC meta buffers. +

See also zfs_arc_dnode_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=10% + (u64)
+
Percentage of ARC dnodes to try to scan in response to demand for + non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit.
+
=B + (8 KiB) (uint)
+
The ARC's buffer hash table is sized based on the assumption of an average + block size of this value. This works out to roughly 1 MiB of hash table + per 1 GiB of physical memory with 8-byte pointers. For configurations with + a known larger average block size, this value can be increased to reduce + the memory footprint.
+
=200% + (uint)
+
When + (), + () + waits for this percent of the requested amount of data to be evicted. For + example, by default, for every 2 KiB that's evicted, + 1 KiB of it may be "reused" by a new + allocation. Since this is above 100%, it ensures that + progress is made towards getting arc_size + under arc_c. Since this is + finite, it ensures that allocations can still happen, even during the + potentially long time that arc_size is + more than arc_c.
+
=10 + (uint)
+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.
+
=0s + (uint)
+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The + arc_grow_retry value (default + 5s) is the number of seconds the ARC will wait before + trying to resume growth after a memory pressure event.
+
=10% + (int)
+
Throttle I/O when free system memory drops below this percentage of total + system memory. Setting this value to 0 will disable the + throttle.
+
=0B + (u64)
+
Max size of ARC in bytes. If 0, then the max size of ARC + is determined by the amount of system memory installed. The larger of + all_system_memory - + 1 GiB and + + × all_system_memory will + be used as the limit. This value must be at least + 67108864B (64 MiB). +

This value can be changed dynamically, with some caveats. It + cannot be set back to 0 while running, and reducing it + below the current ARC size will not cause the ARC to shrink without + memory pressure to induce shrinking.
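A minimal sketch of changing this limit at runtime on Linux, assuming the tunable described here is the one exposed as zfs_arc_max (the 8 GiB value is only an example):
# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max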

+
+
=500 + (uint)
+
Balance between metadata and data on ghost hits. Values above 100 increase + metadata caching by proportionally reducing effect of ghost data hits on + target data/metadata rate.
+
=0B + (u64)
+
Min size of ARC in bytes. If set to + 0, + + will default to consuming the larger of 32 MiB and + all_system_memory / + 32.
+
=0ms(≡1s) + (uint)
+
Minimum time prefetched blocks are locked in the ARC.
+
=0ms(≡6s) + (uint)
+
Minimum time "prescient prefetched" blocks are locked in the + ARC. These blocks are meant to be prefetched fairly aggressively ahead of + the code that may use them.
+
=1 + (int)
+
Number of arc_prune threads. FreeBSD does not need more than one. Linux may theoretically use one per mount point up to the number of CPUs, but that was not proven to be useful.
+
=0 + (int)
+
Number of missing top-level vdevs which will be allowed during pool import + (only in read-only mode).
+
= + 0 (u64)
+
Maximum size in bytes allowed to be passed as + + for ioctls on /dev/zfs. This prevents a user from + causing the kernel to allocate an excessive amount of memory. When the + limit is exceeded, the ioctl fails with + + and a description of the error is sent to the + zfs-dbgmsg log. This parameter should not need to + be touched under normal circumstances. If 0, equivalent + to a quarter of the user-wired memory limit under + FreeBSD and to 134217728B (128 + MiB) under Linux.
+
=0 + (uint)
+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and metadata objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

If 0, equivalent to the greater of the + number of online CPUs and 4.

+
+
=8 + (int)
+
The ARC size is considered to be overflowing if it exceeds the current ARC + target size (arc_c) by thresholds determined by this + parameter. Exceeding by (arc_c + >> zfs_arc_overflow_shift) + / 2 starts ARC reclamation + process. If that appears insufficient, exceeding by + (arc_c >> + zfs_arc_overflow_shift) × + blocks + new buffer allocation until the reclaim thread catches up. Started + reclamation process continues till ARC size returns below the target size. +

The default value of 8 causes the + ARC to start reclamation if it exceeds the target size by + of the + target size, and block allocations by + .

+
+
=0 + (uint)
+
If nonzero, this will update + + (default 7) with the new value.
+
=0% + (off) (uint)
+
Percent of pagecache to reclaim ARC to. +

This tunable allows the ZFS ARC to play + more nicely with the kernel's LRU pagecache. It can guarantee that the + ARC size won't collapse under scanning pressure on the pagecache, yet + still allows the ARC to be reclaimed down to + zfs_arc_min if necessary. This value is specified as + percent of pagecache size (as measured by + ), + where that percent may exceed 100. This only operates + during memory pressure/reclaim.

+
+
=10000 + (int)
+
This is a limit on how many pages the ARC shrinker makes available for + eviction in response to one page allocation attempt. Note that in + practice, the kernel's shrinker can ask us to evict up to about four times + this for one allocation attempt. +

The default limit of 10000 (in + practice, + per allocation attempt with 4 KiB pages) limits + the amount of time spent attempting to reclaim ARC memory to less than + 100 ms per allocation attempt, even with a small average compressed + block size of ~8 KiB.

+

The parameter can be set to 0 (zero) to disable the limit, and + only applies on Linux.

+
+
=0B + (u64)
+
The target number of bytes the ARC should leave as free memory on the + system. If zero, equivalent to the bigger of 512 KiB + and + .
+
=1|0 + (int)
+
Disable pool import at module load by ignoring the cache file + (spa_config_path).
+
=20/s + (uint)
+
Rate limit checksum events to this many per second. Note that this should + not be set below the ZED thresholds (currently 10 checksums over 10 + seconds) or else the daemon may not trigger any action.
+
=10% + (uint)
+
This controls the amount of time that a ZIL block (lwb) will remain + "open" when it isn't "full", and it has a thread + waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly + impacting the latency of each individual transaction record (itx).
+
=0ms + (int)
+
Vdev indirection layer (used for device removal) sleeps for this many + milliseconds during mapping generation. Intended for use with the test + suite to throttle vdev removal speed.
+
=25% + (uint)
+
Minimum percent of obsolete bytes in vdev mapping required to attempt to + condense (see zfs_condense_indirect_vdevs_enable). + Intended for use with the test suite to facilitate triggering condensing + as needed.
+
=1|0 + (int)
+
Enable condensing indirect vdev mappings. When set, attempt to condense + indirect vdev mappings if the mapping uses more than + zfs_condense_min_mapping_bytes bytes of memory and if + the obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The + condensing process is an attempt to save memory by removing obsolete + mappings.
+
=1073741824B + (1 GiB) (u64)
+
Only attempt to condense indirect vdev mappings if the on-disk size of the + obsolete space map object is greater than this number of bytes (see + zfs_condense_indirect_vdevs_enable).
+
=131072B + (128 KiB) (u64)
+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable).
+
=1|0 + (int)
+
Internally ZFS keeps a small log to facilitate debugging. The log is + enabled by default, and can be disabled by unsetting this option. The + contents of the log can be accessed by reading + /proc/spl/kstat/zfs/dbgmsg. Writing + 0 to the file clears the log. +

This setting does not influence debug prints due to + zfs_flags.
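For example, the log can be inspected and cleared as described above:
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg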

+
+
=4194304B + (4 MiB) (uint)
+
Maximum size of the internal ZFS debug log.
+
=0 + (int)
+
Historically used for controlling what reporting was available under + /proc/spl/kstat/zfs. No effect.
+
=1|0 + (int)
+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms, or when an individual I/O + operation takes longer than zfs_deadman_ziotime_ms, then + the operation is considered to be "hung". If + zfs_deadman_enabled is set, then the deadman behavior is + invoked as described by zfs_deadman_failmode. By + default, the deadman is enabled and set to wait which + results in "hung" I/O operations only being logged. The deadman + is automatically disabled when a pool gets suspended.
+
=wait + (charp)
+
Controls the failure behavior when the deadman detects a "hung" + I/O operation. Valid values are: +
+
+
+
Wait for a "hung" operation to complete. For each + "hung" operation a "deadman" event will be posted + describing that operation.
+
+
Attempt to recover from a "hung" operation by re-dispatching + it to the I/O pipeline if possible.
+
+
Panic the system. This can be used to facilitate automatic fail-over + to a properly configured fail-over partner.
+
+
+
+
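As a sketch, assuming this tunable is the one exposed as zfs_deadman_failmode on Linux, the failure mode can be switched at runtime:
# echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode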
=ms + (1 min) (u64)
+
Check time in milliseconds. This defines the frequency at which we check + for hung I/O requests and potentially invoke the + zfs_deadman_failmode behavior.
+
=600000ms + (10 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and also the + interval after which a pool sync operation is considered to be + "hung". Once this limit is exceeded the deadman will be invoked + every zfs_deadman_checktime_ms milliseconds until the + pool sync completes.
+
=ms + (5 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and an + individual I/O operation is considered to be "hung". As long as + the operation remains "hung", the deadman will be invoked every + zfs_deadman_checktime_ms milliseconds until the + operation completes.
+
=0|1 + (int)
+
Enable prefetching dedup-ed blocks which are going to be freed.
+
=60% + (uint)
+
Start to delay each transaction once there is this amount of dirty data, + expressed as a percentage of zfs_dirty_data_max. This + value should be at least + zfs_vdev_async_write_active_max_dirty_percent. + See + ZFS TRANSACTION + DELAY.
+
=500000 + (int)
+
This controls how quickly the transaction delay approaches infinity. + Larger values cause longer delays for a given amount of dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will + smoothly handle between ten times and a tenth of this number. + See + ZFS TRANSACTION + DELAY.

+

zfs_delay_scale × zfs_dirty_data_max must be smaller than .

+
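As a worked example (the throughput figure is hypothetical): for a pool that sustains roughly 20,000 write operations per second, the guidance above suggests zfs_delay_scale ≈ 1,000,000,000 / 20,000 = 50,000, one tenth of the default.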
+
=0|1 + (int)
+
Disables requirement for IVset GUIDs to be present and match when doing a + raw receive of encrypted datasets. Intended for users whose pools were + created with OpenZFS pre-release versions and now have compatibility + issues.
+
= + (4*10^8) (ulong)
+
Maximum number of uses of a single salt value before generating a new one + for encrypted datasets. The default value is also the maximum.
+
=64 + (uint)
+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist + yet, kernel mutexes are not created per-object and instead a hashtable + is used where collisions will result in objects waiting when there is + not actually contention on the same object.

+
+
=20/s + (int)
+
Rate limit delay and deadman zevents (which report slow I/O operations) to + this many per second.
+
=1073741824B + (1 GiB) (u64)
+
Upper-bound limit for unflushed metadata changes to be held by the log + spacemap in memory, in bytes.
+
=1000ppm + (0.1%) (u64)
+
Part of overall system memory that ZFS allows to be used for unflushed + metadata changes by the log spacemap, in millionths.
+
=131072 + (128k) (u64)
+
Describes the maximum number of log spacemap blocks allowed for each pool. + The default value means that the space in all the log spacemaps can add up + to no more than 131072 blocks (which means + 16 GiB of logical space before compression and ditto + blocks, assuming that blocksize is 128 KiB). +

This tunable is important because it involves a trade-off between import time after an unclean export and the frequency of flushing metaslabs. The higher this number is, the more log blocks we allow when the pool is active, which means that we flush metaslabs less often and thus decrease the number of I/O operations for spacemap updates per TXG. At the same time though, that means that in the event of an unclean export, there will be more log spacemap blocks for us to read, inducing overhead in the import time of the pool. The lower the number, the more flushing increases, destroying log blocks more quickly as they become obsolete, which leaves fewer blocks to be read during import time after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+
+
=1000 + (u64)
+
If the number of metaslabs is small and our incoming rate is high, we + could get into a situation that we are flushing all our metaslabs every + TXG. Thus we always allow at least this many log blocks.
+
=% + (u64)
+
Tunable used to determine the number of blocks that can be used for the + spacemap log, expressed as a percentage of the total number of unflushed + metaslabs in the pool.
+
=1000 + (u64)
+
Tunable limiting maximum time in TXGs any metaslab may remain unflushed. + It effectively limits maximum number of unflushed per-TXG spacemap logs + that need to be read after unclean pool export.
+ +
When enabled, files will not be asynchronously removed from the list of + pending unlinks and the space they consume will be leaked. Once this + option has been disabled and the dataset is remounted, the pending unlinks + will be processed and the freed space returned to the pool. This option is + used by the test suite.
+
= + (ulong)
+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call, at the expense of a longer delay before the freed space is available. This only applies on Linux.
+
= + (int)
+
Determines the dirty space limit in bytes. Once this limit is exceeded, + new writes are halted until space frees up. This parameter takes + precedence over zfs_dirty_data_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + , + capped at zfs_dirty_data_max_max.

+
+
= + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + in bytes. This limit is only enforced at module load time, and will be + ignored if zfs_dirty_data_max is later changed. This + parameter takes precedence over + zfs_dirty_data_max_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to min(physical_ram/4, 4GiB), or + min(physical_ram/4, 1GiB) for 32-bit systems.

+
+
=25% + (uint)
+
Maximum allowable value of zfs_dirty_data_max, expressed + as a percentage of physical RAM. This limit is only enforced at module + load time, and will be ignored if zfs_dirty_data_max is + later changed. The parameter zfs_dirty_data_max_max + takes precedence over this one. See + ZFS TRANSACTION + DELAY.
+
=10% + (uint)
+
Determines the dirty space limit, expressed as a percentage of all memory. + Once this limit is exceeded, new writes are halted until space frees up. + The parameter zfs_dirty_data_max takes precedence over + this one. See + ZFS TRANSACTION DELAY. +

Subject to zfs_dirty_data_max_max.

+
+
=20% + (uint)
+
Start syncing out a transaction group if there's at least this much dirty + data (as a percentage of zfs_dirty_data_max). This + should be less than + zfs_vdev_async_write_active_min_dirty_percent.
+
= + (int)
+
The upper limit of write-transaction zil log data size in bytes. Write + operations are throttled when approaching the limit until log data is + cleared out after transaction group sync. Because of some overhead, it + should be set at least 2 times the size of + zfs_dirty_data_max to prevent harming + normal write throughput. It also should be smaller than the size of + the slog device if slog is present. +

Defaults to +

+
+
=% + (uint)
+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be + preallocated for a file in order to guarantee that later writes will not + run out of space. Instead, fallocate(2) space + preallocation only checks that sufficient space is currently available in + the pool or the user's project quota allocation, and then creates a sparse + file of the requested size. The requested space is multiplied by + zfs_fallocate_reserve_percent to allow additional space + for indirect blocks and other internal metadata. Setting this to + 0 disables support for fallocate(2) + and causes it to return + .
+
=fastest + (string)
+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, + scalar, sse2, + , + avx2, + , + , + and + . + All except fastest and + scalar require instruction set extensions to be + available, and will only appear if ZFS detects that they are present at + runtime. If multiple implementations of fletcher 4 are available, the + fastest will be chosen using a micro benchmark. + Selecting scalar results in the original CPU-based + calculation being used. Selecting any option other than + fastest or + scalar results in vector instructions from the + respective CPU instruction set being used.

+
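For instance, assuming this tunable is the one exposed as zfs_fletcher_4_impl on Linux, the available and currently selected implementations can be listed and a specific one forced:
# cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
# echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl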
+
=1|0 + (int)
+
Enable the experimental block cloning feature. If this setting is 0, then + even if feature@block_cloning is enabled, attempts to clone blocks will + act as though the feature is disabled.
+
=0|1 + (int)
+
When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to + be written to disk. This allows the clone operation to reliably succeed + when a file is modified and then immediately cloned. For small files this + may be slower than making a copy of the file. Therefore, this setting + defaults to 0 which causes a clone operation to immediately fail when + encountering a dirty block.
+
=fastest + (string)
+
Select a BLAKE3 implementation. +

Supported selectors are: cycle, + fastest, generic, + sse2, + , + avx2, + . + All except cycle, fastest + and generic require + instruction set extensions to be available, and will only appear if ZFS + detects that they are present at runtime. If multiple implementations of + BLAKE3 are available, the fastest will be chosen using a + micro benchmark. You can see the benchmark results by reading this + kstat file: + /proc/spl/kstat/zfs/chksum_bench.

+
+
=1|0 + (int)
+
Enable/disable the processing of the free_bpobj object.
+
=UINT64_MAX + (unlimited) (u64)
+
Maximum number of blocks freed in a single TXG.
+
= + (10^5) (u64)
+
Maximum number of dedup blocks freed in a single TXG.
+
=3 + (uint)
+
Maximum asynchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum asynchronous read I/O operation active to each device. + See ZFS + I/O SCHEDULER.
+
=60% + (uint)
+
When the pool has more than this much dirty data, use + zfs_vdev_async_write_max_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30% + (uint)
+
When the pool has less than this much dirty data, use + zfs_vdev_async_write_min_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=10 + (uint)
+
Maximum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Minimum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER. +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of + 2 was chosen as a compromise. A value of + 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+
+
=1 + (uint)
+
Maximum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
The maximum number of I/O operations active to each device. Ideally, this + will be at least the sum of each queue's max_active. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
Timeout value to wait before determining a device is missing during + import. This is helpful for transient missing paths due to links being + briefly removed and recreated in response to udev events.
+
=3 + (uint)
+
Maximum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
For non-interactive I/O (scrub, resilver, removal, initialize and + rebuild), the number of concurrently-active I/O operations is limited to + , + unless the vdev is "idle". When there are no interactive I/O + operations active (synchronous or otherwise), and + zfs_vdev_nia_delay operations have completed since the + last interactive operation, then the vdev is considered to be + "idle", and the number of concurrently-active non-interactive + operations is increased to zfs_*_max_active. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent + random I/O latency reaches several seconds. On some HDDs this happens even + if sequential I/O operations are submitted one at a time, and so setting + zfs_*_max_active= 1 does not help. To + prevent non-interactive I/O, like scrub, from monopolizing the device, no + more than zfs_vdev_nia_credit operations can be sent + while there are outstanding incomplete interactive operations. This + enforced wait ensures the HDD services the interactive I/O within a + reasonable amount of time. See + ZFS I/O SCHEDULER.
+
=1000% + (uint)
+
Maximum number of queued allocations per top-level vdev expressed as a + percentage of zfs_vdev_async_write_max_active, which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. This allows for + dynamic allocation distribution when devices are imbalanced, as fuller + devices will tend to be slower than empty devices. +

Also see zio_dva_throttle_enabled.

+
+
=32 + (uint)
+
Default queue depth for each vdev IO allocator. Higher values allow for + better coalescing of sequential writes before sending them to the disk, + but can increase transaction commit times.
+
=1 + (uint)
+
Defines if the driver should retire on a given error type. The following + options may be bitwise-ored together: + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueNameDescription
1DeviceNo driver retries on device errors
2TransportNo driver retries on transport errors.
4DriverNo driver retries on driver errors.
+
+
=0 + (uint)
+
Maximum number of segments to add to a BIO (min 4). If this is higher than + the maximum allowed by the device queue or the kernel itself, it will be + clamped. Setting it to zero will cause the kernel's ideal size to be used. + This parameter only applies on Linux. This parameter is ignored if + zfs_vdev_disk_classic=1.
+
=0|1 + (uint)
+
If set to 1, OpenZFS will submit IO to Linux using the method it used in + 2.2 and earlier. This "classic" method has known issues with + highly fragmented IO requests and is slower on many workloads, but it has + been in use for many years and is known to be very stable. If you set this + parameter, please also open a bug report why you did so, including the + workload involved and any error messages. +

This parameter and the classic submission method will be + removed once we have total confidence in the new method.

+

This parameter only applies on Linux, and can only be set at + module load time.

+
+
=s + (int)
+
Time before expiring .zfs/snapshot.
+
=0|1 + (int)
+
Allow the creation, removal, or renaming of entries in the + + directory to cause the creation, destruction, or renaming of snapshots. + When enabled, this functionality works both locally and over NFS exports + which have the + + option set.
+
=0 + (int)
+
Set additional debugging flags. The following flags may be bitwise-ored + together: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueNameDescription
1ZFS_DEBUG_DPRINTFEnable dprintf entries in the debug log.
*2ZFS_DEBUG_DBUF_VERIFYEnable extra dbuf verifications.
*4ZFS_DEBUG_DNODE_VERIFYEnable extra dnode verifications.
8ZFS_DEBUG_SNAPNAMESEnable snapshot name verification.
*16ZFS_DEBUG_MODIFYCheck for illegally modified ARC buffers.
64ZFS_DEBUG_ZIO_FREEEnable verification of block frees.
128ZFS_DEBUG_HISTOGRAM_VERIFYEnable extra spacemap histogram verifications.
256ZFS_DEBUG_METASLAB_VERIFYVerify space accounting on disk matches in-memory + range_trees.
512ZFS_DEBUG_SET_ERROREnable SET_ERROR and dprintf entries in the debug log.
1024ZFS_DEBUG_INDIRECT_REMAPVerify split blocks created by device removal.
2048ZFS_DEBUG_TRIMVerify TRIM ranges are always within the allocatable range + tree.
4096ZFS_DEBUG_LOG_SPACEMAPVerify that the log summary is consistent with the spacemap log
and enable zfs_dbgmsgs for metaslab loading and + flushing.
+ * Requires debug build.
+
=0 + (uint)
+
Enables btree verification. The following settings are cumulative:
ValueDescription
1Verify height.
2Verify pointers from children to parent.
3Verify element counts.
4Verify element order. (expensive)
*5Verify unused memory is poisoned. (expensive)
+ * Requires debug build.
+
=0|1 + (int)
+
If destroy encounters an EIO while reading metadata + (e.g. indirect blocks), space referenced by the missing metadata can not + be freed. Normally this causes the background destroy to become + "stalled", as it is unable to make forward progress. While in + this stalled state, all remaining space to free from the + error-encountering filesystem is "temporarily leaked". Set this + flag to cause it to ignore the EIO, permanently leak the + space from indirect blocks that can not be read, and continue to free + everything else that it can. +

The default "stalling" behavior is useful if the + storage partially fails (i.e. some but not all I/O operations fail), and + then later recovers. In this case, we will be able to continue pool + operations while it is partially failed, and when it recovers, we can + continue to free the space, with no leaks. Note, however, that this case + is actually fairly rare.

+

Typically pools either

+
    +
  1. fail completely (but perhaps temporarily, e.g. due to a top-level vdev + going offline), or
  2. +
  3. have localized, permanent errors (e.g. disk returns the wrong data due + to bit flip or firmware bug).
  4. +
+ In the former case, this setting does not matter because the pool will be + suspended and the sync thread will not be able to make forward progress + regardless. In the latter, because the error is permanent, the best we can + do is leak the minimum amount of space, which is what setting this flag + will do. It is therefore reasonable for this flag to normally be set, but + we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.
+
=1000ms + (1s) (uint)
+
During a zfs destroy + operation using the + + feature, a minimum of this much time will be spent working on freeing + blocks per TXG.
+
=500ms + (uint)
+
Similar to zfs_free_min_time_ms, but for cleanup of old + indirection records for removed vdevs.
+
=32768B + (32 KiB) (s64)
+
Largest data block to write to the ZIL. Larger blocks will be treated as + if the dataset being written to had the + = + property set.
+
= + (0xDEADBEEFDEADBEEE) (u64)
+
Pattern written to vdev free space by + zpool-initialize(8).
+
=1048576B + (1 MiB) (u64)
+
Size of writes used by zpool-initialize(8). This option + is used by the test suite.
+
=500000 + (5*10^5) (u64)
+
The threshold size (in block pointers) at which we create a new + sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion.
+
=% + (int)
+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because livelists no longer give us a benefit once a clone has been overwritten enough.
+
=0 + (int)
+
Incremented each time an extra ALLOC blkptr is added to a livelist entry + while it is being condensed. This option is used by the test suite to + track race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the synctask — + spa_livelist_condense_sync(). This option is used + by the test suite to trigger race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the open context condensing work in + spa_livelist_condense_cb(). This option is used by + the test suite to trigger race conditions.
+
= + (10^8) (u64)
+
The maximum execution time limit that can be set for a ZFS channel + program, specified as a number of Lua instructions.
+
= + (100 MiB) (u64)
+
The maximum memory limit that can be set for a ZFS channel program, + specified in bytes.
+
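For context, a hedged example of running a channel program subject to these limits with zfs-program(8); the -t (instruction limit) and -m (memory limit) options should be checked against that manual page, and the pool and script names are placeholders.
# Illustrative only: run a Lua channel program with explicit limits.
zfs program -t 10000000 -m 10485760 tank /root/cleanup.lua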
=50 + (int)
+
The maximum depth of nested datasets. This value can be tuned temporarily + to fix existing datasets that exceed the predefined limit.
+
=5 + (u64)
+
The number of past TXGs that the flushing algorithm of the log spacemap + feature uses to estimate incoming log blocks.
+
=10 + (u64)
+
Maximum number of rows allowed in the summary of the spacemap log.
+
=16777216 + (16 MiB) (uint)
+
We currently support block sizes from 512 (512 B) to 16777216 (16 MiB). The benefits of larger blocks, and thus larger I/O, need to be weighed against the cost of COWing a giant block to modify one byte. Additionally, very large blocks can have an impact on I/O latency, and also potentially on the memory allocator. Therefore, we formerly forbade creating blocks larger than 1M. Larger blocks can be created by raising this tunable, and pools with larger blocks can always be imported and used, regardless of this setting.
+
=0|1 + (int)
+
Allow datasets received with redacted send/receive to be mounted. Normally + disabled because these datasets may be missing key data.
+
=1 + (u64)
+
Minimum number of metaslabs to flush per dirty TXG.
+
=% + (uint)
+
Allow metaslabs to keep their active state as long as their fragmentation + percentage is no more than this value. An active metaslab that exceeds + this threshold will no longer keep its active status allowing better + metaslabs to be selected.
+
=% + (uint)
+
Metaslab groups are considered eligible for allocations if their + fragmentation metric (measured as a percentage) is less than or equal to + this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also + crossed this threshold.
+
=0% + (uint)
+
Defines a threshold at which metaslab groups should be eligible for + allocations. The value is expressed as a percentage of free space beyond + which a metaslab group is always eligible for allocations. If a metaslab + group's free space is less than or equal to the threshold, the allocator + will avoid allocating to that group unless all groups in the pool have + reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of + 0 disables the feature and causes all metaslab groups to + be eligible for allocations. +

This parameter allows one to deal + with pools having heavily imbalanced vdevs such as would be the case + when a new vdev has been added. Setting the threshold to a non-zero + percentage will stop allocations from being made to vdevs that aren't + filled to the specified percentage and allow lesser filled vdevs to + acquire more allocations than they otherwise would under the old + + facility.

+
+
=1|0 + (int)
+
If enabled, ZFS will place DDT data into the special allocation + class.
+
=1|0 + (int)
+
If enabled, ZFS will place user data indirect blocks into the special + allocation class.
+
=0 + (uint)
+
Historical statistics for this many latest multihost updates will be + available in + /proc/spl/kstat/zfs/pool/multihost.
+
=1000ms + (1 s) (u64)
+
Used to control the frequency of multihost writes which are performed when + the + + pool property is on. This is one of the factors used to determine the + length of the activity check during import. +

The multihost write period is + zfs_multihost_interval / + . + On average a multihost write will be issued for each leaf vdev every + zfs_multihost_interval milliseconds. In practice, the + observed period can vary with the I/O load and this observed value is + the delay which is stored in the uberblock.

+
+
=20 + (uint)
+
Used to control the duration of the activity test on import. Smaller + values of zfs_multihost_import_intervals will reduce the + import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval + × + zfs_multihost_import_intervals, or the same product + computed on the host which last had the pool imported, whichever is + greater. The activity check time may be further extended if the value of + MMP delay found in the best uberblock indicates actual multihost updates + happened at longer intervals than + zfs_multihost_interval. A minimum of 100 + ms is enforced.

+

0 is equivalent to + 1.

+
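As a rough worked example (a sketch assuming the module parameters are exposed under /sys/module/zfs/parameters/), the minimum activity-check time with the defaults above is 1000 ms × 20 = 20 s:
# Illustrative only: estimate the minimum multihost activity check on import.
interval_ms=$(cat /sys/module/zfs/parameters/zfs_multihost_interval)
intervals=$(cat /sys/module/zfs/parameters/zfs_multihost_import_intervals)
echo "$(( interval_ms * intervals / 1000 )) s"   # defaults: 1000 * 20 / 1000 = 20 s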
+
=10 + (uint)
+
Controls the behavior of the pool when multihost write failures or delays + are detected. +

When 0, multihost write failures or delays + are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the + pool or offlining a device.

+

Otherwise, the pool will be suspended if + zfs_multihost_fail_intervals + × + zfs_multihost_interval milliseconds pass without a + successful MMP write. This guarantees the activity test will see MMP + writes if the pool is imported. 1 is + equivalent to 2; this is necessary to prevent + the pool from being suspended due to normal, small I/O latency + variations.

+
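Continuing the same rough arithmetic with the defaults above, the time without a successful MMP write before the pool is suspended works out to:
zfs_multihost_fail_intervals × zfs_multihost_interval = 10 × 1000 ms = 10 s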
+
=0|1 + (int)
+
Set to disable scrub I/O. This results in scrubs not actually scrubbing + data and simply doing a metadata crawl of the pool instead.
+
=0|1 + (int)
+
Set to disable block prefetching for scrubs.
+
=0|1 + (int)
+
Disable cache flush operations on disks when writing. Setting this will + cause pool corruption on power loss if a volatile out-of-order write cache + is enabled.
+
=1|0 + (int)
+
Allow no-operation writes. The occurrence of nopwrites will further depend + on other pool properties (i.a. the checksumming and compression + algorithms).
+
=1|0 + (int)
+
Enable forcing TXG sync to find holes. When enabled, this forces ZFS to sync data when + or + flags are used, allowing holes in a file to be accurately reported. When disabled, holes will not be reported in recently dirtied files.
+
=B + (50 MiB) (int)
+
The number of bytes which should be prefetched during a pool traversal, + like zfs send or other + data crawling operations.
+
=32 + (uint)
+
The number of blocks pointed by indirect (non-L0) block which should be + prefetched during a pool traversal, like zfs + send or other data crawling operations.
+
=30% + (u64)
+
Control percentage of dirtied indirect blocks from frees allowed into one + TXG. After this threshold is crossed, additional frees will wait until the + next TXG. 0 disables this + throttle.
+
=0|1 + (int)
+
Disable predictive prefetch. Note that it leaves "prescient" + prefetch (for, e.g., zfs + send) intact. Unlike predictive prefetch, + prescient prefetch never issues I/O that ends up not being needed, so it + can't hurt performance.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for SHA256 checksums. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for gzip compression. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for AES-GCM encryption. May be unset + after the ZFS modules have been loaded to initialize the QAT hardware as + long as support is compiled in and the QAT driver is present.
+
=1048576B + (1 MiB) (u64)
+
Bytes to read per chunk.
+
=0 + (uint)
+
Historical statistics for this many latest reads will be available in + /proc/spl/kstat/zfs/pool/reads.
+
=0|1 + (int)
+
Include cache hits in read history.
+
=1048576B + (1 MiB) (u64)
+
Maximum read segment size to issue when sequentially resilvering a + top-level vdev.
+
=1|0 + (int)
+
Automatically start a pool scrub when the last active sequential resilver + completes in order to verify the checksums of all blocks which have been + resilvered. This is enabled by default and strongly recommended.
+
=67108864B + (64 MiB) (u64)
+
Maximum amount of I/O that can be concurrently issued for a sequential + resilver per leaf device, given in bytes.
+
=4096 + (int)
+
If an indirect split block contains more than this many possible unique + combinations when being reconstructed, consider it too computationally + expensive to check them all. Instead, try at most this many randomly + selected combinations each time the block is accessed. This allows all + segment copies to participate fairly in the reconstruction when all + combinations cannot be checked and prevents repeated use of one bad + copy.
+
=0|1 + (int)
+
Set to attempt to recover from fatal errors. This should only be used as a + last resort, as it typically results in leaked space, or worse.
+
=0|1 + (int)
+
Ignore hard I/O errors during device removal. When set, if a device + encounters a hard I/O error during the removal process the removal will + not be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is hence not recommended. This should only be used + as a last resort when the pool cannot be returned to a healthy state prior + to removing the device.
+
=0|1 + (uint)
+
This is used by the test suite so that it can ensure that certain actions + happen while in the middle of a removal.
+
=16777216B + (16 MiB) (uint)
+
The largest contiguous segment that we will attempt to allocate when + removing a device. If there is a performance problem with attempting to + allocate large blocks, consider decreasing this. The default value is also + the maximum.
+
=0|1 + (int)
+
Ignore the + + feature, causing an operation that would start a resilver to immediately + restart the one in progress.
+
=ms + (3 s) (uint)
+
Resilvers are processed by the sync thread. While resilvering, it will + spend at least this much time working on a resilver between TXG + flushes.
+
=0|1 + (int)
+
If set, remove the DTL (dirty time list) upon completion of a pool scan + (scrub), even if there were unrepairable errors. Intended to be used + during pool repair or recovery to stop resilvering when the pool is next + imported.
+
=1|0 + (int)
+
Automatically start a pool scrub after a RAIDZ expansion completes in + order to verify the checksums of all blocks which have been copied during + the expansion. This is enabled by default and strongly recommended.
+
=1000ms + (1 s) (uint)
+
Scrubs are processed by the sync thread. While scrubbing, it will spend at + least this much time working on a scrub between TXG flushes.
+
=4096 + (uint)
+
Error blocks to be scrubbed in one txg.
+
=s + (2 hour) (uint)
+
To preserve progress across reboots, the sequential scan algorithm + periodically needs to stop metadata scanning and issue all the + verification I/O to disk. The frequency of this flushing is determined by + this tunable.
+
=3 + (uint)
+
This tunable affects how scrub and resilver I/O segments are ordered. A + higher number indicates that we care more about how filled in a segment + is, while a lower number indicates we care more about the size of the + extent without considering the gaps within a segment. This value is only + tunable upon module insertion. Changing the value afterwards will have no + effect on scrub or resilver performance.
+
=0 + (uint)
+
Determines the order that data will be verified while scrubbing or + resilvering: +
+
+
+
Data will be verified as sequentially as possible, given the amount of + memory reserved for scrubbing (see + zfs_scan_mem_lim_fact). This may improve scrub + performance if the pool's data is very fragmented.
+
+
The largest mostly-contiguous chunk of found data will be verified + first. By deferring scrubbing of small segments, we may later find + adjacent data to coalesce and increase the segment size.
+
+
Use strategy 1 during normal verification and strategy 2 while taking a checkpoint.
+
+
+
+
=0|1 + (int)
+
If unset, indicates that scrubs and resilvers will gather metadata in + memory before issuing sequential I/O. Otherwise indicates that the legacy + algorithm will be used, where I/O is initiated as soon as it is + discovered. Unsetting will not affect scrubs or resilvers that are already + in progress.
+
=B + (2 MiB) (int)
+
Sets the largest gap in bytes between scrub/resilver I/O operations that + will still be considered sequential for sorting purposes. Changing this + value will not affect scrubs or resilvers that are already in + progress.
+
=20^-1 + (uint)
+
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. + This tunable determines the hard limit for I/O sorting memory usage. When + the hard limit is reached we stop scanning metadata and start issuing data + verification I/O. This is done until we get below the soft limit.
+
=20^-1 + (uint)
+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below, no action is taken. When we cross this limit from above, it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.
+
=0|1 + (uint)
+
When reporting resilver throughput and estimated completion time, use the performance observed over roughly the last zfs_scan_report_txgs TXGs. When set to zero, performance is calculated over the time between checkpoints.
+
=0|1 + (int)
+
Enforce tight memory limits on pool scans when a sequential scan is in + progress. When disabled, the memory limit may be exceeded by fast + disks.
+
=0|1 + (int)
+
Freezes a scrub/resilver in progress without actually pausing it. Intended + for testing/debugging.
+
=16777216B + (16 MiB) (int)
+
Maximum amount of data that can be concurrently issued at once for scrubs + and resilvers per leaf device, given in bytes.
+
=0|1 + (int)
+
Allow sending of corrupt data (ignore read/checksum errors when + sending).
+
=1|0 + (int)
+
Include unmodified spill blocks in the send stream. Under certain + circumstances, previous versions of ZFS could incorrectly remove the spill + block from an existing object. Including unmodified copies of the spill + blocks creates a backwards-compatible stream which will recreate a spill + block if it was incorrectly removed.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send internal queues. The fill fraction controls + the timing with which internal threads are woken up.
+
=1048576B + (1 MiB) (uint)
+
The maximum number of bytes allowed in zfs + send's internal queues.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send prefetch queue. The fill fraction controls + the timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed that will be prefetched by + zfs send. This value must + be at least twice the maximum block size in use.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + receive queue. The fill fraction controls the + timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice + the maximum block size in use.
+
=1048576B + (1 MiB) (uint)
+
The maximum amount of data, in bytes, that zfs + receive will write in one DMU transaction. This is + the uncompressed size, even when receiving a compressed send stream. This + setting will not reduce the write size below a single block. Capped at a + maximum of 32 MiB.
+
=0 + (int)
+
When this variable is set to non-zero, a corrective receive:
  1. Does not enforce the restriction of source & destination snapshot GUIDs matching.
  2. If there is an error during healing, the healing receive is not terminated; instead it moves on to the next record.
+
+
=0|1 + (uint)
+
Setting this variable overrides the default logic for estimating block + sizes when doing a zfs + send. The default heuristic is that the average + block size will be the current recordsize. Override this value if most + data in your dataset is not of that size and you require accurate zfs send + size estimates.
+
=2 + (uint)
+
Flushing of data to disk is done in passes. Defer frees starting in this + pass.
+
=16777216B + (16 MiB) (int)
+
Maximum memory used for prefetching a checkpoint's space map on each vdev + while discarding the checkpoint.
+
=25% + (uint)
+
Only allow small data blocks to be allocated on the special and dedup vdev + types when the available free space percentage on these vdevs exceeds this + value. This ensures reserved space is available for pool metadata as the + special vdevs approach capacity.
+
=8 + (uint)
+
Starting in this sync pass, disable compression (including of metadata). + With the default setting, in practice, we don't have this many sync + passes, so this has no effect. +

The original intent was that disabling compression would help + the sync passes to converge. However, in practice, disabling compression + increases the average number of sync passes; because when we turn + compression off, many blocks' size will change, and thus we have to + re-allocate (not overwrite) them. It also increases the number of + 128 KiB allocations (e.g. for indirect blocks and + spacemaps) because these will not be compressed. The 128 + KiB allocations are especially detrimental to performance on highly + fragmented systems, which may have very few free segments of this size, + and may need to load new metaslabs to satisfy these allocations.

+
+
=2 + (uint)
+
Rewrite new block pointers starting in this pass.
+
=134217728B + (128 MiB) (uint)
+
Maximum size of TRIM command. Larger ranges will be split into chunks no + larger than this value before issuing.
+
=32768B + (32 KiB) (uint)
+
Minimum size of TRIM commands. TRIM ranges smaller than this will be + skipped, unless they're part of a larger range which was chunked. This is + done because it's common for these small TRIMs to negatively impact + overall performance.
+
=0|1 + (uint)
+
Skip uninitialized metaslabs during the TRIM process. This option is + useful for pools constructed from large thinly-provisioned devices where + TRIM operations are slow. As a pool ages, an increasing fraction of the + pool's metaslabs will be initialized, progressively degrading the + usefulness of this option. This setting is stored when starting a manual + TRIM and will persist for the duration of the requested TRIM.
+
=10 + (uint)
+
Maximum number of queued TRIMs outstanding per leaf vdev. The number of + concurrent TRIM commands issued to the device is controlled by + zfs_vdev_trim_min_active and + zfs_vdev_trim_max_active.
+
=32 + (uint)
+
The number of transaction groups' worth of frees which should be + aggregated before TRIM operations are issued to the device. This setting + represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available + for use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default of 32 was determined to be a reasonable compromise.

+
+
=0 + (uint)
+
Historical statistics for this many latest TXGs will be available in + /proc/spl/kstat/zfs/pool/TXGs.
+
=5s + (uint)
+
Flush dirty data to disk at least every this many seconds (maximum TXG + duration).
+
=1048576B + (1 MiB) (uint)
+
Max vdev I/O aggregation size.
+
=131072B + (128 KiB) (uint)
+
Max vdev I/O aggregation size for non-rotating media.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation when an I/O operation immediately follows its predecessor on rotational vdevs, for the purpose of selecting the least busy mirror member.
+
=5 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=1048576B + (1 MiB) (int)
+
The maximum distance for the last queued I/O operation in which the + balancing algorithm considers an operation to have locality. + See ZFS + I/O SCHEDULER.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/O operations do not immediately follow one + another.
+
=1 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=32768B + (32 KiB) (uint)
+
Aggregate read I/O operations if the on-disk gap between them is within + this threshold.
+
=4096B + (4 KiB) (uint)
+
Aggregate write I/O operations if the on-disk gap between them is within + this threshold.
+
=fastest + (string)
+
Select the raidz parity implementation to use. +

Variants that don't depend on CPU-specific features may be + selected on module load, as they are supported on all systems. The + remaining options may only be set after the module is loaded, as they + are available only if the implementations are compiled in and supported + on the running system.

+

Once the module is loaded, + /sys/module/zfs/parameters/zfs_vdev_raidz_impl + will show the available options, with the currently selected one + enclosed in square brackets.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
fastestselected by built-in benchmark
originaloriginal implementation
scalarscalar implementation
sse2SSE2 instruction set64-bit x86
ssse3SSSE3 instruction set64-bit x86
avx2AVX2 instruction set64-bit x86
avx512fAVX512F instruction set64-bit x86
avx512bwAVX512F & AVX512BW instruction sets64-bit x86
aarch64_neonNEONAarch64/64-bit ARMv8
aarch64_neonx2NEON with more unrollingAarch64/64-bit ARMv8
powerpc_altivecAltivecPowerPC
+
+
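For example, a minimal sketch of inspecting and changing the implementation through the parameter path quoted above (the chosen variant must be compiled in and supported by the running CPU; avx2 is illustrative only):
# Show available implementations; the active one is shown in square brackets.
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# Select a specific implementation after module load (illustrative).
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl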
+ (charp)
+
. + Prints warning to kernel log for compatibility.
+
=512 + (uint)
+
Max event queue length. Events in the queue can be viewed with + zpool-events(8).
+
=2000 + (int)
+
Maximum recent zevent records to retain for duplicate checking. Setting + this to 0 disables duplicate detection.
+
=s + (15 min) (int)
+
Lifespan for a recent ereport that was retained for duplicate + checking.
+
=1048576 + (int)
+
The maximum number of taskq entries that are allowed to be cached. When + this limit is exceeded transaction records (itxs) will be cleaned + synchronously.
+
= + (int)
+
The number of taskq entries that are pre-populated when the taskq is first + created and are immediately available for use.
+
=100% + (int)
+
This controls the number of threads used by + . + The default value of + + will create a maximum of one thread per cpu.
+
=131072B + (128 KiB) (uint)
+
This sets the maximum block size used by the ZIL. On very fragmented + pools, lowering this (typically to + ) can + improve performance.
+
=B + (7.5 KiB) (uint)
+
This sets the maximum number of write bytes logged via WR_COPIED. It tunes + a tradeoff between additional memory copy and possibly worse log space + efficiency vs additional range lock/unlock.
+
=0|1 + (int)
+
Disable the cache flush commands that are normally sent to disk by the ZIL + after an LWB write has completed. Setting this will cause ZIL corruption + on power loss if a volatile out-of-order write cache is enabled.
+
=0|1 + (int)
+
Disable intent logging replay. Can be disabled for recovery from corrupted + ZIL.
+
=67108864B + (64 MiB) (u64)
+
Limit SLOG write size per commit executed with synchronous priority. Any + writes above that will be executed with lower (asynchronous) priority to + limit potential SLOG device abuse by single active ZIL writer.
+
=1|0 + (int)
+
Setting this tunable to zero disables ZIL logging of new + = + records if the + + feature is enabled on the pool. This would only be necessary to work + around bugs in the ZIL logging or replay code for this record type. The + tunable has no effect if the feature is disabled.
+
=64 + (uint)
+
Usually, one metaslab from each normal-class vdev is dedicated for use by + the ZIL to log synchronous writes. However, if there are fewer than + zfs_embedded_slog_min_ms metaslabs in the vdev, this + functionality is disabled. This ensures that we don't set aside an + unreasonable amount of space for the ZIL.
+
=1 + (uint)
+
Whether the heuristic for detecting incompressible data with zstd levels >= 3, using LZ4 and zstd-1 passes, is enabled.
+
=131072 + (uint)
+
Minimal uncompressed size (inclusive) of a record before the early abort + heuristic will be attempted.
+
=0|1 + (int)
+
If non-zero, the zio deadman will produce debugging messages (see + zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to + gain diagnostic information for hang conditions which don't involve a + mutex or other locking primitive: typically conditions in which a thread + in the zio pipeline is looping indefinitely.
+
=ms + (30 s) (int)
+
When an I/O operation takes more than this much time to complete, it's + marked as slow. Each slow operation causes a delay zevent. Slow I/O + counters can be seen with zpool + status -s.
+
=1|0 + (int)
+
Throttle block allocations in the I/O pipeline. This allows for dynamic + allocation distribution when devices are imbalanced. When enabled, the + maximum number of pending allocations per top-level vdev is limited by + zfs_vdev_queue_depth_pct.
+
=0|1 + (int)
+
Control the naming scheme used when setting new xattrs in the user + namespace. If 0 (the default on Linux), user namespace + xattr names are prefixed with the namespace, to be backwards compatible + with previous versions of ZFS on Linux. If 1 (the + default on FreeBSD), user namespace xattr names + are not prefixed, to be backwards compatible with previous versions of ZFS + on illumos and FreeBSD. +

Either naming scheme can be read on this and future versions + of ZFS, regardless of this tunable, but legacy ZFS on illumos or + FreeBSD are unable to read user namespace xattrs + written in the Linux format, and legacy versions of ZFS on Linux are + unable to read user namespace xattrs written in the legacy ZFS + format.

+

An existing xattr with the alternate naming scheme is removed + when overwriting the xattr so as to not accumulate duplicates.

+
+
=0|1 + (int)
+
Prioritize requeued I/O.
+
=% + (uint)
+
Percentage of online CPUs which will run a worker thread for I/O. These workers are responsible for I/O work such as compression and checksum calculations. A fractional number of CPUs will be rounded down.

The default value of + was chosen to + avoid using all CPUs which can result in latency issues and inconsistent + application performance, especially when slower compression and/or + checksumming is enabled.

+
+
=0 + (uint)
+
Number of worker threads per taskq. Lower values improve I/O ordering and + CPU utilization, while higher reduces lock contention. +

If 0, generate a system-dependent value + close to 6 threads per taskq.

+
+
=0 + (uint)
+
Determines the number of CPUs to run write issue taskqs. +

When 0 (the default), the value to use is computed internally + as the number of actual CPUs in the system divided by the + spa_num_allocators value.

+
+
= (charp)
+
Set the queue and thread configuration for the IO read queues. This is an + advanced debugging parameter. Don't change this unless you understand what + it does.
+
= (charp)
+
Set the queue and thread configuration for the IO write queues. This is an + advanced debugging parameter. Don't change this unless you understand what + it does.
+
=0|1 + (uint)
+
Do not create zvol device nodes. This may slightly improve startup time on + systems with a very large number of zvols.
+
= + (uint)
+
Major number for zvol block devices.
+
= + (long)
+
Discard (TRIM) operations done on zvols will be done in batches of this + many blocks, where block size is determined by the + volblocksize property of a zvol.
+
=131072B + (128 KiB) (uint)
+
When adding a zvol to the system, prefetch this many bytes from the start + and end of the volume. Prefetching these regions of the volume is + desirable, because they are likely to be accessed immediately by + blkid(8) or the kernel partitioner.
+
=0|1 + (uint)
+
When processing I/O requests for a zvol, submit them synchronously. This + effectively limits the queue depth to 1 for each I/O + submitter. When unset, requests are handled asynchronously by a thread + pool. The number of requests which can be handled concurrently is + controlled by zvol_threads. + zvol_request_sync is ignored when running on a kernel + that supports block multiqueue (blk-mq).
+
=0 + (uint)
+
The number of system wide threads to use for processing zvol block IOs. If + 0 (the default) then internally set + zvol_threads to the number of CPUs present or 32 + (whichever is greater).
+
=0 + (uint)
+
The number of threads per zvol to use for queuing IO requests. This + parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time. If 0 (the default) then internally set + zvol_blk_mq_threads to the number of CPUs present.
+
=0|1 + (uint)
+
Set to 1 to use the blk-mq API + for zvols. Set to 0 (the default) to use the legacy zvol + APIs. This setting can give better or worse zvol performance depending on + the workload. This parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time.
+
=8 + (uint)
+
If zvol_use_blk_mq is enabled, then process this number of volblocksize-sized blocks per zvol thread. This tunable can be used to favor better performance for zvol reads (lower values) or writes (higher values). If set to 0, then the zvol layer will process the maximum number of blocks per thread that it can. This parameter will only appear if your kernel supports blk-mq and is only applied at each zvol's load time.
+
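Because these blk-mq settings are only read at zvol load time, they are typically configured as module options rather than at runtime. The following is a hedged sketch; the file name and values are illustrative, not recommendations:
# /etc/modprobe.d/zfs.conf (hypothetical)
options zfs zvol_use_blk_mq=1 zvol_blk_mq_blocks_per_thread=16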
=0 + (uint)
+
The queue_depth value for the zvol blk-mq + interface. This parameter will only appear if your kernel supports + blk-mq and is only applied at each zvol's load + time. If 0 (the default) then use the kernel's default + queue depth. Values are clamped to the kernel's + BLKDEV_MIN_RQ and + BLKDEV_MAX_RQ/BLKDEV_DEFAULT_RQ + limits.
+
=1 + (uint)
+
Defines zvol block devices behaviour when + =: + +
+
=0|1 + (uint)
+
Enable strict ZVOL quota enforcement. The strict quota enforcement may + have a performance impact.
+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/O operations. The scheduler determines when and in what order those + operations are issued. The scheduler divides operations into five I/O + classes, prioritized in the following order: sync read, sync write, async + read, async write, and scrub/resilver. Each queue defines the minimum and + maximum number of concurrent operations that may be issued to the device. In + addition, the device has an aggregate maximum, + zfs_vdev_max_active. Note that the sum of the per-queue + minima must not exceed the aggregate maximum. If the sum of the per-queue + maxima exceeds the aggregate maximum, then the number of active operations + may reach zfs_vdev_max_active, in which case no further + operations will be issued, regardless of whether all per-queue minima have + been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Furthermore, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been + hit, or if there are no operations queued for an I/O class that has not hit + its maximum. Every time an I/O operation is queued or an operation + completes, the scheduler looks for new operations to issue.

+

In general, smaller max_actives will lead to + lower latency of synchronous operations. Larger + max_actives may lead to higher overall throughput, + depending on underlying storage.

+

The ratio of the queues' max_actives determines + the balance of performance between reads, writes, and scrubs. For example, + increasing zfs_vdev_scrub_max_active will cause the scrub + or resilver to complete more quickly, but reads and writes to have higher + latency and lower throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations, except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically, + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write operations + according to the amount of dirty data in the pool. Since both throughput and + latency typically increase with the number of concurrent operations issued + to physical devices, reducing the burstiness in the number of simultaneous + operations also stabilizes the response time of operations from other + queues, in particular synchronous ones. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there is + more dirty data in the pool.

+
+

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points:

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+

Until the amount of dirty data exceeds a minimum percentage of the + dirty data allowed in the pool, the I/O scheduler will limit the number of + concurrent operations to the minimum. As that threshold is crossed, the + number of concurrent operations issued increases linearly to the maximum at + the specified maximum percentage of the dirty data allowed in the pool.

+

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it + exceeds the maximum percentage, this indicates that the rate of incoming + data is greater than the rate that the backend storage can handle. In this + case, we must further throttle incoming writes, as described in the next + section.

+
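A minimal sketch of moving the two breakpoints named above at runtime, assuming the standard /sys/module/zfs/parameters/ paths; the percentages are illustrative, not recommendations:
echo 20 > /sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent
echo 70 > /sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent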
+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as

+
min_time = min(zfs_delay_scale + × (dirty + - + ) / + ( + - dirty), 100ms)
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be + at or above zfs_vdev_async_write_active_max_dirty_percent, + so that we only start to delay after writing at full speed has failed to + keep up with the incoming write rate. The scale of the curve is defined by + zfs_delay_scale. Roughly speaking, this variable + determines the amount of delay at the midpoint of the curve.

+
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note that, since the delay is added to the outstanding time remaining on the most recent transaction, it's effectively the inverse of IOPS. Here, the midpoint of 500 us translates to 2000 IOPS. The shape of the curve was chosen such that small changes in the amount of accumulated dirty data in the first three quarters of the curve yield relatively small differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a logarithmic scale:

+
+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the back-end storage, and then by changing the value + of zfs_delay_scale to increase the steepness of the + curve.
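To make the inverse-of-IOPS relationship noted above concrete, two worked values (simple arithmetic only):
delay = 500 us  →  1 s / 500 us = 2000 IOPS   (the midpoint shown in the graphs)
delay =   2 ms  →  1 s / 2 ms   =  500 IOPS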

+
+
+ + + + + +
January 9, 2024Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/5/index.html b/man/master/5/index.html new file mode 100644 index 000000000..3849f9324 --- /dev/null +++ b/man/master/5/index.html @@ -0,0 +1,147 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/5/vdev_id.conf.5.html b/man/master/5/vdev_id.conf.5.html new file mode 100644 index 000000000..c86fa5792 --- /dev/null +++ b/man/master/5/vdev_id.conf.5.html @@ -0,0 +1,367 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)File Formats ManualVDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + configuration file for vdev_id(8)

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of + vdev_id(8) while it is mapping a disk device name to an + alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined alias takes precedence + over a topology-derived name, but the two naming methods can otherwise + coexist. For example, one might name drives in a JBOD with the + sas_direct topology while naming an internal L2ARC + device with an alias. +

name is the name of the link to the device that will be created under /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ [pci_slot] port + name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
Additionally create /dev/by-enclosure symlinks to + the disk enclosure + devices + using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for + sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form + /dev/by-enclosure/prefix⟩-⟨channel⟩⟨num⟩ +

Defaults to + “”.

+
+
+ prefix new + [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ yes|no
+
Specifies whether vdev_id(8) will handle only + dm-multipath devices. If set to yes then + vdev_id(8) will examine the first running component disk + of a dm-multipath device as provided by the driver command to determine + the physical path.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+ num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+ bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
read the slot number from the bay identifier.
+
+
read the slot number from the phy identifier.
+
+
use the SAS port as the slot number.
+
+
use the scsi id as the slot number.
+
+
use the scsi lun as the slot number.
+
+
use the SCSI Enclosure Services (SES) enclosure device slot number, as + reported by sg_ses(8). Intended for use only on + systems where bay is unsupported, noting that + port and id may be unstable across + disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note that the channel keyword takes only two arguments in this example:

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path:

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled:

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+In addition to the disk symlinks, this configuration will create: +
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases:

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/dracut.zfs.7.html b/man/master/7/dracut.zfs.7.html new file mode 100644 index 000000000..032dbdfc2 --- /dev/null +++ b/man/master/7/dracut.zfs.7.html @@ -0,0 +1,403 @@ + + + + + + + dracut.zfs.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dracut.zfs.7

+
+ + + + + +
DRACUT.ZFS(7)Miscellaneous Information ManualDRACUT.ZFS(7)
+
+
+

+

dracut.zfs — + overview of ZFS dracut hooks

+
+
+

+
+
                      parse-zfs.sh → dracut-cmdline.service
+                          |                     ↓
+                          |                     …
+                          |                     ↓
+                          \————————→ dracut-initqueue.service
+                                                |                      zfs-import-opts.sh
+   zfs-load-module.service                      ↓                          |       |
+     |                  |                sysinit.target                    ↓       |
+     ↓                  |                       |        zfs-import-scan.service   ↓
+zfs-import-scan.service ↓                       ↓           | zfs-import-cache.service
+     |   zfs-import-cache.service         basic.target      |     |
+     \__________________|                       |           ↓     ↓
+                        ↓                       |     zfs-load-key.sh
+     zfs-env-bootfs.service                     |         |
+                        ↓                       ↓         ↓
+                 zfs-import.target → dracut-pre-mount.service
+                        |          ↑            |
+                        | dracut-zfs-generator  |
+                        | _____________________/|
+                        |/                      ↓
+                        |                   sysroot.mount ←——— dracut-zfs-generator
+                        |                       |
+                        |                       ↓
+                        |             initrd-root-fs.target ←— zfs-nonroot-necessities.service
+                        |                       |                                 |
+                        |                       ↓                                 |
+                        ↓             dracut-mount.service                        |
+       zfs-snapshot-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        ↓                       …                                 |
+       zfs-rollback-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        |          /sysroot/{usr,etc,lib,&c.} ←———————————————————/
+                        |                       |
+                        |                       ↓
+                        |                initrd-fs.target
+                        \______________________ |
+                                               \|
+                                                ↓
+        export-zfs.sh                      initrd.target
+              |                                 |
+              ↓                                 ↓
+   dracut-shutdown.service                      …
+                                                |
+                                                ↓
+                 zfs-needshutdown.sh → initrd-cleanup.service
+
+

Compare dracut.bootup(7) for the full + flowchart.

+
+
+

+

Under dracut, booting with + ZFS-on-/ is facilitated by a + number of hooks in the 90zfs module.

+

Booting into a ZFS dataset requires + mountpoint=/ to be set on the + dataset containing the root filesystem (henceforth "the boot + dataset") and at the very least either the bootfs + property to be set to that dataset, or the root= kernel + cmdline (or dracut drop-in) argument to specify it.

+

All children of the boot dataset with + = + with mountpoints matching /etc, + /bin, /lib, + /lib??, /libx32, + and /usr globs are deemed + essential and will be mounted as well.

+

zfs-mount-generator(8) is recommended for proper + functioning of the system afterward (correct mount properties, remounting, + &c.).

+
+
+

+
+

+
+
dataset, + dataset
+
Use dataset as the boot dataset. All pluses + (‘+’) are replaced with spaces + (‘ ’).
+
, + root=zfs:, + , + [root=]
+
After import, search for the first pool with the bootfs + property set, use its value as-if specified as the + dataset above.
+
rootfstype=zfs root=dataset
+
Equivalent to + root=zfs:dataset.
+
+ [root=]
+
Equivalent to root=zfs:AUTO.
+
flags
+
Mount the boot dataset with -o + flags; cf. + Temporary Mount + Point Properties in zfsprops(7). These properties + will not last, since all filesystems will be re-mounted from the real + root.
+
+
If specified, dracut-zfs-generator logs to the + journal.
+
+

Be careful about setting neither rootfstype=zfs nor root=zfs:dataset — other automatic boot selection methods, like systemd-gpt-auto-generator and systemd-fstab-generator, might take precedence.

+
+
+

+
+
[=snapshot-name]
+
Execute zfs snapshot + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
[=snapshot-name]
+
Execute zfs snapshot + -Rf + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
host-id
+
Use zgenhostid(8) to set the host ID to + host-id; otherwise, + /etc/hostid inherited from the real root is + used.
+
, + zfs.force, zfsforce
+
Appends -f to all zpool + import invocations; primarily useful in + conjunction with spl_hostid=, or if no host ID was + inherited.
+
+
+
+
+
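Putting several of these options together, a hypothetical kernel command line for a root-on-ZFS system might look like the following; the pool, dataset, mount flags, and host ID are placeholders only:
root=zfs:rpool/ROOT/os rootflags=noatime bootfs.snapshot spl_hostid=0x00bab10c zfs.force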

+
+
parse-zfs.sh + ()
+
Processes spl_hostid=. If root= matches a known pattern above, provides /dev/root and delays the initqueue until zfs(4) is loaded.
+
zfs-import-opts.sh + (systemd environment + generator)
+
Turns zfs_force, zfs.force, + or zfsforce into + ZPOOL_IMPORT_OPTS=-f for + zfs-import-scan.service or + zfs-import-cache.service.
+
zfs-load-key.sh + ()
+
Loads encryption keys for the boot dataset and its essential descendants. +
+
+
=
+
Is prompted for via systemd-ask-password + thrice.
+
=URL, + keylocation=URL
+
network-online.target is started before + loading.
+
=path
+
If path doesn't exist, + udevadm is + settled. If it still doesn't, it's waited for + for up to + s.
+
+
+
+
zfs-env-bootfs.service + (systemd service)
+
After pool import, sets BOOTFS= in the systemd + environment to the first non-null bootfs value in + iteration order.
+
dracut-zfs-generator + (systemd generator)
+
Generates sysroot.mount (using + rootflags=, if any). If an + explicit boot dataset was specified, also generates essential mountpoints + (sysroot-etc.mount, + sysroot-bin.mount, + &c.), otherwise generates + zfs-nonroot-necessities.service which mounts them + explicitly after /sysroot using + BOOTFS=.
+
zfs-snapshot-bootfs.service, + zfs-rollback-bootfs.service + (systemd services)
+
Consume bootfs.snapshot and + bootfs.rollback as described in + CMDLINE. Use + BOOTFS= if no explicit boot dataset was + specified.
+
zfs-needshutdown.sh + ()
+
If any pools were imported, signals that shutdown hooks are required.
+
export-zfs.sh + ()
+
Forcibly exports all pools.
+
/etc/hostid, + /etc/zfs/zpool.cache, + /etc/zfs/vdev_id.conf (regular files)
+
Included verbatim, hostonly.
+
mount-zfs.sh + ()
+
Does nothing on systemd systems (if + dracut-zfs-generator + succeeded). Otherwise, loads encryption key for + the boot dataset from the console or via plymouth. It may not work at + all!
+
+
+
+

+

zfsprops(7), + zpoolprops(7), + dracut-shutdown.service(8), + systemd-fstab-generator(8), + systemd-gpt-auto-generator(8), + zfs-mount-generator(8), + zgenhostid(8)

+
+
+ + + + + +
March 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/index.html b/man/master/7/index.html new file mode 100644 index 000000000..9c6a642bd --- /dev/null +++ b/man/master/7/index.html @@ -0,0 +1,159 @@ + + + + + + + Miscellaneous (7) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/master/7/vdevprops.7.html b/man/master/7/vdevprops.7.html new file mode 100644 index 000000000..392ea17c5 --- /dev/null +++ b/man/master/7/vdevprops.7.html @@ -0,0 +1,332 @@ + + + + + + + vdevprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdevprops.7

+
+ + + + + +
VDEVPROPS(7)Miscellaneous Information ManualVDEVPROPS(7)
+
+
+

+

vdevpropsnative + and user-defined properties of ZFS vdevs

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate vdevs in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every vdev has a set of properties that export statistics about + the vdev as well as control various behaviors. Properties are not inherited + from top-level vdevs, with the exception of checksum_n, checksum_t, io_n, + io_t, slow_io_n, and slow_io_t.

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase.

+

The following native properties consist of read-only statistics + about the vdev. These properties can not be changed.
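As a sketch (the pool name tank and the vdev names sda and raidz1-0 are hypothetical), these statistics can be inspected with zpool get, which accepts optional vdev names after the pool name:
zpool get size,state,fragmentation tank raidz1-0
zpool get all tank sda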

+
+
+
Percentage of vdev space used
+
+
state of this vdev such as online, faulted, or offline
+
+
globally unique id of this vdev
+
+
The allocable size of this vdev
+
+
The physical size of this vdev
+
+
The physical sector size of this vdev expressed as the power of two
+
+
The total size of this vdev
+
+
The amount of remaining free space on this vdev
+
+
The amount of allocated space on this vdev
+
+
How much this vdev can expand by
+
+
Percent of fragmentation in this vdev
+
+
The level of parity for this vdev
+
+
The device id for this vdev
+
+
The physical path to the device
+
+
The enclosure path to the device
+
+
Field Replaceable Unit, usually a model number
+
+
Parent of this vdev
+
+
Comma separated list of children of this vdev
+
+
The number of children belonging to this vdev
+
, + , + , +
+
The number of errors of each type encountered by this vdev
+
, + , + , + , + , +
+
The number of I/O operations of each type performed by this vdev
+
, + , + , + , + , +
+
The cumulative size of all operations of each type performed by this + vdev
+
+
If this device is currently being removed from the pool
+
+

The following native properties can be used to change the behavior + of a vdev.

+
+
, + , + , + , + , +
+
Tune the fault management daemon by specifying checksum/io thresholds of <N> errors in <T> seconds, respectively. These properties can be set on leaf and top-level vdevs. When the property is set on both the leaf and the top-level vdev, the value of the leaf vdev is used. If the property is set only on the top-level vdev, that value is used. The values of these properties do not persist across vdev replacement; for this reason, it is advisable to set the property on the top-level vdev rather than on the leaf vdev itself. The default values are 10 errors in 600 seconds.
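For instance, to set the I/O error threshold on a hypothetical top-level vdev to 5 errors in 60 seconds (pool and vdev names, and the values, are illustrative only):
zpool set io_n=5 tank raidz1-0
zpool set io_t=60 tank raidz1-0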
+
+
A text comment up to 8192 characters long
+
+
The amount of space to reserve for the EFI system partition
+
+
If this device should propagate BIO errors back to ZFS, used to disable failfast.
+
+
The path to the device for this vdev
+
+
If this device should perform new allocations, used to disable a device + when it is scheduled for later removal. See + zpool-remove(8).
+
+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate vdevs.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. Use the zpool set + command with a blank value to clear a user property. Property values are + limited to 8192 bytes.
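As an illustration (the property name, pool name, and vdev name are hypothetical), a user property can be set and later cleared with a blank value:
zpool set com.example:location='rack 12, bay 3' tank sda
zpool set com.example:location= tank sda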

+
+
+
+

+

zpoolprops(7), + zpool-set(8)

+
+
+ + + + + +
October 30, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zfsconcepts.7.html b/man/master/7/zfsconcepts.7.html new file mode 100644 index 000000000..54dab3ba3 --- /dev/null +++ b/man/master/7/zfsconcepts.7.html @@ -0,0 +1,326 @@ + + + + + + + zfsconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.7

+
+ + + + + +
ZFSCONCEPTS(7)Miscellaneous Information ManualZFSCONCEPTS(7)
+
+
+

+

zfsconcepts — + overview of ZFS concepts

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system: it can be mounted and unmounted, snapshots can be taken of it, and properties can be set on it. The physical storage characteristics, however, are managed by the zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes space by continuing to reference data that is no longer shared with the active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; their visibility is determined by the snapdev property of the parent volume.

+

File system snapshots can be accessed under the .zfs/snapshot directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the .zfs directory can be controlled by the snapdir property.
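A minimal sketch (the dataset name and mountpoint are hypothetical, assuming the default mountpoint of /pool/home):
zfs snapshot pool/home@monday
ls /pool/home/.zfs/snapshot/monday
The listing shows the files as they existed when the snapshot was taken.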

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the filesystem in any way. From a storage standpoint a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.
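A minimal sketch of the promotion workflow (all names hypothetical):
zfs snapshot pool/project@base
zfs clone pool/project@base pool/project-v2
zfs promote pool/project-v2
After the promote, pool/project becomes a clone of pool/project-v2, so the original file system can be destroyed if it is no longer needed.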

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.

+

A file system mountpoint property of none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/fstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. Because pools must be imported before a legacy mount can succeed, administrators should ensure that legacy mounts are only attempted after the zpool import process finishes at boot time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for + details.
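For instance, a legacy-mounted dataset could be handled entirely through /etc/fstab (dataset name and mount directory hypothetical):
zfs set mountpoint=legacy pool/data
pool/data  /mnt/data  zfs  defaults,x-systemd.requires=zfs-import.target  0  0
The second line is the corresponding /etc/fstab entry.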

+
+
+

+

Deduplication is the process for removing redundant data at the block level, reducing the total amount of data stored. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow I/O and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk I/O.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using the compression property as a less resource-intensive alternative.
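As a sketch (the dataset name is hypothetical), deduplication is enabled per dataset, while compression is often the better first choice:
zfs set dedup=on pool/backups
zfs set compression=on pool/backups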

+
+
+

+

Block cloning is a facility that allows a file (or parts of a + file) to be "cloned", that is, a shallow copy made where the + existing data blocks are referenced rather than copied. Later modifications + to the data will cause a copy of the data block to be taken and that copy + modified. This facility is used to implement "reflinks" or + "file-level copy-on-write".

+

Cloned blocks are tracked in a special on-disk structure called + the Block Reference Table (BRT). Unlike deduplication, this table has + minimal overhead, so can be enabled at all times.

+

Also unlike deduplication, cloning must be requested by a user + program. Many common file copying programs, including newer versions of + /bin/cp, will try to create clones automatically. + Look for "clone", "dedupe" or "reflink" in the + documentation for more information.
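For example, on Linux with a sufficiently recent GNU coreutils, a clone can be requested explicitly (file names hypothetical); whether it succeeds depends on the pool and kernel support described below:
cp --reflink=always large.img large-copy.img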

+

There are some limitations to block cloning. Only + whole blocks can be cloned, and blocks can not be cloned if they are not yet + written to disk, or if they are encrypted, or the source and destination + + properties differ. The OS may add additional restrictions; for example, most + versions of Linux will not allow clones across datasets.

+
+
+
+ + + + + +
October 6, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zfsprops.7.html b/man/master/7/zfsprops.7.html new file mode 100644 index 000000000..4691a24d6 --- /dev/null +++ b/man/master/7/zfsprops.7.html @@ -0,0 +1,1553 @@ + + + + + + + zfsprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsprops.7

+
+ + + + + +
ZFSPROPS(7)Miscellaneous Information ManualZFSPROPS(7)
+
+
+

+

zfspropsnative + and user-defined properties of ZFS datasets

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section of + zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable. See zfs load-key and zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the + objsetid of a dataset is not transferred to other pools + when the snapshot is copied with a send/receive operation. The + objsetid can be reused (for a new dataset) after the + dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive + -s, this opaque token can be provided to + zfs send + -t to resume and complete the + zfs receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, snapshot, volume, or bookmark.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of + zfsconcepts(7)) is space that is referenced + exclusively by this snapshot. If this snapshot is destroyed, the amount + of used space will be freed. Space that is shared by + multiple snapshots isn't accounted for in this metric. When a snapshot + is destroyed, space that was previously shared with this snapshot can + become unique to snapshots adjacent to it, thus changing the used space + of those snapshots. The used space of the latest snapshot can also be + affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du + and ls + -s. See the zfs + userspace command for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@ + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.

+
+
@user
+
The userobjused property is similar to userused but instead it counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user; it may differ from the results of system tools such as df -i.

When the property xattr=on + is set on a file system additional objects will be created per-file to + store extended attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal + objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. A project is identified by its project identifier (ID), a numeric attribute stored on each object. An object can inherit the project ID from its parent object when it is created, if the parent carries the inherit-project-ID flag (which can be set and changed via chattr -/+P or zfs project -s). A privileged user can set and change an object's project ID via chattr -p or zfs project -s at any time. Space is charged to the project of each file, as displayed by lsattr -p or zfs project. See the userused@user property for more information.

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused is similar to projectused but instead it counts the number of objects consumed by the project. When the property xattr=on is set on a fileset, ZFS will create additional objects per-file to store extended attributes. These additional objects are reflected in the projectobjused value and are counted against the project's projectobjquota. When a filesystem is configured to use xattr=sa no additional internal objects are required. See the userobjused@user property for more information.

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
Provides a mechanism to quickly determine whether snapshot list has + changed without having to mount a dataset or iterate the snapshot list. + Specifies the time at which a snapshot for a dataset was last created or + deleted. +

This makes it possible to query snapshot information more efficiently and less frequently. The property is persistent across mount and unmount operations only if the extensible_dataset feature is enabled.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 16 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which + for clones may be a snapshot in the origin's filesystem (or the origin + of the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the write_acl and write_owner permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode: +
+
+
+
default, deletes all + + except for those representing the mode of the file or directory + requested by chmod(2).
+
+
reduces permissions granted in all + + entries found in the + + such that they are no greater than the group permissions specified by + chmod(2).
+
+
indicates that no changes are made to the ACL other than creating or + updating the necessary ACL entries to represent the new mode of the + file or directory.
+
+
will cause the chmod(2) operation to return an error + when used on any file or directory which has a non-trivial ACL whose + entries can not be represented by a mode. chmod(2) + is required to change the set user ID, set group ID, or sticky bits on + a file or directory, as they do not have equivalent ACL entries. In + order to use chmod(2) on a file or directory with a + non-trivial ACL when aclmode is set to + restricted, you must first remove all ACL entries + which do not represent the current mode.
+
+
+
+
=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
+
default on Linux, when a file system has the acltype + property set to off then ACLs are disabled.
+
+
an alias for off
+
+
default on FreeBSD, indicates that NFSv4-style + ZFS ACLs should be used. These ACLs can be managed with the + getfacl(1) and setfacl(1). The + nfsv4 ZFS ACL type is not yet supported on + Linux.
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+
an alias for posix
+
+
+

To obtain the best performance when setting + posix users are strongly encouraged to set the + xattr=sa property. This will result + in the POSIX ACL being stored more efficiently on disk. But as a + consequence, all new extended attributes will only be accessible from + OpenZFS implementations which support the + xattr=sa property. See the + xattr property for more details.

+
+
=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
=on|off||fletcher4|sha256|noparity|sha512|skein|edonr|blake3
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, + edonr, and blake3 checksum + algorithms require enabling the appropriate features on the pool.

+

Please see zpool-features(7) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

When set to on (the default), indicates that + the current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm + is a high-performance replacement for the lzjb + algorithm. It features significantly faster compression and + decompression, as well as a moderately higher compression ratio than + lzjb, but can only be used on pools with the + lz4_compress feature set to + . See + zpool-features(7) for details on ZFS feature flags and + the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-10, 20, 30, ..., 100, 500, 1000] which maps to a negative zstd level. The lower the level the faster the compression; 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example, 8 KiB + blocks on disks with 4 KiB disk sectors must compress to 1/2 or less of + their original size.
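For example (the dataset name is hypothetical), a specific algorithm and level can be selected and the resulting ratio inspected afterwards:
zfs set compression=zstd-3 pool/data
zfs get compression,compressratio pool/data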

+
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
=1||
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing top-level vdev. Do not create, for example, a two-disk striped pool and set copies=2 on some datasets thinking you have set up redundancy for them. When a disk fails, you will not be able to import the pool and will have lost all of your data.

+

Encrypted datasets may not have + copies=3 since the + implementation stores some encryption metadata where the third copy + would normally be.
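A minimal sketch, setting the property at file system creation time as recommended above (names hypothetical):
zfs create -o copies=2 pool/important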

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
+
=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify|blake3[,verify]
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + sha256,verify. +

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should not be enabled on a system. See the Deduplication section of zfsconcepts(7).

+
+
=legacy|auto|||||
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy + requires the large_dnode + pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the + workload makes heavy use of extended attributes. This may be applicable + to SELinux-enabled systems, Lustre servers, and Samba servers, for + example. Literal values are supported for cases where the optimal size + is known in advance and for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode + feature, or if you need to import this pool on a system that doesn't + support the large_dnode + feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
=off|on||||||aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).

+
+
=||passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
# dd if=/dev/urandom bs=32 count=1 of=/path/to/output/key
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
=prompt|/absolute/file/path|address|address
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is prompt. +

Even though the encryption suite cannot + be changed after dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via the + standard input stream, but users should be careful not to place keys + which should be kept secret on the command line. If a file URI is + selected, the key will be loaded from the specified absolute file path. + If an HTTPS or HTTP URL is selected, it will be GETted using + fetch(3), libcurl, or nothing, depending on + compile-time configuration and run-time availability. The + + environment variable can be set to set the location of the concatenated + certificate store. The + + environment variable can be set to override the location of the + directory containing the certificate authority bundle. The + + and + + environment variables can be set to configure the path to the client + certificate and its key.
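As a sketch (the key path and dataset name are hypothetical), an encryption root using a raw key file could be created like this:
dd if=/dev/urandom of=/root/secure.key bs=32 count=1
zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///root/secure.key pool/secure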

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
+
=on|off
+
Controls internal zvol threading. The value off disables + zvol threading, and zvol relies on application threads. The default value + is on, which enables threading within a zvol. Please + note that this property will be overridden by + + module parameter. This property is only applicable to Linux.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(7)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512 up to 1048576 (1 MiB). The default size is 0 which + means no small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(7) for more + details on the special allocation class.

+
+
=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section of + zfsconcepts(7) for more information on how this property + is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none. In addition, any shared file systems are + unshared and shared in the new location.

+

When the mountpoint property is set with zfs set -u, the mountpoint property is updated but the dataset is not mounted or unmounted and remains as it was before.
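For example (names hypothetical):
zfs set mountpoint=/export/data pool/data
zfs set -u mountpoint=/export/data pool/data
The first form remounts the file system at the new location; the second, per the preceding paragraph, only updates the property.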

+
+
=on|off
+
Controls whether the file system should be mounted with + nbmand (Non-blocking mandatory locks). Changes to this + property only take effect when the file system is umounted and remounted. + This was only supported by Linux prior to 5.15, and was buggy there, and + is not supported by FreeBSD. On Solaris it's used + for SMB clients.
+
=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the + property is on by default. Set to off + to disable overlay mounts for consistency with OpenZFS on other + platforms.
+
=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
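A minimal sketch (the dataset name and size are hypothetical); the second command removes the limit again:
zfs set quota=100G pool/home
zfs set quota=none pool/home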

+
+
=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(7)).
+
user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + user + property. +

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace command + for more information.

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@ properties + are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
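For instance (the user and dataset names are hypothetical):
zfs set userquota@joe=50G pool/home
zfs get userquota@joe pool/home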

+
+
user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
project=size|none
+
The projectobjquota is similar to + projectquota but it limits number of objects a project + can consume. Please refer to userobjused for more + information about how objects are counted.
+
=on|off
+
Controls whether this dataset can be modified. The default value is + off. The values on and + off are equivalent to the + and + mount + options. +

This property can also be referred to by its + shortened column name, + .

+
+
=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two + greater than or equal to 512 B and less than or + equal to 128 KiB. If the + + feature is enabled on the pool, the size may be up to 1 + MiB. See zpool-features(7) for details on ZFS + feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its shortened column name, recsize.
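As a sketch for a database workload using fixed 16 KiB records (the dataset name is hypothetical):
zfs set recordsize=16K pool/db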

+
+
=all|most|some|none
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 1000 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

When set to some, ZFS stores an extra copy + of only critical metadata. This can improve file create performance + since less metadata needs to be written. If a single on-disk block is + corrupt, at worst a single user file can be lost.

+

When set to none, ZFS does not store any + copies of metadata redundantly. If a single on-disk block is corrupt, an + entire dataset can be lost.

+

The default value is all.

+
+
=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its + shortened column name, + .

+
+
=on|off
+
Controls the manner in which the access time is updated when + atime=on is set. Turning this property + on causes the access time to be updated relative to the modify or change + time. Access time is only updated if the previous access time was earlier + than the current modify or change time or if the existing access time + hasn't been updated within the past 24 hours. The default value is + on. The values on and + off are equivalent to the relatime and + + mount options.
+
=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=all|none|metadata
+
Controls what speculative prefetch does. If this property is set to + all, then both user data and metadata are prefetched. If + this property is set to none, then neither user data nor + metadata are prefetched. If this property is set to + metadata, then only metadata are prefetched. The default + value is all. +

Please note that the module parameter zfs_prefetch_disable=1 + can be used to totally disable speculative prefetch, bypassing anything + this property does.
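For example, prefetch could be limited to metadata on a hypothetical dataset whose application performs its own read-ahead:
# zfs set prefetch=metadata pool/db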

+
+
=on|off
+
Controls whether the setuid bit is respected for the file system. The + default value is on. The values on and + off are equivalent to the + and + nosuid mount options.
+
=on|off|opts
+
Controls whether the file system is shared by using + and what options are to be used. Otherwise, the file + system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the net(8) command is invoked + to create a + . +

Because SMB shares require a resource name, a unique resource + name is constructed from the dataset name. The constructed name is a + copy of the dataset name, except that any characters in the dataset name + which would be invalid in the resource name are replaced with + underscore (_) characters. Linux does not currently support additional + options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", i.e. read and write permissions) and no guest access + (which means Samba must be able to authenticate a real user — + passwd(5)/shadow(5)-, LDAP- or + smbpasswd(5)-based) by default. This means that any + additional access control (such as disallowing access for specific users) + must be done on the underlying file system.

+

When the sharesmb property is updated with + zfs set + -u , the property is set to the desired value, but + the operation to share, reshare or unshare the dataset is not + performed.
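For example (dataset name hypothetical), the share can be enabled, or its stored value updated without re-sharing, with:
# zfs set sharesmb=on pool/export/projects
# zfs set -u sharesmb=on pool/export/projects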

+
+
=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the /etc/exports file. + Otherwise, the file system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +
sec=sys,rw,crossmnt,no_subtree_check
+

Please note that the options are comma-separated, unlike those + found in exports(5). This is done to negate the need + for quoting, as well as to make parsing with scripts easier.

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+

When the sharenfs property is updated with + zfs set + -u , the property is set to the desired value, but + the operation to share, reshare or unshare the dataset is not + performed.
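For example (dataset name hypothetical), using the comma-separated option form described above:
# zfs set sharenfs=on pool/export/home
# zfs set sharenfs=ro,crossmnt,no_subtree_check pool/export/home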

+
+
=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
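For example, a hypothetical dataset dominated by large streaming writes could bypass the log devices:
# zfs set logbias=throughput pool/backup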
+
=hidden|visible
+
Controls whether the volume snapshot devices under + /dev/zvol/⟨pool⟩ + are hidden or visible. The default value is hidden.
+
=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of + zfsconcepts(7). The default value is + hidden.
+
=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX-specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
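For example (dataset name hypothetical), forcing every transaction to stable storage, then restoring the default:
# zfs set sync=always pool/db
# zfs set sync=standard pool/db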
+
=N|
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioned") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the value of the refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. A volume that is not sparse is + said to be "thick provisioned". A sparse volume can become + thick provisioned by setting refreservation to + auto.
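For example, a hypothetical 100 GiB sparse volume could be created and later converted to thick provisioning with:
# zfs create -s -V 100G pool/vol1
# zfs set refreservation=auto pool/vol1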

+
+
=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting + it to full exposes volumes as fully fledged block + devices, providing maximal functionality. The value geom + is just an alias for full and is kept for compatibility. + Setting it to dev hides its partitions. Volumes with the + property set to none are not exposed outside ZFS, but + can still be snapshotted, cloned, replicated, and so on, which can be suitable for + backup purposes. The value default means that volume + exposure is controlled by the system-wide tunable + , + where full, dev and + none are encoded as 1, 2 and 3 respectively. The default + value is full.
+
=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used by OpenZFS.
+
=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory-based or + system-attribute-based. +

The default value of on enables + directory-based extended attributes. This style of extended attribute + imposes no practical limit on either the size or number of attributes + which can be set on a file, although under Linux the + getxattr(2) and setxattr(2) system + calls limit the maximum size to 64K. This is the most + compatible style of extended attribute and is supported by all ZFS + implementations.

+

System-attribute-based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk I/O required. Up + to 64K of data may be stored per-file in the space + reserved for system attributes. If there is not enough space available + for an extended attribute then it will be automatically written as a + directory-based xattr. System-attribute-based extended attributes are + not accessible on platforms which do not support the + xattr=sa feature. OpenZFS supports + xattr=sa on both + FreeBSD and Linux.

+

The use of system-attribute-based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and + off are equivalent to the xattr and + mount + options.
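For example, on a hypothetical dataset used with SELinux or POSIX ACLs:
# zfs set xattr=sa pool/containers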

+
+
=off|on
+
Controls whether the dataset is managed from a jail. See + zfs-jail(8) for more information. Jails are a + FreeBSD feature and this property is not available + on other platforms.
+
=off|on
+
Controls whether the dataset is managed from a non-global zone or + namespace. See zfs-zone(8) for more information. Zoning + is a Linux feature and this property is not available on other + platforms.
+
+

The following three properties cannot be changed after the file + system is created, and therefore should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
=sensitive||mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".
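For example, because the property cannot be changed later, a hypothetical dataset intended for SMB clients could be created with mixed matching from the start:
# zfs create -o casesensitivity=mixed pool/smbshare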

+
+
=none||||
+
Indicates whether the file system should perform a + + normalization of file names whenever two file names are compared, and + which normalization algorithm should be used. File names are always stored + unmodified; names are normalized only as part of the comparison process. If + this property is set to a legal value other than none, + and the utf8only property was left unspecified, the + utf8only property is automatically set to + on. The default value of the + normalization property is none. This + property cannot be changed after the file system is created.
+
=on|off
+
Indicates whether the file system should reject file names that include + characters that are not present in the + + character code set. If this property is explicitly set to + off, the normalization property must either not be + explicitly set or be set to none. The default value for + the utf8only property is off. This + property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
+
+
atime/noatime
+
+
auto/noauto
+
+
dev/nodev
+
+
exec/noexec
+
+
ro/rw
+
+
relatime/norelatime
+
+
suid/nosuid
+
+
xattr/noxattr
+
+
mand/nomand
+
=
+
context=
+
=
+
fscontext=
+
=
+
defcontext=
+
=
+
rootcontext=
+
+
+

In addition, these options can be set on a + per-mount basis using the -o option, without + affecting the property that is stored on disk. The values specified on the + command line override the values stored in the dataset. The + nosuid option is an alias for + ,. + These properties are reported as "temporary" by the + zfs get command. If the + properties are changed while the dataset is mounted, the new setting + overrides any temporary settings.
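For example, a hypothetical file system could be mounted read-only temporarily, without changing the readonly property stored on disk:
# zfs mount -o ro pool/data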

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon + (":") character to distinguish them from native + properties. They may contain lowercase letters, numbers, and the following + punctuation characters: colon (":"), dash + ("-"), period + (""), and + underscore + (""). + The expected convention is that the property name is divided into two + portions such as + module:property, but this + namespace is not enforced by ZFS. User property names can be at most 256 + characters, and cannot begin with a dash + ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
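For example (property name and dataset hypothetical, using the reversed-DNS convention):
# zfs set com.example:backup-policy=weekly pool/data
# zfs get com.example:backup-policy pool/data
# zfs inherit com.example:backup-policy pool/data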

+
+
+
+ + + + + +
August 8, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zpool-features.7.html b/man/master/7/zpool-features.7.html new file mode 100644 index 000000000..451f98b5a --- /dev/null +++ b/man/master/7/zpool-features.7.html @@ -0,0 +1,1254 @@ + + + + + + + zpool-features.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.7

+
+ + + + + +
ZPOOL-FEATURES(7)Miscellaneous Information ManualZPOOL-FEATURES(7)
+
+
+

+

zpool-features — + description of ZFS pool features

+
+
+

+

ZFS pool on-disk format versions are specified via + “features” which replace the old on-disk format numbers (the + last supported on-disk format number is 28). To enable a feature on a pool, + use the zpool upgrade command, or + set the feature@feature-name + property to enabled. Please also see the + Compatibility feature + sets section for information on how sets of features may be enabled + together.

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

Since most features can be enabled independently of each other, + the on-disk format of the pool is specified by the set of all features + marked as active on the pool. If the pool was created by + another software version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature-name. The + reversed DNS name ensures that the feature's GUID is unique across all ZFS + implementations. When unsupported features are encountered on a pool they + will be identified by their GUIDs. Refer to the documentation for the ZFS + implementation that created the pool for information about those + features.

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the + ‘:’ (i.e. + com.example:feature-name would + have the short name feature-name), however a feature's + short name may differ across ZFS implementations if following the convention + would result in name conflicts.

+
+
+

+

Features can be in one of three states:

+
+
+
This feature's on-disk format changes are in effect on the pool. Support + for this feature is required to import the pool in read-write mode. If + this feature is not read-only compatible, support is also required to + import the pool in read-only mode (see + Read-only + compatibility).
+
+
An administrator has marked this feature as enabled on the pool, but the + feature's on-disk format changes have not been made yet. The pool can + still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support + returning to the enabled state after becoming + active. See feature-specific documentation for + details.
+
+
This feature's on-disk format changes have not been made and will not be + made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they + have been enabled.
+
+

The state of supported features is exposed through pool properties + of the form feature@short-name.
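For example, the state of a single feature on a hypothetical pool named tank could be inspected with:
# zpool get feature@async_destroy tank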

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as “read-only compatible”. If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly + property during import (see zpool-import(8) for details on + importing pools).
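For example, a pool (name hypothetical) whose unsupported features are all read-only compatible could be imported with:
# zpool import -o readonly=on tank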

+
+
+

+

For each unsupported feature enabled on an imported pool, a pool + property named + @feature-name + will indicate why the import was allowed despite the unsupported feature. + Possible values for this property are:

+
+
+
The feature is in the enabled state and therefore the + pool's on-disk format is still compatible with software that does not + support this feature.
+
+
The feature is read-only compatible and the pool has been imported in + read-only mode.
+
+
+
+

+

Some features depend on other features being enabled in order to + function. Enabling a feature will automatically enable any features it + depends on.

+
+
+

+

It is sometimes necessary for a pool to maintain compatibility + with a specific on-disk format, by enabling and disabling particular + features. The compatibility feature facilitates this by + allowing feature sets to be read from text files. When set to + (the + default), compatibility feature sets are disabled (i.e. all features are + enabled); when set to legacy, no features are enabled. + When set to a comma-separated list of filenames (each filename may either be + an absolute path, or relative to + /etc/zfs/compatibility.d or + /usr/share/zfs/compatibility.d), the lists of + requested features are read from those files, separated by whitespace and/or + commas. Only features present in all files are enabled.

+

Simple sanity checks are applied to the files: they must be + between 1 B and 16 KiB in size, and must end with a newline character.

+

The requested features are applied when a pool is created using + zpool create + -o + compatibility= and controls + which features are enabled when using zpool + upgrade. zpool + status will not show a warning about disabled + features which are not part of the requested feature set.

+

The special value legacy prevents any features + from being enabled, either via zpool + upgrade or zpool + set + feature@feature-name=enabled. + This setting also prevents pools from being upgraded to newer on-disk + versions. This is a safety measure to prevent new features from being + accidentally enabled, breaking compatibility.

+

By convention, compatibility files in + /usr/share/zfs/compatibility.d are provided by the + distribution, and include feature sets supported by important versions of + popular distributions, and feature sets commonly supported at the start of + each year. Compatibility files in + /etc/zfs/compatibility.d, if present, will take + precedence over files with the same name in + /usr/share/zfs/compatibility.d.

+

If an unrecognized feature is found in these files, an error + message will be shown. If the unrecognized feature is in a file in + /etc/zfs/compatibility.d, this is treated as an + error and processing will stop. If the unrecognized feature is under + /usr/share/zfs/compatibility.d, this is treated as a + warning and processing will continue. This difference is to allow + distributions to include features which might not be recognized by the + currently-installed binaries.

+

Compatibility files may include comments: any text from + ‘#’ to the end of the line is ignored.

+

:

+
+
example# cat /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2
+allocation_classes
+async_destroy
+block_cloning
+bookmarks
+device_rebuild
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+livelist
+log_spacemap
+lz4_compress
+project_quota
+resilver_defer
+spacemap_histogram
+spacemap_v2
+userobj_accounting
+zilsaxattr
+zpool_checkpoint
+
+example# zpool create -o compatibility=grub2 bootpool vdev
+
+

See zpool-create(8) and + zpool-upgrade(8) for more information on how these + commands are affected by feature sets.

+
+
+
+

+

The following features are supported on this system:

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables support for separate allocation + classes.

+

This feature becomes active when a dedicated + allocation class vdev (dedup or special) is created with the + zpool create + or zpool + add commands. With + device removal, it can be returned to the enabled + state if all the dedicated allocation class vdevs are removed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Destroying a file system requires traversing all of its data + in order to return its used space to the pool. Without + async_destroy, the file system is not fully removed + until all space has been reclaimed. If the destroy operation is + interrupted by a reboot or power outage, the next attempt to open the + pool will need to complete the destroy operation synchronously.

+

When async_destroy is enabled, the file + system's data will be reclaimed by a background process, allowing the + destroy operation to complete without traversing the entire file system. + The background process is able to resume interrupted destroys after the + pool has been opened, eliminating the need to finish interrupted + destroys as part of the open operation. The amount of space remaining to + be reclaimed by the background process is available through the + freeing property.

+

This feature is only active while + freeing is non-zero.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the BLAKE3 hash algorithm for + checksum and dedup. BLAKE3 is a secure hash algorithm focused on high + performance.

+

When the blake3 feature is set to + enabled, the administrator can turn on the + blake3 checksum on any dataset using + zfs set + checksum=blake3 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + blake3, and will return to being + enabled once all filesystems that have ever had their + checksum set to blake3 are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

When this feature is enabled, ZFS will use + block cloning for operations like + (2). + Block cloning allows multiple references to a single block to be created. It + is much faster than copying the data (as the actual data is neither read + nor written) and takes no additional space. Blocks can be cloned across + datasets under some conditions (like equal + recordsize, the same master encryption key, + etc.). ZFS tries its best to clone across datasets including encrypted + ones. This is limited for various (nontrivial) reasons depending on the + OS and/or ZFS internals.

+

This feature becomes active when first block + is cloned. When the last cloned block is freed, it goes back to the + enabled state.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables use of the zfs + bookmark command.

+

This feature is active while + any bookmarks exist in the pool. All bookmarks in the pool can be listed + by running zfs list + -t + + -r poolname.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 + bookmark is created and will be returned to the + enabled state when all v2 bookmarks are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset, bookmark_v2
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables additional bookmark + accounting fields, enabling the + #bookmark + property (space written since a bookmark) and estimates of send stream + sizes for incrementals from bookmarks.

+

This feature becomes active when a bookmark + is created and will be returned to the enabled state + when all bookmarks with these fields are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the ability for the + zpool attach and + zpool replace commands + to perform sequential reconstruction (instead of healing reconstruction) + when resilvering.

+

Sequential reconstruction resilvers a device in LBA order + without immediately verifying the checksums. Once complete, a scrub is + started, which then verifies the checksums. This approach allows full + redundancy to be restored to the pool in the minimum amount of time. + This two-phase approach will take longer than a healing resilver when + the time to verify the checksums is included. However, unless there is + additional pool damage, no checksum errors should be reported by the + scrub. This feature is incompatible with raidz configurations. This + feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver + completes.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + remove command to remove top-level vdevs, + evacuating them to reduce the total size of the pool.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables use of the draid vdev + type. dRAID is a variant of RAID-Z which provides integrated distributed + hot spares that allow faster resilvering while retaining the benefits of + RAID-Z. Data, parity, and spare space are organized in redundancy groups + and distributed evenly over all of the devices.

+

This feature becomes active when creating a + pool which uses the draid vdev type, or when adding a + new draid vdev to an existing pool.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Edon-R hash + algorithm for checksum, including for nopwrite (if compression is also + enabled, an overwrite of a block whose checksum matches the data being + written will be ignored). In an abundance of caution, Edon-R requires + verification when used with dedup: zfs + set + =edonr, + (see zfs-set(8)).

+

Edon-R is a very high-performance hash algorithm that was part + of the NIST SHA-3 competition. It provides extremely high hash + performance (over 350% faster than SHA-256), but was not selected + because of its unsuitability as a general purpose secure hash algorithm. + This implementation utilizes the new salted checksumming functionality + in ZFS, which means that the checksum is pre-seeded with a secret + 256-bit random key (stored on the pool) before being fed the data block + to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the edonr feature is set to + enabled, the administrator can turn on the + edonr checksum on any dataset using + zfs set + checksum=edonr + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + edonr, and will return to being + enabled once all filesystems that have ever had their + checksum set to edonr are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 + bytes or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of + highly-compressible blocks are stored in the block + “pointer” itself (a misnomer in this case, as it contains + the compressed data, rather than a pointer to its location on disk). + Thus the space of the block (one sector, typically 512 B or 4 KiB) is + saved, and no additional I/O is needed to read and write the data block. + This feature becomes active + as soon as it is enabled and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also + reduces the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobjs) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobjs are empty. This + feature allows us to create each bpobj on-demand, thus eliminating the + empty bpobjs.

+

This feature is active while there are any + filesystems, volumes, or snapshots which were created after enabling + this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Once this feature is enabled, ZFS records the transaction + group number in which new features are enabled. This has no user-visible + impact, but other features may depend on this feature.

+

This feature becomes active as soon as it is + enabled and will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark_v2, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an + encrypted dataset is created and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first + dependent feature uses it, and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables filesystem and snapshot limits. These + limits can be used to control how many filesystems and/or snapshots can + be created at the point in the tree on which the limits are set.

+

This feature is active once either of the + limit properties has been set on a dataset and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the upgraded version of errlog, which + required an on-disk error log format change. Now the error log of each + head dataset is stored separately in the zap object and keyed by the + head id. With this feature enabled, every dataset affected by an error + block is listed in the output of zpool + status. For encrypted filesystems with + unloaded keys, snapshots and clones cannot be checked for + errors and will not be listed; an "access denied" + error will be reported instead.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
enabled_txg
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature has/had bugs, + the result of which is that, if you do a zfs + send -i (or + -R, since it uses + -i) from an affected dataset, the receiving + party will not see any checksum or other errors, but the resulting + destination snapshot will not match the source. Its use by + zfs send + -i has been disabled by default (see + + in zfs(4)).

+

This feature improves performance of incremental sends + (zfs send + -i) and receives for objects with many holes. + The most common case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A + to snapshot B contains + information about every block that changed between A + and B. Blocks which did not + change between those snapshots can be identified and omitted from the + stream using a piece of metadata called the “block birth + time”, but birth times are not recorded for holes (blocks filled + only with zeroes). Since holes created after A + cannot be distinguished from holes created + before A, information about every hole in the + entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. + However, when incrementally replicating filesystems or zvols with many + holes (for example a zvol formatted with another filesystem) a lot of + time will be spent sending and receiving unnecessary information about + holes that already exist on the receiving side.

+

Once the hole_birth feature has been enabled + the block birth times of all new holes will be recorded. Incremental + sends between snapshots created after this feature is enabled will use + this new metadata to avoid sending information about holes that already + exist on the receiving side.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the record size on a dataset to be set + larger than 128 KiB.

+

This feature becomes active once a dataset + contains a file with a block size larger than 128 KiB, and will return + to being enabled once all filesystems that have ever + had their recordsize larger than 128 KiB are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the size of dnodes in a + dataset to be set larger than 512 B. This feature becomes + active once a dataset contains an object with a dnode + larger than 512 B, which occurs as a result of setting the + + dataset property to a value other than legacy. The + feature will return to being enabled once all + filesystems that have ever contained a dnode larger than 512 B are + destroyed. Large dnodes allow more data to be stored in the bonus + buffer, thus potentially improving performance by avoiding the use of + spill blocks.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows clones to be deleted faster than the + traditional method when a large number of random/sparse writes have been + made to the clone. All blocks allocated and freed after a clone is + created are tracked by the clone's livelist, which is referenced + during the deletion of the clone. The feature is activated when a clone + is created and remains active until all clones have + been destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
com.delphix:spacemap_v2
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature improves performance for heavily-fragmented + pools, especially for random-write-heavy workloads. It does so + by logging all the metaslab changes on a single spacemap every TXG + instead of scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

lz4 is a high-performance real-time + compression algorithm that features significantly faster compression and + decompression as well as a higher compression ratio than the older + lzjb compression. Typically, lz4 + compression is approximately 50% faster on compressible data and 200% + faster on incompressible data than lzjb. It is also + approximately 80% faster on decompression, while giving approximately a + 10% better compression ratio.

+

When the lz4_compress feature is set to + enabled, the administrator can turn on + lz4 compression on any dataset on the pool using the + zfs-set(8) command. All newly written metadata will be + compressed with the lz4 algorithm.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored + or raidz configuration.

+

When the multi_vdev_crash_dump feature is + set to enabled, the administrator can use + dumpadm(8) to configure a dump device on a pool + comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature + is unused, but registered for compatibility. New pools created on these + systems will have the feature enabled but will never + transition to active, as this functionality is not + required for crash dump support. Existing pools where this feature is + active can be imported.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
device_removal
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature is an enhancement of + device_removal, which will over time reduce the memory + used to track removed devices. When indirect blocks are freed or + remapped, we note that their part of the indirect mapping is + “obsolete” – no longer needed.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account space and + object usage information against the project identifier (ID).

+

The project ID is an object-based attribute. When + upgrading an existing filesystem, objects without a project ID will be + assigned a zero project ID. When this feature is enabled, newly created + objects inherit their parent directories' project ID if the parent's + inherit flag is set (via chattr + + or zfs + project + -s|-C). Otherwise, the + new object's project ID will be zero. An object's project ID can be + changed at any time by the owner (or privileged user) via + chattr -p + prjid or zfs + project -p + prjid.
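For example (path and project ID hypothetical), a directory could be assigned project ID 100 with the inherit flag set, so that new files created beneath it join the project:
# zfs project -s -p 100 /tank/fs/projA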

+

This feature will become active as soon as + it is enabled and will never return to being disabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
none
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + attach subcommand to attach a new device to a + RAID-Z group, expanding the total amount of usable space in the pool. See + zpool-attach(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmarks, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of redacted + zfs sends, which create + redaction bookmarks storing the list of blocks redacted by the send that + created them. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the receiving of redacted + zfs send streams, which + create redacted datasets when received. These datasets are missing some + of their blocks, and so cannot be safely mounted, and their contents + cannot be safely read. For more information about redacted receives, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
redaction_bookmarks
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the redaction list created by zfs redact + to store many more entries. It becomes active when a + redaction list is created with more than 36 entries, and returns to + being enabled when no long redaction lists remain in + the pool. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to postpone new resilvers if an + existing one is already in progress. Without this feature, any new + resilvers will cause the currently running one to be immediately + restarted from the beginning.

+

This feature becomes active once a resilver + has been deferred, and returns to being enabled when + the deferred resilver begins.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit + arithmetic of SHA-512 provides an approximate 50% performance boost over + SHA-256 on 64-bit hardware and is thus a good minimum-change replacement + candidate for systems where hash performance is important, but these + systems cannot for whatever reason utilize the faster + skein and + edonr algorithms.

+

When the sha512 feature is set to + enabled, the administrator can turn on the + sha512 checksum on any dataset using + zfs set + checksum=sha512 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + sha512, and will return to being + enabled once all filesystems that have ever had their + checksum set to sha512 are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm + that was a finalist in the NIST SHA-3 competition. It provides a very + high security margin and high performance on 64-bit hardware (80% faster + than SHA-256). This implementation also utilizes the new salted + checksumming functionality in ZFS, which means that the checksum is + pre-seeded with a secret 256-bit random key (stored on the pool) before + being fed the data block to be checksummed. Thus the produced checksums + are unique to a given pool, preventing hash collision attacks on systems + with dedup.

+

When the skein feature is set to + enabled, the administrator can turn on the + skein checksum on any dataset using + zfs set + checksum=skein + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + skein, and will return to being + enabled once all filesystems that have ever had their + checksum set to skein are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to maintain more information about + how free space is organized within the pool. If this feature is + enabled, it will be activated when a new space map + object is created, or an existing space map is upgraded to the new + format, and never returns to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the use of the new space map encoding + which consists of two words (instead of one) whenever it is + advantageous. The new encoding allows space maps to represent large + regions of space more efficiently on-disk while also increasing their + maximum addressable offset.

+

This feature becomes active once it is + enabled, and never returns to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account object usage + information by user and group.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature creates a ZAP object for the root vdev.

+

This feature becomes active after the next + zpool import or + zpool reguid. Properties can be retrieved or set + on the root vdev using zpool + get and zpool + set with + as the vdev + name which is an alias for + .

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables + xattr=sa extended attribute logging + in the ZIL. If enabled, extended attribute changes (both + = + and + xattr=sa) are guaranteed to be + durable if either the dataset had + = + set at the time the changes were made, or sync(2) is + called on the dataset after the changes were made.

+

This feature becomes active when a ZIL is + created for at least one dataset and will be returned to the + enabled state when it is destroyed for all datasets + that use this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the zpool + checkpoint command that can checkpoint the state + of the pool at the time it was issued and later rewind back to it or + discard it.

+

This feature becomes active when the + zpool checkpoint command + is used to checkpoint the pool. The feature will only return back to + being enabled when the pool is rewound or the + checkpoint has been discarded.
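For example (pool name hypothetical), a checkpoint could be taken before a risky change, and later either rewound to or discarded:
# zpool checkpoint tank
# zpool export tank
# zpool import --rewind-to-checkpoint tank
# zpool checkpoint -d tank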

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

zstd is a high-performance + compression algorithm that features a combination of high compression + ratios and high speed. Compared to + , + zstd offers slightly better compression at much higher + speeds. Compared to lz4, zstd offers + much better compression while being only modestly slower. Typically, + zstd compression speed ranges from 250 to 500 MB/s per + thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to + enabled, the administrator can turn on + zstd compression of any dataset using + zfs set + compress=zstd + dset (see zfs-set(8)). This + feature becomes active once a + compress property has been set to + zstd, and will return to being + enabled once all filesystems that have ever had their + compress property set to zstd are + destroyed.

+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
June 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zpoolconcepts.7.html b/man/master/7/zpoolconcepts.7.html new file mode 100644 index 000000000..91c53b67b --- /dev/null +++ b/man/master/7/zpoolconcepts.7.html @@ -0,0 +1,605 @@ + + + + + + + zpoolconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.7

+
+ + + + + +
ZPOOLCONCEPTS(7)Miscellaneous Information ManualZPOOLCONCEPTS(7)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices, organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system on which it + resides. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with + N disks of size + X can hold X + bytes and can withstand + + devices failing, without losing data.
+
, + raidz1, raidz2, + raidz3
+
A distributed-parity layout, similar to RAID-5/6, with improved + distribution of parity, and which does not suffer from the RAID-5/6 + "write hole" (in which data and parity become inconsistent + after a power loss). Data and parity are striped across all disks within a + raidz group, though not necessarily in a consistent stripe width.

A raidz group can have single, double, or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N + disks of size X + with P parity + disks can hold approximately + + bytes and can withstand P + devices failing without losing data. The minimum + number of devices in a raidz group is one more than the number of parity + disks. The recommended number is between 3 and 9 to help increase + performance.

+
+
, + draid1, draid2, + draid3
+
A variant of raidz that provides integrated distributed hot spares, + allowing for faster resilvering, while retaining the benefits of raidz. A + dRAID vdev is constructed from multiple internal raidz groups, each with + D data devices and + P parity devices. These groups + are distributed over all of the children in order to fully utilize the + available disk performance. +

Unlike raidz, dRAID uses a fixed stripe width + (padding as necessary with zeros) to allow fully sequential resilvering. + This fixed stripe width significantly affects both usable capacity and + IOPS. For example, with the default + + and + + disk sectors the minimum allocation size is + . If + using compression, this relatively large allocation size can reduce the + effective compression ratio. When using ZFS volumes (zvols) and dRAID, + the default of the + + property is increased to account for the allocation size. If a dRAID + pool will hold a significant amount of small blocks, it is recommended + to also add a mirrored special vdev to store those + blocks.

+

With regard to I/O, + performance is similar to raidz since, for any read, all + D data disks must be accessed. + Delivered random IOPS can be reasonably approximated as + .

+

Like raidz, a dRAID can have single-, double-, or + triple-parity. The draid1, draid2, + and draid3 types can be used to specify the parity + level. The draid vdev type is an alias for + draid1.

+

A dRAID with N disks + of size X, D + data disks per redundancy group, + P parity level, and + + distributed hot spares can hold approximately + + bytes and can withstand P + devices failing without losing data.

+
+
[parity][:data][:children][:spares]
+
A non-default dRAID configuration can be specified by appending one or + more of the following optional arguments to the draid + keyword: +
+
parity
+
The parity level (1-3).
+
data
+
The number of data devices per redundancy group. In general, a smaller + value of D will increase IOPS, + improve the compression ratio, and speed up resilvering at the + expense of total usable capacity. Defaults to 8, + unless + + is less than 8.
+
children
+
The expected number of children. Useful as a cross-check when listing + a large number of devices. An error is returned when the provided + number of children differs.
+
spares
+
The number of distributed hot spares. Defaults to zero.
+
+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device solely dedicated for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested arbitrarily. A mirror, raidz or + draid virtual device can only be created with files or disks. Mirrors of + mirrors or other such combinations are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. Keywords like mirror + and raidz are used to distinguish + where a group ends and another begins. For example, the following creates a + pool with two root vdevs, each a mirror of two disks:

+
# zpool + create mypool + mirror sda sdb + mirror sdc sdd
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy, when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three + states: , + , + or + . + An online pool has all devices operating normally. A degraded pool is one in + which one or more devices have failed, but the data is still available due + to a redundant configuration. A faulted pool has corrupted metadata, or one + or more faulted devices, and insufficient replicas to continue + functioning.

+

The health of the top-level vdev, such as a mirror or raidz + device, is potentially impacted by the state of its associated vdevs or + component devices. A top-level vdev or component device is in one of the + following states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors or slow I/Os exceeds acceptable levels + and the device is degraded as an indication that something may be + wrong. ZFS continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
REMOVED
The device was physically removed while the system was running. Device removal detection is hardware-dependent and may not be supported on all platforms.
+
+
UNAVAIL
The device could not be opened. If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its path, since the path was never correct in the first place.
+
+

Checksum errors represent events where a disk returned data that + was expected to be correct, but was not. In other words, these are instances + of silent data corruption. The checksum errors are reported in + zpool status and + zpool events. When a block + is stored redundantly, a damaged block may be reconstructed (e.g. from raidz + parity or a mirrored copy). In this case, ZFS reports the checksum error + against the disks that contained damaged data. If a block is unable to be + reconstructed (e.g. due to 3 disks being damaged in a raidz2 group), it is + not possible to determine which disks were silently corrupted. In this case, + checksum errors are reported for all disks on which the block is stored.
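To see per-device checksum error counts and the associated error events, commands like the following can be used (the pool name is illustrative):
# zpool status -v tank
# zpool events -v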

+

If a device is removed and later re-attached to the system, ZFS + attempts to bring the device online automatically. Device attachment + detection is hardware-dependent and might not be supported on all + platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a spare vdev with any number of devices. For example,

+
# zpool + create pool + mirror sda sdb spare + sdc sdd
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again, if another device + fails.
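For example, a spare can be added to or removed from an existing pool as follows (names are illustrative):
# zpool add tank spare sde
# zpool remove tank sde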

+

If a pool has a shared spare that is currently being used, the + pool cannot be exported, since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

The draid vdev type provides distributed hot + spares. These hot spares are named after the dRAID vdev they're a part of + (draid1-2-3 + specifies spare 3 + of vdev 2, + which is a single parity dRAID) and may only be used + by that dRAID vdev. Otherwise, they behave the same as normal hot + spares.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
# zpool + create pool sda sdb + log sdc
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached, and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.
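As a sketch (pool, device, and vdev names are hypothetical), a mirrored log can be added to an existing pool and later removed by naming its top-level mirror vdev:
# zpool add tank log mirror sdc sdd
# zpool remove tank mirror-1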

+
+
+

+

Devices can be added to a storage pool as "cache devices". These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
# zpool + create pool sda sdb + cache sdc sdd
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.
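Cache devices can likewise be added to or removed from an existing pool (names are illustrative):
# zpool add tank cache sde
# zpool remove tank sde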

+

The content of the cache devices is persistent across reboots and restored asynchronously when importing the pool in L2ARC (persistent L2ARC). This can be disabled by setting l2arc_rebuild_enabled=0. For cache devices smaller than 1 GiB, ZFS does not write the metadata structures required for rebuilding the L2ARC, to conserve space. This can be changed with l2arc_rebuild_blocks_min_l2size. The cache device header (512 bytes) is updated even if no metadata structures are written. Setting l2arc_headroom=0 will result in scanning the full-length ARC lists for cacheable content to be written in L2ARC (persistent ARC). If a cache device is added with zpool add, its label and header will be overwritten and its contents will not be restored in L2ARC, even if the device was previously part of the pool. If a cache device is onlined with zpool online, its contents will be restored in L2ARC. This is useful in case of memory pressure, where the contents of the cache device are not fully restored in L2ARC. The user can off- and online the cache device when there is less memory pressure, to fully restore its contents to L2ARC.
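On Linux, for example, the rebuild behaviour can be inspected or disabled through the module parameter interface (the sysfs path assumes ZFS is loaded as the zfs module):
# cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
# echo 0 > /sys/module/zfs/parameters/l2arc_rebuild_enabled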

+
+
+

+

Before starting critical procedures that include destructive + actions (like zfs destroy), + an administrator can checkpoint the pool's state and, in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, certain operations are not allowed + while a pool has a checkpoint. Specifically, vdev removal/attach/detach, + mirror splitting, and changing the pool's GUID. Adding a new vdev is + supported, but in the case of a rewind it will have to be added again. + Finally, users of this feature should keep in mind that scrubs in a pool + that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
# zpool + checkpoint pool
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
# zpool + export pool
+
# zpool + import --rewind-to-checkpoint + pool
+

To discard the checkpoint from a pool:

+
# zpool + checkpoint -d + pool
+

Dataset reservations (controlled by the reservation and refreservation properties) may be unenforceable while a checkpoint exists, because the checkpoint is allowed to consume the dataset's reservation. Finally, data that is part of the checkpoint but has been freed in the current state of the pool won't be scanned during a scrub.

+
+
+

+

Allocations in the special class are dedicated to specific block + types. By default, this includes all metadata, the indirect blocks of user + data, and any deduplication tables. The class can also be provisioned to + accept small file blocks.

+

A pool must always have at least one normal + (non-dedup/-special) vdev before other + devices can be assigned to the special class. If the + special class becomes full, then allocations intended for + it will spill back into the normal class.

+

Deduplication tables can be excluded from the special class by unsetting the zfs_ddt_data_is_special ZFS module parameter.

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks property to a nonzero value. See zfsprops(7) for more info on this property.
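As a sketch, a pool with a mirrored special vdev can be created and a dataset opted in to storing small file blocks on it (pool, device, dataset names and the block size are illustrative):
# zpool create tank raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K tank/mydata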

+
+
+
+ + + + + +
April 7, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/7/zpoolprops.7.html b/man/master/7/zpoolprops.7.html new file mode 100644 index 000000000..bc38bd922 --- /dev/null +++ b/man/master/7/zpoolprops.7.html @@ -0,0 +1,511 @@ + + + + + + + zpoolprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.7

+
+ + + + + +
ZPOOLPROPS(7)Miscellaneous Information ManualZPOOLPROPS(7)
+
+
+

+

zpoolprops — + properties of ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

User properties have no effect on ZFS behavior. Use them to + annotate pools in a way that is meaningful in your environment. For more + information about user properties, see the + User Properties section.

+

The following are read-only properties:

+
+
+
allocated
Amount of storage used within the pool. See fragmentation and free for more information.
+
+
bcloneratio
The ratio of the total amount of storage that would be required to store all the cloned blocks without cloning to the actual storage used. The bcloneratio property is calculated as:
((bclonesaved + bcloneused) × 100) / bcloneused

+
+
+
bclonesaved
The amount of additional storage that would be required if block cloning was not used.
+
+
bcloneused
The amount of storage used by cloned blocks.
+
+
capacity
Percentage of pool space used. This property can also be referred to by its shortened column name, cap.
+
+
expandsize
Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. On whole-disk vdevs, this is the space beyond the end of the GPT – typically occurring when a LUN is dynamically expanded or a disk replaced with a larger one. On partition vdevs, this is the space appended to the partition after it was added to the pool – most likely by resizing it in-place. The space can be claimed for the pool by bringing it online with autoexpand=on or using zpool online -e.
+
+
fragmentation
The amount of fragmentation in the pool. As the amount of space allocated increases, it becomes more difficult to locate free space. This may result in lower write performance compared to pools with more unfragmented free space.
+
+
free
The amount of free space available in the pool. By contrast, the zfs(8) available property describes how much new data can be written to ZFS filesystems/volumes. The zpool free property is not generally useful for this purpose, and can be substantially more than the zfs available space. This discrepancy is due to several factors, including raidz parity; zfs reservation, quota, refreservation, and refquota properties; and space set aside by spa_slop_shift (see zfs(4) for more information).
+
+
freeing
After a file system or snapshot is destroyed, the space it was using is returned to the pool asynchronously. freeing is the amount of space remaining to be reclaimed. Over time freeing will decrease while free increases.
+
+
guid
A unique identifier for the pool.
+
+
health
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
+
+
leaked
Space not released while freeing due to corruption, now permanently leaked into the pool.
+
+
load_guid
A unique identifier for the pool. Unlike the guid property, this identifier is generated every time we load the pool (i.e. does not persist across imports/exports) and never changes while the pool is loaded (even if a reguid operation takes place).
+
+
size
Total size of the storage pool.
+
unsupported@feature_guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(7) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpoolprops command does not. For non-full pools + of a reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.
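These properties can be inspected with zpool list or zpool get; for example (the pool name is illustrative):
# zpool list -o name,size,allocated,free,capacity,fragmentation,health tank
# zpool get all tank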

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
readonly=on|off
+
If set to on, the pool will be imported in read-only mode. This property can also be referred to by its shortened column name, rdonly.
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
ashift=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space/performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
+
autoexpand=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set to on, the pool will be resized according to the size of the expanded device. If the device is part of a mirror or raidz then all devices within that mirror/raidz group must be expanded before the new space is made available to the pool. The default behavior is off. This property can also be referred to by its shortened column name, expand.
+
autoreplace=on|off
+
Controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The default behavior is off. This property can also be referred to by its shortened column name, replace. Autoreplace can also be used with virtual disks (like device mapper) provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. See the vdev_id(8) manual page for more details. Autoreplace and autoonline require the ZFS Event Daemon be configured and running. See the zed(8) manual page for more details.
+
autotrim=on|off
+
When set to on, space which has been recently freed, and is no longer allocated by the pool, will be periodically trimmed. This allows block device vdevs which support BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system supports hole-punching, to reclaim unused blocks. The default value for this property is off.

Automatic TRIM does not immediately reclaim blocks after a free. Instead, it will optimistically delay allowing smaller ranges to be aggregated into a few larger ones. These can then be issued more efficiently to the storage. TRIM on L2ARC devices is enabled by setting l2arc_trim_ahead.

+

Be aware that automatic trimming of recently freed data blocks can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands. For lower-end devices it is often possible to achieve most of the benefits of automatic trimming by running an on-demand (manual) TRIM periodically using the zpool trim command.

+
+
bootfs=(unset)|pool[/dataset]
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
cachefile=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
comment=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
compatibility=off|legacy|file[,file]…
+
Specifies that the pool maintain compatibility with specific feature sets. + When set to off (or unset) compatibility is disabled + (all features may be enabled); when set to legacy no + features may be enabled. When set to a comma-separated list of filenames + (each filename may either be an absolute path, or relative to + /etc/zfs/compatibility.d or + /usr/share/zfs/compatibility.d) the lists of + requested features are read from those files, separated by whitespace + and/or commas. Only features present in all files may be enabled. +

See zpool-features(7), + zpool-create(8) and zpool-upgrade(8) + for more information on the operation of compatibility feature sets.

+
+
dedupditto=number
+
This property is deprecated and no longer has any effect.
+
delegation=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
failmode=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
wait
Blocks all I/O access until the device connectivity is recovered and the errors are cleared with zpool clear. This is the default behavior.
+
+
continue
Returns EIO to any new write I/O requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked.
+
+
panic
Prints out a message to the console and generates a system crash dump.
+
+
+
feature@feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(7) for details on feature states.
+
listsnapshots=on|off
+
Controls whether information about snapshots associated with this pool is output when zfs list is run without the -t option. The default value is off. This property can also be referred to by its shortened name, listsnaps.
+
multihost=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs(4) manual page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) and spl(4) for additional details. The default value is off.

+
+
version=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate pools.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. All of the commands that operate on properties + (zpool list, + zpool get, + zpool set, and so forth) can + be used to manipulate both native properties and user properties. Use + zpool set + name= to clear a user property. Property values are + limited to 8192 bytes.
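For example (the property name and pool are illustrative), a user property can be set, read, and then cleared by assigning it an empty value:
# zpool set com.example:department=finance tank
# zpool get com.example:department tank
# zpool set com.example:department= tank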

+
+
+
+ + + + + +
January 2, 2024Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/fsck.zfs.8.html b/man/master/8/fsck.zfs.8.html new file mode 100644 index 000000000..25f661bf6 --- /dev/null +++ b/man/master/8/fsck.zfs.8.html @@ -0,0 +1,292 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+
+

+

fsck.zfsdummy + ZFS filesystem checker

+
+
+

+ + + + + +
fsck.zfs[options] + dataset
+
+
+

+

fsck.zfs is a thin shell wrapper that at + most checks the status of a dataset's container pool. It is installed by + OpenZFS because some Linux distributions expect a fsck helper for all + filesystems.

+

If more than one dataset is specified, each + is checked in turn and the results binary-ored.

+
+
+

+

Ignored.

+
+
+

+

ZFS datasets are checked by running zpool + scrub on the containing pool. An individual ZFS + dataset is never checked independently of its pool, which is unlike a + regular filesystem.

+

However, the fsck(8) interface still allows it to communicate some errors: if the dataset is in a degraded pool, then fsck.zfs will return exit code 4 to indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a legacy /etc/fstab record, then fsck.zfs will return exit code 8 to indicate a fatal operational error.
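For example, the exit status can be examined after running the wrapper against a dataset (names are illustrative):
# fsck.zfs tank/home; echo $?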

+
+
+

+

fstab(5), fsck(8), + zpool-scrub(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/index.html b/man/master/8/index.html new file mode 100644 index 000000000..9d98df6af --- /dev/null +++ b/man/master/8/index.html @@ -0,0 +1,313 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/mount.zfs.8.html b/man/master/8/mount.zfs.8.html new file mode 100644 index 000000000..de220ecd1 --- /dev/null +++ b/man/master/8/mount.zfs.8.html @@ -0,0 +1,299 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+
+

+

mount.zfsmount + ZFS filesystem

+
+
+

+ + + + + +
mount.zfs[-sfnvh] [-o + options] dataset + mountpoint
+
+
+

+

The mount.zfs helper is used by mount(8) to mount filesystem snapshots and legacy ZFS filesystems, as well as by zfs(8) when the ZFS_MOUNT_HELPER environment variable is not set. Users should invoke zfs(8) in most cases.

+

options are handled according to the Temporary Mount Point Properties section in zfsprops(7), except for those described below.

+

If /etc/mtab is a regular file and + -n was not specified, it will be updated via + libmount.
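A typical manual invocation for a legacy-mountpoint filesystem might look like the following (dataset and mountpoint are illustrative):
# mount.zfs tank/legacy /mnt/legacy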

+
+
+

+
+
+
-s
Ignore unknown (sloppy) mount options.
+
+
-f
Do everything except actually executing the system call.
+
+
-n
Never update /etc/mtab.
+
+
-v
Print resolved mount options and parser state.
+
+
-h
Print the usage message.
+
-o zfsutil
+
This private flag indicates that mount(8) is being + called by the zfs(8) command.
+
+
+
+

+

fstab(5), mount(8), + zfs-mount(8)

+
+
+ + + + + +
May 24, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/vdev_id.8.html b/man/master/8/vdev_id.8.html new file mode 100644 index 000000000..10f46bc3a --- /dev/null +++ b/man/master/8/vdev_id.8.html @@ -0,0 +1,324 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is an udev helper which parses + vdev_id.conf(5) to map a physical path in a storage + topology to a channel name. The channel name is combined with a disk + enclosure slot number to create an alias that reflects the physical location + of the drive. This is particularly helpful when it comes to tasks like + replacing failed drives. Slot numbers may also be remapped in case the + default numbering is unsatisfactory. The drive aliases will be created as + symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing udev links in the /dev hierarchy using the alias configuration file keyword. See vdev_id.conf(5) for details.
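A minimal /etc/zfs/vdev_id.conf using the alias keyword might look like the following (the alias names and device links are illustrative):
alias d1  /dev/disk/by-id/wwn-0x5000c5002de3b9ca
alias d2  /dev/disk/by-id/wwn-0x5000c5002def789e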

+
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
sas_direct and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
sas_switch
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS switch port. vdev_id internally uses this value to determine which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zdb.8.html b/man/master/8/zdb.8.html new file mode 100644 index 000000000..08e6fe94a --- /dev/null +++ b/man/master/8/zdb.8.html @@ -0,0 +1,806 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's ManualZDB(8)
+
+
+

+

zdbdisplay ZFS + storage pool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMNPsTvXYy] + [-e [-V] + [-p path]…] + [-I inflight-I/O-ops] + [-o + var=value]… + [-t txg] + [-U cache] + [-x dumpdir] + [-K key] + [poolname[/dataset|objset-ID]] + [object|range…]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname[/dataset|objset-ID] + [object|range…]
+
+ + + + + +
zdb-B [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname/objset-ID + [backup-flags]
+
+ + + + + +
zdb-C [-A] + [-U cache] + [poolname]
+
+ + + + + +
zdb-E [-A] + word0:word1:…:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path]…] + [-t txg] + [-U cache] + poolname [vdev + [metaslab]…]
+
+ + + + + +
zdb-O [-K + key] dataset path
+
+ + + + + +
zdb-r [-K + key] dataset path + destination
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path]…] + [-U cache] + poolname + vdev:offset:[lsize/]psize[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path]…] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about + a ZFS pool useful for debugging and performs some amount of consistency + checking. It is a not a general purpose tool and options (and facilities) + may change. It is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk + structure of a ZFS pool, and is inherently unstable. The precise output of + most invocations is not documented, a knowledge of ZFS internals is + assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as "pool/".

+

zdb is an "offline" tool; it + accesses the block devices underneath the pools directly from userspace and + does not care if the pool is imported or datasets are mounted (or even if + the system understands ZFS at all). When operating on an imported and active + pool it is possible, though unlikely, that zdb may interpret inconsistent + pool data and behave erratically.
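For example, an exported pool can be examined by pointing zdb at the directory containing its devices (pool name and path are illustrative):
# zdb -e -p /dev/disk/by-id tank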

+
+
+

+

Display options:

+
+
, + --block-stats
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
, + --backup
+
Generate a backup stream, similar to zfs send, but for the numeric objset ID, and without opening the dataset. This can be useful in recovery scenarios if dataset metadata has become corrupted but the dataset itself is readable. The optional flags argument is a string of one or more of the letters e, L, c, and w, which correspond to the same flags in zfs-send(8).
+
, + --checksum
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
, + --config
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
, + --datasets
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. See + -N for determining if + poolname[/dataset|objset-ID] + is to use the specified + dataset|objset-ID as a string + (dataset name) or a number (objset ID) when datasets have numeric names. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
A
Dump all objects (this is the default)
+
+
d
Dump ZFS directory objects
+
+
f
Dump ZFS plain file objects
+
+
m
Dump SPA space map objects
+
+
z
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
, + --dedup-stats
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + × compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
, + --embedded-block-pointer=word0:word1:…:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
, + --history
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
, + --intent-logs
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
, + --checkpointed-state
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
, + --label=device
+
Read the vdev labels and L2ARC header from the specified device. + zdb -l will return 0 if + valid label was found, 1 if error occurred, and 2 if no valid labels were + found. The presence of L2ARC header is indicated by a specific sequence + (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the + number of L2ARC log blocks zdb + -l will return 1. Each unique configuration is + displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
, + --disable-leak-tracking
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
, + --metaslabs
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
, + --metaslab-groups
+
Display all "normal" vdev metaslab group information - per-vdev + metaslab count, fragmentation, and free space histogram, as well as + overall pool fragmentation and histogram.
+
+
"Special" vdevs are added to -M's normal output.
+
, + --object-lookups=dataset + path
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Same as -d but force zdb to interpret the + [dataset|objset-ID] in + [poolname[/dataset|objset-ID]] + as a numeric objset ID.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
, + --copy-object=dataset path + destination
+
Copy the specified path inside of the + dataset to the specified destination. Specified + path must be relative to the root of + dataset. This option can be combined with + -v for increasing verbosity.
+
, + --read-block=poolname + vdev:offset:[lsize/]psize[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
, + --io-stats
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
, + --simulate-dedup
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
, + --brt-stats
+
Display block reference table (BRT) statistics, including the size of + uniques blocks cloned, the space saving as a result of cloning, and the + saving ratio.
+
+
Display the per-vdev BRT statistics, including total references.
+
+
Dump the contents of the block reference tables.
+
, + --uberblock
+
Display the current uberblock.
+
+

Other options:

+
+
, + --ignore-assertions
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
, + --exported=[-p + path]…
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
, + --dump-blocks=dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
, + --automatic-rewind
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
, + --dump-debug-msg
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
, + --inflight=inflight-I/O-ops
+
Limit the number of outstanding checksum I/O operations to the specified + value. The default value is 200. This option affects the performance of + the -c option.
+
, + --key=key
+
Decryption key needed to access an encrypted dataset. This will cause + zdb to attempt to unlock the dataset using the + encryption root, key format and other encryption parameters on the given + dataset. zdb can still inspect pool and dataset + structures on encrypted datasets without unlocking them, but will not be + able to access file names and attributes and object contents. + WARNING: The raw decryption key and any decrypted data will be in + user memory while zdb is running. Other user + programs may be able to extract it by inspecting + zdb as it runs. Exercise extreme caution when + using this option in shared or uncontrolled environments.
+
, + --option=var=value
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
, + --parseable
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+
, + --txg=transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
, + --cachefile=cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
, + --verbose
+
Enable verbosity. Specify multiple times for increased verbosity.
+
, + --verbatim
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
, + --extreme-rewind
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
, + --all-reconstruction
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
, + --livelist
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+

+
+
# zdb -C rpool
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ …
+
+
+
+

+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ …
+
+
+
+

+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
+

+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ …
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
November 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zed.8.html b/man/master/8/zed.8.html new file mode 100644 index 000000000..bc0581461 --- /dev/null +++ b/man/master/8/zed.8.html @@ -0,0 +1,474 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+
+

+

ZEDZFS Event + Daemon

+
+
+

+ + + + + +
ZED[-fFhILMvVZ] [-d + zedletdir] [-p + pidfile] [-P + path] [-s + statefile] [-j + jobs] [-b + buflen]
+
+
+

+

The ZED (ZFS Event Daemon) monitors events + generated by the ZFS kernel module. When a zevent (ZFS Event) is posted, the + ZED will run any ZEDLETs (ZFS Event Daemon Linkage + for Executable Tasks) that have been enabled for the corresponding zevent + class.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Don't daemonise: remain attached to the controlling terminal, log to the + standard I/O streams.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+ zedletdir
+
Read the enabled ZEDLETs from the specified directory.
+
+ pidfile
+
Write the daemon's process ID to the specified file.
+
+ path
+
Custom $PATH for zedlets to use. Normally zedlets + run in a locked-down environment, with hardcoded paths to the ZFS commands + ($ZFS, $ZPOOL, + $ZED, ), and a + hard-coded $PATH. This is done for security + reasons. However, the ZFS test suite uses a custom PATH for its ZFS + commands, and passes it to ZED with + -P. In short, -P is only + to be used by the ZFS test suite; never use it in production!
+
+ statefile
+
Write the daemon's state to the specified file.
+
+ jobs
+
Allow at most jobs ZEDLETs to run concurrently, + delaying execution of new ones until they finish. Defaults to + .
+
+ buflen
+
Cap kernel event buffer growth to buflen entries. + This buffer is grown when the daemon misses an event, but results in + unreclaimable memory use in the kernel. A value of + removes the + cap. Defaults to + .
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the + zpool events + -v command.

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory + (zedletdir). These can be symlinked or copied from the + + directory; symlinks allow for automatic updates from the installed ZEDLETs, + whereas copies preserve local modifications. As a security measure, since + ownership change is a privileged operation, ZEDLETs must be owned by root. + They must have execute permissions for the user, but they must not have + write permissions for group or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be invoked. In particular, a ZEDLET will be invoked for a given zevent if either its class or subclass string is a prefix of its filename (and is followed by a non-alphabetic character). As a special case, the prefix "all" matches all zevents. Multiple ZEDLETs may be invoked for a given zevent.

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given zevent. They should be written under the presumption they can be invoked concurrently, and they should use appropriate locking to access any shared resources. Common variables used by ZEDLETs can be stored in the default rc file which is sourced by scripts; these variables should be prefixed with ZED_.

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner:

+
    +
1. it is prefixed with ZEVENT_,
2. it is converted to uppercase, and
3. each non-alphanumeric character is converted to an underscore.
+

Some additional environment variables have been defined to present + certain nvpair values in a more convenient form. An incomplete list of + zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as “seconds + nanoseconds” since the Epoch.
+
+
The seconds component of + ZEVENT_TIME.
+
+
The + + component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The alias + (“--”) + string of the ZFS distribution the daemon is part of.
+
+
The ZFS version the daemon is part of.
+
+
The ZFS release the daemon is part of.
+
+

ZEDLETs may need to call other ZFS commands. The + installation paths of the following executables are defined as environment + variables: , + , + , + , + and + . + These variables may be overridden in the rc file.
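A minimal ZEDLET is simply an executable named after the zevent class or subclass it handles. The sketch below (the filename and log path are assumptions, not files shipped with ZFS) logs each scrub_finish zevent using the environment variables described above:

#!/bin/sh
# scrub_finish-log.sh -- illustrative ZEDLET: append a line for each
# scrub_finish zevent, using variables exported by the ZED.
echo "$(date): scrub finished on pool ${ZEVENT_POOL} (eid=${ZEVENT_EID})" \
    >> /var/log/zed-scrub.log
exit 0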

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state.
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
, +
+
Terminate the daemon.
+
+
+
+

+

zfs(8), zpool(8), + zpool-events(8)

+
+
+

+

The ZED requires root privileges.

+

Do not taunt the ZED.

+
+
+

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Internationalization support via gettext has not been added.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-allow.8.html b/man/master/8/zfs-allow.8.html new file mode 100644 index 000000000..044008a46 --- /dev/null +++ b/man/master/8/zfs-allow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, a permission granted by an ancestor remains in effect. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
-r
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+
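If the grant should apply only to descendants of tank/cindys (so that newly created child file systems inherit it while tank/cindys itself is unaffected), the -d flag can be added; the following is a sketch using the same hypothetical user and dataset:
# zfs allow -d cindys snapshot tank/cindys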

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-bookmark.8.html b/man/master/8/zfs-bookmark.8.html new file mode 100644 index 000000000..eb5f332b2 --- /dev/null +++ b/man/master/8/zfs-bookmark.8.html @@ -0,0 +1,291 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's ManualZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + create bookmark of ZFS snapshot

+
+
+

+ + + + + +
zfsbookmark + snapshot|bookmark + newbookmark
+
+
+

+

Creates a new bookmark of the given snapshot or bookmark. + Bookmarks mark the point in time when the snapshot was created, and can be + used as the incremental source for a zfs + send.

+

When creating a bookmark from an existing redaction bookmark, the resulting bookmark is not a redaction bookmark.

+

This feature must be enabled to be used. See zpool-features(7) for details on ZFS feature flags and the bookmarks feature.

+
+
+

+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
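As a sketch of the incremental-source use case (assuming a later snapshot rpool@snapshot2 and a receiving dataset backuppool/rpool exist; all names here are hypothetical), the bookmark can replace the origin snapshot in an incremental send:
# zfs send -i rpool#bookmark rpool@snapshot2 | zfs receive backuppool/rpool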
+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-change-key.8.html b/man/master/8/zfs-change-key.8.html new file mode 100644 index 000000000..ded668b3e --- /dev/null +++ b/man/master/8/zfs-change-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
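A brief usage sketch (pool and dataset names are hypothetical; it assumes pool/home/secure uses keyformat=passphrase and that pool/home/secure/child is currently its own encryption root with an encrypted parent): load every key under a tree, rotate a passphrase while raising the PBKDF2 iteration count, and make a child inherit its parent's key again:
# zfs load-key -r pool/home
# zfs change-key -o pbkdf2iters=1000000 pool/home/secure
# zfs change-key -i -l pool/home/secure/child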
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-clone.8.html b/man/master/8/zfs-clone.8.html new file mode 100644 index 000000000..ac2bb707c --- /dev/null +++ b/man/master/8/zfs-clone.8.html @@ -0,0 +1,315 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-cloneclone + snapshot of ZFS dataset

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]… + snapshot + filesystem|volume
+
+
+

+

See the Clones section of + zfsconcepts(7) for details. The target dataset can be + located anywhere in the ZFS hierarchy, and is created as the same type as + the original.

+
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent. If the target filesystem or volume already exists, the operation completes successfully.
+
+
+
+

+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-create.8.html b/man/master/8/zfs-create.8.html new file mode 100644 index 000000000..9328f004f --- /dev/null +++ b/man/master/8/zfs-create.8.html @@ -0,0 +1,452 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + create ZFS dataset

+
+
+

+ + + + + +
zfscreate [-Pnpuv] + [-o + property=value]… + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]… + -V size + volume
+
+
+

+
+
zfs create + [-Pnpuv] [-o + property=value]… + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent, unless the -u option is used. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have filesystem as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to filesystem due to the use of the -o option.
+
+
Do not mount the newly created file system.
+
+
Print verbose information about the created dataset.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]… + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically rounded up to the nearest multiple of the blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See volsize in the Native Properties section of zfsprops(7) for more information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have volume as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to volume due to the use of the -b or -o options, as well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

Swapping to a ZFS volume is prone to deadlock and not recommended. + See OpenZFS FAQ.

+

Swapping to a file on a ZFS filesystem is not supported.

+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
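The volume form has no example above; the following is a minimal sketch (pool and volume names hypothetical) that creates a sparse 10 GB volume with an 8 KB block size:
# zfs create -s -b 8K -V 10G pool/vol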
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-destroy.8.html b/man/master/8/zfs-destroy.8.html new file mode 100644 index 000000000..e5fc921ad --- /dev/null +++ b/man/master/8/zfs-destroy.8.html @@ -0,0 +1,424 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + destroy ZFS dataset, snapshots, or bookmark

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Forcibly unmount file systems. This option has no effect on non-file + systems or unmounted file systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
The given snapshots are destroyed immediately if and only if the + zfs destroy command + without the -d option would have destroyed it. + Such immediate destruction would occur, for example, if the snapshot had + no clones and the user-initiated reference count were zero. +

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same filesystem or volume may be specified in a comma-separated list of snapshots. Only the snapshot's short name (the part after the @) should be specified when using a range or comma-separated list to identify multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
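The percent-sign range syntax can be combined with a dry run to preview what would be removed; a sketch with hypothetical snapshot names:
# zfs destroy -nv pool/home@2daysago%today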
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-diff.8.html b/man/master/8/zfs-diff.8.html new file mode 100644 index 000000000..479c8329e --- /dev/null +++ b/man/master/8/zfs-diff.8.html @@ -0,0 +1,341 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffshow + difference between ZFS snapshots

+
+
+

+ + + + + +
zfsdiff [-FHth] + snapshot + snapshot|filesystem
+
+
+

+

Display the difference between a snapshot of a given filesystem + and another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are:

+
+
+
-  The path has been removed
+  The path has been created
M  The path has been modified
R  The path has been renamed
+
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
+
B  Block device
C  Character device
/  Directory
>  Door
|  Named pipe
@  Symbolic link
P  Event port
=  Socket
F  Regular file
+
+
+
+
+
Give more parsable tab-separated output, without header lines and without + arrows.
+
+
Display the path's inode change time as the first column of output.
+
+
Do not \0ooo-escape non-ASCII paths.
+
+
+
+

+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
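For script consumption, the -H and -t flags give tab-separated output with a leading inode change time and without arrows; a sketch against the same hypothetical snapshots:
# zfs diff -Ht tank/test@before tank/test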
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-get.8.html b/man/master/8/zfs-get.8.html new file mode 100644 index 000000000..9ef0e7a1d --- /dev/null +++ b/man/master/8/zfs-get.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
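A sketch of the -u flag (dataset and path hypothetical): the mountpoint property is updated without the file system being re-mounted at the new location:
# zfs set -u mountpoint=/export/projects tank/projects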
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-groupspace.8.html b/man/master/8/zfs-groupspace.8.html new file mode 100644 index 000000000..f7e21cc5e --- /dev/null +++ b/man/master/8/zfs-groupspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; therefore it needs neither the -i option for SID-to-POSIX-ID translation, nor -n for numeric IDs, nor -t for types.
+
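A brief usage sketch (file system name hypothetical): show per-user consumption sorted by usage, then the POSIX-group view:
# zfs userspace -o name,used,quota -S used tank/home
# zfs groupspace -t posixgroup tank/home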
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-hold.8.html b/man/master/8/zfs-hold.8.html new file mode 100644 index 000000000..08bdce011 --- /dev/null +++ b/man/master/8/zfs-hold.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
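A minimal usage sketch (snapshot and tag names hypothetical): place a hold, list it, and release it so the snapshot can be destroyed again:
# zfs hold keep tank/home@2024-01-01
# zfs holds tank/home@2024-01-01
# zfs release keep tank/home@2024-01-01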
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-inherit.8.html b/man/master/8/zfs-inherit.8.html new file mode 100644 index 000000000..6d04f1d59 --- /dev/null +++ b/man/master/8/zfs-inherit.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-jail.8.html b/man/master/8/zfs-jail.8.html new file mode 100644 index 000000000..e1600944e --- /dev/null +++ b/man/master/8/zfs-jail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
+
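As a hedged sketch of the workflow described above, assuming a jail named testjail and a dataset tank/jails/testjail (both names are illustrative only):
# zfs set jailed=on tank/jails/testjail
# zfs jail testjail tank/jails/testjail
  the dataset can now be mounted and managed from within the jail
# zfs unjail testjail tank/jails/testjail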

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-list.8.html b/man/master/8/zfs-list.8.html new file mode 100644 index 000000000..777bc0671 --- /dev/null +++ b/man/master/8/zfs-list.8.html @@ -0,0 +1,376 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listlist + properties of ZFS datasets

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]…] + [-s property]… + [-S property]… + [-t + type[,type]…] + [filesystem|volume|snapshot]…
+
+
+

+

If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnapshots pool property is on (the default is off), or if the -t snapshot or -t all options are specified. The following fields are displayed: name, used, available, referenced, mountpoint.

+
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command line.
+
+ property
+
A property for sorting the output by column in ascending order based on + the value of the property. The property must be one of the properties + described in the Properties section + of zfsprops(7) or the value name to + sort by the dataset name. Multiple properties can be specified at one time + using multiple -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
  • Numeric types sort in numeric order.
  • String types sort in alphabetical order.
  • Types inappropriate for a row sort that row to the literal bottom, regardless of the specified ordering.
+

If no sorting options are specified the existing behavior of + zfs list is + preserved.

+
+
+ property
+
Same as -s, but sorts by property in descending + order.
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all. For example, specifying -t snapshot displays only snapshots. fs, snap, or vol can be used as aliases for filesystem, snapshot, or volume.
+
+
+
+

+
+

+

The following command lists all active file systems and volumes in the system. Snapshots are displayed if listsnapshots=on. The default is off. See zpoolprops(7) for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
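As a further hedged illustration of the -t, -o, and -s options, the snapshots in the same pool could be listed ordered by the space they consume (output omitted):
# zfs list -t snapshot -o name,used -s used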
+
+
+

+

zfsprops(7), zfs-get(8)

+
+
+ + + + + +
February 8, 2024Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-load-key.8.html b/man/master/8/zfs-load-key.8.html new file mode 100644 index 000000000..665446f86 --- /dev/null +++ b/man/master/8/zfs-load-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded, the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded, the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.

+
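A hedged sketch of the resulting key workflow, assuming a passphrase-encrypted dataset named tank/secret (an illustrative name):
# zfs create -o encryption=on -o keyformat=passphrase tank/secret
# zfs unload-key tank/secret
# zfs load-key tank/secret
# zfs change-key tank/secret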

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-mount-generator.8.html b/man/master/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..560bd01f2 --- /dev/null +++ b/man/master/8/zfs-mount-generator.8.html @@ -0,0 +1,439 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+
+

+

zfs-mount-generator — + generate systemd mount units for ZFS filesystems

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+
+
+

+

zfs-mount-generator is a + systemd.generator(7) that generates native + systemd.mount(5) units for configured ZFS datasets.

+
+

+
+
=
+
+ + or none.
+
=
+
off. Skipped if + only noauto datasets exist for a given mountpoint + and there's more than one. Datasets with + + take precedence over ones with + noauto for the same mountpoint. + Sets logical noauto + flag if noauto. Encryption roots + always generate + zfs-load-key@root.service, + even if off.
+
=, + relatime=, + =, + =, + =, + =, + =
+
Used to generate mount options equivalent to zfs + mount.
+
=, + keylocation=
+
If the dataset is an encryption root, its mount unit will bind to + zfs-load-key@root.service, + with additional dependencies as follows: +
+
+
=
+
None, uses systemd-ask-password(1)
+
=URL + (et al.)
+
=, + After=: + network-online.target
+
=<path>
+
=path
+
+
+ The service also uses the same Wants=, + After=, Requires=, + and RequiresMountsFor=, as the + mount unit.
+
=path[ + path]…
+
+ Requires= for the mount- and key-loading unit.
+
=path[ + path]…
+
+ RequiresMountsFor= for the mount- and key-loading + unit.
+
=unit[ + unit]…
+
+ Before= for the mount unit.
+
=unit[ + unit]…
+
+ After= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + WantedBy= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + RequiredBy= for the mount unit.
+
=(unset)|on|off
+
Waxes or wanes strength of default reverse dependencies of the mount unit, + see below.
+
=on|off
+
on. Defaults to + off.
+
+
+
+

+

Additionally, unless the pool the dataset resides on is imported + at generation time, both units gain + Wants=zfs-import.target and + After=zfs-import.target.

+

Additionally, unless the logical noauto flag is + set, the mount unit gains a reverse-dependency for + local-fs.target of strength

+
+
+
(unset)
+
= + + Before=
+
+
=
+
+
= + + Before=
+
+
+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of

+
zfs + list -Ho + name,⟨every property above in + order⟩
+for datasets that should be mounted by systemd should be kept at + @sysconfdir@/zfs/zfs-list.cache/poolname, + and, if writeable, will be kept synchronized for the entire pool by the + history_event-zfs-list-cacher.sh ZEDLET, if enabled + (see zed(8)). +
+
+
+

+

If the + + environment variable is nonzero (or unset and + /proc/cmdline contains + ""), + print summary accounting information at the end.

+
+
+

+

To begin, enable tracking for the pool:

+
# touch + @sysconfdir@/zfs/zfs-list.cache/poolname
+Then enable the tracking ZEDLET: +
# ln + -s + @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh + @sysconfdir@/zfs/zed.d
+
# systemctl + enable + zfs-zed.service
+
# systemctl + restart + zfs-zed.service
+

If no history event is in the queue, inject one to ensure the + ZEDLET runs to refresh the cache file by setting a monitored property + somewhere on the pool:

+
# zfs + set relatime=off + poolname/dset
+
# zfs + inherit relatime + poolname/dset
+

To test the generator output:

+
$ mkdir + /tmp/zfs-mount-generator
+
$ + @systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator
+If the generated units are satisfactory, instruct + systemd to re-run all generators: +
# systemctl + daemon-reload
+
+
+

+

systemd.mount(5), + zfs(5), + systemd.generator(7), + zed(8), + zpool-events(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-mount.8.html b/man/master/8/zfs-mount.8.html new file mode 100644 index 000000000..a42fa8d0d --- /dev/null +++ b/man/master/8/zfs-mount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
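A hedged sketch of typical usage, assuming an encrypted file system pool/home whose keylocation is prompt (names are illustrative):
# zfs mount -a
# zfs mount -l pool/home
# zfs unmount -u pool/home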
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-program.8.html b/man/master/8/zfs-program.8.html new file mode 100644 index 000000000..de14e4d91 --- /dev/null +++ b/man/master/8/zfs-program.8.html @@ -0,0 +1,1007 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + execute ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script + [script arguments]
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at + http://www.lua.org/manual/5.2/

+

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error. The details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program cannot change on-disk state by calling functions from the zfs.sync submodule. The program can be used to gather information such as properties and to determine whether changes would succeed (zfs.check.*). Without this flag, all pending changes must be synced to disk before a channel program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MiB, and can be set to a maximum of 100 + MiB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
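For instance, a hedged sketch of an invocation, assuming a read-only script stored at /root/check.zcp that expects one dataset name as its argument (the limits shown simply restate the documented defaults):
# zfs program -n -t 10000000 -m 10485760 tank /root/check.zcp tank/fs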
+
+

+

A channel program can be invoked either from the command line, or + via a library call to + ().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libzfs interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libzfs interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
return ret0, ret1, ret2, + ...
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
error: "error string, including + Lua stack trace"
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

: + ZFS API functions do not generate Fatal Errors when correctly invoked, they + return an error code and the channel program continues executing. See the + ZFS API section below for + function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libzfs interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
string->string
number->int64
boolean->boolean_value
nil->boolean (no value)
table->nvlist
+

Likewise, table keys are replaced by string equivalents as + follows:

+ + + + + + + + + + + + + + + + + + + +
string->no change
number->signed decimal string ("%lld")
boolean->"true" | "false"
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
assertrawlencollectgarbagerawget
errorrawsetgetmetatableselect
ipairssetmetatablenexttonumber
pairstostringrawequaltype
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+ + + + + + + + + + +
dofileloadfileloadpcallprintxpcall
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
zfs.sync.destroy("rpool@snap")
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
zfs.sync.destroy({1="rpool@snap", + defer=true})
+

The Lua language allows curly braces to be used in place of parentheses as syntactic sugar for this calling convention:

+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
errno = + zfs.sync.promote(dataset)
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
EPERMECHILDENODEVENOSPCENOENTEAGAINENOTDIR
ESPIPEESRCHENOMEMEISDIREROFSEINTREACCES
EINVALEMLINKEIOEFAULTENFILEEPIPEENXIO
ENOTBLKEMFILEEDOME2BIGEBUSYENOTTYERANGE
ENOEXECEEXISTETXTBSYEDQUOTEBADFEXDEVEFBIG
+
+
+

+

For detailed descriptions of the exact behavior of any ZFS + administrative operations, see the main zfs(8) manual + page.

+
+
(msg)
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running +
dtrace -n + 'zfs-dbgmsg{trace(stringof(arg0))}'
+

+
+
msg (string)
+
Debug message to be printed.
+
+
+
(dataset)
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns + false, but + zfs.exists("somepool/fs_that_may_exist") will + error. +

+
+
dataset (string)
+
Dataset to check for existence. Must be in the target pool.
+
+
+
(dataset, + property)
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like GUIDs) may wrap around and appear negative. +

+
+
dataset (string)
+
Filesystem or snapshot path to retrieve properties from.
+
property (string)
+
Name of property to retrieve. All filesystem, snapshot and volume + properties are supported except for + and + . + Also supports the + snap + and + bookmark + properties and the + ⟨|⟩⟨|id + properties, though the id must be in numeric form.
+
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
(dataset, + [defer=true|false])
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

+
+
dataset (string)
+
Filesystem or snapshot to be destroyed.
+
[defer (boolean)]
+
Valid only for destroying snapshots. If set to true, and the + snapshot has holds or clones, allows the snapshot to be marked for + deferred deletion rather than failing.
+
+
+
(dataset, + property)
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The zfs + inherit -S option has + not been implemented. Returns 0 on success, or a nonzero error code if + the property could not be cleared. +

+
+
dataset (string)
+
Filesystem or snapshot containing the property to clear.
+
property (string)
+
The property to clear. Allowed properties are the same as those + for the zfs + inherit command.
+
+
+
(dataset)
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

+
+
dataset (string)
+
Clone to be promoted.
+
+
+
(filesystem)
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

+
+
filesystem (string)
+
Filesystem to rollback.
+
+
+
(dataset, + property, value)
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

+
+
dataset (string)
+
The dataset where the property will be set.
+
property (string)
+
The property to set.
+
value (string)
+
The value of the property to be set.
+
+
+
(dataset)
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

+
+
dataset (string)
+
Name of snapshot to create.
+
+
+
(dataset, + oldsnapname, + newsnapname)
+
Rename a snapshot of a filesystem or a volume. Returns 0 if the + snapshot was successfully renamed, and a nonzero error code otherwise. +

+
+
dataset (string)
+
Name of the snapshot's parent dataset.
+
oldsnapname (string)
+
Original name of the snapshot.
+
newsnapname (string)
+
New name of the snapshot.
+
+
+
(source, + newbookmark)
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

+
+
source (string)
+
Full name of the existing snapshot or bookmark.
+
newbookmark (string)
+
Full name of the new bookmark.
+
+
+
+
+
+
For each function in the zfs.sync submodule, there is a + corresponding zfs.check function which performs a + "dry run" of the same operation. Each takes the same arguments + as its zfs.sync counterpart and returns 0 if the + operation would succeed, or a non-zero error code if it would fail, along + with any other error details. That is, each has the same behavior as the + corresponding sync function except for actually executing the requested + change. For example, + ("fs") + returns 0 if + zfs.sync.destroy("fs") + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
(dataset, + [defer=true|false])
+
 
+
(dataset)
+
 
+
(filesystem)
+
 
+
(dataset, + property, value)
+
 
+
(dataset)
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
(snapshot)
+
Iterate through all clones of the given snapshot. +

+
+
snapshot (string)
+
Must be a valid snapshot path in the current pool.
+
+
+
(dataset)
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(snapshot)
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

+
+
snapshot (string)
+
Must be a valid snapshot.
+
+
+
(dataset)
+
An alias for zfs.list.user_properties (see relevant entry). +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot or volume.
+
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
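Assuming the program above were saved to a file such as /root/destroy_recursive.zcp (a hypothetical path), it could be run with the dataset to destroy passed as argv[1]:
# zfs program pool /root/destroy_recursive.zcp pool/somefs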
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-project.8.html b/man/master/8/zfs-project.8.html new file mode 100644 index 000000000..b747ce153 --- /dev/null +++ b/man/master/8/zfs-project.8.html @@ -0,0 +1,362 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + manage projects in ZFS filesystem

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory
+
+ + + + + +
zfsproject -C + [-kr] + file|directory
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory
+
+
+

+
+
zfs project + [-d|-r] + file|directory
+
List project identifier (ID) and inherit flag of files and directories. +
+
+
Show the directory project ID and inherit flag, not its children.
+
+
List subdirectories recursively.
+
+
+
zfs project + -C [-kr] + file|directory
+
Clear project inherit flag and/or ID on the files and directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear subdirectories' flags recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory
+
Check project ID and inherit flag on the files and directories: report + entries without the project inherit flag, or with project IDs different + from the target directory's project ID or the one specified with + -p. +
+
+
Delimit filenames with a NUL byte instead of newline, don't output + diagnoses.
+
+
Check the directory project ID and inherit flag, not its + children.
+
+ id
+
Compare to id instead of the target files and + directories' project IDs.
+
+
Check subdirectories recursively.
+
+
+
zfs project + -p id + [-rs] + file|directory
+
Set project ID and/or inherit flag on the files and directories. +
+
+ id
+
Set the project ID to the given value.
+
+
Set on subdirectories recursively.
+
+
Set project inherit flag on the given files and directories. This is + usually used for setting up tree quotas with + -r. In that case, the directory's project ID + will be set for all its descendants, unless specified explicitly with + -p.
+
+
+
+
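A hedged sketch of setting up and verifying a project tree, assuming a directory /tank/projects/alpha and project ID 1000 (both illustrative):
# zfs project -p 1000 -rs /tank/projects/alpha
# zfs project -d /tank/projects/alpha
# zfs project -c -r -p 1000 /tank/projects/alpha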
+
+

+

zfs-projectspace(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-projectspace.8.html b/man/master/8/zfs-projectspace.8.html new file mode 100644 index 000000000..995af8fae --- /dev/null +++ b/man/master/8/zfs-projectspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + user, + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping exists. Normal POSIX interfaces (like stat(2), ls -l) perform this translation, so the -i option allows the output from zfs userspace to be compared directly with those utilities. However, -i may lead to confusion if some files were created by an SMB user before an SMB-to-POSIX name mapping was established. In such a case, some files will be owned by the SMB entity and some by the POSIX entity; the -i option will then report that the POSIX entity has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is numeric rather than a name, so neither the -i option (SID-to-POSIX-ID translation), the -n option (numeric IDs), nor the -t option (types) is needed.
+
+
+
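As a hedged illustration of the three subcommands, assuming a file system tank/home:
# zfs userspace tank/home
# zfs groupspace -Hp tank/home
# zfs projectspace tank/home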
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-promote.8.html b/man/master/8/zfs-promote.8.html new file mode 100644 index 000000000..09b9aa3ef --- /dev/null +++ b/man/master/8/zfs-promote.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + promote clone dataset to no longer depend on origin + snapshot

+
+
+

+ + + + + +
zfspromote clone
+
+
+

+

The zfs promote + command makes it possible to destroy the dataset that the clone was created + from. The clone parent-child dependency relationship is reversed, so that + the origin dataset becomes a clone of the specified dataset.

+

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin dataset to the promoted clone, so enough space must be available + to accommodate these snapshots. No new space is consumed by this operation, + but the space accounting is adjusted. The promoted clone must not have any + conflicting snapshot names of its own. The zfs + rename subcommand can be used to rename any + conflicting snapshots.

+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-clone(8), + zfs-rename(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-receive.8.html b/man/master/8/zfs-receive.8.html new file mode 100644 index 000000000..741acd431 --- /dev/null +++ b/man/master/8/zfs-receive.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set + (-o) or inherited (-x) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
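For instance, assuming the same hypothetical dataset poolB/received/fs used above, the saved state of an interrupted receive could be discarded with:
# zfs receive -A poolB/received/fs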
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset by using the provided stream as the source of healthy data. This method of healing can only heal data blocks present in the stream. Metadata cannot be healed by corrective receive. Running a scrub after healing is recommended to ensure all data corruption was repaired.

It is important to consider why the corruption happened in the first place. If the underlying hardware is slowly failing, periodically repairing the data will not prevent data loss later on, when the hardware fails completely.
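A hedged sketch of a corrective receive, assuming a healthy backup stream of the same snapshot exists on another host and the damaged dataset is pool/fs (all names are illustrative only):
# ssh backuphost zfs send backup/fs@snap | zfs receive -c pool/fs@snap
# zpool scrub pool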

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-recv.8.html b/man/master/8/zfs-recv.8.html new file mode 100644 index 000000000..de0e2d9c9 --- /dev/null +++ b/man/master/8/zfs-recv.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost file system in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin= is a special case because, even though origin is a read-only property and cannot be set, it is allowed to receive the send stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.
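To stay within these restrictions, one approach (the dataset names below are assumptions) is to keep an encrypted replication chain raw end to end, so the IV sets on both sides always match:
# zfs send -w pool/enc@snap1 | ssh host zfs receive poolB/enc
# zfs send -w -i @snap1 pool/enc@snap2 | ssh host zfs receive poolB/enc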

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.
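As a sketch of the resulting names (poolA/home/bob@snap and poolB/backup are assumed names), -d would receive the snapshot as poolB/backup/home/bob@snap, while -e would receive it as poolB/backup/bob@snap:
# zfs send poolA/home/bob@snap | zfs receive -d poolB/backup
# zfs send poolA/home/bob@snap | zfs receive -e poolB/backup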

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as though zfs inherit property had been run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with -c, then overriding the compression property will have no effect on the received data, but the compression property will still be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by zfs send -t token, where the token is the value of the receive_resume_token property of the filesystem or volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(7) for details on ZFS feature flags.

+
+
+
The file system that is associated with the received stream is not mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.
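For example (dataset names assumed), a stream sent with its properties could be received while keeping the destination's inherited mountpoint, regardless of the value carried in the stream:
# zfs send -p pool/fs@snap | zfs receive -x mountpoint poolB/fs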

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset by using the provided stream as the source of healthy data. This method of healing can only heal data blocks present in the stream. Metadata cannot be healed by corrective receive. Running a scrub after healing is recommended to ensure all data corruption was repaired.

It is important to consider why the corruption happened in the first place. If the underlying hardware is slowly failing, periodically repairing the data will not prevent data loss later on, when the hardware fails completely.

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-redact.8.html b/man/master/8/zfs-redact.8.html new file mode 100644 index 000000000..b2ed726cc --- /dev/null +++ b/man/master/8/zfs-redact.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVbcehnpsvw] + [-R [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
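As a brief sketch (the dataset names are assumptions), a replication stream of tank/home could omit one sensitive child and its descendants:
# zfs send -R -X tank/home/private tank/home@snap | ssh host zfs receive -d poolB/backup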
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o compress= + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data. Note that uncompressed data from the sender will still attempt + to compress on the receiver, unless you specify + -o compress= + .
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
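For instance, to preview what an incremental send would transfer without generating any data (the snapshot and dataset names are assumptions):
# zfs send -nv -i @a pool/fs@b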
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot + to one of the snapshots it was redacted with respect to. In this case, + the stream will produce a valid dataset when received because all + blocks that were redacted in the parent are guaranteed to be present + in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.
  2. +
  3. To receive an incremental send from the original snapshot to something + redacted with respect to a subset of the set of snapshots the initial + snapshot was redacted with respect to. In this case, each block that + was redacted in the original is still redacted (redacting with respect + to additional snapshots causes less data to be redacted (because the + snapshots define what is permitted, and everything else is redacted)). + This use case will produce a new redacted snapshot.
  4. +
  5. To receive an incremental send from a redaction bookmark of the + original snapshot that was created when redacting with respect to a + subset of the set of snapshots the initial snapshot was created with + respect to anything else. A send stream from such a redaction bookmark + will contain all of the blocks necessary to fill in any redacted data, + should it be needed, because the sending system is aware of what + blocks were originally redacted. This will either produce a normal + snapshot or a redacted one, depending on whether the new send stream + is redacted.
  6. +
  7. To receive an incremental send from a redacted version of the initial + snapshot that is redacted with respect to a subset of the set of + snapshots the initial snapshot was created with respect to. A send + stream from a compatible redacted dataset will contain all of the + blocks necessary to fill in any redacted data. This will either + produce a normal snapshot or a redacted one, depending on whether the + new send stream is redacted.
  8. +
  9. To receive a full send as a clone of the redacted snapshot. Since the + stream is a full send, it definitionally contains all the data needed + to create a new dataset. This use case will either produce a normal + snapshot or a redacted one, depending on whether the full send stream + was redacted.
  10. +
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
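A minimal sketch, assuming poolB/received/fs holds a partially received state and poolB/copy is the new target (both names are assumptions):
# zfs send -S poolB/received/fs | zfs receive poolB/copy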
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.
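The workflow above might look roughly like the following, where all pool, dataset, bookmark, and snapshot names are illustrative assumptions rather than part of the manual, and only a single redaction clone is shown for brevity:
# zfs snapshot pool/data@parent
# zfs clone pool/data@parent pool/data-research
  remove or overwrite the sensitive files in /pool/data-research
# zfs snapshot pool/data-research@clean
# zfs redact pool/data@parent research-bookmark pool/data-research@clean
# zfs send --redact research-bookmark pool/data@parent |
    ssh host zfs receive poolB/data
# zfs send -i pool/data@parent pool/data-research@clean |
    ssh host zfs receive poolB/data-research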

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-release.8.html b/man/master/8/zfs-release.8.html new file mode 100644 index 000000000..f95207a21 --- /dev/null +++ b/man/master/8/zfs-release.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
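A short usage sketch (the tag keep and the snapshot pool/home@backup are assumed names): a recursive hold is added, listed, and then released so the snapshots can be destroyed again:
# zfs hold -r keep pool/home@backup
# zfs holds -r pool/home@backup
# zfs release -r keep pool/home@backup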
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-rename.8.html b/man/master/8/zfs-rename.8.html new file mode 100644 index 000000000..3d6b61f8b --- /dev/null +++ b/man/master/8/zfs-rename.8.html @@ -0,0 +1,375 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + rename ZFS dataset

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's + mountpoint property is set to + + or + , + the file system is not unmounted even if this option is not + given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are the only type of dataset that can be renamed recursively.
+
+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-rollback.8.html b/man/master/8/zfs-rollback.8.html new file mode 100644 index 000000000..886ff5cde --- /dev/null +++ b/man/master/8/zfs-rollback.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + roll ZFS dataset back to snapshot

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+

When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option.

+

The -rR options do not recursively destroy + the child snapshots of a recursive snapshot. Only direct snapshots of the + specified filesystem are destroyed by either of these options. To completely + roll back a recursive snapshot, you must roll back the individual child + snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones of + those snapshots.
+
+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
+

+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-send.8.html b/man/master/8/zfs-send.8.html new file mode 100644 index 000000000..c3624b0c2 --- /dev/null +++ b/man/master/8/zfs-send.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVbcehnpsvw] + [-R [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.
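For example (pool and dataset names are assumptions), a full replication stream followed by an incremental one could mirror a hierarchy to another pool:
# zfs send -R pool/fs@monday | ssh host zfs receive -dF poolB/backup
# zfs send -R -i @monday pool/fs@tuesday | ssh host zfs receive -dF poolB/backup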

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o compress= + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data. Note that uncompressed data from the sender will still attempt + to compress on the receiver, unless you specify + -o compress= + .
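As a sketch (dataset names assumed), combining -L and -c keeps large blocks compressed on the wire exactly as they are stored on disk:
# zfs send -Lc pool/fs@snap | ssh host zfs receive poolB/fs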
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).
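For instance, if pool/clone was created from pool/fs@origin (names assumed), an incremental send of the clone could use the fully specified origin snapshot as its source:
# zfs send -i pool/fs@origin pool/clone@snap | ssh host zfs receive poolB/clone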

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
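A brief sketch of this form of zfs send (dataset, bookmark, and host names are hypothetical): a raw full send of an encrypted dataset, followed by a raw incremental from a bookmark:
# zfs send -w pool/secure@snap1 | ssh backuphost zfs receive poolB/secure
# zfs send -w -i pool/secure#b1 pool/secure@snap2 | ssh backuphost zfs receive poolB/secure
Using -w for every send in the chain keeps the data encrypted in transit and on the receiving system, which never needs the encryption keys loaded.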
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
  2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted, because the snapshots define what is permitted and everything else is redacted). This use case will produce a new redacted snapshot.
  3. To receive an incremental send from a redaction bookmark of the original snapshot into anything else, where the bookmark was created when redacting with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of which blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
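A minimal sketch (dataset and host names are hypothetical): the token is read from the receive_resume_token property of the partially received dataset and passed back to zfs send -t:
# token=$(ssh host zfs get -H -o value receive_resume_token poolB/received/fs)
# zfs send -t "$token" | ssh host zfs receive -s poolB/received/fs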
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
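A minimal sketch of the two-step workflow (dataset, bookmark, and host names are hypothetical; the redaction snapshots are snapshots of clones of the snapshot being redacted):
# zfs redact pool/data@parent sensitive pool/data-dev@clean pool/data-research@clean
# zfs send --redact sensitive pool/data@parent | ssh host zfs receive poolB/data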
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.
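A sketch of the commands behind this example (all pool, dataset, and host names are hypothetical, and the steps that edit the clones are shown as placeholders):
# zfs snapshot tank/shop@parent
# zfs clone tank/shop@parent tank/shop-research
# zfs clone tank/shop@parent tank/shop-dev
  (delete or replace the sensitive files in each clone)
# zfs snapshot tank/shop-research@clean
# zfs snapshot tank/shop-dev@clean
# zfs redact tank/shop@parent book tank/shop-research@clean tank/shop-dev@clean
# zfs send --redact book tank/shop@parent | ssh host zfs receive target/shop
# zfs send -i tank/shop@parent tank/shop-research@clean | ssh host zfs receive target/shop-research
# zfs send -i tank/shop@parent tank/shop-dev@clean | ssh host zfs receive target/shop-dev
The redacted parent lands on the target without the sensitive blocks, while the incremental sends recreate the sanitized clones in full.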

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-set.8.html b/man/master/8/zfs-set.8.html new file mode 100644 index 000000000..a395b54b0 --- /dev/null +++ b/man/master/8/zfs-set.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update the mountpoint, sharenfs, or sharesmb property value without mounting or sharing the dataset; the new value takes effect the next time the dataset is mounted or shared.
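A short sketch (dataset name is hypothetical): change where a file system will be mounted next time, without unmounting and remounting it now:
# zfs set -u mountpoint=/export/home2 pool/home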
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
name
    Dataset name
property
    Property name
value
    Property value
source
    Property source: local, default, inherited, temporary, received, or - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
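As an illustrative sketch combining several of these options (dataset names are hypothetical), the following prints locally set or received compression and quota values for pool/home and its direct children:
# zfs get -d 1 -t filesystem -s local,received -o name,property,value compression,quota pool/home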
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
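A brief sketch (dataset name is hypothetical): drop a local override and fall back to the value that arrived with a received stream, if any:
# zfs inherit -S compression pool/home/bob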
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-share.8.html b/man/master/8/zfs-share.8.html new file mode 100644 index 000000000..e8e6ba659 --- /dev/null +++ b/man/master/8/zfs-share.8.html @@ -0,0 +1,310 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareshare + and unshare ZFS filesystems

+
+
+

+ + + + + +
zfsshare [-l] + -a|filesystem
+
+ + + + + +
zfsunshare + -a|filesystem|mountpoint
+
+
+

+
+
zfs share + [-l] + -a|filesystem
+
Shares available ZFS file systems. +
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a|filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
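A short sketch (dataset names are hypothetical): sharing is driven by the sharenfs and sharesmb properties, with zfs share and zfs unshare toggling the current state:
# zfs set sharenfs=on tank/home
# zfs share tank/home
# zfs unshare tank/home
# zfs share -a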
+
+
+
+
+
+

+

exports(5), smb.conf(5), + zfsprops(7)

+
+
+ + + + + +
May 17, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-snapshot.8.html b/man/master/8/zfs-snapshot.8.html new file mode 100644 index 000000000..ee9c49396 --- /dev/null +++ b/man/master/8/zfs-snapshot.8.html @@ -0,0 +1,357 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + create snapshots of ZFS datasets

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]… + dataset@snapname
+
+
+

+

Creates a snapshot of a dataset or multiple snapshots of different + datasets.

+

Snapshots are created atomically. That is, a snapshot is a + consistent image of a dataset at a specific point in time; it includes all + modifications to the dataset made by system calls that have successfully + completed before that point in time. Recursive snapshots created through the + -r option are all created at the same time.

+

zfs snap can be + used as an alias for zfs + snapshot.

+

See the Snapshots section of + zfsconcepts(7) for details.

+
+
+ property=value
+
Set the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
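A brief sketch (dataset, snapshot, and property names are hypothetical): create recursive snapshots and tag them with a user property at creation time:
# zfs snapshot -r -o com.example:reason=pre-upgrade pool/home@before-upgrade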
+
+
+
+

+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unallow.8.html b/man/master/8/zfs-unallow.8.html new file mode 100644 index 000000000..9971bc760 --- /dev/null +++ b/man/master/8/zfs-unallow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect (for example, if the permission is also granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
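A minimal sketch (set and dataset names are hypothetical): removing a single permission from a set, then removing the set entirely:
# zfs unallow -s @pset snapshot tank/users
# zfs unallow -s @pset tank/users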
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unjail.8.html b/man/master/8/zfs-unjail.8.html new file mode 100644 index 000000000..57d87a31a --- /dev/null +++ b/man/master/8/zfs-unjail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
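A short sketch (jail and dataset names are hypothetical): delegate a dataset to a jail and later take it back:
# zfs create tank/jails/j1
# zfs set jailed=on tank/jails/j1
# zfs jail myjail tank/jails/j1
# zfs unjail myjail tank/jails/j1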
+
+
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unload-key.8.html b/man/master/8/zfs-unload-key.8.html new file mode 100644 index 000000000..358429deb --- /dev/null +++ b/man/master/8/zfs-unload-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
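A few illustrative sketches (dataset names and the key file path are hypothetical; the last command assumes tank/secure/child is itself an encryption root below an encrypted parent):
# zfs load-key -r tank
# zfs load-key -L file:///media/keys/secure.key tank/secure
# zfs change-key -o keyformat=passphrase -o pbkdf2iters=350000 tank/secure
# zfs change-key -i tank/secure/child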
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unmount.8.html b/man/master/8/zfs-unmount.8.html new file mode 100644 index 000000000..9820098f9 --- /dev/null +++ b/man/master/8/zfs-unmount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily for the duration of the mount. See the Temporary Mount Point Properties section of zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
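A few illustrative sketches (dataset names are hypothetical): mount an encrypted filesystem while loading its key, mount another read-only for the duration of the mount, then unmount both:
# zfs mount -l tank/encrypted
# zfs mount -o ro tank/archive
# zfs unmount -u tank/encrypted
# zfs unmount /tank/archive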
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-unzone.8.html b/man/master/8/zfs-unzone.8.html new file mode 100644 index 000000000..6e6bd11cb --- /dev/null +++ b/man/master/8/zfs-unzone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unzone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-unzone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You can also not attach the root file system of the user + namespace or any dataset which needs to be mounted before the zfs + service is run inside the user namespace, as it would be attached + unmounted until it is mounted from the service inside the user + namespace.

+

To allow management of the dataset from within a user namespace, the zoned property has to be set and the user namespace needs access to the /dev/zfs device. The quota property cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-upgrade.8.html b/man/master/8/zfs-upgrade.8.html new file mode 100644 index 000000000..94ee3cf0f --- /dev/null +++ b/man/master/8/zfs-upgrade.8.html @@ -0,0 +1,317 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + manage on-disk version of ZFS filesystems

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a|filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] + -a|filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of ZFS. zfs send + streams generated from new snapshots of these file systems cannot be + accessed on systems running older versions of ZFS. +

In general, the file system version is independent of the pool + version. See zpool-features(7) for information on + features of ZFS storage pools.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to version. If not specified, upgrade to + the most recent version. This option can only be used to increase the + version number, and only up to the most recent version supported by + this version of ZFS.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
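A brief sketch (dataset name is hypothetical): list the supported file system versions, then upgrade one file system and all of its descendents:
# zfs upgrade -v
# zfs upgrade -r pool/home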
+
+
+
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-userspace.8.html b/man/master/8/zfs-userspace.8.html new file mode 100644 index 000000000..6f3eafa7f --- /dev/null +++ b/man/master/8/zfs-userspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@, userobjused@, userquota@, and userobjquota@ properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping exists. Normal POSIX interfaces (like stat(2), ls -l) perform this translation, so the -i option allows the output from zfs userspace to be compared directly with those utilities. However, -i may lead to confusion if some files were created by an SMB user before an SMB-to-POSIX name mapping was established. In such a case, some files will be owned by the SMB entity and some by the POSIX entity. However, the -i option will report that the POSIX entity has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; therefore the -i, -n, and -t options are not needed.
+
+
+
+
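For example (the dataset name tank/home is hypothetical), parsable per-user usage sorted by descending space used, and the corresponding per-group view, could be requested as:
# zfs userspace -p -S used tank/home
# zfs groupspace tank/home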

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-wait.8.html b/man/master/8/zfs-wait.8.html new file mode 100644 index 000000000..ee7b2d06b --- /dev/null +++ b/man/master/8/zfs-wait.8.html @@ -0,0 +1,282 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitwait + for activity in ZFS filesystem to stop

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]…] + filesystem
+
+
+

+

Waits until all background activity of the given types has ceased + in the given filesystem. The activity could cease because it has completed + or because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
The filesystem's internal delete queue to empty
+
+
+

Note that the internal delete queue does not finish draining until + all large files have had time to be fully destroyed and all open file + handles to unlinked files are closed.

+
+
+
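For example (the dataset name tank/home is hypothetical, and the delete-queue activity is assumed to be named deleteq), the following waits until the filesystem's delete queue has drained:
# zfs wait -t deleteq tank/home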

+

lsof(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs-zone.8.html b/man/master/8/zfs-zone.8.html new file mode 100644 index 000000000..b78193d20 --- /dev/null +++ b/man/master/8/zfs-zone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-zone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-zone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user namespace. You also cannot attach the root file system of the user namespace, or any dataset which needs to be mounted before the zfs service is run inside the user namespace, as it would be attached unmounted until it is mounted from the service inside the user namespace.

+

To allow management of the dataset from within a user namespace, the zoned property has to be set and the user namespace needs access to the /dev/zfs device. The zoned property cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
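The dataset can later be detached again from the same namespace with the matching unzone command:
# zfs unzone /proc/1234/ns/user tank/users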
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs.8.html b/man/master/8/zfs.8.html new file mode 100644 index 000000000..6c50e0882 --- /dev/null +++ b/man/master/8/zfs.8.html @@ -0,0 +1,1033 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigure + ZFS datasets

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfssubcommand + [arguments]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace:

+

+
pool[/component]/component
+

for example:

+

+
rpool/var/log
+

The maximum length of a dataset name is currently 255 ASCII characters. Additionally, snapshots are allowed to contain a single @ character, while bookmarks are allowed to contain a single # character. / is used as the separator between components. The maximum amount of nesting allowed in a path is also limited. ZFS tunables are explained in zfs(4).

+

A dataset can be one of the following:

+
+
+
+
Can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX-compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

See zfsconcepts(7) for details.

+
+

+

Properties are divided into two types: native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + zfsprops(7).

+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused/projectused data. For an overview of encryption, see zfs-load-key(8).
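As a brief, illustrative sketch (the dataset name pool/secure is hypothetical), an encrypted file system can be created by enabling the encryption property at creation time; the passphrase is prompted for interactively:
# zfs create -o encryption=on -o keyformat=passphrase pool/secure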

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
 
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8)/zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If a hold exists on a snapshot, attempts to destroy that snapshot by using the zfs destroy command return EBUSY.
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8)/zfs-groupspace(8)/zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the files or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mounts a ZFS filesystem at the path described by its mountpoint property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression=off + pool/home
+
# zfs + set compression=on + pool/home/anne
+
+
+

+

The following command lists all active file systems and volumes in the system. Snapshots are displayed if listsnapshots=on. The default is off. See zpoolprops(7) for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set quota=50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
+

+ Property Options on a ZFS File System

+

The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.

+
# smbmount + //127.0.0.1/share_tmp /mnt/tmp + -o + user=workgroup/turbo,password=obrut,uid=1000
+

Minimal /etc/samba/smb.conf configuration + is required, as follows.

+

Samba will need to bind to the loopback interface for the ZFS + utilities to communicate with Samba. This is the default behavior for most + Linux distributions.

+

Samba must be able to authenticate a user. This can be done in a + number of ways (passwd(5), LDAP, + smbpasswd(5), &c.). How to do this is outside the + scope of this document – refer to smb.conf(5) for + more information.

+

See the USERSHARES section + for all configuration options, in case you need to modify any options of the + share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (like via a reboot).

+
+
+
+

+
+
+
Use ANSI color in zfs diff + and zfs list output.
+
+
Cause zfs mount to use + mount(8) to mount ZFS datasets. This option is provided + for backwards compatibility with older ZFS versions.
+
+
Tells zfs to set the maximum pipe size for sends/receives. Disabled by default on Linux due to an unfixed deadlock in Linux's pipe size handling code.
+
+
Time, in seconds, to wait for /dev/zfs to appear. Defaults to 10 seconds, max 600 (10 minutes). If <0, wait forever; if 0, don't wait.
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + zfsconcepts(7), zfsprops(7), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zpool(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs_ids_to_path.8.html b/man/master/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..4eca4fc14 --- /dev/null +++ b/man/master/8/zfs_ids_to_path.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset-id object-id
+
+
+

+

The zfs_ids_to_path utility converts the provided objset and object IDs into a path to the file they refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
+
+
+
+
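A sketch of typical usage, with a hypothetical pool name and placeholder IDs (such objset/object pairs commonly come from error reports, for example from zpool status -v):
# zfs_ids_to_path -v tank 54 12345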

+

zdb(8), zfs(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zfs_prepare_disk.8.html b/man/master/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..ef24ca9fd --- /dev/null +++ b/man/master/8/zfs_prepare_disk.8.html @@ -0,0 +1,302 @@ + + + + + + + zfs_prepare_disk.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_prepare_disk.8

+
+ + + + + +
ZFS_PREPARE_DISK(8)System Manager's ManualZFS_PREPARE_DISK(8)
+
+
+

+

zfs_prepare_disk — + special script that gets run before bringing a disk into a + pool

+
+
+

+

zfs_prepare_disk is an optional script + that gets called by libzfs before bringing a disk into a pool. It can be + modified by the user to run whatever commands are necessary to prepare a + disk for inclusion into the pool. For example, users can add lines to + zfs_prepare_disk to do things like update the + drive's firmware or check the drive's health. + zfs_prepare_disk is optional and can be removed if + not needed. libzfs will look for the script at + @zfsexecdir@/zfs_prepare_disk.

+
+

+

zfs_prepare_disk will be passed the + following environment variables:

+

+
+
POOL_NAME
+
+
VDEV_PATH
+
+
VDEV_PREPARE
+
The reason the disk is about to be prepared for inclusion ('create', 'add', 'replace', or 'autoreplace'). This can be useful if you only want the script to be run under certain actions.
+
VDEV_UPATH
+
The underlying path to the disk. For multipath this would be one of the /dev/sd* paths to the disk. If the device is not a device mapper device, then VDEV_UPATH just returns the same value as VDEV_PATH.
+
VDEV_ENC_SYSFS_PATH
+
+
+

Note that some of these variables may have a blank value. + POOL_NAME is blank at pool creation time, for + example.
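A minimal, purely illustrative sketch of a site-customized script (the health-check tool and its path are assumptions, not part of OpenZFS) might look like:
#!/bin/sh
# Illustrative sketch only - not the shipped script.
# Only act when a disk is being added or used as a replacement.
case "$VDEV_PREPARE" in
	create|add|replace|autoreplace) ;;
	*) exit 0 ;;
esac
# Hypothetical health check; use a full path because of the limited $PATH.
# A non-zero exit keeps the disk out of the pool.
/usr/sbin/smartctl -H "$VDEV_UPATH" > /dev/null || exit 1
exit 0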

+
+
+
+

+

zfs_prepare_disk runs with a limited + $PATH.

+
+
+

+

zfs_prepare_disk should return 0 on + success, non-zero otherwise. If non-zero is returned, the disk will not be + included in the pool.

+
+
+ + + + + +
August 30, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zgenhostid.8.html b/man/master/8/zgenhostid.8.html new file mode 100644 index 000000000..e9a1bbd28 --- /dev/null +++ b/man/master/8/zgenhostid.8.html @@ -0,0 +1,332 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's ManualZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate host ID into /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates /etc/hostid file and stores the + host ID in it. If hostid was provided, validate and + store that value. Otherwise, randomly generate an ID.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Allow output overwrite.
+
+ filename
+
Write to filename instead of the default + /etc/hostid.
+
hostid
+
Specifies the value to be placed in /etc/hostid. It should be a number with a value between 1 and 2^32-1. If 0, generate a random ID. This value must be unique among your systems. It must be an 8-digit-long hexadecimal number, optionally prefixed by "0x".
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
# + zgenhostid
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
# + zgenhostid + "$(hostid)"
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
# + zgenhostid + deadbeef
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid and overwrite the file + if it exists
+
+
# + zgenhostid -f + -o /tmp/hostid + 0x01234567
+
+
+
+
+

+

genhostid(1), hostid(1), + spl(4)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide the + sethostid(3) function.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zinject.8.html b/man/master/8/zinject.8.html new file mode 100644 index 000000000..e3ab79436 --- /dev/null +++ b/man/master/8/zinject.8.html @@ -0,0 +1,551 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+
+

+

zinjectZFS + Fault Injector

+
+
+

+

zinject creates artificial problems in a + ZFS pool by simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+ + + + + +
zinject
+
+
List injection records.
+
+ + + + + +
zinject-b + objset:object:level:start:end + [-f frequency] + -amu [pool]
+
+
Force an error into the pool at a bookmark.
+
+ + + + + +
zinject-c + id|all
+
+
Cancel injection records.
+
+ + + + + +
zinject -d vdev -A degrade|fault pool
+
+
Force a vdev into the DEGRADED or FAULTED state.
+
+ + + + + +
zinject-d vdev + -D + latency:lanes + [-T read|write] + pool
+
+
Add an artificial delay to I/O requests on a particular device, such that + the requests take a minimum of latency milliseconds + to complete. Each delay has an associated number of + lanes which defines the number of concurrent I/O + requests that can be processed. +

For example, with a single lane delay of 10 ms + (-D + 10:1), the device will only + be able to service a single I/O request at a time with each request + taking 10 ms to complete. So, if only a single request is submitted + every 10 ms, the average latency will be 10 ms; but if more than one + request is submitted every 10 ms, the average latency will be more than + 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D + 10:2), then the device will + be able to service two requests at a time, each with a minimum latency + of 10 ms. So, if two requests are submitted every 10 ms, then the + average latency will be 10 ms; but if more than two requests are + submitted every 10 ms, the average latency will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + -D + 10:1 are roughly equivalent + to a single invocation of -D + 10:2. This also means, that + one can specify multiple lanes with differing target latencies. For + example, an invocation of -D + 10:1 followed by + -D + 25:2 will create 3 lanes on + the device: one lane with a latency of 10 ms and two lanes with a 25 ms + latency.

+
+
+ + + + + +
zinject-d vdev + [-e device_error] + [-L label_error] + [-T failure] + [-f frequency] + [-F] pool
+
+
Force a vdev error.
+
+ + + + + +
zinject-I [-s + seconds|-g + txgs] pool
+
+
Simulate a hardware failure that fails to honor a cache flush.
+
+ + + + + +
zinject-p function + pool
+
+
Panic inside the specified function.
+
+ + + + + +
zinject -t data -C dvas [-e device_error] [-f frequency] [-l level] [-r range] [-amq] path
+
+
Force an error into the contents of a file.
+
+ + + + + +
zinject -t dnode -C dvas [-e device_error] [-f frequency] [-l level] [-amq] path
+
+
Force an error into the metadnode for a file or directory.
+
+ + + + + +
zinject-t mos_type + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amqu] pool
+
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+ objset:object:level:start:end
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+ dvas
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas + (e.g. + 0,2). This option is not + applicable to logical data errors such as decompress and + decrypt.
+
+ vdev
+
A vdev specified by path or GUID.
+
+ device_error
+
Specify +
+
+
checksum for an ECKSUM error,
+
+
decompress for a data decompression error,
+
+
decrypt for a data decryption error,
+
+
corrupt to flip a bit in the data after a read,
+
+
dnode for an ECHILD error,
+
+
io for an EIO error where reopening the device will succeed, or
+
+
nxio for an ENXIO error where reopening the device will fail.
+
+

For EIO and ENXIO, the "failed" reads or writes + still occur. The probe simply sets the error value reported by the I/O + pipeline so it appears the read or write failed. Decryption errors only + currently work with file data.

+
+
+ frequency
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between + + and + .
+
+
Fail faster. Do fewer checks.
+
+ txgs
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+ level
+
Inject an error at a particular block level. The default is 0.
+
+ label_error
+
Set the label error region to one of + , + , + , or + .
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+ range
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+ seconds
+
Run for this many seconds before reporting failure.
+
+ failure
+
Set the failure type to one of all, + , + , + , or + .
+
+ mos_type
+
Set this to +
+
+
for any data in the MOS,
+
+
for an object directory,
+
+
for the pool configuration,
+
+
for the block pointer list,
+
+
for the space map,
+
+
for the metaslab, or
+
+
for the persistent error log.
+
+
+
+
Unload the pool after injection.
+
+
+
+
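As an illustrative sketch (the vdev sdb and pool tank are hypothetical): inject read I/O errors on one device 10% of the time, list the active injection handlers, and then cancel them all:
# zinject -d sdb -e io -T read -f 10 tank
# zinject
# zinject -c all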

+
+
+
Run zinject in debug mode.
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-add.8.html b/man/master/8/zpool-add.8.html new file mode 100644 index 000000000..ed277bc37 --- /dev/null +++ b/man/master/8/zpool-add.8.html @@ -0,0 +1,336 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addadd + vdevs to ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev
+
+
+

+

Adds the specified virtual devices to the given pool. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7). The behavior of the -f option, and the device checks performed, are described in the zpool create subcommand.

+
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name + regardless of the /dev/disk path used to open + it.
+
+
Displays the configuration that would be used without actually adding the + vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
+
+

+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool add tank mirror sda sdb
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool add pool cache sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+
+

+

zpool-attach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-remove(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-attach.8.html b/man/master/8/zpool-attach.8.html new file mode 100644 index 000000000..c8177b9bd --- /dev/null +++ b/man/master/8/zpool-attach.8.html @@ -0,0 +1,335 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + attach new device to existing ZFS vdev

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+

Attaches new_device to the existing + device. The behavior differs depending on if the + existing device is a RAID-Z device, or a mirror/plain + device.

+

If the existing device is a mirror or plain device (e.g. specified + as "sda" or + "mirror-7"), the new device will be + mirrored with the existing device, a resilver will be initiated, and the new + device will contribute to additional redundancy once the resilver completes. + If device is not currently part of a mirrored + configuration, device automatically transforms into a + two-way mirror of device and + new_device. If device is part of + a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled.

+

If the existing device is a RAID-Z device (e.g. specified as + "raidz2-0"), the new device will become part + of that RAID-Z group. A "raidz expansion" will be initiated, and + once the expansion completes, the new device will contribute additional + space to the RAID-Z group. The expansion entails reading all allocated space + from existing disks in the RAID-Z group, and rewriting it to the new disks + in the RAID-Z group (including the newly added + device). Its progress can be monitored with + zpool status.

+

Data redundancy is maintained during and after the expansion. If a + disk fails while the expansion is in progress, the expansion pauses until + the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk + and waiting for reconstruction to complete). Expansion does not change the + number of failures that can be tolerated without data loss (e.g. a RAID-Z2 + is still a RAID-Z2 even after expansion). A RAID-Z vdev can be expanded + multiple times.

+

After the expansion completes, old blocks retain their old + data-to-parity ratio (e.g. 5-wide RAID-Z2 has 3 data and 2 parity) but + distributed among the larger set of disks. New blocks will be written with + the new data-to-parity ratio (e.g. a 5-wide RAID-Z2 which has been expanded + once to 6-wide, has 4 data and 2 parity). However, the vdev's assumed parity + ratio does not change, so slightly less space than is expected may be + reported for newly-written blocks, according to zfs + list, df, + ls -s, and similar + tools.

+

A pool-wide scrub is initiated at the end of the expansion in + order to verify the checksums of all blocks which have been copied during + the expansion.
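Two illustrative invocations (device and pool names hypothetical): mirror an existing plain device with a new one, and expand a RAID-Z group while waiting for the expansion to finish:
# zpool attach tank sda sdb
# zpool attach -w tank raidz2-0 sdg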

+
+
+
Forces use of new_device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
When attaching to a mirror or plain device, the + new_device is reconstructed sequentially to restore + redundancy as quickly as possible. Checksums are not verified during + sequential reconstruction so a scrub is started when the resilver + completes.
+
+
Waits until new_device has finished resilvering or + expanding before returning.
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
June 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-checkpoint.8.html b/man/master/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..3c1109c5a --- /dev/null +++ b/man/master/8/zpool-checkpoint.8.html @@ -0,0 +1,290 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + check-point current ZFS storage pool state

+
+
+

+ + + + + +
zpoolcheckpoint [-d + [-w]] pool
+
+
+

+

Checkpoints the current state of pool, which can be later restored by zpool import --rewind-to-checkpoint. The existence of a checkpoint in a pool prohibits the following zpool subcommands: remove, attach, detach, split, and reguid. In addition, it may break reservation boundaries if the pool lacks free space. The zpool status command indicates the existence of a checkpoint or the progress of discarding a checkpoint from a pool. zpool list can be used to check how much space the checkpoint takes from the pool.
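For example (pool name hypothetical), a checkpoint can be taken before a risky change and discarded once the change proves sound; to roll back instead, export the pool and re-import it with zpool import --rewind-to-checkpoint:
# zpool checkpoint tank
  make the risky changes
# zpool checkpoint -d tank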

+
+
+

+
+
, + --discard
+
Discards an existing checkpoint from pool.
+
, + --wait
+
Waits until the checkpoint has finished being discarded before + returning.
+
+
+
+

+

zfs-snapshot(8), + zpool-import(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-clear.8.html b/man/master/8/zpool-clear.8.html new file mode 100644 index 000000000..9b53176cf --- /dev/null +++ b/man/master/8/zpool-clear.8.html @@ -0,0 +1,284 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + clear device errors in ZFS storage pool

+
+
+

+ + + + + +
zpoolclear [--power] + pool [device]…
+
+
+

+

Clears device errors in a pool. If no arguments are specified, all + device errors within the pool are cleared. If one or more devices is + specified, only those errors associated with the specified device or devices + are cleared.

+

If the pool was suspended it will be brought back online provided the devices can be accessed. Pools with multihost enabled which have been suspended cannot be resumed. While the pool was suspended, it may have been imported on another host, and resuming I/O could result in pool damage.

+
+
+
Power on the device's slot in the storage enclosure and wait for the device to show up before attempting to clear errors. This is done on all the devices specified. Alternatively, you can set the ZPOOL_AUTO_POWER_ON_SLOT environment variable to always enable this behavior. Note: This flag currently works on Linux only.
+
+
+
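For example (names hypothetical), clearing all device errors in a pool, or only those on a single device:
# zpool clear tank
# zpool clear tank sda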
+

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-create.8.html b/man/master/8/zpool-create.8.html new file mode 100644 index 000000000..260ed40dd --- /dev/null +++ b/man/master/8/zpool-create.8.html @@ -0,0 +1,449 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + create ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]… + [-o + feature@feature=value] + [-o + compatibility=off|legacy|file[,file]…] + [-O + file-system-property=value]… + [-R root] + [-t tname] + pool vdev
+
+
+

+

Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as the underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, draid, spare and log are reserved, as are names beginning with mirror, raidz, draid, and spare. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7).

+

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost=enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or specified + as the dedicated dump device, that prevents a device from ever being used by + ZFS. Other uses, such as having a preexisting UFS file system, can be + overridden with -f.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently-sized + devices within a single raidz or mirror group is also flagged as an error + unless -f is specified.

+

Unless the -R option is specified, the default mount point is /pool. The mount point must not exist or must be empty, or else the root dataset will not be able to be mounted. This can be overridden with the -m option.

+

By default all supported features are enabled on the new pool. The -d option and the -o compatibility property (e.g. -o compatibility=2020) can be used to restrict the features that are enabled, so that the pool can be imported on other releases of ZFS.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with -o. See + zpool-features(7) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool if + altroot is specified. The mount point must be an + absolute path, legacy, or none. For + more information on dataset mount points, see + zfsprops(7).
+
+
Displays the configuration that would be used without actually creating + the pool. The actual pool creation can still fail due to insufficient + privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See zpoolprops(7) for a + list of valid properties that can be set.
+
+ compatibility=off|legacy|file[,file]…
+
Specifies compatibility feature sets. See + zpool-features(7) for more information about + compatibility feature sets.
+
+ feature@feature=value
+
Sets the given pool feature. See the zpool-features(7) + section for a list of valid features that can be set. Value can be either + disabled or enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the pool. + See zfsprops(7) for a list of valid properties that can + be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the + on-disk name will be the name specified as pool. + This will set the default of the cachefile property to + none. This is intended to handle name space collisions + when creating pools for other systems, such as virtual machines or + physical machines whose pools live on network block devices.
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + raidz sda sdb sdc sdd sde + sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + spare sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd log + mirror sde sdf
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-destroy.8.html b/man/master/8/zpool-destroy.8.html new file mode 100644 index 000000000..6190eccad --- /dev/null +++ b/man/master/8/zpool-destroy.8.html @@ -0,0 +1,278 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + destroy ZFS storage pool

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+

Destroys the given pool, freeing up any devices for other use. + This command tries to unmount any active datasets before destroying the + pool.

+
+
+
Forcefully unmount all active datasets.
+
+
+
+

+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-detach.8.html b/man/master/8/zpool-detach.8.html new file mode 100644 index 000000000..73ab2ecbe --- /dev/null +++ b/man/master/8/zpool-detach.8.html @@ -0,0 +1,271 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + detach device from ZFS mirror

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+

Detaches device from a mirror. The operation + is refused if there are no other valid replicas of the data. If + device may be re-added to the pool later on then + consider the zpool offline + command instead.
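For example (names hypothetical), detaching one side of a mirror:
# zpool detach tank sdb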

+
+
+

+

zpool-attach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-events.8.html b/man/master/8/zpool-events.8.html new file mode 100644 index 000000000..bb4fd44ed --- /dev/null +++ b/man/master/8/zpool-events.8.html @@ -0,0 +1,872 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + list recent events generated by kernel

+
+
+

+ + + + + +
zpoolevents [-vHf] + [pool]
+
+ + + + + +
zpoolevents -c
+
+
+

+

Lists all recent events generated by the ZFS kernel modules. These + events are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. For + more information about the subclasses and event payloads that can be + generated see EVENTS and the following + sections.

+
+
+

+
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
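For example, to dump the full payload of recent events, or to follow new events for a specific pool (pool name hypothetical):
# zpool events -v
# zpool events -f tank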
+

+

These are the different event subclasses. The full event name would be ereport.fs.zfs.SUBCLASS, but only the last part is listed here.

+

+
+
+
Issued when a checksum error has been detected.
+
+
Issued when there is an I/O error in a vdev in the pool.
+
+
Issued when there have been data errors in the pool.
+
+
Issued when an I/O request is determined to be "hung"; this can be caused by lost completion events due to flaky hardware or drivers. See zfs(4) for additional information regarding "hung" I/O detection and configuration.
+
+
Issued when a completed I/O request exceeds the maximum allowed time + specified by the + + module parameter. This can be an indicator of problems with the underlying + storage device. The number of delay events is ratelimited by the + + module parameter.
+
+
Issued every time a vdev change has been made to the pool.
+
+
Issued when a pool cannot be imported.
+
+
Issued when a pool is destroyed.
+
+
Issued when a pool is exported.
+
+
Issued when a pool is imported.
+
+
Issued when a REGUID (regeneration of the pool's unique identifier) has been detected.
+
+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been kicked from the system/pool and is no longer available.
+
+
Issued when a vdev could not be opened (for example, because it did not exist).
+
+
Issued when corrupt data has been detected on a vdev.
+
+
Issued when there are no more replicas to sustain the pool. This would + lead to the pool being + .
+
+
Issued when a missing device in the pool has been detected.
+
+
Issued when the system (kernel) has removed a device, and ZFS notices that the device is no longer there. This is usually followed by a probe_failure event.
+
+
Issued when the label is OK but invalid.
+
+
Issued when the ashift alignment requirement has increased.
+
+
Issued when a vdev is detached from a mirror (or a spare is detached from a vdev where it has been used to replace a failed drive; this only works if the original drive has been re-added).
+
+
Issued when clearing device errors in a pool, such as by running zpool clear on a device in the pool.
+
+
Issued when a check to see if a given vdev could be opened is + started.
+
+
Issued when a spare has kicked in to replace a failed device.
+
+
Issued when a vdev can be automatically expanded.
+
+
Issued when there is an I/O failure in a vdev in the pool.
+
+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (for example, the kernel has removed the device).
+
+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+
+
Issued when a resilver is started.
+
+
Issued when the running resilver has finished.
+
+
Issued when a scrub is started on a pool.
+
+
Issued when a pool has finished scrubbing.
+
+
Issued when a scrub is aborted on a pool.
+
+
Issued when a scrub is resumed on a pool.
+
+
Issued when a scrub is paused on a pool.
+
+
 
+
+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to + uppercase and prefixed with + .

+

+
+
+
Pool name.
+
+
Failmode - + , + , + or + . + See the + + property in zpoolprops(7) for more information.
+
+
The GUID of the pool.
+
+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+
+
The GUID of the vdev in question (the vdev failing or operated upon with + zpool clear, etc.).
+
+
Type of vdev - + , + , + , + etc. See the + section of zpoolconcepts(7) for more + information on possible values.
+
+
Full path of the vdev, including any -partX.
+
+
ID of vdev (if any).
+
+
Physical FRU location.
+
+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed + to open, 5=faulted, 6=degraded, 7=healthy).
+
+
The ashift value of the vdev.
+
+
The time the last I/O request completed for the specified vdev.
+
+
The time since the last I/O request completed for the specified vdev.
+
+
List of spares, including full path and any -partX.
+
+
GUID(s) of spares.
+
+
How many read errors that have been detected on the vdev.
+
+
How many write errors that have been detected on the vdev.
+
+
How many checksum errors that have been detected on the vdev.
+
+
GUID of the vdev parent.
+
+
Type of parent. See vdev_type.
+
+
Path of the vdev parent (if any).
+
+
ID of the vdev parent (if any).
+
+
The object set number for a given I/O request.
+
+
The object number for a given I/O request.
+
+
The indirect level for the block. Level 0 is the lowest level and includes + data blocks. Values > 0 indicate metadata blocks at the appropriate + level.
+
+
The block ID for a given I/O request.
+
+
The error number for a failure when handling a given I/O request, + compatible with errno(3) with the value of + + used to indicate a ZFS checksum error.
+
+
The offset in bytes of where to write the I/O request for the specified + vdev.
+
+
The size in bytes of the I/O request.
+
+
The current flags describing how the I/O request should be handled. See + the I/O FLAGS section for the full list of I/O + flags.
+
+
The current stage of the I/O in the pipeline. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The time elapsed (in nanoseconds) waiting for the block layer to complete + the I/O request. Unlike zio_delta, this does not include + any vdev queuing time and is therefore solely a measure of the block layer + performance.
+
+
The time when a given I/O request was submitted.
+
+
The time required to service a given I/O request.
+
+
The previous state of the vdev.
+
+
Checksum algorithm used. See zfsprops(7) for more + information on the available checksum algorithms.
+
+
Whether or not the data is byteswapped.
+
+
[start, end) pairs of corruption offsets. Offsets are always aligned on a 64-bit boundary, and can include some gaps of non-corruption. (See bad_ranges_min_gap.)
+
+
In order to bound the size of the bad_ranges array, gaps + of non-corruption less than or equal to + bad_ranges_min_gap bytes have been merged with adjacent + corruption. Always at least 8 bytes, since corruption is detected on a + 64-bit word basis.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits in that range which were clear in the + good data and set in the bad data.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits for that range which were set in the + good data and clear in the bad data.
+
+
If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+
+
Like bad_set_bits, but contains (good + data & ~(bad + data)); that is, the bits set in the good data which are cleared in + the bad data.
+
+
+
+

+

The ZFS I/O pipeline is composed of various stages which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, Ioctl, and Trim. These stages may be set on an event to describe the life cycle of a given I/O request.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCIT
ZIO_STAGE_READ_BP_INIT0x00000002R-----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W----
ZIO_STAGE_FREE_BP_INIT0x00000008--F---
ZIO_STAGE_ISSUE_ASYNC0x00000010-WF--T
ZIO_STAGE_WRITE_COMPRESS0x00000020-W----
ZIO_STAGE_ENCRYPT0x00000040-W----
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W----
ZIO_STAGE_NOP_WRITE0x00000100-W----
ZIO_STAGE_BRT_FREE0x00000200--F---
ZIO_STAGE_DDT_READ_START0x00000400R-----
ZIO_STAGE_DDT_READ_DONE0x00000800R-----
ZIO_STAGE_DDT_WRITE0x00001000-W----
ZIO_STAGE_DDT_FREE0x00002000--F---
ZIO_STAGE_GANG_ASSEMBLE0x00004000RWFC--
ZIO_STAGE_GANG_ISSUE0x00008000RWFC--
ZIO_STAGE_DVA_THROTTLE0x00010000-W----
ZIO_STAGE_DVA_ALLOCATE0x00020000-W----
ZIO_STAGE_DVA_FREE0x00040000--F---
ZIO_STAGE_DVA_CLAIM0x00080000---C--
ZIO_STAGE_READY0x00100000RWFCIT
ZIO_STAGE_VDEV_IO_START0x00200000RW--IT
ZIO_STAGE_VDEV_IO_DONE0x00400000RW---T
ZIO_STAGE_VDEV_IO_ASSESS0x00800000RW--IT
ZIO_STAGE_CHECKSUM_VERIFY0x01000000R-----
ZIO_STAGE_DONE0x02000000RWFCIT
+
+
+

+

Every I/O request in the pipeline contains a set of flags which + describe its function and are used to govern its behavior. These flags will + be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+

+

zfs(4), zed(8), + zpool-wait(8)

+
+
+ + + + + +
February 28, 2024Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-export.8.html b/man/master/8/zpool-export.8.html new file mode 100644 index 000000000..c10c84372 --- /dev/null +++ b/man/master/8/zpool-export.8.html @@ -0,0 +1,299 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + export ZFS storage pools

+
+
+

+ + + + + +
zpoolexport [-f] + -a|pool
+
+
+

+

Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present.

+

Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, so + that ZFS can label the disks with portable EFI labels. Otherwise, disk + drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, and allow export of pools with active + shared spares. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential + data corruption.

+
+
+
+
+

+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-get.8.html b/man/master/8/zpool-get.8.html new file mode 100644 index 000000000..0be761657 --- /dev/null +++ b/man/master/8/zpool-get.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
+
+
+
+
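As an illustrative sketch (the pool name tank is hypothetical), retrieving all pool properties, and then a few specific ones in scripted, parsable form, might look like:
+
# zpool get all tank
# zpool get -Hp size,capacity,health tank
+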

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-history.8.html b/man/master/8/zpool-history.8.html new file mode 100644 index 000000000..26e2bd086 --- /dev/null +++ b/man/master/8/zpool-history.8.html @@ -0,0 +1,277 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + inspect command history of ZFS storage pools

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]…
+
+
+

+

Displays the command history of the specified pool(s) or all pools + if no pool is specified.

+
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard format + includes, the user name, the hostname, and the zone in which the operation + was performed.
+
+
+
+
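As an illustrative sketch (the pool name tank is hypothetical), showing the long-format history including internally logged events might look like:
+
# zpool history -il tank
+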

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-import.8.html b/man/master/8/zpool-import.8.html new file mode 100644 index 000000000..690de60a9 --- /dev/null +++ b/man/master/8/zpool-import.8.html @@ -0,0 +1,575 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + import ZFS storage pools or list available pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d + dir|device]…
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root]
+
+ + + + + +
zpoolimport [-Dflmt] + [-F [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]…
+
Lists pools available to import. If the -d or + -c options are not specified, this command + searches for devices using libblkid on Linux and geom on + FreeBSD. The -d option can + be specified multiple times, and all directories are searched. If the + device appears to be part of an exported pool, this command displays a + summary of the pool with the name of the pool, a numeric identifier, as + well as the vdev layout and current health of the device for each device + or file. Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-nTX]] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds the pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflmt] [-F + [-nTX]] [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. + : + This option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set + -o + cachefile=none when not explicitly + specified.
+
+
+
+
+
+

+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+
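As a further illustrative sketch (the pool name tank and the alternate root /mnt are hypothetical), importing a pool from a specific device directory under an alternate root, without mounting its file systems, might look like:
+
# zpool import -d /dev/disk/by-id -R /mnt -N tank
+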
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-initialize.8.html b/man/master/8/zpool-initialize.8.html new file mode 100644 index 000000000..0b857ff2a --- /dev/null +++ b/man/master/8/zpool-initialize.8.html @@ -0,0 +1,298 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + write to unallocated regions of ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize + [-c|-s + |-u] [-w] + pool [device]…
+
+
+

+

Begins initializing by writing to all unallocated regions on the + specified devices, or all eligible devices in the pool if no individual + devices are specified. Only leaf data or log devices may be initialized.

+
+
, + --cancel
+
Cancel initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no cancellation + will occur on any device.
+
, + --suspend
+
Suspend initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no suspension will + occur on any device. Initializing can then be resumed by running + zpool initialize with no + flags on the relevant target devices.
+
, + --uninit
+
Clears the initialization state on the specified devices, or all eligible devices if none are specified. If the devices are being actively initialized the command will fail. After being cleared, zpool initialize with no flags can be used to re-initialize all unallocated regions on the relevant target devices.
+
, + --wait
+
Wait until the devices have finished initializing before returning.
+
+
+
+
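As an illustrative sketch (the pool name tank and device name sdb are hypothetical), initializing a single device and waiting for completion might look like:
+
# zpool initialize -w tank sdb
+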

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-iostat.8.html b/man/master/8/zpool-iostat.8.html new file mode 100644 index 000000000..d8100ceee --- /dev/null +++ b/man/master/8/zpool-iostat.8.html @@ -0,0 +1,490 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + display logical I/O statistics for ZFS storage + pools

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [pool…|[pool + vdev…]|vdev…] + [interval [count]]
+
+
+

+

Displays logical I/O statistics for the given pools/vdevs. Physical I/O statistics may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until killed. If the -n flag is specified, the headers are displayed only once; otherwise, they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot, regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of , , … that are printed in the report are in base 1024. To get the raw values, use the -p flag.

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool iostat + output. Users can run any script found in their + ~/.zpool.d directory or from the system + /etc/zfs/zpool.d directory. Script names + containing the slash + () character + are not allowed. The default search path can be overridden by setting the + + environment variable. A privileged user can only run + -c if they have the + + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or add + the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script name, + it prints a list of all scripts. -c also sets + verbose mode + (-v).

+

Script output should be in the form of "name=value". + The column name is set to "name" and the value is set to + "value". Multiple lines can be used to output multiple + columns. The first line of output not in the "name=value" + format is displayed without a column title, and no more output after + that is displayed. This can be useful for printing error messages. Blank + or NULL values are printed as a '-' to make output AWKable.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
Underlying path to the vdev (/dev/sd*). For + use with device mapper, multipath, or partitioned vdevs.
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Print request size histograms for the leaf vdev's I/O. This includes + histograms of individual I/O (ind) and aggregate I/O (agg). These stats + can be useful for observing how well I/O aggregation is working. Note that + TRIM I/O may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Normally the first line of output reports the statistics since boot: + suppress it.
+
+
Display latency histograms: +
+
+
Total I/O time (queuing + disk I/O time).
+
+
Disk I/O time (time reading/writing the disk).
+
+
Amount of time I/O spent in synchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in asynchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in scrub queue. Does not include disk + time.
+
+
Amount of time I/O spent in rebuild queue. Does not include disk + time.
+
+
+
+
Include average latency statistics: +
+
+
Average total I/O time (queuing + disk I/O time).
+
+
Average disk I/O time (time reading/writing the disk).
+
+
Average amount of time I/O spent in synchronous priority queues. Does + not include disk time.
+
+
Average amount of time I/O spent in asynchronous priority queues. Does + not include disk time.
+
+
Average queuing time in scrub queue. Does not include disk time.
+
+
Average queuing time in trim queue. Does not include disk time.
+
+
Average queuing time in rebuild queue. Does not include disk + time.
+
+
+
+
Include active queue statistics. Each priority queue has both pending + () + and active + () + I/O requests. Pending requests are waiting to be issued to the disk, and + active requests have been issued to disk and are waiting for completion. + These stats are broken out by priority queue: +
+
+
Current number of entries in synchronous priority queues.
+
+
Current number of entries in asynchronous priority queues.
+
+
Current number of entries in scrub queue.
+
+
Current number of entries in trim queue.
+
+
Current number of entries in rebuild queue.
+
+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
+

+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
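As a further illustrative sketch (the pool name tank is hypothetical), per-vdev statistics with average latency columns, refreshed every five seconds, might be requested as:
+
# zpool iostat -vl tank 5
+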
+

+

iostat(1), smartctl(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-labelclear.8.html b/man/master/8/zpool-labelclear.8.html new file mode 100644 index 000000000..fa0ee4928 --- /dev/null +++ b/man/master/8/zpool-labelclear.8.html @@ -0,0 +1,275 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + remove ZFS label information from device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+

Removes ZFS label information from the specified + device. If the device is a cache + device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration.

+
+
+
Treat exported or foreign devices as inactive.
+
+
+
+
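As an illustrative sketch (the device path /dev/sdb is hypothetical), clearing the label from a disk that belonged to an exported pool might look like:
+
# zpool labelclear -f /dev/sdb
+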

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-list.8.html b/man/master/8/zpool-list.8.html new file mode 100644 index 000000000..12b5f309c --- /dev/null +++ b/man/master/8/zpool-list.8.html @@ -0,0 +1,354 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listlist + information about ZFS storage pools

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]…] + [-T u|d] + [pool]… [interval + [count]]
+
+
+

+

Lists the given pools along with a health status and space usage. + If no pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until killed. If + count is specified, the command exits after + count reports are printed.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(7) manual page for a list of valid + properties. The default list is + , + , + , + , + , + , + , + , + , + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within + the pool, in addition to the pool-wide statistics.
+
+
+
+

+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command displays the detailed information for the pool data. This pool is composed of a single raidz vdev where one of its devices increased its capacity by 10 GiB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+
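As a further illustrative sketch, a scripted, parsable listing of selected properties for all pools might be requested as:
+
# zpool list -Hp -o name,size,allocated,free,health
+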
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-offline.8.html b/man/master/8/zpool-offline.8.html new file mode 100644 index 000000000..809aefc67 --- /dev/null +++ b/man/master/8/zpool-offline.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
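As an illustrative sketch (the pool name tank and device name sdb are hypothetical), temporarily taking a device offline and later bringing it back online might look like:
+
# zpool offline -t tank sdb
# zpool online tank sdb
+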
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-online.8.html b/man/master/8/zpool-online.8.html new file mode 100644 index 000000000..850dd78df --- /dev/null +++ b/man/master/8/zpool-online.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-reguid.8.html b/man/master/8/zpool-reguid.8.html new file mode 100644 index 000000000..3ff0eee86 --- /dev/null +++ b/man/master/8/zpool-reguid.8.html @@ -0,0 +1,268 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + generate new unique identifier for ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+

Generates a new unique identifier for the pool. You must ensure + that all devices in this pool are online and healthy before performing this + action.

+
+
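As an illustrative sketch (the pool name tank is hypothetical), a typical invocation is simply:
+
# zpool reguid tank
+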
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-remove.8.html b/man/master/8/zpool-remove.8.html new file mode 100644 index 000000000..996f29b7d --- /dev/null +++ b/man/master/8/zpool-remove.8.html @@ -0,0 +1,363 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + remove devices from ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool remove + [-npw] pool + device
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. +

Top-level vdevs can only be removed if the primary pool + storage does not contain a top-level raidz vdev, all top-level vdevs + have the same sector size, and the keys for all encrypted datasets are + loaded.

+

Removing a top-level vdev reduces the + total amount of space in the storage pool. The specified device will be + evacuated by copying all allocated space from it to the other devices in + the pool. In this case, the zpool + remove command initiates the removal and + returns, while the evacuation continues in the background. The removal + progress can be monitored with zpool + status. If an I/O error is encountered during + the removal process it will be cancelled. The + + feature flag must be enabled to remove a top-level vdev, see + zpool-features(7).

+

A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the same. Non-log devices or data devices that are part of a mirrored configuration can be removed using the zpool detach command.

+
+
+
Do not actually perform the removal ("No-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
+
+
+

+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
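As a further illustrative sketch, a dry run that only prints the estimated mapping-table memory use for the same removal, in parsable values, might look like:
+
# zpool remove -np tank mirror-1
+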
+
+

+

zpool-add(8), zpool-detach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-reopen.8.html b/man/master/8/zpool-reopen.8.html new file mode 100644 index 000000000..bf061c80b --- /dev/null +++ b/man/master/8/zpool-reopen.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + reopen vdevs associated with ZFS storage pools

+
+
+

+ + + + + +
zpoolreopen [-n] + [pool]…
+
+
+

+

Reopen all vdevs associated with the specified pools, or all pools + if none specified.

+
+
+

+
+
+
Do not restart an in-progress scrub operation. This is not recommended and + can result in partially resilvered devices unless a second scrub is + performed.
+
+
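As an illustrative sketch (the pool name tank is hypothetical), reopening the vdevs of a single pool might look like:
+
# zpool reopen tank
+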
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-replace.8.html b/man/master/8/zpool-replace.8.html new file mode 100644 index 000000000..a1c850f47 --- /dev/null +++ b/man/master/8/zpool-replace.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + replace one device with another in ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new-device]
+
+
+

+

Replaces device with + new-device. This is equivalent to attaching + new-device, waiting for it to resilver, and then + detaching device. Any in progress scrub will be + cancelled.

+

The size of new-device must be greater than + or equal to the minimum size of all the devices in a mirror or raidz + configuration.

+

new-device is required if the pool is not + redundant. If new-device is not specified, it defaults + to device. This form of replacement is useful after an + existing disk has failed and has been physically replaced. In this case, the + new disk may have the same /dev path as the old + device, even though it is actually a different disk. ZFS recognizes + this.

+
+
+
Forces use of new-device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new-device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until the replacement has completed before returning.
+
+
+
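As an illustrative sketch (the pool name tank and device names sdb and sdc are hypothetical), replacing a disk with a new one, or in place after it has been physically swapped, might look like:
+
# zpool replace tank sdb sdc
# zpool replace tank sdb
+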
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-resilver.8.html b/man/master/8/zpool-resilver.8.html new file mode 100644 index 000000000..7cc7cce3c --- /dev/null +++ b/man/master/8/zpool-resilver.8.html @@ -0,0 +1,272 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + resilver devices in ZFS storage pools

+
+
+

+ + + + + +
zpoolresilver pool
+
+
+

+

Starts a resilver of the specified pools. If an existing resilver + is already running it will be restarted from the beginning. Any drives that + were scheduled for a deferred resilver will be added to the new one. This + requires the + + pool feature.

+
+
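As an illustrative sketch (the pool name tank is hypothetical), a typical invocation is simply:
+
# zpool resilver tank
+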
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-scrub.8.html b/man/master/8/zpool-scrub.8.html new file mode 100644 index 000000000..a64e5866b --- /dev/null +++ b/man/master/8/zpool-scrub.8.html @@ -0,0 +1,362 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + begin or resume scrub of ZFS storage pools

+
+
+

+ + + + + +
zpoolscrub + [-s|-p] + [-w] [-e] + pool
+
+
+

+

Begins a scrub or resumes a paused scrub. The scrub examines all + data in the specified pools to verify that it checksums correctly. For + replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any + damage discovered during the scrub. The zpool + status command reports the progress of the scrub and + summarizes the results of the scrub upon completion.

+

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be out + of date (for example, when attaching a new device to a mirror or replacing + an existing device), whereas scrubbing examines all data to discover silent + errors due to hardware faults or disk failure.

+

When scrubbing a pool with encrypted filesystems the keys do not + need to be loaded. However, if the keys are not loaded and an unrepairable + checksum error is detected the file name cannot be included in the + zpool status + -v verbose error report.

+

Because scrubbing and resilvering are I/O-intensive operations, + ZFS only allows one at a time.

+

A scrub is split into two parts: metadata scanning and block + scrubbing. The metadata scanning sorts blocks into large sequential ranges + which can then be read much more efficiently from disk when issuing the + scrub I/O.

+

If a scrub is paused, the zpool + scrub resumes it. If a resilver is in progress, ZFS + does not allow a scrub to be started until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During this + period, no completion time estimate will be provided.

+
+
+

+
+
+
Stop scrubbing.
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to + disk. If the system is restarted or pool is exported during a paused + scrub, even after import, scrub will remain paused until it is resumed. + Once resumed the scrub will pick up from the place where it was last + checkpointed to disk. To resume a paused scrub issue + zpool scrub or + zpool scrub + -e again.
+
+
Wait until scrub has completed before returning.
+
+
Only scrub files with known data errors as reported by + zpool status + -v. The pool must have been scrubbed at least once + with the + + feature enabled to use this option. Error scrubbing cannot be run + simultaneously with regular scrubbing or resilvering, nor can it be run + when a regular scrub is paused.
+
+
+
+

+
+

+

Status of pool with ongoing scrub:

+

+
+
# zpool status
+  ...
+  scan: scrub in progress since Sun Jul 25 16:07:49 2021
+        403M / 405M scanned at 100M/s, 68.4M / 405M issued at 10.0M/s
+        0B repaired, 16.91% done, 00:00:04 to go
+  ...
+
+

Where metadata which references 403M of file data has been scanned + at 100M/s, and 68.4M of that file data has been scrubbed sequentially at + 10.0M/s.

+
+
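As a further illustrative sketch (the pool name tank is hypothetical), starting a scrub and waiting for it to finish, or pausing one and later resuming it, might look like:
+
# zpool scrub -w tank
# zpool scrub -p tank
# zpool scrub tank
+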
+
+

+

On machines using systemd, scrub timers can be enabled on a per-pool basis. weekly and monthly timer units are provided.

+
+
+
systemctl enable + zfs-scrub-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-scrub-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpool-iostat(8), + zpool-resilver(8), + zpool-status(8)

+
+
+ + + + + +
June 22, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-set.8.html b/man/master/8/zpool-set.8.html new file mode 100644 index 000000000..42f245cb9 --- /dev/null +++ b/man/master/8/zpool-set.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
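As a brief illustration (the pool name tank is hypothetical), a single pool
  property can be read in scripted, parsable form, and the autotrim pool
  property can be enabled, with:
# zpool get -Hp -o value capacity tank
# zpool set autotrim=on tank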
+
+
+
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-split.8.html b/man/master/8/zpool-split.8.html new file mode 100644 index 000000000..280143f1b --- /dev/null +++ b/man/master/8/zpool-split.8.html @@ -0,0 +1,317 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + split devices off ZFS storage pool, creating new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]… + [-R root] + pool newpool + [device]…
+
+
+

+

Splits devices off pool creating + newpool. All vdevs in pool must + be mirrors and the pool must not be in the process of resilvering. At the + time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool.

+

The optional device specification causes the specified device(s)
  to be included in the new pool; should any devices
  remain unspecified, the last device in each mirror is used, as it would be by
  default.
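For example, assuming a mirrored pool named tank and a new pool name newtank
  (both hypothetical), the expected layout can be previewed with the dry-run
  -n flag described below before performing the actual split:
# zpool split -n tank newtank
# zpool split tank newtank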

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted + datasets it attempts to mount as it is bringing the new pool online. Note + that if any datasets have + =, + this command will block waiting for the keys to be entered. Without this + flag, encrypted datasets will be left unavailable until the keys are + loaded.
+
+
Do a dry-run ("No-op") split: do not actually perform it. Print + out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(7) manual page for more information on the + available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
+
+
+
+

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-status.8.html b/man/master/8/zpool-status.8.html new file mode 100644 index 000000000..7c6259fef --- /dev/null +++ b/man/master/8/zpool-status.8.html @@ -0,0 +1,373 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + show detailed health status for ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-DegiLpPstvx] + [-T u|d] + [-c + [SCRIPT1[,SCRIPT2]…]] + [pool]… [interval + [count]]
+
+
+

+

Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in the + system is displayed. For more information on pool and device health, see the + Device Failure and + Recovery section of zpoolconcepts(7).

+

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.
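For example, the -x flag described below restricts the output to pools with
  problems; on a healthy system it typically prints a single line:
# zpool status -x
all pools are healthy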

+
+
+
Display vdev enclosure slot power status (on or off).
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool status + output. See the -c option of + zpool iostat for complete + details.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Only show unhealthy vdevs (not-ONLINE or with errors).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be
  used in place of device names for the zpool detach/offline/remove/replace
  commands.
+
+
Display vdev initialization status.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Display the number of leaf vdev slow I/O operations. This is the number of + I/O operations that didn't complete in + + milliseconds + ( + by default). This does not necessarily mean the + I/O operations failed to complete, just took an unreasonably long amount + of time. This may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
Displays verbose data error information, printing out a complete list of + all data errors since the last complete pool scrub. If the head_errlog + feature is enabled and files containing errors have been removed then the + respective filenames will not be reported in subsequent runs of this + command.
+
+
Only display status for pools that are exhibiting errors or are otherwise + unavailable. Warnings about pools not using the latest on-disk format will + not be included.
+
+
+
+

+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-sync.8.html b/man/master/8/zpool-sync.8.html new file mode 100644 index 000000000..d6673acb9 --- /dev/null +++ b/man/master/8/zpool-sync.8.html @@ -0,0 +1,269 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncflush + data to primary storage of ZFS storage pools

+
+
+

+ + + + + +
zpoolsync [pool]…
+
+
+

+

This command forces all in-core dirty data to be written to the + primary pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified pools.
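For example, to sync every imported pool, or only a hypothetical pool named
  tank:
# zpool sync
# zpool sync tank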

+
+
+

+

zpoolconcepts(7), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-trim.8.html b/man/master/8/zpool-trim.8.html new file mode 100644 index 000000000..ed2ad1a0b --- /dev/null +++ b/man/master/8/zpool-trim.8.html @@ -0,0 +1,326 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + initiate TRIM of free space in ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c|-s] + pool [device]…
+
+
+

+

Initiates an immediate on-demand TRIM operation for all of the + free space in a pool. This operation informs the underlying storage devices + of all blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.

+

A manual on-demand TRIM operation can be initiated irrespective of + the autotrim pool property setting. See the documentation + for the autotrim property above for the types of vdev + devices which can be trimmed.
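For example, a TRIM of a hypothetical pool named tank can be started, or
  started and waited on until it completes using the -w flag described below:
# zpool trim tank
# zpool trim -w tank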

+
+
, + --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, the + device guarantees that data stored on the trimmed blocks has been erased. + This requires support from the device and is not supported by all + SSDs.
+
, + --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
, + --cancel
+
Cancel trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no cancellation will + occur on any device.
+
, + --suspend
+
Suspend trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no suspension will + occur on any device. Trimming can then be resumed by running + zpool trim with no flags + on the relevant target devices.
+
, + --wait
+
Wait until the devices are done being trimmed before returning.
+
+
+
+

+

On machines using systemd, trim timers can be enabled on a + per-pool basis. weekly and + monthly timer units are provided.

+
+
+
systemctl enable + zfs-trim-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-trim-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpoolprops(7), + zpool-initialize(8), + zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-upgrade.8.html b/man/master/8/zpool-upgrade.8.html new file mode 100644 index 000000000..51b7037cb --- /dev/null +++ b/man/master/8/zpool-upgrade.8.html @@ -0,0 +1,337 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools (subject to + the -o compatibility + property).
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by this version of ZFS. See
  zpool-features(7) for a description of the feature flags
  supported by this version of ZFS.
+
zpool upgrade + [-V version] + -a|pool
+
Enables all supported features on the given pool. +

If the pool has specified compatibility feature sets using the + -o compatibility property, + only the features present in all requested compatibility sets will be + enabled. If this property is set to legacy then no + upgrade will take place.

+

Once this is done, the pool will no longer be accessible on + systems that do not support feature flags. See + zpool-features(7) for details on compatibility with + systems that support feature flags, but do not support all features + enabled on the pool.

+
+
+
Enables all supported features (from specified compatibility sets, if + any) on all pools.
+
+ version
+
Upgrade to the specified legacy version. If specified, no features + will be enabled on the pool. This option can only be used to increase + the version number up to the last supported legacy version + number.
+
+
+
+
+
+

+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+
+

+

zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zpool-history(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool-wait.8.html b/man/master/8/zpool-wait.8.html new file mode 100644 index 000000000..532b40238 --- /dev/null +++ b/man/master/8/zpool-wait.8.html @@ -0,0 +1,320 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-waitwait + for activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpoolwait [-Hp] + [-T u|d] + [-t + activity[,activity]…] + pool [interval]
+
+
+

+

Waits until all background activity of the given types has ceased + in the given pool. The activity could cease because it has completed, or + because it has been paused or canceled by a user, or because the pool has + been exported or destroyed. If no activities are specified, the command + waits until background activity of every type listed below has ceased. If + there is no activity of the given types in progress, the command returns + immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
Checkpoint to be discarded
+
+
+ property to become +
+
+
All initializations to cease
+
+
All device replacements to cease
+
+
Device removal to cease
+
+
Resilver to cease
+
+
Scrub to cease
+
+
Manual trim to cease
+
+
Attaching to a RAID-Z vdev to complete
+
+
+

If an interval is provided, the amount of + work remaining, in bytes, for each activity is printed every + interval seconds.
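For example, to block until an in-progress scrub of a hypothetical pool named
  tank has finished, printing the remaining work every 10 seconds:
# zpool wait -t scrub tank 10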

+
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
+
+

+

zpool-checkpoint(8), + zpool-initialize(8), zpool-remove(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-status(8), + zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool.8.html b/man/master/8/zpool.8.html new file mode 100644 index 000000000..f5f159be1 --- /dev/null +++ b/man/master/8/zpool.8.html @@ -0,0 +1,838 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpoolversion
+
+ + + + + +
zpoolsubcommand + [arguments]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(7) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
 
+
zpool version
+
Displays the software version of the zpool + userland utility and the ZFS kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8)/zpool-detach(8)
+
Converts a non-redundant disk into a mirror, or increases the redundancy + level of an existing mirror (attach), or performs + the inverse operation (detach).
+
zpool-add(8)/zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties are listed in the
  zpoolprops(7) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8)/zpool-set(8)
+
Retrieves the given list of properties (or all properties if + is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/O + operations may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + That manual page also describes the subclasses and event payloads that can + be generated.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool, which can be + later restored by zpool + import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8)/zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + + sda sdb sdc sdd sde sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command converts an existing single device + sda into a mirror by attaching a second device to it, + sdb.

+
# zpool + attach tank sda + sdb
+
+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool + add tank + mirror sda sdb
+
+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + + sdc
+

If one of the disks were to fail, the pool would be reduced to the + degraded state. The failed device can be replaced using the following + command:

+
# zpool + replace tank + sda sdd
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The hot + spare can be permanently removed from the pool using the following + command:

+
# zpool + remove tank + sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd + + sde sdf
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+

+

The following command displays the detailed information for the + pool data. This pool is comprised of a single raidz + vdev where one of its devices increased its capacity by 10 GiB. In this + example, the pool will not be able to utilize this extra capacity until all + the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
Use ANSI color in zpool + status and zpool + iostat output.
+
+
Automatically attempt to turn on a drive's enclosure slot power when running
  the zpool
  online or zpool
  clear commands. This has the same effect as
  passing the --power option to those commands.
+
+
The maximum time in milliseconds to wait for a slot power sysfs value to + return the correct value after writing it. For example, after writing + "on" to the sysfs enclosure slot power_control file, it can take + some time for the enclosure to power down the slot and return + "on" if you read back the 'power_control' value. Defaults to 30 + seconds (30000ms) if not set.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is + not used, only the presence or absence of the variable matters.
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool + status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the + zpool status + -L command line option.
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the + zpool status + -P command line option.
+
+
Older OpenZFS implementations had issues when attempting to display pool + config vdev names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and + zpool status would fail + when listing the config. This would also be true for future Linux-based + pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool + add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+

+
+
+
Allow a privileged user to run zpool + status/iostat + -c. Normally, only unprivileged users are allowed + to run -c.
+
+
The search path for scripts when running zpool + status/iostat + -c. This is a colon-separated list of directories + and overrides the default ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
Allow a user to run zpool + status/iostat + -c. If ZPOOL_SCRIPTS_ENABLED is + not set, it is assumed that the user is allowed to run + zpool + status/iostat + -c.
+
+
Time, in seconds, to wait for /dev/zfs to appear. + Defaults to + , max + (10 + minutes). If <0, wait forever; if + 0, don't wait.
+
+
+
+

+

+
+
+

+

zfs(4), zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zed(8), zfs(8), + zpool-add(8), zpool-attach(8), + zpool-checkpoint(8), zpool-clear(8), + zpool-create(8), zpool-destroy(8), + zpool-detach(8), zpool-events(8), + zpool-export(8), zpool-get(8), + zpool-history(8), zpool-import(8), + zpool-initialize(8), zpool-iostat(8), + zpool-labelclear(8), zpool-list(8), + zpool-offline(8), zpool-online(8), + zpool-reguid(8), zpool-remove(8), + zpool-reopen(8), zpool-replace(8), + zpool-resilver(8), zpool-scrub(8), + zpool-set(8), zpool-split(8), + zpool-status(8), zpool-sync(8), + zpool-trim(8), zpool-upgrade(8), + zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zpool_influxdb.8.html b/man/master/8/zpool_influxdb.8.html new file mode 100644 index 000000000..7fdde5625 --- /dev/null +++ b/man/master/8/zpool_influxdb.8.html @@ -0,0 +1,319 @@ + + + + + + + zpool_influxdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool_influxdb.8

+
+ + + + + +
ZPOOL_INFLUXDB(8)System Manager's ManualZPOOL_INFLUXDB(8)
+
+
+

+

zpool_influxdb — + collect ZFS pool statistics in InfluxDB line protocol + format

+
+
+

+ + + + + +
zpool_influxdb[-e|--execd] + [-n|--no-histogram] + [-s|--sum-histogram-buckets] + [-t|--tags + key=value[,key=value]…] + [pool]
+
+
+

+

zpool_influxdb produces
  InfluxDB-line-protocol-compatible metrics from zpools. Like the
  zpool command,
  zpool_influxdb reads the current pool status and
  statistics. Unlike the zpool command, which is
  intended for humans, zpool_influxdb formats the
  output in the InfluxDB line protocol. The expected use is as a plugin to a
  metrics collector or aggregator, such as Telegraf.

+

By default, zpool_influxdb prints pool + metrics and status in the InfluxDB line protocol format. All pools are + printed, similar to the zpool + status command. Providing a pool name restricts the + output to the named pool.
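For example, metrics for a single hypothetical pool named tank could be
  collected with additional tags passed through to the collector (the tag keys
  and values here are illustrative):
# zpool_influxdb --tags host=db01,rack=r12 tank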

+
+
+

+
+
, + --execd
+
Run in daemon mode compatible with Telegraf's + execd plugin. In this mode, the pools are sampled + every time a newline appears on the standard input.
+
, + --no-histogram
+
Do not print latency and I/O size histograms. This can reduce the total + amount of data, but one should consider the value brought by the insights + that latency and I/O size distributions provide. The resulting values are + suitable for graphing with Grafana's heatmap plugin.
+
, + --sum-histogram-buckets
+
Accumulates bucket values. By default, the values are not accumulated and + the raw data appears as shown by zpool + iostat. This works well for Grafana's heatmap + plugin. Summing the buckets produces output similar to Prometheus + histograms.
+
, + --tags + key=value[,key=value]…
+
Adds specified tags to the tag set. No sanity checking is performed. See + the InfluxDB Line Protocol format documentation for details on escaping + special characters used in tags.
+
, + --help
+
Print a usage summary.
+
+
+
+

+

zpool-iostat(8), + zpool-status(8), + InfluxDB, + Telegraf, + Grafana, + Prometheus

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zstream.8.html b/man/master/8/zstream.8.html new file mode 100644 index 000000000..751fcb604 --- /dev/null +++ b/man/master/8/zstream.8.html @@ -0,0 +1,406 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.
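For instance, a send stream can be summarized without first storing it to a
  file by piping it directly into zstream dump (the
  dataset and snapshot names are hypothetical):
# zfs send pool/fs@snap | zstream dump -v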

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value to the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. + First, determine which records are corrupt. That cannot be done + automatically; it requires information beyond ZFS's metadata. If object + is + corrupted at offset + and is + compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/8/zstreamdump.8.html b/man/master/8/zstreamdump.8.html new file mode 100644 index 000000000..2938cd443 --- /dev/null +++ b/man/master/8/zstreamdump.8.html @@ -0,0 +1,406 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value to the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. + First, determine which records are corrupt. That cannot be done + automatically; it requires information beyond ZFS's metadata. If object + is + corrupted at offset + and is + compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/master/index.html b/man/master/index.html new file mode 100644 index 000000000..d037e627c --- /dev/null +++ b/man/master/index.html @@ -0,0 +1,147 @@ + + + + + + + master — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v0.6/1/cstyle.1.html b/man/v0.6/1/cstyle.1.html new file mode 100644 index 000000000..e0b445681 --- /dev/null +++ b/man/v0.6/1/cstyle.1.html @@ -0,0 +1,284 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
cstyle(1)General Commands Manualcstyle(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common
  stylistic errors. It attempts to check for the cstyle documented in
  http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that
  there is much in that document that cannot be checked for; just
  because your code is cstyle(1) clean does not mean that you've
  followed Sun's C style. Caveat emptor.

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style
  states that all statements must be indented to an appropriate tab stop,
  and any continuation lines after them must be indented exactly four
  spaces from the start line. This option enables a series of checks
  designed to find continuation line problems within functions only. The
  checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available
  constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.
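For instance, a new source file (the file name here is hypothetical) could be
  checked with the putback flags plus continuation-line checking:
cstyle -cpP foo.c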

+
+
+

+

The continuation checker is a reasonably simple state machine that
  knows something about how C is laid out, and can match parentheses, etc.
  over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parentheses will confuse the
  checker for that line. To fix this, you'll need to make sure that each
  branch of the #if statement has balanced parentheses.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional
  explanation:

+
+
+
A multi-line statement which is not broken at statement boundaries. For
  example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
28 March 2005
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/index.html b/man/v0.6/1/index.html new file mode 100644 index 000000000..f630dd9b0 --- /dev/null +++ b/man/v0.6/1/index.html @@ -0,0 +1,151 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/zhack.1.html b/man/v0.6/1/zhack.1.html new file mode 100644 index 000000000..58892adfb --- /dev/null +++ b/man/v0.6/1/zhack.1.html @@ -0,0 +1,252 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
zhack(1)User Commandszhack(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

splat(1), zfs(8), zpios(1), + zpool-features(5), ztest(1)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/zpios.1.html b/man/v0.6/1/zpios.1.html new file mode 100644 index 000000000..8d88b4c12 --- /dev/null +++ b/man/v0.6/1/zpios.1.html @@ -0,0 +1,384 @@ + + + + + + + zpios.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpios.1

+
+ + + + + +
zpios(1)User Commandszpios(1)
+
+

+
+

+

zpios - Directly test the DMU.

+
+
+

+

zpios [options] <-p pool>

+

+
+
+

+

This utility runs in-kernel DMU performance and stress tests that + do not depend on the ZFS Posix Layer ("ZPL").

+

+
+
+

+

-s regex, --threadcount regex

+
+
+
Start this many threads for each test series, specified as a comma + delimited regular expression. (eg: "-s 1,2,3")
+
+
This option is mutually exclusive with the threadcount_* + options.
+
+

-l regex_low, --threadcount_low + regex_low

+

-h regex_high, --threadcount_high + regex_high

+

-e regex_incr, --threadcount_incr + regex_incr

+
+
+
Start regex_low threads for the first test, add regex_incr + threads for each subsequent test, and start regex_high threads for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the threadcount option.
+
+

-n regex, --regioncount regex

+
+
+
Create this many regions for each test series, specified as a comma + delimited regular expression. (eg: "-n 512,4096,65536")
+
+
This option is mutually exclusive with the regioncount_* + options.
+
+

-i regex_low, --regioncount_low + regex_low

+

-j regex_high, --regioncount_high + regex_high

+

-k regex_incr, --regioncount_incr + regex_incr

+
+
+
Create regex_low regions for the first test, add regex_incr + regions for each subsequent test, and create regex_high regions for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the regioncount option.
+
+

-o size, --offset size

+
+
+
Create regions at size offset for each test series, specified as a + comma delimited regular expression with an optional unit suffix. (eg: + "-o 4M" means four megabytes.)
+
+
This option is mutually exclusive with the offset_* options.
+
+

-m size_low, --offset_low + size_low

+

-q size_high, --offset_high + size_high

+

-r size_incr, --offset_incr + size_incr

+
+
+
Create a region at size_low offset for the first test, add + size_incr to the offset for each subsequent test, and create a + region at size_high offset for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the offset option.
+
+

-c size, --chunksize size

+
+
+
Use size chunks for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-c 1M" + means one megabyte.) The chunk size must be at least the region size.
+
+
This option is mutually exclusive with the chunksize_* + options.
+
+

-a size_low, --chunksize_low + size_low

+

-b size_high, --chunksize_high + size_high

+

-g size_incr, --chunksize_incr + size_incr

+
+
+
Use a size_low chunk size for the first test, add size_incr + to the chunk size for each subsequent test, and use a size_high + chunk size for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the chunksize option.
+
+

-L dmu_flags, --load dmu_flags

+
+
+
Specify dmuio for regular DMU_IO, ssf for single shared file + access, or fpp for per thread access. Use commas to delimit + multiple flags. (eg: "-L dmuio,ssf")
+
+

-p name, --pool name

+
+
+
The pool name, which is mandatory.
+
+

-M test, --name test

+
+
+
An arbitrary string that appears in the program output.
+
+

-x, --cleanup

+
+
+
Enable the DMU_REMOVE flag.
+
+

-P command, --prerun command

+
+
+
Invoke command from the kernel before running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-R command, --postrun command

+
+
+
Invoke command from the kernel after running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-G directory, --log directory

+
+
+
Put logging output in this directory.
+
+

-I size, --regionnoise size

+
+
+
Randomly vary the regionsize parameter for each test modulo + size bytes.
+
+

-N size, --chunknoise size

+
+
+
Randomly vary the chunksize parameter for each test modulo + size bytes.
+
+

-T time, --threaddelay time

+
+
+
Randomly vary the execution time for each test modulo time kernel + jiffies.
+
+

-V, --verify

+
+
+
Enable the DMU_VERIFY flag for trivial data verification.
+
+

-z, --zerocopy

+
+
+
Enable the DMU_READ_ZC and DMU_WRITE_ZC flags, which are currently + unimplemented for Linux.
+
+

-O, --nowait

+
+
+
Enable the DMU_WRITE_NOWAIT flag.
+
+

-f, --noprefetch

+
+
+
Enable the DMU_READ_NOPF flag.
+
+

-H, --human-readable

+
+
+
Print PASS and FAIL results explicitly and put unit suffixes on large + numbers.
+
+

-v, --verbose

+
+
+
Increase output verbosity.
+
+

-? , --help

+
+
+
Print the usage message.
+
+
+
+

+

The original zpios implementation was created by Cluster File + Systems Inc and adapted to ZFS on Linux by Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/1/ztest.1.html b/man/v0.6/1/ztest.1.html new file mode 100644 index 000000000..d0494b9c7 --- /dev/null +++ b/man/v0.6/1/ztest.1.html @@ -0,0 +1,337 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ztest(1)User Commandsztest(1)
+
+

+
+

+

ztest - was written by the ZFS Developers as a ZFS unit + test.

+
+
+

+

ztest <options>

+
+
+

+

This manual page briefly documents the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test.
  The tool was developed in tandem with the ZFS functionality and was executed
  nightly as one of the many regression tests against the daily build. As
  features were added to ZFS, unit tests were also added to ztest. In
  addition, a separate test development team wrote and executed more
  functional and stress tests.

+

By default ztest runs for five minutes (see the -T option
  below) and uses block files (stored in /tmp) to create pools rather than
  using physical disks. Block files afford ztest its flexibility to play
  around with zpool components without requiring large hardware
  configurations. However, storing the block files in /tmp may not work for
  you if you have a small tmp directory.

+

By default ztest is non-verbose. This is why entering the command above
  will result in ztest quietly executing for 5 minutes. The -V option
  can be used to increase the verbosity of the tool. Adding multiple -V options
  is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many ztest.* files lying around. Once the run completes you can safely remove these files, but you should not remove them during a run. You can re-use these files in your next ztest run by using the -E option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

If you'd like to run ztest for longer, simply use the -T option and specify the run length in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. For x86_64 platforms this + value should be set as follows to simulate these platforms: 8192 + (Linux), 20480 (Illumos), 16384 (FreeBSD). +

In practice you may need to set these values slightly higher because differences in stack usage between kernel and user space can lead to spurious stack overflows (especially when debugging is enabled). The specified value will be rounded up to a floor of PTHREAD_STACK_MIN, which is the minimum stack required for a NULL procedure in user space.

+

By default the stack size is limited to 256K.
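For example, assuming the limit is applied through the ZFS_STACK_SIZE environment variable that this section describes (the variable name is an assumption here, not shown above), a FreeBSD-sized stack could be simulated for a short run like so:

ZFS_STACK_SIZE=16384 ztest -V -T 120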

+
+
+
+
+

+

zpool(8), zfs(8), zdb(8)

+
+
+

+

This manual page was transferred to asciidoc by Michael Gebetsroither <gebi@grml.org> from http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
2009 NOV 01ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/index.html b/man/v0.6/5/index.html new file mode 100644 index 000000000..ba2f8e46f --- /dev/null +++ b/man/v0.6/5/index.html @@ -0,0 +1,151 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/vdev_id.conf.5.html b/man/v0.6/5/vdev_id.conf.5.html new file mode 100644 index 000000000..09da91feb --- /dev/null +++ b/man/v0.6/5/vdev_id.conf.5.html @@ -0,0 +1,310 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
vdev_id.conf(5)File Formats Manualvdev_id.conf(5)
+
+
+

+

vdev_id.conf - Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format consisting of a + keyword followed by one or more values on a single line. Any line not + beginning with a recognized keyword is ignored. Comments may optionally + begin with a hash character.

+

The following keywords and values are used.

+
+
+
Maps a device link in the /dev directory hierarchy to a new device name. + The udev rule defining the device link must have run prior to + vdev_id(8). A defined alias takes precedence over a + topology-derived name, but the two naming methods can otherwise coexist. + For example, one might name drives in a JBOD with the sas_direct topology + while naming an internal L2ARC device with an alias. +

name - the name of the link to the device that will be created in /dev/disk/by-vdev.

+

devlink - the name of the device link that has already + been defined by udev. This may be an absolute path or the base + filename.

+

+
+
+
Maps a physical path to a channel name (typically representing a single + disk enclosure). +

pci_slot - specifies the PCI SLOT of the HBA hosting + the disk enclosure being mapped, as found in the output of + lspci(8). This argument is not used in sas_switch mode.

+

port - specifies the numeric identifier of the HBA or + SAS switch port connected to the disk enclosure being mapped.

+

name - specifies the name of the channel.

+

+
+
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is specified then + the mapping is only applied to slots in the named channel, otherwise the + mapping is applied to all channels. The first-specified slot rule + that can match a slot takes precedence. Therefore a channel-specific + mapping for a given slot should generally appear before a generic mapping + for the same slot. In this way a custom mapping may be applied to a + particular channel and a default mapping applied to the others. +

+
+
+
Specifies whether vdev_id(8) will handle only dm-multipath devices. + If set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely identified by a PCI slot and an HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+

+
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4. +

+
+
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay. +

bay - read the slot number from the bay identifier.

+

phy - read the slot number from the phy identifier.

+

id - use the scsi id as the slot number.

+

lun - use the scsi lun as the slot number.

+
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping.

+

+
	multipath     no
+	topology      sas_direct
+	phys_per_port 4
+	slot          bay
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         C
+	channel 86:00.0  0         D
+	# Custom mapping for Channel A
+	#    Linux      Mapped
+	#    Slot       Slot      Channel
+	slot 1          7         A
+	slot 2          10        A
+	slot 3          3         A
+	slot 4          6         A
+	# Default mapping for B, C, and D
+	slot 1          4
+	slot 2          2
+	slot 3          1
+	slot 4          3
+

A SAS-switch topology. Note that the channel keyword takes + only two arguments in this example.

+

+
	topology      sas_switch
+	#       SWITCH PORT  CHANNEL NAME
+	channel 1            A
+	channel 2            B
+	channel 3            C
+	channel 4            D
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+

+
	multipath yes
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         A
+	channel 86:00.0  0         B
+

A configuration using device link aliases.

+

+
	#     by-vdev
+	#     name     fully qualified or base name of device link
+	alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+	alias d2       wwn-0x5000c5002def789e
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/zfs-events.5.html b/man/v0.6/5/zfs-events.5.html new file mode 100644 index 000000000..34e6f3f3d --- /dev/null +++ b/man/v0.6/5/zfs-events.5.html @@ -0,0 +1,777 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)File Formats ManualZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get full details of the events and what information is available about them.

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

delay

+
Issued when an I/O was slow to complete as defined by the + zio_delay_max module option.
+

+

config.sync

+
Issued every time a vdev change has been made to the pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool) has been generated.
+

+

vdev.unknown

+
Issued when the vdev is unknown, for example when trying to clear device errors on a vdev that has failed or been removed from the system or pool and is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there any more. This is usually followed by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (for example, if the kernel has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.
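For example, the pool payload described below would be exposed to a ZEDLET as ZEVENT_POOL, and vdev_guid as ZEVENT_VDEV_GUID.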

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The block level for a given I/O.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time in ticks (HZ) required for the block layer to + service the I/O. Unlike zio_delta this does not include any vdev + queuing time and is therefore solely a measure of the block layer performance. + On most modern Linux systems HZ is defined as 1000 making a tick equivalent to + 1 millisecond.
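For example, with HZ defined as 1000, a zio_delay of 150 corresponds to roughly 150 milliseconds spent in the block layer.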
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value.
+

+

cksum_actual

+
The actual/current checksum value.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Checksum value is byte swapped.
+

+

bad_ranges

+
Checksum bad offset ranges.
+

+

bad_ranges_min_gap

+
Checksum allowed minimum gap.
+

+

bad_range_sets

+
Checksum for each range the number of bits set.
+

+

bad_range_clears

+
Checksum for each range the number of bits cleared.
+

+

bad_set_bits

+
Checksum array of bits set.
+

+

bad_cleared_bits

+
Checksum array of bits cleared.
+

+

bad_set_histogram

+
Checksum histogram of set bits by bit number in a 64-bit + word.
+

+

bad_cleared_histogram

+
Checksum histogram of cleared bits by bit number in a + 64-bit word.
+

+
+
+

+

The ZFS I/O pipeline is composed of various stages which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_FREE_BP_INIT0x00000004--F--
ZIO_STAGE_ISSUE_ASYNC0x00000008RWF--
ZIO_STAGE_WRITE_BP_INIT0x00000010-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000020-W---
ZIO_STAGE_NOP_WRITE0x00000040-W---
ZIO_STAGE_DDT_READ_START0x00000080R----
ZIO_STAGE_DDT_READ_DONE0x00000100R----
ZIO_STAGE_DDT_WRITE0x00000200-W---
ZIO_STAGE_DDT_FREE0x00000400--F--
ZIO_STAGE_GANG_ASSEMBLE0x00000800RWFC-
ZIO_STAGE_GANG_ISSUE0x00001000RWFC-
ZIO_STAGE_DVA_ALLOCATE0x00002000-W---
ZIO_STAGE_DVA_FREE0x00004000--F--
ZIO_STAGE_DVA_CLAIM0x00008000---C-
ZIO_STAGE_READY0x00010000RWFCI
ZIO_STAGE_VDEV_IO_START0x00020000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00040000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00080000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x00100000R----
ZIO_STAGE_DONE0x00200000RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_DONT_CACHE0x00000800
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_RETRY0x00004000
ZIO_FLAG_PROBE0x00008000
ZIO_FLAG_TRYHARD0x00010000
ZIO_FLAG_OPTIONAL0x00020000
ZIO_FLAG_DONT_QUEUE0x00040000
ZIO_FLAG_DONT_PROPAGATE0x00080000
ZIO_FLAG_IO_BYPASS0x00100000
ZIO_FLAG_IO_REWRITE0x00200000
ZIO_FLAG_RAW0x00400000
ZIO_FLAG_GANG_CHILD0x00800000
ZIO_FLAG_DDT_CHILD0x01000000
ZIO_FLAG_GODFATHER0x02000000
ZIO_FLAG_NOPWRITE0x04000000
ZIO_FLAG_REEXECUTED0x08000000
ZIO_FLAG_DELEGATED0x10000000
ZIO_FLAG_FASTWRITE0x20000000
+
+
+
+ + + + + +
June 6, 2015
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/zfs-module-parameters.5.html b/man/v0.6/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..c1684b3e0 --- /dev/null +++ b/man/v0.6/5/zfs-module-parameters.5.html @@ -0,0 +1,1329 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)File Formats ManualZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.

+

+
+

+

+

ignore_hole_birth (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. Useful if you suspect your + datasets are affected by a bug in hole_birth. +

Use 1 (default) for on and 0 for off.
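As a minimal sketch (assuming the usual Linux sysfs interface for ZFS module parameters, which this page does not itself describe), the optimization could be disabled at runtime with:

echo 0 > /sys/module/zfs/parameters/ignore_hole_birth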

+
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warmup +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Min feed interval in milliseconds +

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writing +

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
Number of max device writes to precache +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Compressed l2arc_headroom multiplier +

Default value: 200.

+
+

+

l2arc_nocompress (int)

+
Skip compressing L2ARC buffers +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_noprefetch (int)

+
Skip caching prefetched buffers +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Extra write bytes during device warmup +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval +

Default value: 8,388,608.

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslabs_per_vdev (int)

+
When a vdev is added, it will be divided into + approximately (but no more than) this number of metaslabs. +

Default value: 200.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all blocks in the pool for verification. If this parameter is set to 0, the traversal is not performed. It can be toggled once the import has started to stop or start the traversal.

+

Default value: 1

+
+

+

spa_load_verify_maxinflight (int)

+
Maximum concurrent I/Os during the traversal performed + during an "extreme rewind" (-X) pool import. +

Default value: 10000

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_block_cap (uint)

+
Max number of blocks to prefetch at a time +

Default value: 256.

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
Seconds before growing arc size +

Default value: 5.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10.

+
+

+

zfs_arc_max (ulong)

+
Max arc size +

Default value: 0.
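As a sketch, assuming module options are read from /etc/modprobe.d/zfs.conf as on typical Linux systems (this path is an assumption, not part of this page), the ARC could be capped at 4 GiB persistently with:

options zfs zfs_arc_max=4294967296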

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers + are allowed to consume in the ARC. When this limit is reached meta data + buffers will be reclaimed even if the overall arc_c_max has not been reached. + This value defaults to 0 which indicates that 3/4 of the ARC may be used for + meta data. +

Default value: 0.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0 which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.

Default value: 4096.

+
+

+

zfs_arc_min (ulong)

+
Min arc size +

Default value: 100.

+
+

+

zfs_arc_min_prefetch_lifespan (int)

+
Min life of prefetch block +

Default value: 100.

+
+

+

zfs_arc_num_sublists_per_state (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state.

Default value: 1 or the number of online CPUs, whichever is greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be + overflowing if it exceeds the target size by 1/256th (0.3%) of the target + size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.

+
+

+

+

zfs_arc_p_min_shift (int)

+
arc_c shift to calc min/max arc_p +

Default value: 4.

+
+

+

zfs_arc_p_aggressive_disable (int)

+
Disable aggressive arc_p growth +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
log2(fraction of arc to reclaim) +

Default value: 5.

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
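For example, the debug log can be enabled, read, and cleared as follows (the sysfs path used to set the parameter is an assumption; the proc file is the one named above):

echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
cat /proc/spl/kstat/zfs/dbgmsg
echo 0 > /proc/spl/kstat/zfs/dbgmsg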

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
Calculate arc header index +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
Enable deadman timer +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Expiration time in milliseconds. This value has two + meanings. First it is used to determine when the spa_deadman() logic should + fire. By default the spa_deadman() will fire if spa_sync() has not completed + in 1000 seconds. Secondly, the value determines if an I/O is considered + "hung". Any I/O that has not completed in zfs_deadman_synctime_ms is + considered "hung" resulting in a zevent being logged. +

Default value: 1,000,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching dedup-ed blks +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10 percent of all memory, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync (int)

+
Start syncing out a transaction group if there is at + least this much dirty data. +

Default value: 67,108,864.

+
+

+

zfs_free_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. It must be at + least the sum of each queue's min_active. See the section "ZFS I/O + SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See the section "ZFS I/O SCHEDULER".

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_disable_dup_eviction (int)

+
Disable duplicate buffer eviction +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ValueSymbolic Name
Description
1ZFS_DEBUG_DPRINTF
Enable dprintf entries in the debug log.
2ZFS_DEBUG_DBUF_VERIFY *
Enable extra dbuf verifications.
4ZFS_DEBUG_DNODE_VERIFY *
Enable extra dnode verifications.
8ZFS_DEBUG_SNAPNAMES
Enable snapshot name verification.
16ZFS_DEBUG_MODIFY
Check for illegally modified ARC buffers.
32ZFS_DEBUG_SPA
Enable spa_dbgmsg entries in the debug log.
64ZFS_DEBUG_ZIO_FREE
Enable verification of block frees.
128ZFS_DEBUG_HISTOGRAM_VERIFY
Enable extra spacemap histogram verifications.
+

* Requires debug build.

+

Default value: 0.
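For example, ZFS_DEBUG_DPRINTF (1) and ZFS_DEBUG_SPA (32) could be enabled together by setting the parameter to their bitwise-or, 33 (the sysfs path is an assumption about the standard Linux module parameter interface):

echo 33 > /sys/module/zfs/parameters/zfs_flags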

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
Min millisecs to free per txg +

Default value: 1,000.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil +

Default value: 32,768.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger IO, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+

+

zfs_mdcomp_disable (int)

+
Disable meta data compression +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations if their fragmentation metric (measured as a percentage) is less than or equal to this value. If a metaslab group exceeds this threshold then it will be skipped unless all metaslab groups within the metaslab class have also crossed this threshold.

Default value: 85.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be eligible for allocations. The value is expressed as a percentage of free space beyond which a metaslab group is always eligible for allocations. If a metaslab group's free space is less than or equal to the threshold, the allocator will avoid allocating to that group unless all groups in the pool have reached the threshold. Once all groups have reached the threshold, all groups are allowed to accept allocations. The default value of 0 disables the feature and causes all metaslab groups to be eligible for allocations.

This parameter makes it possible to deal with pools having heavily imbalanced vdevs such as would be the case when a new vdev has been added. Setting the threshold to a non-zero percentage will stop allocations from being made to vdevs that aren't filled to the specified percentage and allow lesser filled vdevs to acquire more allocations than they otherwise would under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set for no scrub prefetching +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flushes +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched. +

Default value: 52,428,800.

+
+

+

zfs_prefetch_disable (int)

+
Disable all ZFS prefetching +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historic statistics for the last N reads +

Default value: 0.

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_resilver_delay (int)

+
Number of ticks to delay prior to issuing a resilver I/O + operation when a non-resilver or non-scrub I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 2.

+
+

+

zfs_resilver_min_time_ms (int)

+
Min millisecs to resilver per txg +

Default value: 3,000.

+
+

+

zfs_scan_idle (int)

+
Idle window in clock ticks. During a scrub or a resilver, + if a non-scrub or non-resilver I/O operation has occurred during this window, + the next scrub or resilver operation is delayed by, respectively + zfs_scrub_delay or zfs_resilver_delay ticks. +

Default value: 50.

+
+

+

zfs_scan_min_time_ms (int)

+
Min millisecs to scrub per txg +

Default value: 1,000.

+
+

+

zfs_scrub_delay (int)

+
Number of ticks to delay prior to issuing a scrub I/O + operation when a non-scrub or non-resilver I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 4.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending corrupt data (ignore read/checksum errors when sending data)

Use 1 for yes and 0 for no (default).

+
+

+

zfs_sync_pass_deferred_free (int)

+
Defer frees starting in this pass +

Default value: 2.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Don't compress starting in this pass +

Default value: 5.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new bps starting in this pass +

Default value: 2.

+
+

+

zfs_top_maxinflight (int)

+
Max I/Os per top-level vdev during scrub or resilver + operations. +

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historic statistics for the last N txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Max seconds worth of delta per txg +

Default value: 5.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to

Default value: 16.

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than max
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache +

Default value: 0.

+
+

+

zfs_vdev_mirror_switch_us (int)

+
Switch mirrors every N usecs +

Default value: 10,000.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O over gap +

Default value: 32,768.

+
+

+

zfs_vdev_scheduler (charp)

+
I/O scheduler +

Default value: noop.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_zevent_cols (int)

+
Max event column width +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length +

Default value: 0.

+
+

+

zil_replay_disable (int)

+
Disable intent logging replay +

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_limit (ulong)

+
Max commit bytes to separate log device +

Default value: 1,048,576.

+
+

+

zio_delay_max (int)

+
Max zio millisec delay before posting event +

Default value: 30,000.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will + run a worker thread for IO. These workers are responsible for IO work such as + compression and checksum calculations. Fractional number of CPUs will be + rounded down. +

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol device +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Max number of blocks to discard at once +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
+        |              o---------| <-- zfs_vdev_async_write_max_active
+   ^    |             /^         |
+   |    |            / |         |
+ active |           /  |         |
+  I/O   |          /   |         |
+ count  |         /    |         |
+        |        /     |         |
+        |-------o      |         | <-- zfs_vdev_async_write_min_active
+       0|_______^______|_________|
+        0%      |      |       100% of zfs_dirty_data_max
+                |      |
+                |      `-- zfs_vdev_async_write_active_max_dirty_percent
+                `--------- zfs_vdev_async_write_active_min_dirty_percent
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +
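As a worked example using the defaults listed above (zfs_vdev_async_write_min_active=2, zfs_vdev_async_write_max_active=10, minimum dirty percent 30, maximum dirty percent 60): when dirty data sits at 45% of zfs_dirty_data_max, the pool is halfway between the two thresholds, so the active async write limit is interpolated to roughly 2 + (10 - 2) * 0.5 = 6 concurrent operations.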

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+

+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
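As a worked example, with the default zfs_delay_scale of 500,000 (nanoseconds, consistent with the "1 billion divided by operations per second" guidance above) and dirty data exactly halfway between the delay threshold and zfs_dirty_data_max, (dirty - min) equals (max - dirty), so min_time = 500,000 ns = 500us, which is the 2000 IOPS midpoint referred to below.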
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.

+

+
delay
+
+ 10ms +-------------------------------------------------------------*+ +
+ | *| +
+ 9ms + *+ +
+ | *| +
+ 8ms + *+ +
+ | * | +
+ 7ms + * + +
+ | * | +
+ 6ms + * + +
+ | * | +
+ 5ms + * + +
+ | * | +
+ 4ms + * + +
+ | * | +
+ 3ms + * + +
+ | * | +
+ 2ms + (midpoint) * + +
+ | | ** | +
+ 1ms + v *** + +
+ | zfs_delay_scale ----------> ******** | +
+ 0 +-------------------------------------*********----------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
+100ms +-------------------------------------------------------------++
+
+ + + +
+ | | +
+ + *+ +
+ 10ms + *+ +
+ + ** + +
+ | (midpoint) ** | +
+ + | ** + +
+ 1ms + v **** + +
+ + zfs_delay_scale ----------> ***** + +
+ | **** | +
+ + **** + +
+100us + ** + +
+ + * + +
+ | * | +
+ + * + +
+ 10us + * + +
+ + + +
+ | | +
+ + + +
+ +--------------------------------------------------------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
November 16, 2013
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/5/zpool-features.5.html b/man/v0.6/5/zpool-features.5.html new file mode 100644 index 000000000..4f455c26c --- /dev/null +++ b/man/v0.6/5/zpool-features.5.html @@ -0,0 +1,584 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)File Formats ManualZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a guid of the form + com.example:feature_name. The reverse DNS name ensures that the + feature's guid is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their guids. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its guid which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
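For example, the state of a single feature, or of every feature on a pool, can be read with zpool get (pool name illustrative):

# zpool get feature@async_destroy tank
# zpool get all tank | grep feature@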

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).
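For example, a pool whose unsupported features are all read-only compatible could be imported as follows (pool name illustrative):

# zpool import -o readonly=on tank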

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_guid will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

async_destroy

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:async_destroy
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.
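For example, the progress of a background destroy can be watched through the freeing pool property (pool name illustrative):

# zpool get freeing tank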

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:empty_bpobj
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUID                    com.joyent:filesystem_limits
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUID                    org.illumos:lz4_compress
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the lz4_compress feature on the underlying pool. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature. Booting off of lz4-compressed root pools is supported.
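For example, assuming the feature has been enabled on the pool, compression could be switched to lz4 on a dataset as follows (names illustrative):

# zfs set compression=lz4 tank/home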

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:spacemap_histogram
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:extensible_dataset
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:bookmarks
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.
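For example, a bookmark might be created from an existing snapshot and then listed (names illustrative):

# zfs bookmark tank/data@snap1 tank/data#snap1-bm
# zfs list -t bookmark -r tank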

+

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:enabled_txg
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:hole_birth
READ-ONLY COMPATIBLE    no
DEPENDENCIES            enabled_txg
+

This feature improves performance of incremental sends ("zfs + send -i") and receives for objects with many holes. The most common + case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.
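For example, an incremental stream between two snapshots taken after hole_birth was enabled benefits automatically (pool, volume, and snapshot names illustrative):

# zfs send -i tank/vol@A tank/vol@B | zfs receive backup/vol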

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:embedded_data
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible blocks are stored in the block "pointer" itself (a misnomer in this case, as it contains the compressed data, rather than a pointer to its location on disk). Thus the space of the block (one sector, typically 512 bytes or 4KB) is saved, and no additional I/O is needed to read and write the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUID                    org.open-zfs:large_blocks
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

The large_blocks feature allows the record size on a dataset to be set larger than 128KB.

+

This feature becomes active once a recordsize + property has been set larger than 128KB, and will return to being + enabled once all filesystems that have ever had their recordsize + larger than 128KB are destroyed.
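For example, assuming the feature is enabled on the pool, a larger record size could be requested on a dataset as follows (the names and the 1M value are illustrative):

# zfs set recordsize=1M tank/media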

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
August 27, 2013
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/fsck.zfs.8.html b/man/v0.6/8/fsck.zfs.8.html new file mode 100644 index 000000000..3cdde3639 --- /dev/null +++ b/man/v0.6/8/fsck.zfs.8.html @@ -0,0 +1,215 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
fsck.zfs(8)          System Administration Commands          fsck.zfs(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
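For example, instead of running a file system checker, the pool containing the dataset can be scrubbed and its health inspected (pool name illustrative):

# zpool scrub tank
# zpool status tank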

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool(8)

+
+
+ + + + + +
2013 MAR 16          ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/index.html b/man/v0.6/8/index.html new file mode 100644 index 000000000..93996c694 --- /dev/null +++ b/man/v0.6/8/index.html @@ -0,0 +1,161 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

System Administration Commands (8)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/mount.zfs.8.html b/man/v0.6/8/mount.zfs.8.html new file mode 100644 index 000000000..928cf62f9 --- /dev/null +++ b/man/v0.6/8/mount.zfs.8.html @@ -0,0 +1,264 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
mount.zfs(8)          System Administration Commands          mount.zfs(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
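For example, a typical manual invocation might look like this (dataset and mountpoint illustrative):

# zfs list -H -o name
# mount.zfs tank/home /mnt/home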

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attribute-based extended attributes and, if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28          ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/vdev_id.8.html b/man/v0.6/8/vdev_id.8.html new file mode 100644 index 000000000..57f06d808 --- /dev/null +++ b/man/v0.6/8/vdev_id.8.html @@ -0,0 +1,234 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
vdev_id(8)          System Manager's Manual          vdev_id(8)
+
+
+

+

vdev_id - generate user-friendly names for JBOD disks

+
+
+

+
vdev_id <-d dev> [-c config_file] [-g sas_direct|sas_switch]
+
[-m] [-p phys_per_port]
vdev_id -h
+
+
+

+

The vdev_id command is a udev helper which parses the file + /etc/zfs/vdev_id.conf(5) to map a physical path in a storage topology + to a channel name. The channel name is combined with a disk enclosure slot + number to create an alias that reflects the physical location of the drive. + This is particularly helpful when it comes to tasks like replacing failed + drives. Slot numbers may also be re-mapped in case the default numbering is + unsatisfactory. The drive aliases will be created as symbolic links in + /dev/disk/by-vdev.

+

The currently supported topologies are sas_direct and sas_switch. + A multipath mode is supported in which dm-mpath devices are handled by + examining the first-listed running component disk as reported by the + multipath(8) command. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing + udev links in the /dev hierarchy using the alias configuration file + keyword. See the vdev_id.conf(5) man page for details.
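As a rough sketch, a /etc/zfs/vdev_id.conf for a sas_direct topology with one alias entry might look like the following (every address, channel name, and device link here is illustrative; see vdev_id.conf(5) for the authoritative syntax):

multipath     no
topology      sas_direct
phys_per_port 4
#       PCI_SLOT  HBA PORT  CHANNEL NAME
channel 85:00.0   1         A
channel 85:00.0   0         B
alias   d1        /dev/disk/by-id/wwn-0x5000c5002de3b9ca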

+

+
+
+

+
+
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+
This is the only mandatory argument. Specifies the name of a device in /dev, e.g. "sda".
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely identified by a PCI slot and an HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+
+
+
Specifies that vdev_id(8) will handle only dm-multipath devices. If + set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zdb.8.html b/man/v0.6/8/zdb.8.html new file mode 100644 index 000000000..add6484af --- /dev/null +++ b/man/v0.6/8/zdb.8.html @@ -0,0 +1,526 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)                                                                ZDB(8)
+
+

+
+

+

zdb - Display zpool debugging and consistency + information

+

+
+
+

+

zdb [-CumdibcsDvhLMXFPA] [-e [-p path...]] [-t + txg] +
+ [-U cache] [-I inflight I/Os] +
+ [poolname [object ...]]

+

+

zdb [-divPA] [-e [-p path...]] [-U cache] +
+ dataset [object ...]

+

+

zdb -m [-MLXFPA] [-t txg] [-e [-p path...]] + [-U cache] +
+ poolname [vdev [metaslab ...]]

+

+

zdb -R [-A] [-e [-p path...]] [-U cache] + poolname +
+ vdev:offset:size[:flags]

+

+

zdb -S [-AP] [-e [-p path...]] [-U cache] + poolname

+

+

zdb -l [-uA] device

+

+

zdb -C [-A] [-U cache]

+

+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. This is neither a fsck(8) nor an fsdb(8) utility.

+

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

+

If the dataset argument does not contain any / or + @ characters, it is interpreted as a pool name. The root dataset can + be specified as pool/ (pool name followed by a slash).

+

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+

+
+
+

+

Display options:

+

+

-b

+

+
Display statistics regarding the number, size (logical, + physical and allocated) and deduplication of blocks.
+

+

-c

+

+
Verify the checksum of all metadata blocks while printing + block statistics (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+

+

-C

+

+
Display information about the configuration. If specified + with no other options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file to display, see + -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display the configuration that + would be used were the pool to be imported.

+
+

+

-d

+

+
Display information about datasets. Specified once, + displays basic dataset information: ID, create transaction, size, and object + count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs are specified, display information about those + specific objects only.

+
+

+

-D

+

+
Display deduplication statistics, including the + deduplication ratio (dedup), compression ratio (compress), inflation due to + the zfs copies property (copies), and an overall effective ratio (dedup * + compress / copies). +

If specified twice, display a histogram of deduplication + statistics, showing the allocated (physically present on disk) and + referenced (logically referenced in the pool) block counts and sizes by + reference count.

+

If specified a third time, display the statistics independently + for each deduplication table.

+

If specified a fourth time, dump the contents of the deduplication + tables describing duplicate blocks.

+

If specified a fifth time, also dump the contents of the + deduplication tables describing unique blocks.

+
+

+

-h

+

+
Display pool history similar to zpool history, but + include internal changes, transaction, and dataset information.
+

+

-i

+

+
Display information about intent log (ZIL) entries + relating to each dataset. If specified multiple times, display counts of each + intent log transaction type.
+

+

-l device

+

+
Display the vdev labels from the specified device. If the + -u option is also specified, also display the uberblocks on this + device.
+

+

-L

+

+
Disable leak tracing and the loading of space maps. By + default, zdb verifies that all non-free blocks are referenced, which + can be very expensive.
+

+

-m

+

+
Display the offset, spacemap, and free space of each metaslab. When specified twice, also display information about the on-disk free space histogram associated with each metaslab. When specified three times, display the maximum contiguous free space, the in-core free space histogram, and the percentage of free space in each space map. When specified four times display every spacemap record.
+

+

-M

+

+
Display the offset, spacemap, and free space of each + metaslab. When specified twice, also display information about the maximum + contiguous free space and the percentage of free space in each space map. When + specified three times display every spacemap record.
+

+

-R poolname + vdev:offset:size[:flags]

+

+
Read and display a block from the specified device. By default the block is displayed as a hex dump, but see the description of the 'r' flag, below.

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) offset (the offset within + the vdev) size (the size of the block to read) and, optionally, + flags (a set of flags, described below).

+

+

b offset

+

+
Print block pointer
+

+

d

+

+
Decompress the block
+

+

e

+

+
Byte swap the block
+

+

g

+

+
Dump gang block header
+

+

i

+

+
Dump indirect block
+

+

r

+

+
Dump raw uninterpreted block data
+
+

+

-s

+

+
Report statistics on zdb's I/O. Display operation counts, bandwidth, and error counts of I/O to the pool from zdb.
+

+

-S

+

+
Simulate the effects of deduplication, constructing a DDT + and then display that DDT as with -DD.
+

+

-u

+

+
Display the current uberblock.
+

+

Other options:

+

+

-A

+

+
Do not abort should any assertion fail.
+

+

-AA

+

+
Enable panic recovery, certain errors which would + otherwise be fatal are demoted to warnings.
+

+

-AAA

+

+
Do not abort if asserts fail and also enable panic + recovery.
+

+

-e [-p path]...

+

+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The -p flag specifies the path under which + devices are to be searched.
+

+

-F

+

+
Attempt to make an unreadable pool readable by trying + progressively older transactions.
+

+

-I inflight I/Os

+

+
Limit the number of outstanding checksum I/Os to the + specified value. The default value is 200. This option affects the performance + of the -c option.
+

+

-P

+

+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+

+

-t transaction

+

+
Specify the highest transaction to use when searching for + uberblocks. See also the -u and -l options for a means to see + the available uberblocks and their associated transaction numbers.
+

+

-U cachefile

+

+
Use a cache file other than + /etc/zfs/zpool.cache.
+

+

-v

+

+
Enable verbosity. Specify multiple times for increased + verbosity.
+

+

-X

+

+
Attempt 'extreme' transaction rewind, that is attempt the same recovery as -F but read transactions otherwise deemed too old.
+

+

-V

+

+
Attempt a verbatim import. This mimics the behavior of + the kernel when loading a pool from a cachefile.
+

+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+

+
+
+

+

Example 1 Display the configuration of imported pool + 'rpool'

+

+
+

+
# zdb -C rpool
+MOS Configuration:
+
+ version: 28 +
+ name: 'rpool' +
+ ...
+
+

+

+

Example 2 Display basic dataset information about + 'rpool'

+

+
+

+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+
+ ...
+
+

+

+

Example 3 Display basic information about object 0 in + 'rpool/export/home'

+

+
+

+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+ Object lvl iblk dblk dsize lsize %full type +
+ 0 7 16K 16K 15.0K 16K 25.00 DMU dnode
+
+

+

+

Example 4 Display the predicted effect of enabling + deduplication on 'rpool'

+

+
+

+
# zdb -S rpool
+Simulated DDT histogram:
+bucket              allocated                       referenced          
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+
+ 1 694K 27.1G 15.0G 15.0G 694K 27.1G 15.0G 15.0G +
+ 2 35.0K 1.33G 699M 699M 74.7K 2.79G 1.45G 1.45G +
+ ... +dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+

+

+
+
+

+
+
+
Override the default spa_config_path (/etc/zfs/zpool.cache) setting. If the -U flag is specified, it overrides this environment variable setting.

+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
February 15, 2012
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zed.8.html b/man/v0.6/8/zed.8.html new file mode 100644 index 000000000..dd3c6859b --- /dev/null +++ b/man/v0.6/8/zed.8.html @@ -0,0 +1,370 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)          System Administration Commands          ZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-L] [-M] [-p pidfile] [-s + statefile] [-v] [-V] [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Write the daemon's state to the specified file. +

+
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.
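A minimal, hypothetical ZEDLET illustrating these conventions might simply log each zevent via syslog. The filename all-log.sh is illustrative and uses the special "all" prefix so that it matches every zevent class; like any ZEDLET it must be owned by root, be executable, and live (or be symlinked) in the enabled-zedlets directory:

#!/bin/sh
# all-log.sh - hypothetical example ZEDLET that records every zevent.
# The zevent details arrive as ZEVENT_* environment variables.
logger -t zed "eid=${ZEVENT_EID} class=${ZEVENT_SUBCLASS} time=${ZEVENT_TIME_STRING}"
exit 0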

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@libexecdir@/zfs/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

There is no maximum timeout for ZEDLET execution. Consequently, a + misbehaving ZEDLET can delay the processing of subsequent zevents.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
Octember 1, 2013          ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zfs.8.html b/man/v0.6/8/zfs.8.html new file mode 100644 index 000000000..f26125b11 --- /dev/null +++ b/man/v0.6/8/zfs.8.html @@ -0,0 +1,3315 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
zfs(8)          System Administration Commands          zfs(8)
+
+
+

+

zfs - configures ZFS file systems

+
+
+

+
zfs [-?]
+

+

+
zfs create [-p] [-o property=value] ... filesystem
+

+

+
zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume
+

+

+
zfs destroy [-fnpRrv] filesystem|volume
+

+

+
zfs destroy [-dnpRrv] filesystem|volume@snap[%snap][,...]
+

+

+
zfs destroy filesystem|volume#bookmark
+

+

+
zfs snapshot | snap [-r] [-o property=value] ... 
+
+ filesystem@snapname|volume@snapname ...
+

+

+
zfs rollback [-rRf] snapshot
+

+

+
zfs clone [-p] [-o property=value] ... snapshot filesystem|volume
+

+

+
zfs promote clone-filesystem
+

+

+
zfs rename [-f] filesystem|volume|snapshot
+
+ filesystem|volume|snapshot
+

+

+
zfs rename [-fp] filesystem|volume filesystem|volume
+

+

+
zfs rename -r snapshot snapshot
+

+

+
zfs list [-r|-d depth][-Hp][-o property[,property]...] [-t type[,type]..]
+
+ [-s property] ... [-S property] ... [filesystem|volume|snapshot] ...
+

+

+
zfs set property=value filesystem|volume|snapshot ...
+

+

+
zfs get [-r|-d depth][-Hp][-o field[,...]] [-t type[,...]] 
+
+ [-s source[,...]] "all" | property[,...] filesystem|volume|snapshot ...
+

+

+
zfs inherit [-rS] property filesystem|volume|snapshot ...
+

+

+
zfs upgrade [-v]
+

+

+
zfs upgrade [-r] [-V version] -a | filesystem
+

+

+
zfs userspace [-Hinp] [-o field[,...]] [-s field] ...
+
+ [-S field] ... [-t type[,...]] filesystem|snapshot
+

+

+
zfs groupspace [-Hinp] [-o field[,...]] [-s field] ...
+
+ [-S field] ... [-t type[,...]] filesystem|snapshot
+

+

+
zfs mount 
+

+

+
zfs mount [-vO] [-o options] -a | filesystem
+

+

+
zfs unmount | umount [-f] -a | filesystem|mountpoint
+

+

+
zfs share -a | filesystem
+

+

+
zfs unshare -a filesystem|mountpoint
+

+

+
zfs bookmark snapshot bookmark
+

+

+
zfs send [-DnPpRveL] [-[iI] snapshot] snapshot
+

+

+
zfs send [-eL] [-i snapshot|bookmark] filesystem|volume|snapshot
+

+

+
zfs receive | recv [-vnFu] filesystem|volume|snapshot
+

+

+
zfs receive | recv [-vnFu] [-d|-e] filesystem
+

+

+
zfs allow filesystem|volume
+

+

+
zfs allow [-ldug] "everyone"|user|group[,...] perm|@setname[,...] 
+
+ filesystem|volume
+

+

+
zfs allow [-ld] -e perm|@setname[,...] filesystem|volume
+

+

+
zfs allow -c perm|@setname[,...] filesystem|volume
+

+

+
zfs allow -s @setname perm|@setname[,...] filesystem|volume
+

+

+
zfs unallow [-rldug] "everyone"|user|group[,...] [perm|@setname[,... ]] 
+
+ filesystem|volume
+

+

+
zfs unallow [-rld] -e [perm|@setname[,... ]] filesystem|volume
+

+

+
zfs unallow [-r] -c [perm|@setname[ ... ]] filesystem|volume
+

+

+
zfs unallow [-r] -s @setname [perm|@setname[,... ]] filesystem|volume
+

+

+
zfs hold [-r] tag snapshot...
+

+

+
zfs holds [-r] snapshot...
+

+

+
zfs release [-r] tag snapshot...
+

+

+
zfs diff [-FHt] snapshot snapshot|filesystem
+
+
+
+

+

The zfs command configures ZFS datasets within a + ZFS storage pool, as described in zpool(8). A dataset is + identified by a unique path within the ZFS namespace. For + example:

+

+
+

+
pool/{filesystem,volume,snapshot}
+
+

+

+

+

where the maximum length of a dataset name is MAXNAMELEN + (256 bytes).

+

+

A dataset can be one of the following:

+

file system

+

+
A ZFS dataset of type filesystem can be + mounted within the standard system namespace and behaves like other file + systems. While ZFS file systems are designed to be POSIX + compliant, known issues exist that prevent compliance in some cases. + Applications that depend on standards conformance might fail due to + nonstandard behavior when checking file system free space.
+

+

volume

+

+
A logical volume exported as a raw or block device. This + type of dataset should only be used under special circumstances. File systems + are typically used in most environments.
+

+

snapshot

+

+
A read-only version of a file system or volume at a given + point in time. It is specified as filesystem@name or + volume@name.
+

+

bookmark

+

+
Much like a snapshot, but without the hold on + on-disk data. It can be used as the source of a send (but not for a receive). + It is specified as filesystem#name or volume#name.
+

+
+

+

A ZFS storage pool is a logical collection of devices that + provide space for datasets. A storage pool is also the root of the + ZFS file system hierarchy.

+

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

+

See zpool(8) for more information on creating and + administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

+

Snapshots can have arbitrary names. Snapshots of volumes can be + cloned or rolled back. Visibility is determined by the snapdev + property of the parent volume.

+

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file system. Snapshots are + automatically mounted on demand and may be unmounted at regular intervals. + The visibility of the .zfs directory can be controlled by the + snapdir property.
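For example, after taking a snapshot its contents can be browsed read-only through the .zfs/snapshot directory (names illustrative, assuming the file system is mounted at /pool/home):

# zfs snapshot pool/home@monday
# ls /pool/home/.zfs/snapshot/monday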

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

+

Unlike snapshots, bookmarks can not be accessed through the + filesystem in any way. From a storage standpoint a bookmark just provides a + way to reference when a snapshot was created as a distinct object. Bookmarks + are initially tied to a snapshot, not the filesystem/volume, and they will + survive if the snapshot itself is destroyed. Since they are very light + weight there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

+

Clones can only be created from a snapshot. When a snapshot is + cloned, it creates an implicit dependency between the parent and child. Even + though the clone is created somewhere else in the dataset hierarchy, the + original snapshot cannot be destroyed as long as a clone exists. The + origin property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the "origin" file + system to become a clone of the specified file system, which makes it + possible to destroy the file system that the clone was created from.
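For example, a clone might be created from a snapshot and later promoted so that the original file system can be destroyed (names illustrative):

# zfs snapshot pool/project@today
# zfs clone pool/project@today pool/project-work
# zfs promote pool/project-work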

+
+
+

+

Creating a ZFS file system is a simple operation, so the + number of file systems per system is likely to be numerous. To cope with + this, ZFS automatically manages mounting and unmounting file systems + without the need to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

+

By default, file systems are mounted under /path, + where path is the name of the file system in the ZFS + namespace. Directories are created and destroyed as needed.

+

+

A file system can also have a mount point set in the + mountpoint property. This directory is created as needed, and + ZFS automatically mounts the file system when the zfs mount -a + command is invoked (without editing /etc/fstab). The + mountpoint property can be inherited, so if pool/home has a + mount point of /export/stuff, then pool/home/user + automatically inherits a mount point of /export/stuff/user.

+

+

A file system mountpoint property of none prevents + the file system from being mounted.

+

+

If needed, ZFS file systems can also be managed with + traditional tools (mount, umount, /etc/fstab). If a + file system's mount point is set to legacy, ZFS makes no + attempt to manage the file system, and the administrator is responsible for + mounting and unmounting the file system.
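For example, setting a mount point on a parent dataset and checking how its children inherit it (names illustrative):

# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home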

+
+
+

+

Deduplication is the process for removing redundant data at the + block-level, reducing the total amount of data stored. If a file system has + the dedup property enabled, duplicate data blocks are removed + synchronously. The result is that only unique data is stored and common + components are shared among files.

+

WARNING: DO NOT ENABLE DEDUPLICATION UNLESS YOU NEED IT AND + KNOW EXACTLY WHAT YOU ARE DOING!

+

Deduplicating data is a very resource-intensive operation. It is generally recommended that you have at least 1.25 GB of RAM per 1 TB of storage when you enable deduplication. But calculating the exact requirements is a somewhat complicated affair. Please see the Oracle Dedup Guide for more information.

+

Enabling deduplication on an improperly-designed system will + result in extreme performance issues (extremely slow filesystem and snapshot + deletions etc.) and can potentially lead to data loss (i.e. unimportable + pool due to memory exhaustion) if your system is not built for this purpose. + Deduplication affects the processing power (CPU), disks (and the controller) + as well as primary (real) memory.

+

Before creating a pool with deduplication enabled, ensure that you + have planned your hardware requirements appropriately and implemented + appropriate recovery practices, such as regular backups.

+

Unless necessary, deduplication should NOT be enabled on a system. + Instead, consider using compression=lz4, as a less resource-intensive + alternative.

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, + native properties are either editable or read-only. User properties have no + effect on ZFS behavior, but you can use them to annotate datasets in + a way that is meaningful in your environment. For more information about + user properties, see the "User Properties" section, below.

+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

+

The values of numeric properties can be specified using + human-readable suffixes (for example, k, KB, M, + Gb, and so forth, up to Z for zettabyte). The following are + all valid (and equal) specifications:

+

+
+

+
1536M, 1.5g, 1.50GB
+
+

+

+

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, sharenfs, and + sharesmb.

+

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+

available

+

+
The amount of space available to the dataset and all its + children, assuming that there is no other activity in the pool. Because space + is shared within a pool, availability can be limited by any number of factors, + including physical pool size, quotas, reservations, or other datasets within + the pool. +

This property can also be referred to by its shortened column + name, avail.

+
+

+

compressratio

+

+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. The used + property includes descendant datasets, and, for clones, does not include the + space shared with the origin snapshot. For snapshots, the compressratio + is the same as the refcompressratio property. Compression can be turned + on by running: zfs set compression=on dataset. The default value + is off.
+

+

creation

+

+
The time this dataset was created.
+

+

clones

+

+
For snapshots, this property is a comma-separated list of + filesystems or volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the clones property is not + empty, then this snapshot can not be destroyed (even with the -r or + -f options).
+

+

defer_destroy

+

+
This property is on if the snapshot has been + marked for deferred destruction by using the zfs destroy -d + command. Otherwise, the property is off.
+

+

filesystem_count

+

+
The total number of filesystems and volumes that exist + under this location in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree under which the + dataset resides.
+

+

logicalreferenced

+

+
The amount of space that is "logically" + accessible by this dataset. See the referenced property. The logical + space ignores the effect of the compression and copies + properties, giving a quantity closer to the amount of data that applications + see. However, it does include space consumed by metadata. +

This property can also be referred to by its shortened column + name, lrefer.

+
+

+

logicalused

+

+
The amount of space that is "logically" + consumed by this dataset and all its descendents. See the used + property. The logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the amount of data that + applications see. However, it does include space consumed by metadata. +

This property can also be referred to by its shortened column + name, lused.

+
+

+

mounted

+

+
For file systems, indicates whether the file system is + currently mounted. This property can be either yes or no.
+

+

origin

+

+
For cloned file systems or volumes, the snapshot from + which the clone was created. See also the clones property.
+

+

referenced

+

+
The amount of data that is accessible by this dataset, + which may or may not be shared with other datasets in the pool. When a + snapshot or clone is created, it initially references the same amount of space + as the file system or snapshot it was created from, since its contents are + identical. +

This property can also be referred to by its shortened column + name, refer.

+
+

+

refcompressratio

+

+
The compression ratio achieved for the referenced + space of this dataset, expressed as a multiplier. See also the + compressratio property.
+

+

snapshot_count

+

+
The total number of snapshots that exist under this + location in the dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under which the + dataset resides.
+

+

type

+

+
The type of dataset: filesystem, volume, or + snapshot.
+

+

used

+

+
The amount of space consumed by this dataset and all its descendents. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation.

When snapshots (see the "Snapshots" section) are + created, their space is initially shared between the snapshot and the file + system, and possibly with previous snapshots. As the file system changes, + space that was previously shared becomes unique to the snapshot, and counted + in the snapshot's space used. Additionally, deleting snapshots can increase + the amount of space unique to (and used by) other snapshots.

+

The amount of space used, available, or referenced does not take + into account pending changes. Pending changes are generally accounted for + within a few seconds. Committing a change to a disk using fsync(2) or + O_SYNC does not necessarily guarantee that the space usage + information is updated immediately.

+
+

+

usedby*

+

+
The usedby* properties decompose the used properties into the various reasons that space is used. Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool "version 13" pools.
+

+

usedbychildren

+

+
The amount of space used by children of this dataset, + which would be freed if all the dataset's children were destroyed.
+

+

usedbydataset

+

+
The amount of space used by this dataset itself, which + would be freed if the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+

+

usedbyrefreservation

+

+
The amount of space used by a refreservation set + on this dataset, which would be freed if the refreservation was + removed.
+

+

usedbysnapshots

+

+
The amount of space consumed by snapshots of this + dataset. In particular, it is the amount of space that would be freed if all + of this dataset's snapshots were destroyed. Note that this is not simply the + sum of the snapshots' used properties because space can be shared by + multiple snapshots.
+

+

userused@user

+

+
The amount of space consumed by the specified user in + this dataset. Space is charged to the owner of each file, as displayed by + ls -l. The amount of space charged is displayed by du and + ls -s. See the zfs userspace subcommand for more + information. +

Unprivileged users can access only their own space usage. The root + user, or a user who has been granted the userused privilege with + zfs allow, can access everyone's usage.

+

The userused@... properties are not displayed by zfs get + all. The user's name must be appended after the @ symbol, using + one of the following forms:

+
+
+
+
POSIX name (for example, joe)
+
+
+
+
+
+
POSIX numeric ID (for example, 789)
+
+
+
+
+
+
SID name (for example, joe.smith@mydomain)
+
+
+
+
+
+
SID numeric ID (for example, S-1-123-456-789)
+
+
+
+

+

userrefs

+

+
This property is set to the number of user holds on this + snapshot. User holds are set by using the zfs hold command.
+

+

groupused@group

+

+
The amount of space consumed by the specified group in + this dataset. Space is charged to the group of each file, as displayed by + ls -l. See the userused@user property for more + information. +

Unprivileged users can only access their own groups' space usage. + The root user, or a user who has been granted the groupused privilege + with zfs allow, can access all groups' usage.

+
+

+

volblocksize=blocksize

+

+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been written, so it + should be set at volume creation time. The default blocksize for + volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its shortened column + name, volblock.

+
+

+

written

+

+
The amount of referenced space written to this + dataset since the previous snapshot.
+

+

written@snapshot

+

+
The amount of referenced space written to this + dataset since the specified snapshot. This is the space that is referenced by + this dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short snapshot name (just the part after the @), in which case it will be interpreted as a snapshot in the same filesystem as this dataset. The snapshot may be a full snapshot name (filesystem@snapshot), which for clones may be a snapshot in the origin's filesystem (or the origin of the origin's filesystem, etc).

+
+

+

+

The following native properties can be used to change the behavior + of a ZFS dataset.

+

aclinherit=discard | noallow | + restricted | passthrough | passthrough-x

+

+
Controls how ACL entries are inherited when files + and directories are created. A file system with an aclinherit property + of discard does not inherit any ACL entries. A file system with + an aclinherit property value of noallow only inherits + inheritable ACL entries that specify "deny" permissions. The + property value restricted (the default) removes the write_acl + and write_owner permissions when the ACL entry is inherited. A + file system with an aclinherit property value of passthrough + inherits all inheritable ACL entries without any modifications made to + the ACL entries when they are inherited. A file system with an + aclinherit property value of passthrough-x has the same meaning + as passthrough, except that the owner@, group@, and + everyone@ ACEs inherit the execute permission only if the file + creation mode also requests the execute bit. +

When the property value is set to passthrough, files are + created with a mode determined by the inheritable ACEs. If no + inheritable ACEs exist that affect the mode, then the mode is set in + accordance to the requested mode from the application.

+

The aclinherit property does not apply to Posix ACLs.

+
+

+

acltype=noacl | posixacl

+

+
Controls whether ACLs are enabled and if so what type of + ACL to use. When a file system has the acltype property set to + noacl (the default) then ACLs are disabled. Setting the acltype + property to posixacl indicates Posix ACLs should be used. Posix ACLs + are specific to Linux and are not functional on other platforms. Posix ACLs + are stored as an xattr and therefore will not overwrite any existing ZFS/NFSv4 + ACLs which may be set. Currently only posixacls are supported on Linux. +

To obtain the best performance when setting posixacl users + are strongly encouraged to set the xattr=sa property. This will + result in the Posix ACL being stored more efficiently on disk. But as a + consequence of this all new xattrs will only be accessible from ZFS + implementations which support the xattr=sa property. See the + xattr property for more details.

+
+

+

atime=on | off

+

+
Controls whether the access time for files is updated + when they are read. Turning this property off avoids producing write traffic + when reading files and can result in significant performance gains, though it + might confuse mailers and other similar utilities. The default value is + on. See also relatime below.
+

+

canmount=on | off | noauto

+

+
If this property is set to off, the file system + cannot be mounted, and is ignored by zfs mount -a. Setting this + property to off is similar to setting the mountpoint property to + none, except that the dataset still has a normal mountpoint + property, which can be inherited. Setting this property to off allows + datasets to be used solely as a mechanism to inherit properties. One example + of setting canmount=off is to have two datasets with the same + mountpoint, so that the children of both datasets appear in the same + directory, but might have different inherited characteristics. +

When the noauto option is set, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted automatically + when the dataset is created or imported, nor is it mounted by the zfs + mount -a command or unmounted by the zfs unmount -a command.

+

This property is not inherited.
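A sketch of the inheritance pattern described above, using placeholder names; neither parent is ever mounted, but children of both appear under /export/home:

    zfs create -o canmount=off -o mountpoint=/export/home tank/users
    zfs create -o canmount=off -o mountpoint=/export/home tank/admins
    zfs create tank/users/alice      # mounted at /export/home/alice
    zfs create tank/admins/bob       # mounted at /export/home/bob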

+
+

+

checksum=on | off | fletcher2 | fletcher4 | sha256

+

+
Controls the checksum used to verify data integrity. The + default value is on, which automatically selects an appropriate + algorithm (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on user data. + Disabling checksums is NOT a recommended practice. +

Changing this property affects only newly-written data.

+
+

+

compression=on | off | lzjb | + lz4 | gzip | gzip-N | zle

+

+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the current default compression algorithm should be used. The default balances compression and decompression speed with compression ratio, and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the lz4_compress feature is enabled, lz4.

+

The lzjb compression algorithm is optimized for performance + while providing decent data compression.

+

The lz4 compression algorithm is a high-performance + replacement for the lzjb algorithm. It features significantly faster + compression and decompression, as well as a moderately higher compression + ratio than lzjb, but can only be used on pools with the + lz4_compress feature set to enabled. See + zpool-features(5) for details on ZFS feature flags and the + lz4_compress feature.

+

The gzip compression algorithm uses the same compression as + the gzip(1) command. You can specify the gzip level by using + the value gzip-N where N is an integer from 1 (fastest) + to 9 (best compression ratio). Currently, gzip is equivalent to + gzip-6 (which is also the default for gzip(1)). The zle + compression algorithm compresses runs of zeros.

+

This property can also be referred to by its shortened column name + compress. Changing this property affects only newly-written data.
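For example (dataset names are placeholders), a general-purpose dataset might use lz4 while an archive dataset trades CPU time for a higher gzip level; only data written after the change is affected:

    zfs set compression=lz4 tank/data
    zfs set compression=gzip-9 tank/archive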

+
+

+

copies=1 | 2 | 3

+

+
Controls the number of copies of data stored for this + dataset. These copies are in addition to any redundancy provided by the pool, + for example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated file + and dataset, changing the used property and counting against quotas and + reservations. +

Changing this property only affects newly-written data. Therefore, + set this property at file system creation time by using the -o + copies=N option.
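For example, a hypothetical dataset holding important files can be created with two copies of every block:

    zfs create -o copies=2 tank/important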

+
+

+

dedup=on | off | verify | + sha256[,verify]

+

+
Controls whether deduplication is in effect for a + dataset. The default value is off. The default checksum used for + deduplication is sha256 (subject to change). When dedup is + enabled, the dedup checksum algorithm overrides the checksum + property. Setting the value to verify is equivalent to specifying + sha256,verify. +

If the property is set to verify, then, whenever two blocks + have the same signature, ZFS will do a byte-for-byte comparison with the + existing block to ensure that the contents are identical.

+

Unless necessary, deduplication should NOT be enabled on a system. + See Deduplication above.

+
+

+

devices=on | off

+

+
Controls whether device nodes can be opened on this file + system. The default value is on.
+

+

exec=on | off

+

+
Controls whether processes can be executed from within + this file system. The default value is on.
+

+

mlslabel=label | none

+

+
The mlslabel property is a sensitivity label that + determines if a dataset can be mounted in a zone on a system with Trusted + Extensions enabled. If the labeled dataset matches the labeled zone, the + dataset can be mounted and accessed from the labeled zone. +

When the mlslabel property is not set, the default value is + none. Setting the mlslabel property to none is + equivalent to removing the property.

+

The mlslabel property can be modified only when Trusted + Extensions is enabled and only with appropriate privilege. Rights to modify + it cannot be delegated. When changing a label to a higher label or setting + the initial dataset label, the {PRIV_FILE_UPGRADE_SL} privilege is + required. When changing a label to a lower label or the default + (none), the {PRIV_FILE_DOWNGRADE_SL} privilege is required. + Changing the dataset to labels other than the default can be done only when + the dataset is not mounted. When a dataset with the default label is mounted + into a labeled-zone, the mount operation automatically sets the + mlslabel property to the label of that zone.

+

When Trusted Extensions is not enabled, only datasets with + the default label (none) can be mounted.

+

Zones are a Solaris feature and are not relevant on Linux.

+
+

+

filesystem_limit=count | none

+

+
Limits the number of filesystems and volumes that can + exist under this point in the dataset tree. The limit is not enforced if the + user is allowed to change the limit. Setting a filesystem_limit on a + descendent of a filesystem that already has a filesystem_limit does not + override the ancestor's filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(5)).
+

+

mountpoint=path | none | + legacy

+

+
Controls the mount point used for this file system. See + the "Mount Points" section for more information on how this property + is used. +

When the mountpoint property is changed for a file system, + the file system and any children that inherit the mount point are unmounted. + If the new value is legacy, then they remain unmounted. Otherwise, + they are automatically remounted in the new location if the property was + previously legacy or none, or if they were mounted before the + property was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+

+

nbmand=on | off

+

+
Controls whether the file system should be mounted with nbmand (Non Blocking mandatory locks). This is used for CIFS clients. Changes to this property only take effect when the file system is unmounted and remounted. See mount(8) for more information on nbmand mounts.
+

+

primarycache=all | none | + metadata

+

+
Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+

+

quota=size | none

+

+
Limits the amount of space a dataset and its descendents + can consume. This property enforces a hard limit on the amount of space used. + This includes all space consumed by descendents, including file systems and + snapshots. Setting a quota on a descendent of a dataset that already has a + quota does not override the ancestor's quota, but rather imposes an additional + limit. +

Quotas cannot be set on volumes, as the volsize property + acts as an implicit quota.

+
+

+

snapshot_limit=count | none

+

+
Limits the number of snapshots that can be created on a + dataset and its descendents. Setting a snapshot_limit on a descendent of a + dataset that already has a snapshot_limit does not override the ancestor's + snapshot_limit, but rather imposes an additional limit. The limit is not + enforced if the user is allowed to change the limit. For example, this means + that recursive snapshots taken from the global zone are counted against each + delegated dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+

+

userquota@user=size | none

+

+
Limits the amount of space consumed by the specified + user. Similar to the refquota property, the userquota space + calculation does not include space that is used by descendent datasets, such + as snapshots and clones. User space consumption is identified by the + userspace@user property. +

Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed their quota before the system notices that they are over quota and begins to refuse additional writes with the EDQUOT error message. See the zfs userspace subcommand for more information.

+

Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems before + version 4, or on pools before version 15. The userquota@... + properties are not displayed by zfs get all. The user's name must be + appended after the @ symbol, using one of the following forms:

+
+
+
+
POSIX name (for example, joe)
+
+
+
+
+
+
POSIX numeric ID (for example, 789)
+
+
+
+
+
+
SID name (for example, joe.smith@mydomain)
+
+
+
+
+
+
SID numeric ID (for example, S-1-123-456-789)
+
+
+
+
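For example (pool, dataset, and user names are placeholders), a quota for a POSIX user can be set and then inspected:

    zfs set userquota@joe=25G tank/home
    zfs get userquota@joe,userused@joe tank/home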

+

groupquota@group=size | + none

+

+
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space usage. + The root user, or a user who has been granted the groupquota + privilege with zfs allow, can get and set all groups' quotas.

+
+

+

readonly=on | off

+

+
Controls whether this dataset can be modified. The + default value is off. +

This property can also be referred to by its shortened column + name, rdonly.

+
+

+

recordsize=size

+

+
Specifies a suggested block size for files in the file + system. This property is designed solely for use with database workloads that + access files in fixed-size records. ZFS automatically tunes block sizes + according to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of the database + can result in significant performance gains. Use of this property for + general purpose file systems is strongly discouraged, and may adversely + affect performance.

+

The size specified must be a power of two greater than or equal to + 512 and less than or equal to 128 Kbytes.

+

Changing the file system's recordsize affects only files + created afterward; existing files are unaffected.

+

This property can also be referred to by its shortened column + name, recsize.

+
+

+

redundant_metadata=all | most

+

+
Controls what types of metadata are stored redundantly. ZFS stores an extra copy of metadata, so that if a single block is corrupted, the amount of user data lost is limited. This extra copy is in addition to any redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and is in addition to an extra copy specified by the copies property (up to a total of 3 copies). For example, if the pool is mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6 copies of most metadata, and 4 copies of data and some metadata (copies=2 requests two logical copies, each written to both sides of the mirror, giving 4 physical copies of data; most metadata gets one additional logical copy, giving 3 × 2 = 6).

When set to all, ZFS stores an extra copy of all metadata. + If a single on-disk block is corrupt, at worst a single block of user data + (which is recordsize bytes long) can be lost.

+

When set to most, ZFS stores an extra copy of most types of + metadata. This can improve performance of random writes, because less + metadata must be written. In practice, at worst about 100 blocks (of + recordsize bytes each) of user data can be lost if a single on-disk + block is corrupt. The exact behavior of which metadata blocks are stored + redundantly may change in future releases.

+

The default value is all.

+
+

+

refquota=size | none

+

+
Limits the amount of space a dataset can consume. This + property enforces a hard limit on the amount of space used. This hard limit + does not include space used by descendents, including file systems and + snapshots.
+

+

refreservation=size | none

+

+
The minimum amount of space guaranteed to a dataset, not + including its descendents. When the amount of space used is below this value, + the dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation reservation is accounted + for in the parent datasets' space used, and counts against the parent + datasets' quotas and reservations. +

If refreservation is set, a snapshot is only allowed if + there is enough free pool space outside of this reservation to accommodate + the current number of "referenced" bytes in the dataset.

+

This property can also be referred to by its shortened column + name, refreserv.

+
+

+

relatime=on | off

+

+
Controls the manner in which the access time is updated + when atime=on is set. Turning this property on causes the access + time to be updated relative to the modify or change time. Access time is only + updated if the previous access time was earlier than the current modify or + change time or if the existing access time hasn't been updated within the past + 24 hours. The default value is off.
+

+

reservation=size | none

+

+
The minimum amount of space guaranteed to a dataset and + its descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified by + its reservation. Reservations are accounted for in the parent datasets' space + used, and count against the parent datasets' quotas and reservations. +

This property can also be referred to by its shortened column + name, reserv.

+
+

+

secondarycache=all | none | + metadata

+

+
Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+

+

setuid=on | off

+

+
Controls whether the set-UID bit is respected for + the file system. The default value is on.
+

+

shareiscsi=on | off

+

+
Like the sharenfs property, shareiscsi + indicates whether a ZFS volume is exported as an iSCSI target. + The acceptable values for this property are on, off, and + type=disk. The default value is off. In the future, other target + types might be supported. For example, tape. +

You might want to set shareiscsi=on for a file system so + that all ZFS volumes within the file system are shared by default. + However, setting this property on a file system has no direct effect.

+
+

+

sharesmb=on | off

+

+
Controls whether the file system is shared by using + Samba USERSHARES, and what options are to be used. Otherwise, the file + system is automatically shared and unshared with the zfs share and + zfs unshare commands. If the property is set to on, the + net(8) command is invoked to create a USERSHARE. +

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name which would be illegal in the resource name are replaced with underscore (_) characters. The ZFS On Linux driver does not (yet) support additional options which might be available in the Solaris version.

+

If the sharesmb property is set to off, the file + systems are unshared.

+

In Linux, the share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access by default (which means Samba must be able to authenticate a real user via the system passwd/shadow files, LDAP, or smbpasswd). This means that any additional access control (disallowing specific users, restricting specific access, etc.) must be done on the underlying filesystem.

+

+
Example of mounting an SMB filesystem shared through ZFS (share/tmp). Note that a user name and password must be given:

+

+
+ smbmount //127.0.0.1/share_tmp /mnt/tmp -o + user=workgroup/turbo,password=obrut,uid=1000 +
+
+

+

Minimal /etc/samba/smb.conf configuration

+

+
* Samba will need to listen to 'localhost' (127.0.0.1) for the zfs utilities to communicate with Samba. This is the default behavior for most Linux distributions.

+

* Samba must be able to authenticate a user. This can be done in a number of ways, depending on whether the system password file, LDAP, or the Samba-specific smbpasswd file is used. How to do this is outside the scope of this manual. Please refer to the smb.conf(5) manpage for more information.

+

* See the USERSHARE section of the smb.conf(5) man page for all configuration options, in case you need to modify any options of the share afterwards. Do note that any changes made with the 'net' command will be undone if the share is ever unshared (such as at a reboot). In the future, ZoL will be able to set specific options directly using sharesmb=<option>.

+

+
+

+
+

+

sharenfs=on | off | opts

+

+
Controls whether the file system is shared via NFS, and what options are used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the exportfs(8) command in the following manner (see exportfs(8) for the meaning of the different options):

+
+

+
/usr/sbin/exportfs -i -o sec=sys,rw,no_subtree_check,no_root_squash,mountpoint *:<mountpoint of dataset>
+
+

Otherwise, the exportfs(8) command is invoked with options + equivalent to the contents of this property.

+

When the sharenfs property is changed for a dataset, the + dataset and any children inheriting the property are re-shared with the new + options, only if the property was previously off, or if they were + shared before the property was changed. If the new property is off, + the file systems are unshared.
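A hedged sketch (tank/export and the option string are placeholders); when an option string is given, it is passed to exportfs(8):

    zfs set sharenfs=on tank/export                   # share with the default options shown above
    zfs set sharenfs='rw,no_root_squash' tank/export  # share with specific exportfs(8) options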

+
+

+

logbias = latency | throughput

+

+
Provide a hint to ZFS about handling of synchronous + requests in this dataset. If logbias is set to latency (the + default), ZFS will use pool log devices (if configured) to handle the requests + at low latency. If logbias is set to throughput, ZFS will not + use configured pool log devices. ZFS will instead optimize synchronous + operations for global pool throughput and efficient use of resources.
+

+

snapdev=hidden | visible

+

+
Controls whether the snapshot devices of zvols are hidden or visible. The default value is hidden.
+

+

snapdir=hidden | visible

+

+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + "Snapshots" section. The default value is hidden.
+

+

sync=standard | always | + disabled

+

+
Controls the behavior of synchronous requests (e.g. + fsync, O_DSYNC). standard is the POSIX specified behavior of ensuring + all synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to be written and + flushed before its system call returns. This has a large performance penalty. + disabled disables synchronous requests. File system transactions are + only committed to stable storage periodically. This option will give the + highest performance. However, it is very dangerous as ZFS would be ignoring + the synchronous transaction demands of applications such as databases or NFS. + Administrators should only use this option when the risks are + understood.
+

+

version=1 | 2 | current

+

+
The on-disk version of this file system, which is + independent of the pool version. This property can only be set to later + supported versions. See the zfs upgrade command.
+

+

volsize=size

+

+
For volumes, specifies the logical size of the volume. By + default, creating a volume establishes a reservation of equal size. For + storage pools with a version number of 9 or higher, a refreservation is + set instead. Any changes to volsize are reflected in an equivalent + change to the reservation (or refreservation). The volsize can + only be set to a multiple of volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly when + shrinking the size). Extreme care should be used when adjusting the volume + size.

+

Though not recommended, a "sparse volume" (also known as "thin provisioning") can be created by specifying the -s option to the zfs create -V command, or by changing the reservation after the volume has been created. A "sparse volume" is a volume where the reservation is less than the volume size. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the reservation.
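For example (names and sizes are placeholders), a sparse volume can be created with -s and its logical size grown later through the volsize property:

    zfs create -s -V 100G tank/sparsevol
    zfs set volsize=200G tank/sparsevol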

+
+

+

vscan=on | off

+

+
Controls whether regular files should be scanned for + viruses when a file is opened and closed. In addition to enabling this + property, the virus scan service must also be enabled for virus scanning to + occur. The default value is off.
+

+

xattr=on | off | sa

+

+
Controls whether extended attributes are enabled for this + file system. Two styles of extended attributes are supported either directory + based or system attribute based. +

The default value of on enables directory-based extended attributes. This style of xattr imposes no practical limit on either the size or number of xattrs which may be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum xattr size to 64K. This is the most compatible style of xattr and it is supported by the majority of ZFS implementations.

+

System attribute based xattrs may be enabled by setting the value + to sa. The key advantage of this type of xattr is improved + performance. Storing xattrs as system attributes significantly decreases the + amount of disk IO required. Up to 64K of xattr data may be stored per file + in the space reserved for system attributes. If there is not enough space + available for an xattr then it will be automatically written as a directory + based xattr. System attribute based xattrs are not accessible on platforms + which do not support the xattr=sa feature.

+

The use of system attribute based xattrs is strongly encouraged for users of SELinux or Posix ACLs. Both of these features rely heavily on xattrs and benefit significantly from the reduced xattr access time.

+
+

+

zoned=on | off

+

+
Controls whether the dataset is managed from a non-global + zone. Zones are a Solaris feature and are not relevant on Linux. The default + value is off.
+

+

+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs create or + zpool create commands, these properties are inherited from the parent + dataset. If the parent dataset lacks these properties due to having been + created prior to these features being supported, the new file system will + have the default values for these properties.

+

casesensitivity=sensitive | + insensitive | mixed

+

+
Indicates whether the file name matching algorithm used + by the file system should be case-sensitive, case-insensitive, or allow a + combination of both styles of matching. The default value for the + casesensitivity property is sensitive. Traditionally, UNIX and + POSIX file systems have case-sensitive file names. +

The mixed value for the casesensitivity property + indicates that the file system can support requests for both case-sensitive + and case-insensitive matching behavior. Currently, case-insensitive matching + behavior on a file system that supports mixed behavior is limited to the + Solaris CIFS server product. For more information about the mixed + value behavior, see the Solaris ZFS Administration Guide.

+
+

+

normalization = none | formC | + formD | formKC | formKD

+

+
Indicates whether the file system should perform a Unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+

+

utf8only=on | off

+

+
Indicates whether the file system should reject file + names that include characters that are not present in the UTF-8 + character code set. If this property is explicitly set to off, the + normalization property must either not be explicitly set or be set to + none. The default value for the utf8only property is off. + This property cannot be changed after the file system is created.
+

+

+

The casesensitivity, normalization, and + utf8only properties are also new permissions that can be assigned to + non-privileged users by using the ZFS delegated administration + feature.

+

+

context=SELinux_User:SELinux_Role:SELinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for all files in the filesystem under the mountpoint for that filesystem. See selinux(8) for more information.
+

+

fscontext=SELinux_User:SELinux_Role:SELinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for the filesystem being mounted. See selinux(8) for more information.
+

+

defcontext=SELinux_User:SELinux_Role:SELinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for unlabeled files. + See selinux(8) for more information.
+

+

rootcontext=SELinux_User:SELinux_Role:SELinux_Type:Sensitivity_Level

+

+
This flag sets the SELinux context for the root inode of + the filesystem. See selinux(8) for more information.
+

+

overlay=on | off

+

+
Allow mounting on a busy directory or a directory which + already contains files/directories. This is the default mount behavior for + Linux filesystems. However, for consistency with ZFS on other platforms + overlay mounts are disabled by default. Set overlay=on to enable + overlay mounts.
+

+
+
+

+

When a file system is mounted, either through mount(8) for + legacy mounts or the zfs mount command for normal file systems, its + mount options are set according to its properties. The correlation between + properties and mount options is as follows:

+

+
+

+
+
+ PROPERTY MOUNT OPTION +
+ devices devices/nodevices +
+ exec exec/noexec +
+ readonly ro/rw +
+ setuid setuid/nosetuid +
+ xattr xattr/noxattr +
+ atime atime/noatime +
+ relatime relatime/norelatime +
+ nbmand nbmand/nonbmand
+
+

+

+

+

In addition, these options can be set on a per-mount basis using + the -o option, without affecting the property that is stored on disk. + The values specified on the command line override the values stored in the + dataset. The -nosuid option is an alias for + nodevices,nosetuid. These properties are reported as + "temporary" by the zfs get command. If the properties are + changed while the dataset is mounted, the new setting overrides any + temporary settings.

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS + behavior, but applications or administrators can use them to annotate + datasets (file systems, volumes, and snapshots).

+

+

User property names must contain a colon (:) character to + distinguish them from native properties. They may contain lowercase letters, + numbers, and the following punctuation characters: colon (:), dash + (-), period (.), and underscore (_). The expected + convention is that the property name is divided into two portions such as + module:property, but this namespace is not enforced by + ZFS. User property names can be at most 256 characters, and cannot + begin with a dash (-).

+

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the module + component of property names to reduce the chance that two + independently-developed packages use the same property name for different + purposes. For example, property names beginning with com.sun. are + reserved for use by Oracle Corporation (which acquired Sun + Microsystems).

+

+

The values of user properties are arbitrary strings, are always inherited, and are never validated. All of the commands that operate on properties (zfs list, zfs get, zfs set, and so forth) can be used to manipulate both native properties and user properties. Use the zfs inherit command to clear a user property. If the property is not defined in any parent dataset, it is removed entirely. Property values are limited to 1024 characters.
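A brief sketch using a hypothetical reversed-DNS module name:

    zfs set com.example:backup-policy=weekly tank/data
    zfs get com.example:backup-policy tank/data
    zfs inherit com.example:backup-policy tank/data    # clears the user property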

+
+
+

+

ZFS volumes may be used as Linux swap devices. After creating the volume with the zfs create command, set up and enable the swap area using the mkswap(8) and swapon(8) commands. Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.
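A minimal sketch (the volume name and size are placeholders):

    zfs create -V 4G tank/swap
    mkswap /dev/zvol/tank/swap
    swapon /dev/zvol/tank/swap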

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

zfs ?

+

+
Displays a help message.
+

+

zfs create [-p] [-o + property=value] ... filesystem

+

+
Creates a new ZFS file system. The file system is + automatically mounted according to the mountpoint property inherited + from the parent. +

-p

+

+
Creates all the non-existing parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent. Any property specified + on the command line using the -o option is ignored. If the target + filesystem already exists, the operation completes successfully.
+

+

-o property=value

+

+
Sets the specified property as if the command zfs + set property=value was invoked at the same time the dataset + was created. Any editable ZFS property can also be set at creation + time. Multiple -o options can be specified. An error results if the + same property is specified in multiple -o options.
+

+
+

+

zfs create [-ps] [-b blocksize] + [-o property=value] ... -V size + volume

+

+
Creates a volume of the given size. The volume is + exported as a block device in /dev/zvol/path, where path + is the name of the volume in the ZFS namespace. The size represents the + logical size as exported by the device. By default, a reservation of equal + size is created. +

size is automatically rounded up to the nearest 128 Kbytes + to ensure that the volume has an integral number of blocks regardless of + blocksize.

+

-p

+

+
Creates all the non-existing parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent. Any property specified + on the command line using the -o option is ignored. If the target + filesystem already exists, the operation completes successfully.
+

+

-s

+

+
Creates a sparse volume with no reservation. See + volsize in the Native Properties section for more information about + sparse volumes.
+

+

-o property=value

+

+
Sets the specified property as if the zfs set + property=value command was invoked at the same time the dataset + was created. Any editable ZFS property can also be set at creation + time. Multiple -o options can be specified. An error results if the + same property is specified in multiple -o options.
+

+

-b blocksize

+

+
Equivalent to -o + volblocksize=blocksize. If this option is specified in + conjunction with -o volblocksize, the resulting behavior is + undefined.
+

+
+

+

zfs destroy [-fnpRrv] + filesystem|volume

+

+
Destroys the given dataset. By default, the command + unshares any file systems that are currently shared, unmounts any file systems + that are currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +

-r

+

+
Recursively destroy all children.
+

+

-R

+

+
Recursively destroy all dependents, including cloned file + systems outside the target hierarchy.
+

+

-f

+

+
Force an unmount of any file systems using the unmount + -f command. This option has no effect on non-file systems or unmounted + file systems.
+

+

-n

+

+
Do a dry-run ("No-op") deletion. No data will + be deleted. This is useful in conjunction with the -v or -p + flags to determine what data would be deleted.
+

+

-p

+

+
Print machine-parsable verbose information about the + deleted data.
+

+

-v

+

+
Print verbose information about the deleted data.
+

+

Extreme care should be taken when applying either the -r or + the -R options, as they can destroy large portions of a pool and + cause unexpected behavior for mounted file systems in use.

+
+

+

zfs destroy [-dnpRrv] + filesystem|volume@snap[%snap][,...]

+

+
The given snapshots are destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed them. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it is + marked for deferred destruction. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, at + which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating the + first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or newest + snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same filesystem + or volume may be specified in a comma-separated list of snapshots. Only the + snapshot's short name (the part after the @) should be specified when + using a range or comma-separated list to identify multiple snapshots.
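For example (names are placeholders), a dry run can be used to preview what an inclusive range would destroy before actually destroying it:

    zfs destroy -nv tank/home@snap1%snap3    # preview only, nothing is destroyed
    zfs destroy -v tank/home@snap1%snap3     # destroy the whole range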

+

-d

+

+
Defer snapshot deletion.
+

+

-r

+

+
Destroy (or mark for deferred destruction) all snapshots + with this name in descendent file systems.
+

+

-R

+

+
Recursively destroy all clones of these snapshots, + including the clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+

+

-n

+

+
Do a dry-run ("No-op") deletion. No data will + be deleted. This is useful in conjunction with the -v or -p + flags to determine what data would be deleted.
+

+

-p

+

+
Print machine-parsable verbose information about the + deleted data.
+

+

-v

+

+
Print verbose information about the deleted data.
+

+

Extreme care should be taken when applying either the -r or + the -R options, as they can destroy large portions of a pool and + cause unexpected behavior for mounted file systems in use.

+
+

+

+

zfs destroy + filesystem|volume#bookmark

+

+
The given bookmark is destroyed. +

+
+

+

zfs snapshot [-r] [-o + property=value] ... + filesystem@snapname|volume@snapname ...

+

+
Creates snapshots with the given names. All previous + modifications by successful system calls to the file system are part of the + snapshots. Snapshots are taken atomically, so that all snapshots correspond to + the same moment in time. See the "Snapshots" section for details. +

-r

+

+
Recursively create snapshots of all descendent + datasets.
+

+

-o property=value

+

+
Sets the specified property; see zfs create for + details.
+

+
+

+

zfs rollback [-rRf] snapshot

+

+
Roll back the given dataset to a previous snapshot. When + a dataset is rolled back, all data that has changed since the snapshot is + discarded, and the dataset reverts to the state at the time of the snapshot. + By default, the command refuses to roll back to a snapshot other than the most + recent one. In order to do so, all intermediate snapshots and bookmarks must + be destroyed by specifying the -r option. +

The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only direct snapshots of the specified filesystem are destroyed by either of these options. To completely roll back a recursive snapshot, you must roll back the individual child snapshots.

+

-r

+

+
Destroy any snapshots and bookmarks more recent than the + one specified.
+

+

-R

+

+
Recursively destroy any more recent snapshots and + bookmarks, as well as any clones of those snapshots.
+

+

-f

+

+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+

+
+

+

zfs clone [-p] [-o + property=value] ... snapshot + filesystem|volume

+

+
Creates a clone of the given snapshot. See the + "Clones" section for details. The target dataset can be located + anywhere in the ZFS hierarchy, and is created as the same type as the + original. +

-p

+

+
Creates all the non-existing parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent. If the target + filesystem or volume already exists, the operation completes + successfully.
+

+

-o property=value

+

+
Sets the specified property; see zfs create for + details.
+

+
+

+

zfs promote clone-filesystem

+

+
Promotes a clone file system to no longer be dependent on + its "origin" snapshot. This makes it possible to destroy the file + system that the clone was created from. The clone parent-child dependency + relationship is reversed, so that the origin file system becomes a clone of + the specified file system. +

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin file system to the promoted clone, so enough space must be + available to accommodate these snapshots. No new space is consumed by this + operation, but the space accounting is adjusted. The promoted clone must not + have any conflicting snapshot names of its own. The rename subcommand + can be used to rename any conflicting snapshots.
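A sketch of a typical promote workflow, using placeholder dataset names:

    zfs snapshot tank/prod@before-test
    zfs clone tank/prod@before-test tank/test
    zfs promote tank/test        # tank/prod becomes a clone of tank/test@before-test
    zfs destroy tank/prod        # now possible, since the origin snapshot moved to tank/test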

+
+

+

zfs rename [-f] + filesystem|volume|snapshot +
+ filesystem|volume|snapshot +
+ zfs rename [-fp] filesystem|volume + filesystem|volume

+

+
Renames the given dataset. The new target can be located + anywhere in the ZFS hierarchy, with the exception of snapshots. + Snapshots can only be renamed within the parent file system or volume. When + renaming a snapshot, the parent file system of the snapshot does not need to + be specified as part of the second argument. Renamed file systems can inherit + new mount points, in which case they are unmounted and remounted at the new + mount point. +

-p

+

+
Creates all the nonexistent parent datasets. Datasets + created in this manner are automatically mounted according to the + mountpoint property inherited from their parent.
+

+

-f

+

+
Force unmount any filesystems that need to be unmounted + in the process.
+

+
+

+

zfs rename -r snapshot + snapshot

+

+
Recursively rename the snapshots of all descendent + datasets. Snapshots are the only dataset that can be renamed + recursively.
+

+

zfs list [-r|-d depth] + [-Hp] [-o property[,...]] [ -t + type[,...]] [ -s property ] ... [ -S + property ] ... [filesystem|volume|snapshot] + ...

+

+
Lists the property information for the given datasets in tabular form. If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnaps property is on (the default is off). When listing hundreds or thousands of snapshots, performance can be improved by restricting the output to only the name; in that case, it is recommended to use -o name -s name. The following fields are displayed by default: name,used,available,referenced,mountpoint.
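For example, restricting the output as suggested above when listing a large number of snapshots:

    zfs list -t snapshot -o name -s name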

-H

+

+
Used for scripting mode. Do not print headers and + separate fields by a single tab instead of arbitrary white space.
+

+

-p

+

+
Display numbers in parsable (exact) values.
+

+

-r

+

+
Recursively display any children of the dataset on the + command line.
+

+

-d depth

+

+
Recursively display any children of the dataset, limiting + the recursion to depth. A depth of 1 will display only the + dataset and its direct children.
+

+

-o property

+

+
A comma-separated list of properties to display. The + property must be: +
+
+
+
One of the properties described in the "Native Properties" + section
+
+
+
+
+
+
A user property
+
+
+
+
+
+
The value name to display the dataset name
+
+
+
+
+
+
The value space to display space usage properties on file systems + and volumes. This is a shortcut for specifying -o + name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t + filesystem,volume syntax.
+
+
+
+

+

-s property

+

+
A property for sorting the output by column in ascending + order based on the value of the property. The property must be one of the + properties described in the "Properties" section, or the special + value name to sort by the dataset name. Multiple properties can be + specified at one time using multiple -s property options. Multiple + -s options are evaluated from left to right in decreasing order of + importance. +

The following is a list of sorting criteria:

+
+
+
+
Numeric types sort in numeric order.
+
+
+
+
+
+
String types sort in alphabetical order.
+
+
+
+
+
+
If a sort property does not apply to a dataset, that row sorts to the bottom, regardless of the specified ordering.
+
+
+
+
+
+
If no sorting options are specified the existing behavior of zfs + list is preserved.
+
+
+
+

+

-S property

+

+
Same as the -s option, but sorts by property in + descending order.
+

+

-t type

+

+
A comma-separated list of types to display, where + type is one of filesystem, snapshot, snap, + volume, bookmark, or all. For example, specifying -t + snapshot displays only snapshots.
+

+
+

+

zfs set property=value + filesystem|volume|snapshot ...

+

+
Sets the property to the given value for each dataset. + Only some properties can be edited. See the "Properties" section for + more information on what properties can be set and acceptable values. Numeric + values can be specified as exact values, or in a human-readable form with a + suffix of B, K, M, G, T, P, + E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, + petabytes, exabytes, or zettabytes, respectively). User properties can be set + on snapshots. For more information, see the "User Properties" + section.
+

+

zfs get [-r|-d depth] [-Hp] [-o field[,...]] [-t type[,...]] [-s source[,...]] "all" | property[,...] filesystem|volume|snapshot ...

+

+
Displays properties for the given datasets. If no + datasets are specified, then the command displays properties for all datasets + on the system. For each property, the following columns are displayed: +

+
+

+
+
+ name Dataset name +
+ property Property name +
+ value Property value +
+ source Property source. Can either be local, default, +
+ temporary, inherited, received, or none (-).
+
+

+

All columns are displayed by default, though this can be + controlled by using the -o option. This command takes a + comma-separated list of properties as described in the "Native + Properties" and "User Properties" sections.

+

The special value all can be used to display all properties that apply to the given dataset's type (filesystem, volume, snapshot, or bookmark).

+

-r

+

+
Recursively display properties for any children.
+

+

-d depth

+

+
Recursively display any children of the dataset, limiting + the recursion to depth. A depth of 1 will display only the + dataset and its direct children.
+

+

-H

+

+
Display output in a form more easily parsed by scripts. + Any headers are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+

+

-o field

+

+
A comma-separated list of columns to display. + name,property,value,source is the default value.
+

+

-s source

+

+
A comma-separated list of sources to display. Those + properties coming from a source other than those in this list are ignored. + Each source must be one of the following: + local,default,inherited,received,temporary,none. The default value is + all sources.
+

+

-p

+

+
Display numbers in parsable (exact) values.
+

+
+

+

zfs inherit [-rS] property + filesystem|volume|snapshot ...

+

+
Clears the specified property, causing it to be inherited + from an ancestor, restored to default if no ancestor has the property set, or + with the -S option reverted to the received value if one exists. See + the "Properties" section for a listing of default values, and + details on which properties can be inherited. +

-r

+

+
Recursively inherit the given property for all + children.
+

-S

+

+
Revert the property to the received value if one exists; + otherwise operate as if the -S option was not specified.
+

+
+

+

zfs upgrade [-v]

+

+
Displays a list of file systems that are not the most + recent version.
+

+

zfs upgrade [-r] [-V version] + [-a | filesystem]

+

+
Upgrades file systems to a new on-disk version. Once this + is done, the file systems will no longer be accessible on systems running + older versions of the software. zfs send streams generated from new + snapshots of these file systems cannot be accessed on systems running older + versions of the software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the zpool upgrade + command.

+

In some cases, the file system version and the pool version are + interrelated and the pool version must be upgraded before the file system + version can be upgraded.

+

-a

+

+
Upgrade all file systems on all imported pools.
+

+

filesystem

+

+
Upgrade the specified file system.
+

+

-r

+

+
Upgrade the specified file system and all descendent file systems.
+

+

-V version

+

+
Upgrade to the specified version. If the -V + flag is not specified, this command upgrades to the most recent version. This + option can only be used to increase the version number, and only up to the + most recent version supported by this software.
+

+
+

+

zfs userspace [-Hinp] [-o + field[,...]] [-s field] ... [-S field] + ... [-t type[,...]] filesystem|snapshot

+

+
Displays space consumed by, and quotas on, each user in + the specified filesystem or snapshot. This corresponds to the + userused@user and userquota@user properties. +

-n

+

+
Print numeric ID instead of user/group name.
+

+

-H

+

+
Do not print headers, use tab-delimited output.
+

+

-p

+

+
Use exact (parsable) numeric output.
+

+

-o field[,...]

+

+
Display only the specified fields from the following set: + type, name, used, quota. The default is to display all fields.
+

+

-s field

+

+
Sort output by this field. The -s and -S flags may be specified multiple times to sort first by one field, then by another. The default is -s type -s name.
+

+

-S field

+

+
Sort by this field in reverse order. See -s.
+

+

-t type[,...]

+

+
Print only the specified types from the following set: + all, posixuser, smbuser, posixgroup, smbgroup. The default is -t + posixuser,smbuser. The default can be changed to include group + types.
+

+

-i

+

+
Translate SID to POSIX ID. The POSIX ID may be ephemeral + if no mapping exists. Normal POSIX interfaces (for example, stat(2), + ls -l) perform this translation, so the -i option allows + the output from zfs userspace to be compared directly with those + utilities. However, -i may lead to confusion if some files were created + by an SMB user before a SMB-to-POSIX name mapping was established. In such a + case, some files will be owned by the SMB entity and some by the POSIX entity. + However, the -i option will report that the POSIX entity has the total + usage and quota for both.
+

+
+

+

zfs groupspace [-Hinp] [-o + field[,...]] [-s field] ... [-S field] + ... [-t type[,...]] filesystem|snapshot

+

+
Displays space consumed by, and quotas on, each group in + the specified filesystem or snapshot. This subcommand is identical to zfs + userspace, except that the default types to display are -t + posixgroup,smbgroup.
+

+

zfs mount

+

+
Displays all ZFS file systems currently + mounted.
+

+

zfs mount [-vO] [-o options] + -a | filesystem

+

+
Mounts ZFS file systems. Invoked automatically as + part of the boot process. +

-o options

+

+
An optional, comma-separated list of mount options to use + temporarily for the duration of the mount. See the "Temporary Mount Point + Properties" section for details.
+

+

-O

+

+
Perform an overlay mount. See mount(8) for more + information.
+

+

-v

+

+
Report mount progress.
+

+

-a

+

+
Mount all available ZFS file systems. Invoked + automatically as part of the boot process.
+

+

filesystem

+

+
Mount the specified filesystem.
+

+
+

+

zfs unmount [-f] -a | + filesystem|mountpoint

+

+
Unmounts currently mounted ZFS file systems. + Invoked automatically as part of the shutdown process. +

-f

+

+
Forcefully unmount the file system, even if it is + currently in use.
+

+

-a

+

+
Unmount all available ZFS file systems. Invoked automatically as part of the shutdown process.
+

+

filesystem|mountpoint

+

+
Unmount the specified filesystem. The command can also be + given a path to a ZFS file system mount point on the system.
+

+
+

+

zfs share -a | filesystem

+

+
Shares available ZFS file systems. +

-a

+

+
Share all available ZFS file systems. Invoked + automatically as part of the boot process.
+

+

filesystem

+

+
Share the specified filesystem according to the + sharenfs and sharesmb properties. File systems are shared when + the sharenfs or sharesmb property is set.
+

+
+

+

zfs unshare -a | + filesystem|mountpoint

+

+
Unshares currently shared ZFS file systems. This + is invoked automatically as part of the shutdown process. +

-a

+

+
Unshare all available ZFS file systems. Invoked automatically as part of the shutdown process.
+

+

filesystem|mountpoint

+

+
Unshare the specified filesystem. The command can also be + given a path to a ZFS file system shared on the system.
+

+
+

+

zfs bookmark snapshot bookmark

+

+
Creates a bookmark of the given snapshot. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs send command. +

This feature must be enabled to be used. See + zpool-features(5) for details on ZFS feature flags and the + bookmarks feature.
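A sketch (names and the output path are placeholders) of creating a bookmark and later using it as an incremental source, even after the original snapshot has been destroyed:

    zfs bookmark tank/fs@monday tank/fs#monday
    zfs destroy tank/fs@monday                       # the bookmark remains usable
    zfs send -i tank/fs#monday tank/fs@tuesday > /backup/fs-incr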

+
+

+

+

zfs send [-DnPpRveL] [-[iI] + snapshot] snapshot

+

+
Creates a stream representation of the second snapshot, which is written to standard output. The output can be redirected to a file or to a different system (for example, using ssh(1)). By default, a full stream is generated.
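For example (host and dataset names are placeholders), a full stream can be piped to another system over ssh; the target dataset must not already exist when a full stream is received:

    zfs snapshot tank/data@today
    zfs send tank/data@today | ssh backuphost zfs receive backuppool/data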

-i snapshot

+

+
Generate an incremental stream from the first + snapshot (the incremental source) to the second snapshot (the + incremental target). The incremental source can be specified as the last + component of the snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin + snapshot, which must be fully specified (for example, pool/fs@origin, + not just @origin).

+
+

+

-I snapshot

+

+
Generate a stream package that sends all intermediary + snapshots from the first snapshot to the second snapshot. For example, -I + @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The + incremental source may be specified as with the -i option.
+

+

-R

+

+
Generate a replication stream package, which will + replicate the specified filesystem, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent file + systems, and clones are preserved. +

If the -i or -I flags are used in conjunction with + the -R flag, an incremental replication stream is generated. The + current values of properties, and current snapshot and file system names are + set when the stream is received. If the -F flag is specified when + this stream is received, snapshots and file systems that do not exist on the + sending side are destroyed.

+
+

+

-D

+

+
Generate a deduplicated stream. Blocks which would have + been sent multiple times in the send stream will only be sent once. The + receiving system must also support this feature to receive a deduplicated + stream. This flag can be used regardless of the dataset's dedup property, but + performance will be much better if the filesystem uses a dedup-capable + checksum (eg. sha256).
+

+

-L

+

+
Generate a stream which may contain blocks larger than + 128KB. This flag has no effect if the large_blocks pool feature is + disabled, or if the recordsize property of this filesystem has never been set + above 128KB. The receiving system must have the large_blocks pool + feature enabled as well. See zpool-features(5) for details on ZFS + feature flags and the large_blocks feature.
+

+

-e

+

+
Generate a more compact stream by using WRITE_EMBEDDED + records for blocks which are stored more compactly on disk by the + embedded_data pool feature. This flag has no effect if the + embedded_data feature is disabled. The receiving system must have the + embedded_data feature enabled. If the lz4_compress feature is + active on the sending system, then the receiving system must have that feature + enabled as well. See zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+

+

-p

+

+
Include the dataset's properties in the stream. This flag + is implicit when -R is specified. The receiving system must also support this + feature.
+

+

-n

+

+
Do a dry-run ("No-op") send. Do not generate + any actual send data. This is useful in conjunction with the -v or + -P flags to determine what data will be sent. In this case, the verbose + output will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes to + standard error).
+

+

-P

+

+
Print machine-parsable verbose information about the + stream package generated.
+

+

-v

+

+
Print verbose information about the stream package + generated. This information includes a per-second report of how much data has + been sent.
+

The format of the stream is committed. You will be able to receive + your streams on future versions of ZFS.

+
+

+

zfs send [-eL] [-i + snapshot|bookmark] + filesystem|volume|snapshot

+

+
Generate a send stream, which may be of a filesystem, and + may be incremental from a bookmark. If the destination is a filesystem or + volume, the pool must be read-only, or the filesystem must not be mounted. + When the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +

+

-i snapshot|bookmark

+

+
Generate an incremental send stream. The incremental + source must be an earlier snapshot in the destination's history. It will + commonly be an earlier snapshot in the destination's filesystem, in which case + it can be specified as the last component of the name (the # or + @ character and following). +

If the incremental target is a clone, the incremental source can + be the origin snapshot, or an earlier snapshot in the origin's filesystem, + or the origin's origin, etc.

+
+

+

-L

+

+
Generate a stream which may contain blocks larger than + 128KB. This flag has no effect if the large_blocks pool feature is + disabled, or if the recordsize property of this filesystem has never been set + above 128KB. The receiving system must have the large_blocks pool + feature enabled as well. See zpool-features(5) for details on ZFS + feature flags and the large_blocks feature.
+

+

-e

+

+
Generate a more compact stream by using WRITE_EMBEDDED + records for blocks which are stored more compactly on disk by the + embedded_data pool feature. This flag has no effect if the + embedded_data feature is disabled. The receiving system must have the + embedded_data feature enabled. If the lz4_compress feature is + active on the sending system, then the receiving system must have that feature + enabled as well. See zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+

+
+

zfs receive [-vnFu] + filesystem|volume|snapshot +
+ zfs receive [-vnFu] [-d|-e] + filesystem

+

+
Creates a snapshot whose contents are as specified in the + stream provided on standard input. If a full stream is received, then a new + file system is created as well. Streams are created using the zfs send + subcommand, which by default creates a full stream. zfs recv can be + used as an alias for zfs receive. +

If an incremental stream is received, then the destination file + system must already exist, and its most recent snapshot must match the + incremental stream's source. For zvols, the destination device link + is destroyed and recreated, which means the zvol cannot be accessed + during the receive operation.

+

When a snapshot replication package stream that is generated by + using the zfs send -R command is received, any snapshots that + do not exist on the sending location are destroyed by using the zfs + destroy -d command.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and the + use of the -d or -e options.

+

If the argument is a snapshot name, the specified snapshot + is created. If the argument is a file system or volume name, a snapshot with + the same name as the sent snapshot is created within the specified + filesystem or volume. If neither of the -d or -e + options are specified, the provided target snapshot name is used exactly as + provided.

+

The -d and -e options cause the file system name of + the target snapshot to be determined by appending a portion of the sent + snapshot's name to the specified target filesystem. If the -d + option is specified, all but the first element of the sent snapshot's file + system path (usually the pool name) is used and any required intermediate + file systems within the specified one are created. If the -e option + is specified, then only the last element of the sent snapshot's file system + name (i.e. the name of the source file system itself) is used as the target + file system name.
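To make the naming concrete, suppose a stream of poolA/fsA/fsB@snap is received into poolB/backup (an assumed, pre-existing target): with -d the snapshot is created as poolB/backup/fsA/fsB@snap, while with -e it is created as poolB/backup/fsB@snap. For example:

# zfs send poolA/fsA/fsB@snap | zfs receive -e poolB/backup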

+

-d

+

+
Discard the first element of the sent snapshot's file + system name, using the remaining elements to determine the name of the target + file system for the new snapshot as described in the paragraph above.
+

+

-e

+

+
Discard all but the last element of the sent snapshot's + file system name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+

+

-u

+

+
The file system associated with the received stream is not mounted.
+

+

-v

+

+
Print verbose information about the stream and the time + required to perform the receive operation.
+

+

-n

+

+
Do not actually receive the stream. This can be useful in + conjunction with the -v option to verify the name the receive operation + would use.
+

+

-F

+

+
Force a rollback of the file system to the most recent + snapshot before performing the receive operation. If receiving an incremental + replication stream (for example, one generated by zfs send -R -[iI]), + destroy snapshots and file systems that do not exist on the sending + side.
+

+
+

+

zfs allow filesystem | volume

+

+
Displays permissions that have been delegated on the + specified filesystem or volume. See the other forms of zfs allow for + more information.
+

+

zfs allow [-ldug] + "everyone"|user|group[,...] + perm|@setname[,...] filesystem| volume +
+ zfs allow [-ld] -e + perm|@setname[,...] filesystem | volume

+

+
Delegates ZFS administration permission for the + file systems to non-privileged users. +

[-ug] + "everyone"|user|group[,...]

+

+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -ug options are specified, then the argument is interpreted preferentially as the keyword "everyone", then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -u or -g options. To specify a group with the same name as a user, use the -g option.
+

+

[-e] perm|@setname[,...]

+

+
Specifies that the permissions be delegated to + "everyone." Multiple permissions may be specified as a + comma-separated list. Permission names are the same as ZFS subcommand + and property names. See the property list below. Property set names, which + begin with an at sign (@) , may be specified. See the -s form + below for details.
+

+

[-ld] filesystem|volume

+

+
Specifies where the permissions are delegated. If neither of the -ld options are specified, or both are, then the permissions are allowed for the file system or volume, and all of its descendents. If only the -l option is used, then the permissions are allowed "locally" only for the specified file system. If only the -d option is used, then the permissions are allowed only for the descendent file systems.
+

+
+

+

+

Permissions are generally the ability to use a ZFS + subcommand or change a ZFS property. The following permissions are + available:

+

+
+

+
NAME              TYPE           NOTES
allow             subcommand     Must also have the permission that is being
                                 allowed
clone             subcommand     Must also have the 'create' ability and 'mount'
                                 ability in the origin file system
create            subcommand     Must also have the 'mount' ability
destroy           subcommand     Must also have the 'mount' ability
diff              subcommand     Allows lookup of paths within a dataset
                                 given an object number, and the ability to
                                 create snapshots necessary to 'zfs diff'.
mount             subcommand     Allows mount/umount of ZFS datasets
promote           subcommand     Must also have the 'mount'
                                 and 'promote' ability in the origin file system
receive           subcommand     Must also have the 'mount' and 'create' ability
rename            subcommand     Must also have the 'mount' and 'create'
                                 ability in the new parent
rollback          subcommand     Must also have the 'mount' ability
send              subcommand
share             subcommand     Allows sharing file systems over NFS or SMB
                                 protocols
snapshot          subcommand     Must also have the 'mount' ability
groupquota        other          Allows accessing any groupquota@... property
groupused         other          Allows reading any groupused@... property
userprop          other          Allows changing any user property
userquota         other          Allows accessing any userquota@... property
userused          other          Allows reading any userused@... property
acltype           property
aclinherit        property
atime             property
canmount          property
casesensitivity   property
checksum          property
compression       property
copies            property
dedup             property
devices           property
exec              property
filesystem_limit  property
logbias           property
mlslabel          property
mountpoint        property
nbmand            property
normalization     property
primarycache      property
quota             property
readonly          property
recordsize        property
refquota          property
refreservation    property
reservation       property
secondarycache    property
setuid            property
shareiscsi        property
sharenfs          property
sharesmb          property
snapdir           property
snapshot_limit    property
utf8only          property
version           property
volblocksize      property
volsize           property
vscan             property
xattr             property
zoned             property
+
+

+

+

zfs allow -c + perm|@setname[,...] filesystem|volume

+

+
Sets "create time" permissions. These + permissions are granted (locally) to the creator of any newly-created + descendent file system.
+

+

zfs allow -s @setname + perm|@setname[,...] filesystem|volume

+

+
Defines or adds permissions to a permission set. The set + can be used by other zfs allow commands for the specified file system + and its descendents. Sets are evaluated dynamically, so changes to a set are + immediately reflected. Permission sets follow the same naming restrictions as + ZFS file systems, but the name must begin with an "at sign" + (@), and can be no more than 64 characters long.
+

+

zfs unallow [-rldug] + "everyone"|user|group[,...] + [perm|@setname[, ...]] filesystem|volume +
+ zfs unallow [-rld] -e [perm|@setname + [,...]] filesystem|volume +
+ zfs unallow [-r] -c + [perm|@setname[,...]] +
+ filesystem|volume

+

+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, a permission may still be granted by an ancestor. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying "everyone" (or using the -e option) only removes the permissions that were granted to "everyone", not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.

-r

+

+
Recursively remove the permissions from this file system + and all descendents.
+

+
+

+

zfs unallow [-r] -s @setname + [perm|@setname[,...]] +
+ filesystem|volume

+

+
Removes permissions from a permission set. If no + permissions are specified, then all permissions are removed, thus removing the + set entirely.
+

+

zfs hold [-r] tag + snapshot...

+

+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its own + tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that snapshot + by using the zfs destroy command return EBUSY.
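As a brief illustration (the tag and snapshot names are arbitrary):

# zfs hold keep pool/home/bob@yesterday
# zfs holds pool/home/bob@yesterday
# zfs release keep pool/home/bob@yesterday

While the hold named keep is in place, zfs destroy of that snapshot fails with EBUSY; after the release, the snapshot can be destroyed normally.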

+

-r

+

+
Specifies that a hold with the given tag is applied + recursively to the snapshots of all descendent file systems.
+

+
+

+

zfs holds [-r] snapshot...

+

+
Lists all existing user references for the given snapshot + or snapshots. +

-r

+

+
Lists the holds that are set on the named descendent + snapshots, in addition to listing the holds on the named snapshot.
+

+
+

+

zfs release [-r] tag + snapshot...

+

+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already exist + for each snapshot. +

If a hold exists on a snapshot, attempts to destroy that snapshot + by using the zfs destroy command return EBUSY.

+

-r

+

+
Recursively releases a hold with the given tag on the + snapshots of all descendent file systems.
+

+
+

+

zfs diff [-FHt] snapshot + snapshot|filesystem

+

+
Display the difference between a snapshot of a given + filesystem and another snapshot of that filesystem from a later time or the + current contents of the filesystem. The first column is a character indicating + the type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change time. +

The types of change are: +
+

+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+

-F

+

+
Display an indication of the type of file, in a manner + similar to the -F option of ls(1). +
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+

-H

+

+
Give more parsable tab-separated output, without header + lines and without arrows.
+

-t

+

+
Display the path's inode change time as the first column + of output.
+

+
+
+
+

+

Example 1 Creating a ZFS File System Hierarchy

+

+

The following commands create a file system named pool/home + and a file system named pool/home/bob. The mount point + /export/home is set for the parent file system, and is automatically + inherited by the child file system.

+

+

+
+

+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+

+

+

Example 2 Creating a ZFS Snapshot

+

+

The following command creates a snapshot named yesterday. + This snapshot is mounted on demand in the .zfs/snapshot directory at + the root of the pool/home/bob file system.

+

+

+
+

+
# zfs snapshot pool/home/bob@yesterday
+
+

+

+

Example 3 Creating and Destroying Multiple Snapshots

+

+

The following command creates snapshots named yesterday of + pool/home and all of its descendent file systems. Each snapshot is + mounted on demand in the .zfs/snapshot directory at the root of its + file system. The second command destroys the newly created snapshots.

+

+

+
+

+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+

+

+

Example 4 Disabling and Enabling File System + Compression

+

+

The following command disables the compression property for + all file systems under pool/home. The next command explicitly enables + compression for pool/home/anne.

+

+

+
+

+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+

+

+

Example 5 Listing ZFS Datasets

+

+

The following command lists all active file systems and volumes in + the system. Snapshots are displayed if the listsnaps property is + on. The default is off. See zpool(8) for more + information on pool properties.

+

+

+
+

+
# zfs list
+
NAME             USED  AVAIL  REFER  MOUNTPOINT
pool             450K   457G    18K  /pool
pool/home        315K   457G    21K  /export/home
pool/home/anne    18K   457G    18K  /export/home/anne
pool/home/bob    276K   457G   276K  /export/home/bob
+
+

+

+

Example 6 Setting a Quota on a ZFS File System

+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob.

+

+

+
+

+
# zfs set quota=50G pool/home/bob
+
+

+

+

Example 7 Listing ZFS Properties

+

+

The following command lists all properties for + pool/home/bob.

+

+

+
+

+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  shareiscsi            off                    default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+pool/home/bob  logbias               latency                default
+pool/home/bob  dedup                 off                    default
+pool/home/bob  mlslabel              none                   default
+pool/home/bob  relatime              off                    default
+
+

+

+

+

The following command gets a single property value.

+

+

+
+

+
# zfs get -H -o value compression pool/home/bob
+on
+
+

+

+

+

The following command lists all properties with local settings for + pool/home/bob.

+

+

+
+

+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+

+

+

Example 8 Rolling Back a ZFS File System

+

+

The following command reverts the contents of + pool/home/anne to the snapshot named yesterday, deleting all + intermediate snapshots.

+

+

+
+

+
# zfs rollback -r pool/home/anne@yesterday
+
+

+

+

Example 9 Creating a ZFS Clone

+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+

+

+
+

+
# zfs clone pool/home/bob@yesterday pool/clone
+
+

+

+

Example 10 Promoting a ZFS Clone

+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+

+

+
+

+
# zfs create pool/project/production
populate /pool/project/production with data
# zfs snapshot pool/project/production@today
# zfs clone pool/project/production@today pool/project/beta
make changes to /pool/project/beta and test them
# zfs promote pool/project/beta
# zfs rename pool/project/production pool/project/legacy
# zfs rename pool/project/beta pool/project/production
once the legacy version is no longer needed, it can be destroyed
# zfs destroy pool/project/legacy
+
+

+

+

Example 11 Inheriting ZFS Properties

+

+

The following command causes pool/home/bob and + pool/home/anne to inherit the checksum property from their + parent.

+

+

+
+

+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+

+

The following command causes pool/home/bob to revert to the + received value for the quota property if it exists.

+

+

+
+

+
# zfs inherit -S quota pool/home/bob
+
+

+

+

Example 12 Remotely Replicating ZFS Data

+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+

+

+
+

+
# zfs send pool/fs@a | \
   ssh host zfs receive poolB/received/fs@a
# zfs send -i a pool/fs@b | ssh host \
   zfs receive poolB/received/fs
+
+

+

+

Example 13 Using the zfs receive -d + Option

+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it into + poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the + received snapshot's name is determined from the name of the sent snapshot. + poolB must contain the file system poolB/received. If + poolB/received/fsA does not exist, it is created as an empty file + system.

+

+

+
+

+
# zfs send poolA/fsA/fsB@snap | \
+
+ ssh host zfs receive -d poolB/received
+
+

+

+

Example 14 Setting User Properties

+

+

The following example sets the user-defined + com.example:department property for a dataset.

+

+

+
+

+
# zfs set com.example:department=12345 tank/accounting
+
+

+

+

Example 15 Creating a ZFS Volume as an iSCSI Target + Device

+

+

The following example shows how to create a ZFS volume as + an iSCSI target.

+

+

+
+

+
# zfs create -V 2g pool/volumes/vol1
+# zfs set shareiscsi=on pool/volumes/vol1
+# iscsitadm list target
Target: pool/volumes/vol1
    iSCSI Name: iqn.1986-03.com.sun:02:7b4b02a6-3277-eb1b-e686-a24762c52a8c
    Connections: 0
+
+

+

+

+

After the iSCSI target is created, set up the iSCSI + initiator. For more information about the Solaris iSCSI initiator, + see iscsitadm(1M).

+

Example 16 Performing a Rolling Snapshot

+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+

+

+
+

+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+

+

+

Example 17 Setting sharenfs Property Options on a + ZFS File System

+

+

The following commands show how to set sharenfs property + options to enable rw access for a set of IP addresses and to + enable root access for system neo on the tank/home file + system.

+

+

+
+

+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

+

+

+

If you are using DNS for host name resolution, specify the + fully qualified hostname.

+

+

Example 18 Delegating ZFS Administration Permissions on a + ZFS Dataset

+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on tank/cindys are also + displayed.

+

+

+
+

+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+-------------------------------------------------------------
Local+Descendent permissions on (tank/cindys)
        user cindys create,destroy,mount,snapshot
-------------------------------------------------------------
+
+

+

+

+

Because the tank/cindys mount point permission is set to + 755 by default, user cindys will be unable to mount file systems + under tank/cindys. Set an ACL similar to the following syntax + to provide mount point access:

+

+
+

+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+

+

+

Example 19 Delegating Create Time Permissions on a ZFS + Dataset

+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.

+

+

+
+

+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+-------------------------------------------------------------
Create time permissions on (tank/users)
        create,destroy
Local+Descendent permissions on (tank/users)
        group staff create,mount
-------------------------------------------------------------
+
+

+

+

Example 20 Defining and Granting a Permission Set on a ZFS + Dataset

+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+

+

+
+

+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+-------------------------------------------------------------
Permission sets on (tank/users)
        @pset create,destroy,mount,snapshot
Create time permissions on (tank/users)
        create,destroy
Local+Descendent permissions on (tank/users)
        group staff @pset,create,mount
-------------------------------------------------------------
+
+

+

+

Example 21 Delegating Property Permissions on a ZFS + Dataset

+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+

+

+
+

+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+-------------------------------------------------------------
Local+Descendent permissions on (users/home)
        user cindys quota,reservation
-------------------------------------------------------------
cindys% zfs set quota=10G users/home/marks
cindys% zfs get quota users/home/marks
NAME              PROPERTY  VALUE  SOURCE
users/home/marks  quota     10G    local
+
+

+

+

Example 22 Removing ZFS Delegated Permissions on a ZFS + Dataset

+

+

The following example shows how to remove the snapshot permission + from the staff group on the tank/users file system. The + permissions on tank/users are also displayed.

+

+

+
+

+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+-------------------------------------------------------------
Permission sets on (tank/users)
        @pset create,destroy,mount,snapshot
Create time permissions on (tank/users)
        create,destroy
Local+Descendent permissions on (tank/users)
        group staff @pset,create,mount
-------------------------------------------------------------
+
+

+

+

Example 23 Showing the differences between a snapshot and a + ZFS Dataset

+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS Dataset and its current state. The -F option + is used to indicate type information for the files affected.

+

+

+
+

+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+

+

+

Example 24 Creating a bookmark

+

+

The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot in send streams.

+

+

+
+

+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+

+

+
+
+

+
+
+
Cause zfs to dump core on exit for the purposes of running + ::findleaks. +

+
+
+
+
+

+

The following exit values are returned:

+

0

+

+
Successful completion.
+

+

1

+

+
An error occurred.
+

+

2

+

+
Invalid command line options were specified.
+

+
+
+

+

chmod(2), fsync(2), gzip(1), mount(8), + ssh(1), stat(2), write(2), zpool(8)

+
+
+ + + + + +
November 19, 2013ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zinject.8.html b/man/v0.6/8/zinject.8.html new file mode 100644 index 000000000..b8a4a2341 --- /dev/null +++ b/man/v0.6/8/zinject.8.html @@ -0,0 +1,290 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
zinject(8)System Administration Commandszinject(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.
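As a rough illustration (the device name sdb, the pool name tank, and the error type chosen here are placeholders), one might inject I/O errors on a vdev, list the active injection handlers, and then clear them all:

# zinject -d sdb -e io tank
# zinject
# zinject -c all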

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkd [-f + frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-F] + pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-e device_error] [-f + frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-e device_error] [-f + frequency] [-l level] [-r range] + [-amqu] pool
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in hexadecimal, and only one block can be specified.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, dtl for an ECHILD + error, io for an EIO error where reopening the device will succeed, + or nxio for an ENXIO error where reopening the device will + fail.
+
+
Only inject errors a fraction of the time. Expressed as an integer + percentage between 1 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
+
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zpool.8.html b/man/v0.6/8/zpool.8.html new file mode 100644 index 000000000..053ef6716 --- /dev/null +++ b/man/v0.6/8/zpool.8.html @@ -0,0 +1,1980 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
zpool(8)System Administration Commandszpool(8)
+
+
+

+

zpool - configures ZFS storage pools

+
+
+

+
zpool [-?]
+

+

+
zpool add [-fgLnP] [-o property=value] pool vdev ...
+

+

+
zpool attach [-f] [-o property=value] pool device new_device
+

+

+
zpool clear pool [device]
+

+

+
zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
+
+ ... [-m mountpoint] [-R root] [-t tname] pool vdev ...
+

+

+
zpool destroy [-f] pool
+

+

+
zpool detach pool device
+

+

+
zpool events [-vHfc] [pool] ...
+

+

+
zpool export [-a] [-f] pool ...
+

+

+
zpool get [-pH] "all" | property[,...] pool ...
+

+

+
zpool history [-il] [pool] ...
+

+

+
zpool import [-d dir] [-D]
+

+

+
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
+
+ [-D] [-f] [-m] [-N] [-R root] [-F [-n] [-X] [-T]] -a
+

+

+
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
+
+ [-D] [-f] [-m] [-R root] [-F [-n] [-X] [-T]] [-t]] pool |id [newpool]
+

+

+
zpool iostat [-T d | u ] [-gLPvy] [pool] ... [interval[count]]
+

+

+
zpool labelclear [-f] device
+

+

+
zpool list [-T d | u ] [-HgLPv] [-o property[,...]] [pool] ...
+
+ [interval[count]]
+

+

+
zpool offline [-t] pool device ...
+

+

+
zpool online pool device ...
+

+

+
zpool reguid pool
+

+

+
zpool reopen pool
+

+

+
zpool remove pool device ...
+

+

+
zpool replace [-f] [-o property=value]  pool device [new_device]
+

+

+
zpool scrub [-s] pool ...
+

+

+
zpool set property=value pool
+

+

+
zpool split [-gLnP] [-R altroot] [-o property=value] pool newpool [device ...]
+

+

+
zpool status [-gLPvxD] [-T d | u] [pool] ... [interval [count]]
+

+

+
zpool upgrade 
+

+

+
zpool upgrade -v
+

+

+
zpool upgrade [-V version] -a | pool ...
+

+
+
+

+

The zpool command configures ZFS storage pools. A + storage pool is a collection of devices that provides physical storage and + data replication for ZFS datasets.

+

+

All datasets within a storage pool share the same space. See + zfs(8) for information on managing datasets.

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+

disk

+
A block device, typically located under /dev. + ZFS can use individual partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, or it + can be a shorthand name (the relative portion of the path under + "/dev"). For example, "sda" is equivalent to + "/dev/sda". A whole disk can be specified by omitting the partition + designation. When given a whole disk, ZFS automatically labels the + disk, if necessary.
+

+

file

+
A regular file. The use of files as a backing store is + strongly discouraged. It is designed primarily for experimental purposes, as + the fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+

+

mirror

+
A mirror of two or more devices. Data is replicated in an + identical fashion across all components of a mirror. A mirror with N + disks of size X can hold X bytes and can withstand (N-1) + devices failing before data integrity is compromised.
+

+

raidz +
+ raidz1 +
+ raidz2 +
+ raidz3

+
A variation on RAID-5 that allows for better + distribution of parity and eliminates the "RAID-5 write hole" + (in which data and parity become inconsistent after a power loss). Data and + parity is striped across all disks within a raidz group. +

A raidz group can have single-, double- , or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev type + specifies a single-parity raidz group; the raidz2 vdev + type specifies a double-parity raidz group; and the raidz3 + vdev type specifies a triple-parity raidz group. The + raidz vdev type is an alias for raidz1.

+

A raidz group with N disks of size X with + P parity disks can hold approximately (N-P)*X bytes and + can withstand P device(s) failing before data integrity is + compromised. The minimum number of devices in a raidz group is one + more than the number of parity disks. The recommended number is between 3 + and 9 to help increase performance.

+
+

+

spare

+
A special pseudo-vdev which keeps track of + available hot spares for a pool. For more information, see the "Hot + Spares" section.
+

+

log

+
A separate-intent log device. If more than one log device + is specified, then writes are load-balanced between devices. Log devices can + be mirrored. However, raidz vdev types are not supported for the + intent log. For more information, see the "Intent Log" + section.
+

+

cache

+
A device used to cache storage pool data. A cache device + cannot be configured as a mirror or raidz group. For more information, + see the "Cache Devices" section.
+

+

+

Virtual devices cannot be nested, so a mirror or raidz + virtual device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the + newly available devices.

+

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords "mirror" and + "raidz" are used to distinguish where a group ends and another + begins. For example, the following creates two root vdevs, each a mirror of + two disks:

+

+
+

+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+

+

+
+
+

+

ZFS supports a rich set of mechanisms for handling device + failure and data corruption. All metadata and data is checksummed, and + ZFS automatically repairs bad data from a good copy when corruption + is detected.

+

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. + While ZFS supports running in a non-redundant configuration, where + each root vdev is simply a disk or file, this is strongly discouraged. A + single case of bit corruption can render some or all of your data + unavailable.

+

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

+

The health of the top-level vdev, such as mirror or raidz + device, is potentially impacted by the state of its associated vdevs, or + component devices. A top-level vdev or component device is in one of the + following states:

+

DEGRADED

+
One or more top-level vdevs is in the degraded state + because one or more component devices are offline. Sufficient replicas exist + to continue functioning. +

One or more component devices is in the degraded or faulted state, + but sufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
+
+
+
The number of checksum errors exceeds acceptable levels and the device is + degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
+
+
+
+
+
+
The number of I/O errors exceeds acceptable levels. The device could not + be marked as faulted because there are insufficient replicas to continue + functioning.
+
+
+
+

+

FAULTED

+
One or more top-level vdevs is in the faulted state + because one or more component devices are offline. Insufficient replicas exist + to continue functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
+
+
+
The device could be opened, but the contents did not match expected + values.
+
+
+
+
+
+
The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
+
+
+
+

+

OFFLINE

+
The device was explicitly taken offline by the + "zpool offline" command.
+

+

ONLINE

+
The device is online and functioning.
+

+

REMOVED

+
The device was physically removed while the system was + running. Device removal detection is hardware-dependent and may not be + supported on all platforms.
+

+

UNAVAIL

+
The device could not be opened. If a pool is imported + when a device was unavailable, then the device will be identified by a unique + identifier instead of its path since the path was never correct in the first + place.
+

+

+

If a device is removed and later re-attached to the system, + ZFS attempts to put the device online automatically. Device attach + detection is hardware-dependent and might not be supported on all + platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a "spare" vdev with any + number of devices. For example,

+

+
+

+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

+

+

+

Spares can be shared across multiple pools, and can be added with + the "zpool add" command and removed with the "zpool + remove" command. Once a spare replacement is initiated, a new + "spare" vdev is created within the configuration that will + remain there until the original device is replaced. At this point, the hot + spare becomes available again.

+

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX + requirements for synchronous transactions. For instance, databases often + require their transactions to be on stable storage devices when returning + from a system call. NFS and other applications can also use + fsync() to ensure data stability. By default, the intent log is + allocated from blocks within the main pool. However, it might be possible to + get better performance using separate intent log devices such as + NVRAM or a dedicated disk. For example:

+

+
+

+
# zpool create pool sda sdb log sdc
+
+

+

+

+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an example of mirroring multiple log + devices.

+

+

Log devices can be added, replaced, attached, detached, and + imported and exported as part of the larger pool. Mirrored log devices can + be removed by specifying the top-level mirror for the log.

+
+
+

+

Devices can be added to a storage pool as "cache + devices." These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allow much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

+

To create a pool with cache devices, specify a "cache" + vdev with any number of devices. For example:

+

+
+

+
# zpool create pool sda sdb cache sdc sdd
+
+

+

+

+

Cache devices cannot be mirrored or part of a raidz + configuration. If a read error is encountered on a cache device, that read + I/O is reissued to the original storage pool device, which might be + part of a mirrored or raidz configuration.

+

+

The content of the cache devices is considered volatile, as is the + case with other system caches.

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool. The following are read-only properties:

+

available

+
Amount of storage available within the pool. This + property can also be referred to by its shortened column name, + "avail".
+

+

capacity

+
Percentage of pool space used. This property can also be + referred to by its shortened column name, "cap".
+

+

expandsize

+
Amount of uninitialized space within the pool or device + that can be used to increase the total capacity of the pool. Uninitialized + space consists of any space on an EFI labeled vdev which has not been brought + online (i.e. zpool online -e). This space occurs when a LUN is dynamically + expanded.
+

+

fragmentation

+
The amount of fragmentation in the pool.
+

+

free

+
The amount of free space available in the pool.
+

+

freeing

+
After a file system or snapshot is destroyed, the space + it was using is returned to the pool asynchronously. freeing is + the amount of space remaining to be reclaimed. Over time freeing + will decrease while free increases.
+

+

health

+
The current health of the pool. Health can be + "ONLINE", "DEGRADED", + "FAULTED", " OFFLINE", + "REMOVED", or "UNAVAIL".
+

+

guid

+
A unique identifier for the pool.
+

+

size

+
Total size of the storage pool.
+

+

unsupported@feature_guid

+
+

Information about unsupported features that are enabled on the + pool. See zpool-features(5) for details.

+
+

+

used

+
Amount of storage space used within the pool.
+

+

+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of + the data being written. In addition, ZFS reserves some space for + internal accounting that the zfs(8) command takes into account, but + the zpool command does not. For non-full pools of a reasonable size, + these effects should be invisible. For small pools, or pools that are close + to being completely full, these discrepancies may become more + noticeable.

+

+

+

The following property can be set at creation time:

+

ashift

+

+
Pool sector size exponent, to the power of 2 (internally + referred to as "ashift"). I/O operations will be aligned to the + specified size boundaries. Additionally, the minimum (disk) write size will be + set to the specified size, so this represents a space vs. performance + trade-off. The typical case for setting this property is when performance is + important and the underlying disks use 4KiB sectors but report 512B sectors to + the OS (for compatibility reasons); in that case, set ashift=12 (which + is 1<<12 = 4096). +

For optimal performance, the pool sector size should be greater + than or equal to the sector size of the underlying disks. Since the property + cannot be changed after pool creation, if in a given pool, you ever + want to use drives that report 4KiB sectors, you must set + ashift=12 at pool creation time.

+

Keep in mind that the ashift is vdev specific and not a pool global. This means that when adding new vdevs to an existing pool you may need to specify the ashift.
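A hedged example (device and pool names are placeholders): to force 4KiB alignment both at pool creation time and again when later growing the pool,

# zpool create -o ashift=12 tank mirror sda sdb
# zpool add -o ashift=12 tank mirror sdc sdd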

+
+

+

+

The following property can be set at creation time and import + time:

+

altroot

+

+
Alternate root directory. If set, this directory is + prepended to any mount points within the pool. This can be used when examining + an unknown pool where the mount points cannot be trusted, or in an alternate + boot environment, where the typical paths are not valid. altroot is not + a persistent property. It is valid only while the system is up. Setting + altroot defaults to using cachefile=none, though this may be + overridden using an explicit setting.
+

+

+

The following property can only be set at import time:

+

readonly=on | off

+

+
If set to on, the pool will be imported in + read-only mode: Synchronous data in the intent log will not be accessible, + properties of the pool can not be changed and datasets of the pool can only be + mounted read-only. The readonly property of its datasets will be + implicitly set to on. +

It can also be specified by its column name of rdonly.

+

To write to a read-only pool, an export and import of the pool is required.
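For instance (the pool name is illustrative), the pool is imported read-only and later made writable again by the export/import cycle:

# zpool import -o readonly=on tank
# zpool export tank
# zpool import tank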

+
+

+

+

The following properties can be set at creation time and import + time, and later changed with the zpool set command:

+

autoexpand=on | off

+

+
Controls automatic pool expansion when the underlying LUN + is grown. If set to on, the pool will be resized according to the size + of the expanded device. If the device is part of a mirror or raidz then + all devices within that mirror/raidz group must be expanded before the + new space is made available to the pool. The default behavior is off. + This property can also be referred to by its shortened column name, + expand.
+

+

autoreplace=on | off

+

+
Controls automatic device replacement. If set to + "off", device replacement must be initiated by the + administrator by using the "zpool replace" command. If set to + "on", any new device, found in the same physical location as + a device that previously belonged to the pool, is automatically formatted and + replaced. The default behavior is "off". This property can + also be referred to by its shortened column name, "replace".
+

+

bootfs=pool/dataset

+

+
Identifies the default bootable dataset for the root + pool. This property is expected to be set mainly by the installation and + upgrade programs.
+

+

cachefile=path | none

+

+
Controls the location of where the pool configuration is + cached. Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in this + cache are automatically imported when the system boots. Some environments, + such as install and clustering, need to cache this information in a different + location so that pools are not automatically imported. Setting this property + caches the pool configuration in a different location that can later be + imported with "zpool import -c". Setting it to the special + value "none" creates a temporary pool that is never cached, + and the special value '' (empty string) uses the default location. +

Multiple pools can share the same cache file. Because the kernel + destroys and recreates this file when pools are added and removed, care + should be taken when attempting to access this file. When the last pool + using a cachefile is exported or destroyed, the file is removed.
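A sketch (the cache file path is hypothetical): create a pool with a private cache file and later import it explicitly from that file,

# zpool create -o cachefile=/tmp/cluster.cache tank mirror sda sdb
# zpool import -c /tmp/cluster.cache tank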

+
+

+

comment=text

+

+
A text string consisting of printable ASCII characters + that will be stored such that it is available even if the pool becomes + faulted. An administrator can provide additional information about a pool + using this property.
+

+

dedupditto=number

+

+
Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0 which causes no ditto copies to be created for deduplicated blocks. The minimum legal nonzero setting is 100.
+

+

delegation=on | off

+

+
Controls whether a non-privileged user is granted access + based on the dataset permissions defined on the dataset. See zfs(8) for + more information on ZFS delegated administration.
+

+

failmode=wait | continue | + panic

+

+
Controls the system behavior in the event of catastrophic + pool failure. This condition is typically a result of a loss of connectivity + to the underlying storage device(s) or a failure of all devices within the + pool. The behavior of such an event is determined as follows: +

wait

+
Blocks all I/O access until the device + connectivity is recovered and the errors are cleared. This is the default + behavior.
+

+

continue

+
Returns EIO to any new write I/O requests + but allows reads to any of the remaining healthy devices. Any write requests + that have yet to be committed to disk would be blocked.
+

+

panic

+
Prints out a message to the console and generates a + system crash dump.
+

+
+

+

feature@feature_name=enabled

+
The value of this property is the current state of + feature_name. The only valid value when setting this property is + enabled which moves feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+

+

listsnaps=on | off

+

+
Controls whether information about snapshots associated + with this pool is output when "zfs list" is run without the + -t option. The default value is "off".
+

+

version=version

+

+
The current on-disk version of the pool. This can be + increased, but never decreased. The preferred method of updating pools is with + the "zpool upgrade" command, though this property can be used + when a specific version is needed for backwards compatibility. Once feature + flags are enabled on a pool this property will no longer have a value.
+

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

+

The zpool command provides subcommands to create and + destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+

zpool -?

+

+
Displays a help message.
+

+

zpool add [-fgLnP] [-o + property=value] pool vdev ...

+

+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the "Virtual Devices" + section. The behavior of the -f option, and the device checks performed + are described in the "zpool create" subcommand. +

-f

+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden in + this manner.
+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-n

+
Displays the configuration that would be used without + actually adding the vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-o property=value

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be set. + The only property supported at the moment is ashift. Do note + that some properties (among them ashift) are not inherited from + a previous vdev. They are vdev specific, not pool specific.
+

Do not add a disk that is currently configured as a quorum device + to a zpool. After a disk is in the pool, that disk can then be configured as + a quorum device.

+
+

+

zpool attach [-f] [-o + property=value] pool device new_device

+

+
Attaches new_device to an existing zpool + device. The existing device cannot be part of a raidz configuration. If + device is not currently part of a mirrored configuration, device + automatically transforms into a two-way mirror of device and + new_device. If device is part of a two-way mirror, attaching + new_device creates a three-way mirror, and so on. In either case, + new_device begins to resilver immediately. +
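For example (device names assumed), converting a single-disk pool into a two-way mirror:

# zpool attach tank sda sdb

attaches sdb as a mirror of the existing sda and begins resilvering immediately.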

-f

+
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
+

+

-o property=value

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be set. + The only property supported at the moment is "ashift".
+

+
+

+

zpool clear pool [device] ...

+

+
Clears device errors in a pool. If no arguments are + specified, all device errors within the pool are cleared. If one or more + devices is specified, only those errors associated with the specified device + or devices are cleared.
+

+

zpool create [-fnd] [-o + property=value] ... [-O file-system-property=value] ... + [-m mountpoint] [-R root] [-t + tname] pool vdev ...

+

+
Creates a new storage pool containing the virtual devices + specified on the command line. The pool name must begin with a letter, and can + only contain alphanumeric characters as well as underscore ("_"), + dash ("-"), period ("."), colon (":"), and space + (" "). The pool names "mirror", "raidz", + "spare" and "log" are reserved, as are names beginning + with the pattern "c[0-9]". The vdev specification is + described in the "Virtual Devices" section. +

The command verifies that each device specified is accessible and + not currently in use by another subsystem. There are some uses, such as + being currently mounted, or specified as the dedicated dump device, that + prevents a device from ever being used by ZFS. Other uses, such as + having a preexisting UFS file system, can be overridden with the + -f option.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently sized devices within a single + raidz or mirror group is also flagged as an error unless -f is + specified.

+

Unless the -R option is specified, the default mount point + is "/pool". The mount point must not exist or must be + empty, or else the root dataset cannot be mounted. This can be overridden + with the -m option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+

-f

+

+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden in + this manner.
+

+

-n

+

+
Displays the configuration that would be used without + actually creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+

+

-d

+

+
Do not enable any features on the new pool. Individual + features can be enabled by setting their corresponding properties to + enabled with the -o option. See zpool-features(5) for + details about feature properties.
+

+

-o property=value [-o + property=value] ...

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be + set.
+

+

-O file-system-property=value +
+ [-O file-system-property=value] ...

+

+
Sets the given file system properties in the root file + system of the pool. See the "Properties" section of zfs(8) + for a list of valid properties that can be set.
+

+

-R root

+

+
Equivalent to "-o + cachefile=none,altroot=root"
+

+

-m mountpoint

+

+
Sets the mount point for the root dataset. The default + mount point is "/pool" or + "altroot/pool" if altroot is specified. The + mount point must be an absolute path, "legacy", or + "none". For more information on dataset mount points, see + zfs(8).
+

+

-t tname

+

+
Sets the in-core pool name to "tname" + while the on-disk name will be the name specified as the pool name + "pool". This will set the default cachefile property to none. + This is intended to handle name space collisions when creating pools for other + systems, such as virtual machines or physical machines whose pools live on + network block devices.
+

+
+

+

zpool destroy [-f] pool

+

+
Destroys the given pool, freeing up any devices for other + use. This command tries to unmount any active datasets before destroying the + pool. +

-f

+
Forces any active datasets contained within the pool to + be unmounted.
+

+
+

+

zpool detach pool device

+

+
Detaches device from a mirror. The operation is refused if there are no other valid replicas of the data. If device may be re-added to the pool later, consider the "zpool offline" command instead.
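For example, to detach the hypothetical device sdb from a mirror in the pool tank:
# zpool detach tank sdb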
+

+

+

zpool events [-vHfc] [pool] ...

+

+
Description of the different events generated by the ZFS + kernel modules. See zfs-events(5) for more information about the + subclasses and event payloads that can be generated. +
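For example, to follow new events for a hypothetical pool as they are generated:
# zpool events -f tank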

+

-v

+
Show full details of the events and what information is available about them.
+

+

-H

+
Scripted mode. Do not display headers, and separate + fields by a single tab instead of arbitrary space.
+

+

-f

+
Follow mode.
+

+

-c

+
Clear all previous events.
+

+
+

+

zpool export [-a] [-f] pool + ...

+

+
Exports the given pools from the system. All devices are + marked as exported, but are still considered in use by other subsystems. The + devices can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the zpool command + whole disks, not just partitions, so that ZFS can label the disks + with portable EFI labels. Otherwise, disk drivers on platforms of + different endianness will not recognize the disks.

+

-a

+
Exports all pools imported on the system.
+

+

-f

+
Forcefully unmount all datasets, using the + "unmount -f" command. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential data + corruption.

+
+

+
+

+

zpool get [-p] "all" | + property[,...] pool ...

+

+
Retrieves the given list of properties (or all properties + if "all" is used) for the specified storage pool(s). These + properties are displayed with the following fields: +

+
+

+
+
+ name Name of storage pool +
+ property Property name +
+ value Property value +
+ source Property source, either 'default' or 'local'.
+
+

+

See the "Properties" section for more information on the + available pool properties.
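For example, to retrieve a few properties of a hypothetical pool in exact (parseable) form:
# zpool get -p size,capacity,health tank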

+

-p

+
Display numbers in parseable (exact) values.
+

+

-H

+
Scripted mode. Do not display headers, and separate + fields by a single tab instead of arbitrary space.
+

+
+

+

zpool history [-il] [pool] ...

+

+
Displays the command history of the specified pools or + all pools if no pool is specified. +
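For example, to display the long-format history of a hypothetical pool, including internally logged events:
# zpool history -il tank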

-i

+
Displays internally logged ZFS events in addition + to user initiated events.
+

+

-l

+
Displays log records in long format, which, in addition to the standard format, includes the user name, the hostname, and the zone in which the operation was performed.
+

+
+

+

zpool import [-d dir | -c + cachefile] [-D]

+

+
Lists pools available to import. If the -d option + is not specified, this command searches for devices in "/dev". The + -d option can be specified multiple times, and all directories are + searched. If the device appears to be part of an exported pool, this command + displays a summary of the pool with the name of the pool, a numeric + identifier, as well as the vdev layout and current health of the device + for each device or file. Destroyed pools, pools that were previously destroyed + with the "zpool destroy" command, are not listed unless the + -D option is specified. +

The numeric identifier is unique, and can be used instead of the + pool name when multiple exported pools of the same name are available.
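For example, to search a non-default directory for importable pools (the path shown is illustrative):
# zpool import -d /dev/disk/by-id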

+

-c cachefile

+
Reads configuration from the given cachefile that + was created with the "cachefile" pool property. This + cachefile is used instead of searching for devices.
+

+

-d dir

+
Searches for devices or files in dir. The + -d option can be specified multiple times.
+

+

-D

+
Lists destroyed pools only.
+

+
+

+

zpool import [-o mntopts] [ -o + property=value] ... [-d dir | -c + cachefile] [-D] [-f] [-m] [-N] [-R + root] [-F [-n]] -a

+

+
Imports all pools found in the search directories. + Identical to the previous command, except that all pools with a sufficient + number of devices available are imported. Destroyed pools, pools that were + previously destroyed with the "zpool destroy" command, will + not be imported unless the -D option is specified. +

-o mntopts

+
Comma-separated list of mount options to use when + mounting datasets within the pool. See zfs(8) for a description of + dataset properties and mount options.
+

+

-o property=value

+
Sets the specified property on the imported pool. See the + "Properties" section for more information on the available pool + properties.
+

+

-c cachefile

+
Reads configuration from the given cachefile that + was created with the "cachefile" pool property. This + cachefile is used instead of searching for devices.
+

+

-d dir

+
Searches for devices or files in dir. The + -d option can be specified multiple times. This option is incompatible + with the -c option.
+

+

-D

+
Imports destroyed pools only. The -f option is + also required.
+

+

-f

+
Forces import, even if the pool appears to be potentially + active.
+

+

-F

+
Recovery mode for a non-importable pool. Attempt to + return the pool to an importable state by discarding the last few + transactions. Not all damaged pools can be recovered by using this option. If + successful, the data from the discarded transactions is irretrievably lost. + This option is ignored if the pool is importable or already imported.
+

+

-a

+
Searches for and imports all pools found.
+

+

-m

+
Allows a pool to import when there is a missing log + device.
+

+

-R root

+
Sets the "cachefile" property to + "none" and the "altroot" property to + "root".
+

+

-N

+
Import the pool without mounting any file systems.
+

+

-n

+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does not + actually perform the pool recovery. For more details about pool recovery mode, + see the -F option, above.
+

+

-X

+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This allows + the pool to be rolled back to a txg which is no longer guaranteed to be + consistent. Pools imported at an inconsistent txg may contain uncorrectable + checksum errors. For more details about pool recovery mode, see the -F + option, above. WARNING: This option can be extremely hazardous to the + health of your pool and should only be used as a last resort.
+

+

-T

+
Specify the txg to use for rollback. Implies -FX. + For more details about pool recovery mode, see the -X option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+

+
+

+

zpool import [-o mntopts] [ -o + property=value] ... [-d dir | -c + cachefile] [-D] [-f] [-m] [-R + root] [-F [-n]] [-t]] pool | id + [newpool]

+

+
Imports a specific pool. A pool can be identified by its + name or the numeric identifier. If newpool is specified, the pool is + imported using the name newpool. Otherwise, it is imported with the + same name as its exported name. +

If a device is removed from a system without running + "zpool export" first, the device appears as potentially + active. It cannot be determined if this was a failed export, or whether the + device is really in use from another host. To import a pool in this state, + the -f option is required.
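For example, to import a hypothetical pool under an alternate root while renaming it:
# zpool import -R /mnt tank tank2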

+

-o mntopts

+

+
Comma-separated list of mount options to use when + mounting datasets within the pool. See zfs(8) for a description of + dataset properties and mount options.
+

+

-o property=value

+

+
Sets the specified property on the imported pool. See the + "Properties" section for more information on the available pool + properties.
+

+

-c cachefile

+

+
Reads configuration from the given cachefile that + was created with the "cachefile" pool property. This + cachefile is used instead of searching for devices.
+

+

-d dir

+

+
Searches for devices or files in dir. The + -d option can be specified multiple times. This option is incompatible + with the -c option.
+

+

-D

+

+
Imports a destroyed pool. The -f option is also required.
+

+

-f

+

+
Forces import, even if the pool appears to be potentially + active.
+

+

-F

+

+
Recovery mode for a non-importable pool. Attempt to + return the pool to an importable state by discarding the last few + transactions. Not all damaged pools can be recovered by using this option. If + successful, the data from the discarded transactions is irretrievably lost. + This option is ignored if the pool is importable or already imported.
+

+

-R root

+

+
Sets the "cachefile" property to + "none" and the "altroot" property to + "root".
+

+

-n

+

+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does not + actually perform the pool recovery. For more details about pool recovery mode, + see the -F option, above.
+

+

-X

+

+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This allows + the pool to be rolled back to a txg which is no longer guaranteed to be + consistent. Pools imported at an inconsistent txg may contain uncorrectable + checksum errors. For more details about pool recovery mode, see the -F + option, above. WARNING: This option can be extremely hazardous to the + health of your pool and should only be used as a last resort.
+

+

-T

+

+
Specify the txg to use for rollback. Implies -FX. + For more details about pool recovery mode, see the -X option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+

+

-t

+

+
Used with "newpool". Specifies that + "newpool" is temporary. Temporary pool names last until + export. Ensures that the original pool name will be used in all label updates + and therefore is retained upon export. Will also set -o cachefile=none when + not explicitly specified.
+

+

-m

+

+
Allows a pool to import when there is a missing log + device.
+

+
+

+

zpool iostat [-T d | u] + [-gLPvy] [pool] ... [interval[count]]

+

+
Displays I/O statistics for the given pools. When given an interval, the statistics are printed every interval seconds until Ctrl-C is pressed. If no pools are specified, statistics for every pool in the system are shown. If count is specified, the command exits after count reports are printed.

-T u | d

+
Display a time stamp. +

Specify u for a printed representation of the internal + representation of time. See time(2). Specify d for standard + date format. See date(1).

+
+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-v

+
Verbose statistics. Reports usage statistics for + individual vdevs within the pool, in addition to the pool-wide + statistics.
+

+

-y

+
Omit statistics since boot. Normally the first line of + output reports the statistics since boot. This option suppresses that first + line of output.
+

+
+

+

zpool labelclear [-f] device

+

+
Removes ZFS label information from the specified device. + The device must not be part of an active pool configuration. +
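For example, to clear stale label information from a hypothetical device that previously belonged to an exported pool:
# zpool labelclear -f /dev/sdc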

-f

+
Treat exported or foreign devices as inactive.
+

+
+

+

zpool list [-T d | u] + [-HgLPv] [-o props[,...]] [pool] ... + [interval[count]]

+

+
Lists the given pools along with a health status and + space usage. If no pools are specified, all pools in the system are + listed. When given an interval, the information is printed every + interval seconds until Ctrl-C is pressed. If count is + specified, the command exits after count reports are printed. +
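For example, to print a few selected fields for all pools in script-friendly form (the property selection is illustrative):
# zpool list -H -o name,size,capacity,health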

-H

+
Scripted mode. Do not display headers, and separate + fields by a single tab instead of arbitrary space.
+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

-T d | u

+
Display a time stamp. +

Specify u for a printed representation of the internal + representation of time. See time(2). Specify d for standard + date format. See date(1).

+
+

+

-o props

+
Comma-separated list of properties to display. See the + "Properties" section for a list of valid properties. The default + list is "name, size, used, available, fragmentation, expandsize, + capacity, dedupratio, health, altroot"
+

+

-v

+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+

+
+

+

zpool offline [-t] pool device + ...

+

+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or write to the device. +

This command is not applicable to spares or cache devices.
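For example, to temporarily take a hypothetical device offline until the next reboot:
# zpool offline -t tank sda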

+

-t

+
Temporary. Upon reboot, the specified physical device + reverts to its previous state.
+

+
+

+

zpool online [-e] pool + device...

+

+
Brings the specified physical device online. +

This command is not applicable to spares or cache devices.
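For example, to bring a hypothetical device back online and expand it to use any newly available space:
# zpool online -e tank sda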

+

-e

+
Expand the device to use all available space. If the + device is part of a mirror or raidz then all devices must be expanded + before the new space will become available to the pool.
+

+
+

+

zpool reguid pool

+

+
Generates a new unique identifier for the pool. You must + ensure that all devices in this pool are online and healthy before performing + this action.
+

+

zpool reopen pool

+

+
Reopen all the vdevs associated with the pool.
+

+

zpool remove pool device ...

+

+
Removes the specified device from the pool. This command + currently only supports removing hot spares, cache, and log devices. A + mirrored log device can be removed by specifying the top-level mirror for the + log. Non-log devices that are part of a mirrored configuration can be removed + using the zpool detach command. Non-redundant and raidz devices + cannot be removed from a pool.
+

+

zpool replace [-f] [-o + property=value] pool old_device [new_device]

+

+
Replaces old_device with new_device. This + is equivalent to attaching new_device, waiting for it to resilver, and + then detaching old_device. +

The size of new_device must be greater than or equal to the + minimum size of all the devices in a mirror or raidz + configuration.

+

new_device is required if the pool is not redundant. If + new_device is not specified, it defaults to old_device. This + form of replacement is useful after an existing disk has failed and has been + physically replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is actually a different + disk. ZFS recognizes this.
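For example, to replace a failed hypothetical device with a new one that requires 4 KiB sectors:
# zpool replace -o ashift=12 tank sda sdb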

+

-f

+
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
+

+

-o property=value

+

+
Sets the given pool properties. See the + "Properties" section for a list of valid properties that can be set. + The only property supported at the moment is ashift. Do note + that some properties (among them ashift) are not inherited from + a previous vdev. They are vdev specific, not pool specific.
+

+
+

+

zpool scrub [-s] pool ...

+

+
Begins a scrub. The scrub examines all data in the + specified pools to verify that it checksums correctly. For replicated (mirror + or raidz) devices, ZFS automatically repairs any damage + discovered during the scrub. The "zpool status" command + reports the progress of the scrub and summarizes the results of the scrub upon + completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to + be out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is already in + progress, the "zpool scrub" command terminates it and + starts a new scrub. If a resilver is in progress, ZFS does not allow + a scrub to be started until the resilver completes.
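For example, to start a scrub of a hypothetical pool, and later stop it before completion:
# zpool scrub tank
# zpool scrub -s tank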

+

-s

+
Stop scrubbing.
+

+
+

+

zpool set property=value + pool

+

+
Sets the given property on the specified pool. See the + "Properties" section for more information on what properties can be + set and acceptable values.
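For example, assuming the autoexpand pool property is available, automatic expansion could be enabled on a hypothetical pool as follows:
# zpool set autoexpand=on tank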
+

+

zpool split [-gLnP] [-R altroot] + [-o property=value] pool newpool [device + ...]

+

+
Split devices off pool creating newpool. + All vdevs in pool must be mirrors and the pool must not be in + the process of resilvering. At the time of the split, newpool will be a + replica of pool. By default, the last device in each mirror is split + from pool to create newpool. +

The optional device specification causes the specified device(s) to be included in the new pool; for any mirrors left unspecified, the last device in each mirror is used, as it would be by default.
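For example, to split the last device of each mirror out of a hypothetical pool tank into a new pool tank2:
# zpool split tank tank2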

+

+

-g

+
Display vdev GUIDs instead of the normal device names. + These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-n

+

+
Perform a dry run; do not actually perform the split. Print out the expected configuration of newpool.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-R altroot

+

+
Set altroot for newpool and automatically import it. This can be useful to avoid mountpoint collisions if newpool is imported on the same filesystem as pool.
+

+

-o property=value

+

+
Sets the specified property for newpool. See the + “Properties” section for more information on the available pool + properties.
+

+
+

+

zpool status [-gLPvxD] [-T d | u] + [pool] ... [interval [count]]

+

+
Displays the detailed health status for the given pools. + If no pool is specified, then the status of each pool in the system is + displayed. For more information on pool and device health, see the + "Device Failure and Recovery" section. +

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.
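For example, to show only unhealthy pools, or full error details for a hypothetical pool:
# zpool status -x
# zpool status -v tank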

+

+

-g

+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+

+

-L

+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name regardless of + the /dev/disk/ path used to open it.
+

+

-P

+
Display full paths for vdevs instead of only the last + component of the path. This can be used in conjunction with the -L + flag.
+

+

-v

+
Displays verbose data error information, printing out a + complete list of all data errors since the last complete pool scrub.
+

+

-x

+
Only display status for pools that are exhibiting errors + or are otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
+

+

-D

+
Display a histogram of deduplication statistics, showing + the allocated (physically present on disk) and referenced (logically + referenced in the pool) block counts and sizes by reference count.
+

+

-T d | u

+
Display a time stamp. +

Specify u for a printed representation of the internal + representation of time. See time(2). Specify d for standard + date format. See date(1).

+
+

+

zpool upgrade

+

+
Displays pools which do not have all supported features + enabled and pools formatted using a legacy ZFS version number. These pools can + continue to be used, but some features may not be available. Use + "zpool upgrade -a" to enable all features on all pools.
+

+

zpool upgrade -v

+

+
Displays legacy ZFS versions supported by the current software. See zpool-features(5) for a description of the feature flags supported by the current software.
+

+

zpool upgrade [-V version] -a | + pool ...

+

+
Enables all supported features on the given pool. Once this is done, the pool will no longer be accessible on systems that do not support feature flags. See zpool-features(5) for details on compatibility with systems that support feature flags, but do not support all features enabled on the pool.

-a

+
Enables all supported features on all pools.
+

+

-V version

+
Upgrade to the specified legacy version. If the -V + flag is specified, no features will be enabled on the pool. This option can + only be used to increase the version number up to the last supported legacy + version number.
+

+
+

+
+
+
+
+

+

Example 1 Creating a RAID-Z Storage Pool

+

+

The following command creates a pool with a single raidz + root vdev that consists of six disks.

+

+

+
+

+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+

+

+

Example 2 Creating a Mirrored Storage Pool

+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks.

+

+

+
+

+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+

+

+

Example 3 Creating a ZFS Storage Pool by Using + Partitions

+

+

The following command creates an unmirrored pool using two disk + partitions.

+

+

+
+

+
# zpool create tank sda1 sdb2
+
+

+

+

Example 4 Creating a ZFS Storage Pool by Using Files

+

+

The following command creates an unmirrored pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+

+

+
+

+
# zpool create tank /path/to/file/a /path/to/file/b
+
+

+

+

Example 5 Adding a Mirror to a ZFS Storage Pool

+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way mirrors. The + additional space is immediately available to any datasets within the + pool.

+

+

+
+

+
# zpool add tank mirror sda sdb
+
+

+

+

Example 6 Listing Available ZFS Storage Pools

+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing device.

+

+

+

The results from this command are similar to the following:

+

+

+
+

+
# zpool list
+NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
+zion       -      -      -      -         -      -      -  FAULTED -
+
+

+

+

Example 7 Destroying a ZFS Storage Pool

+

+

The following command destroys the pool tank and any + datasets contained within.

+

+

+
+

+
# zpool destroy -f tank
+
+

+

+

Example 8 Exporting a ZFS Storage Pool

+

+

The following command exports the devices in pool tank so + that they can be relocated or later imported.

+

+

+
+

+
# zpool export tank
+
+

+

+

Example 9 Importing a ZFS Storage Pool

+

+

The following command displays available pools, and then imports + the pool tank for use on the system.

+

+

+

The results from this command are similar to the following:

+

+

+
+

+
# zpool import
+
+ pool: tank +
+ id: 15451357997522795478 +
+ state: ONLINE +action: The pool can be imported using its name or numeric identifier. +config: +
+ tank ONLINE +
+ mirror ONLINE +
+ sda ONLINE +
+ sdb ONLINE +# zpool import tank
+
+

+

+

Example 10 Upgrading All ZFS Storage Pools to the Current + Version

+

+

The following command upgrades all ZFS storage pools to the current version of the software.

+

+

+
+

+
# zpool upgrade -a
+This system is currently running ZFS pool version 28.
+
+

+

+

Example 11 Managing Hot Spares

+

+

The following command creates a new pool with an available hot + spare:

+

+

+
+

+
# zpool create tank mirror sda sdb spare sdc
+
+

+

+

+

If one of the disks were to fail, the pool would be reduced to the + degraded state. The failed device can be replaced using the following + command:

+

+

+
+

+
# zpool replace tank sda sdd
+
+

+

+

+

Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail. The hot spare can be permanently removed from the pool using the following command:

+

+

+
+

+
# zpool remove tank sdc
+
+

+

+

Example 12 Creating a ZFS Pool with Mirrored Separate + Intent Logs

+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+

+

+
+

+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+
+ sde sdf
+
+

+

+

Example 13 Adding Cache Devices to a ZFS Pool

+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+

+

+
+

+
# zpool add pool cache sdc sdd
+
+

+

+

+

Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the iostat subcommand as follows:

+

+

+
+

+
# zpool iostat -v pool 5
+
+

+

+

Example 14 Removing a Mirrored Log Device

+

+

The following command removes the mirrored log device + mirror-2.

+

+

+

Given this configuration:

+

+

+
+

+
+
+ pool: tank +
+ state: ONLINE +
+ scrub: none requested +config: +
+ NAME STATE READ WRITE CKSUM +
+ tank ONLINE 0 0 0 +
+ mirror-0 ONLINE 0 0 0 +
+ sda ONLINE 0 0 0 +
+ sdb ONLINE 0 0 0 +
+ mirror-1 ONLINE 0 0 0 +
+ sdc ONLINE 0 0 0 +
+ sdd ONLINE 0 0 0 +
+ logs +
+ mirror-2 ONLINE 0 0 0 +
+ sde ONLINE 0 0 0 +
+ sdf ONLINE 0 0 0
+
+

+

+

+

The command to remove the mirrored log mirror-2 is:

+

+

+
+

+
# zpool remove tank mirror-2
+
+

+

+

Example 15 Displaying expanded space on a device

+

+

The following command displays the detailed information for the data pool. This pool is comprised of a single raidz vdev where one of its devices increased its capacity by 10GB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.

+

+

+
+

+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
+
+ raidz1 23.9G 14.6G 9.30G 48% - +
+ c1t1d0 - - - - - +
+ c1t2d0 - - - - 10G +
+ c1t3d0 - - - - -
+
+

+
+
+

+

The following exit values are returned:

+

0

+
Successful completion.
+

+

1

+
An error occurred.
+

+

2

+
Invalid command line options were specified.
+

+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes of running + ::findleaks.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool looks for device + nodes and files. Similar to the -d option in zpool + import.
+
+
Cause zpool subcommands to output vdev guids by default. This + behavior is identical to the zpool status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev names by default. + This behavior is identical to the zpool status -L command line + option.
+
+
Cause zpool subcommands to output full vdev path names by default. This behavior is identical to the zpool status -P command line option.

+
+
+
+
+

+

zfs(8), zpool-features(5), zfs-events(5)

+
+
+ + + + + +
14 December 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/8/zstreamdump.8.html b/man/v0.6/8/zstreamdump.8.html new file mode 100644 index 000000000..1ed0f92ae --- /dev/null +++ b/man/v0.6/8/zstreamdump.8.html @@ -0,0 +1,197 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
zstreamdump(8)System Administration Commandszstreamdump(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs send command, then displays headers and some statistics from that output. See zfs(8).
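For example, a send stream can be piped directly into zstreamdump to inspect its headers (dataset and snapshot names are illustrative):
# zfs send tank/fs@snap | zstreamdump -v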

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
29 Aug 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.6/index.html b/man/v0.6/index.html new file mode 100644 index 000000000..6a3b35a39 --- /dev/null +++ b/man/v0.6/index.html @@ -0,0 +1,143 @@ + + + + + + + v0.6 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/cstyle.1.html b/man/v0.7/1/cstyle.1.html new file mode 100644 index 000000000..b6d7588b5 --- /dev/null +++ b/man/v0.7/1/cstyle.1.html @@ -0,0 +1,285 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
cstyle(1)General Commands Manualcstyle(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common stylistic errors. It attempts to check for the cstyle documented in http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that there is much in that document that cannot be checked for; just because your code is cstyle(1) clean does not mean that you've followed Sun's C style. Caveat emptor.
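For example, the pickier checks could be run against a set of C source files like this (the paths are illustrative):
cstyle -pP module/zfs/*.c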

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parenthesis will confuse the + checker for that line. To fix this, you'll need to make sure that each + branch of the #if statement has balanced parenthesis.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional + explanation

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
28 March 2005
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/index.html b/man/v0.7/1/index.html new file mode 100644 index 000000000..589b1c8ac --- /dev/null +++ b/man/v0.7/1/index.html @@ -0,0 +1,153 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/raidz_test.1.html b/man/v0.7/1/raidz_test.1.html new file mode 100644 index 000000000..c043d3e64 --- /dev/null +++ b/man/v0.7/1/raidz_test.1.html @@ -0,0 +1,260 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
raidz_test(1)User Commandsraidz_test(1)
+
+

+
+

+

raidz_test - raidz implementation verification and benchmarking tool

+
+
+

+

raidz_test <options>

+
+
+

+

This manual page documents briefly the raidz_test + command.

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. The tool also contains a parameter sweep option in which all parameters affecting a RAIDZ block are verified (such as ashift size, data offset, data size, etc.). The tool also supports a benchmarking mode using the -B option.
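For example, an illustrative verification run with increased verbosity, or a benchmark of all implementations, could be invoked as follows:
raidz_test -v
raidz_test -B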

+
+
+

+

-h

+
+
+
Print a help summary.
+
+

-a ashift (default: 9)

+
+
+
Ashift value.
+
+

-o zio_off_shift (default: 0)

+
+
+
Zio offset for raidz block. Offset value is 1 << + (zio_off_shift)
+
+

-d raidz_data_disks (default: 8)

+
+
+
Number of raidz data disks to use. Additional disks for parity will be + used during testing.
+
+

-s zio_size_shift (default: 19)

+
+
+
Size of data for raidz block. Size is 1 << (zio_size_shift).
+
+

-S(weep)

+
+
+
Sweep the parameter space while verifying the raidz implementations. This option will exhaust most of the valid values for the -a, -o, -d, and -s options. Runtime using this option will be long.
+
+

-t(imeout)

+
+
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
+

-B(enchmark)

+
+
+
This options starts the benchmark mode. All implementations are + benchmarked using increasing per disk data size. Results are given as + throughput per disk, measured in MiB/s.
+
+

-v(erbose)

+
+
+
Increase verbosity.
+
+

-T(est the test)

+
+
+
Debugging option. When this option is specified tool is supposed to fail + all tests. This is to check if tests would properly verify + bit-exactness.
+
+

-D(ebug)

+
+
+
Debugging option. Specify to attach gdb when SIGSEGV or SIGABRT are + received.
+
+

+

+
+
+

+

ztest (1)

+
+
+

+

vdev_raidz, created for ZFS on Linux by Gvozden + Nešković <neskovic@gmail.com>

+
+
+ + + + + +
2016ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/zhack.1.html b/man/v0.7/1/zhack.1.html new file mode 100644 index 000000000..02b6051ee --- /dev/null +++ b/man/v0.7/1/zhack.1.html @@ -0,0 +1,253 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
zhack(1)User Commandszhack(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

splat(1), zfs(8), zpios(1), + zpool-features(5), ztest(1)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/zpios.1.html b/man/v0.7/1/zpios.1.html new file mode 100644 index 000000000..63d1e0efd --- /dev/null +++ b/man/v0.7/1/zpios.1.html @@ -0,0 +1,420 @@ + + + + + + + zpios.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpios.1

+
+ + + + + +
zpios(1)User Commandszpios(1)
+
+

+
+

+

zpios - Directly test the DMU.

+
+
+

+

zpios [options] <-p pool>

+

+
+
+

+

This utility runs in-kernel DMU performance and stress tests that + do not depend on the ZFS Posix Layer ("ZPL").

+

+
+
+

+

-t regex, --threadcount regex

+
+
+
Start this many threads for each test series, specified as a comma + delimited regular expression. (eg: "-t 1,2,3")
+
+
This option is mutually exclusive with the threadcount_* + options.
+
+

-l regex_low, --threadcount_low + regex_low

+

-h regex_high, --threadcount_high + regex_high

+

-e regex_incr, --threadcount_incr + regex_incr

+
+
+
Start regex_low threads for the first test, add regex_incr + threads for each subsequent test, and start regex_high threads for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the threadcount option.
+
+

-n regex, --regioncount regex

+
+
+
Create this many regions for each test series, specified as a comma + delimited regular expression. (eg: "-n 512,4096,65536")
+
+
This option is mutually exclusive with the regioncount_* + options.
+
+

-i regex_low, --regioncount_low + regex_low

+

-j regex_high, --regioncount_high + regex_high

+

-k regex_incr, --regioncount_incr + regex_incr

+
+
+
Create regex_low regions for the first test, add regex_incr + regions for each subsequent test, and create regex_high regions for + the last test.
+
+
These three options must be specified together and are mutually exclusive + with the regioncount option.
+
+

-o size, --offset size

+
+
+
Create regions at size offset for each test series, specified as a + comma delimited regular expression with an optional unit suffix. (eg: + "-o 4M" means four megabytes.)
+
+
This option is mutually exclusive with the offset_* options.
+
+

-m size_low, --offset_low + size_low

+

-q size_high, --offset_high + size_high

+

-r size_incr, --offset_incr + size_incr

+
+
+
Create a region at size_low offset for the first test, add + size_incr to the offset for each subsequent test, and create a + region at size_high offset for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the offset option.
+
+

-c size, --chunksize size

+
+
+
Use size chunks for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-c 1M" + means one megabyte.) The chunk size must be at least the region size.
+
+
This option is mutually exclusive with the chunksize_* + options.
+
+

-a size_low, --chunksize_low + size_low

+

-b size_high, --chunksize_high + size_high

+

-g size_incr, --chunksize_incr + size_incr

+
+
+
Use a size_low chunk size for the first test, add size_incr + to the chunk size for each subsequent test, and use a size_high + chunk size for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the chunksize option.
+
+

-s size, --regionsize size

+
+
+
Use size regions for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-s 1M" + means one megabyte.)
+
+
This option is mutually exclusive with the regionsize_* + options.
+
+

-A size_low, --regionsize_low + size_low

+

-B size_high, --regionsize_high + size_high

+

-C size_incr, --regionsize_incr + size_incr

+
+
+
Use a size_low region size for the first test, add size_incr + to the region size for each subsequent test, and use a size_high + region size for the last test.
+
+
These three options must be specified together and are mutually exclusive + with the regionsize option.
+
+

-S size | sizes, --blocksize size | + sizes

+
+
+
Use size ZFS blocks for each test, specified as a comma delimited + regular expression with an optional unit suffix. (eg: "-S 1M" + means one megabyte.) The supported range is powers of two from 128K + through 16M. A range of blocks can be tested as follows: "-S + 128K,256K,512K,1M".
+
+

-L dmu_flags, --load dmu_flags

+
+
+
Specify dmuio for regular DMU_IO, ssf for single shared file + access, or fpp for per thread access. Use commas to delimit + multiple flags. (eg: "-L dmuio,ssf")
+
+

-p name, --pool name

+
+
+
The pool name, which is mandatory.
+
+

-M test, --name test

+
+
+
An arbitrary string that appears in the program output.
+
+

-x, --cleanup

+
+
+
Enable the DMU_REMOVE flag.
+
+

-P command, --prerun command

+
+
+
Invoke command from the kernel before running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-R command, --postrun command

+
+
+
Invoke command from the kernel after running the test. Shell + expansion is not performed and the environment is set to HOME=/; + TERM=linux; PATH=/sbin:/usr/sbin:/bin:/usr/bin.
+
+

-G directory, --log directory

+
+
+
Put logging output in this directory.
+
+

-I size, --regionnoise size

+
+
+
Randomly vary the regionsize parameter for each test modulo + size bytes.
+
+

-N size, --chunknoise size

+
+
+
Randomly vary the chunksize parameter for each test modulo + size bytes.
+
+

-T time, --threaddelay time

+
+
+
Randomly vary the execution time for each test modulo time kernel + jiffies.
+
+

-V, --verify

+
+
+
Enable the DMU_VERIFY flag for trivial data verification.
+
+

-z, --zerocopy

+
+
+
Enable the DMU_READ_ZC and DMU_WRITE_ZC flags, which are currently + unimplemented for Linux.
+
+

-O, --nowait

+
+
+
Enable the DMU_WRITE_NOWAIT flag.
+
+

-f, --noprefetch

+
+
+
Enable the DMU_READ_NOPF flag.
+
+

-H, --human-readable

+
+
+
Print PASS and FAIL results explicitly and put unit suffixes on large + numbers.
+
+

-v, --verbose

+
+
+
Increase output verbosity.
+
+

-? , --help

+
+
+
Print the usage message.
+
+
+
+

+

The original zpios implementation was created by Cluster File + Systems Inc and adapted to ZFS on Linux by Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/1/ztest.1.html b/man/v0.7/1/ztest.1.html new file mode 100644 index 000000000..067e67d0a --- /dev/null +++ b/man/v0.7/1/ztest.1.html @@ -0,0 +1,344 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ztest(1)User Commandsztest(1)
+
+

+
+

+

ztest - ZFS unit test written by the ZFS Developers

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. + The tool was developed in tandem with the ZFS functionality and was executed + nightly as one of the many regression test against the daily build. As + features were added to ZFS, unit tests were also added to ztest. In + addition, a separate test development team wrote and executed more + functional and stress tests.

+

By default ztest runs for five minutes and uses block files (stored in /tmp) to create pools rather than using physical disks. Block files afford ztest its flexibility to play around with zpool components without requiring large hardware configurations. However, storing the block files in /tmp may not work for you if you have a small tmp directory.

+

By default ztest is non-verbose. This is why entering the command above will result in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many + ztest.* files lying around. Once the run completes you can safely remove + these files. Note that you shouldn't remove these files during a run. You + can re-use these files in your next ztest run by using the -E + option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so simply use the + -T option and specify the runlength in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Use id instead of the SPL hostid to identify this host. Intended + for use with ztest, but this environment variable will affect any utility + which uses libzpool, including zpool(8). Since the kernel is + unaware of this setting results with utilities other than ztest are + undefined.
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. This value defaults to + 32K which is double the default 16K Linux kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+

+

spl-module-parameters(5), zpool(8), zfs(8), zdb(8)

+
+
+

+

This manual page was transferred to asciidoc by Michael Gebetsroither <gebi@grml.org> from http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
2009 NOV 01ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/index.html b/man/v0.7/5/index.html new file mode 100644 index 000000000..6f734f414 --- /dev/null +++ b/man/v0.7/5/index.html @@ -0,0 +1,151 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/vdev_id.conf.5.html b/man/v0.7/5/vdev_id.conf.5.html new file mode 100644 index 000000000..8166c4b0c --- /dev/null +++ b/man/v0.7/5/vdev_id.conf.5.html @@ -0,0 +1,344 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
vdev_id.conf(5)File Formats Manualvdev_id.conf(5)
+
+
+

+

vdev_id.conf - Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format consisting of a + keyword followed by one or more values on a single line. Any line not + beginning with a recognized keyword is ignored. Comments may optionally + begin with a hash character.

+

The following keywords and values are used.

+
+
+
Maps a device link in the /dev directory hierarchy to a new device name. + The udev rule defining the device link must have run prior to + vdev_id(8). A defined alias takes precedence over a + topology-derived name, but the two naming methods can otherwise coexist. + For example, one might name drives in a JBOD with the sas_direct topology + while naming an internal L2ARC device with an alias. +

name - the name of the link to the device that will be created in /dev/disk/by-vdev.

+

devlink - the name of the device link that has already + been defined by udev. This may be an absolute path or the base + filename.

+

+
+
+
Maps a physical path to a channel name (typically representing a single + disk enclosure). +

+
+ +
Additionally create /dev/by-enclosure symlinks to the disk enclosure sg devices using the naming scheme from vdev_id.conf. enclosure_symlinks is only allowed for sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form of: +

/dev/by-enclosure/<prefix>-<channel><num>

+

Defaults to "enc" if not specified.

+
+
+
pci_slot - specifies the PCI slot of the HBA hosting the disk enclosure being mapped, as found in the output of lspci(8). This argument is not used in sas_switch mode.

port - specifies the numeric identifier of the HBA or + SAS switch port connected to the disk enclosure being mapped.

+

name - specifies the name of the channel.

+

+
+
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is specified then + the mapping is only applied to slots in the named channel, otherwise the + mapping is applied to all channels. The first-specified slot rule + that can match a slot takes precedence. Therefore a channel-specific + mapping for a given slot should generally appear before a generic mapping + for the same slot. In this way a custom mapping may be applied to a + particular channel and a default mapping applied to the others. +

+
+
+
Specifies whether vdev_id(8) will handle only dm-multipath devices. + If set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+

+
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4. +

+
+
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay. +

bay - read the slot number from the bay identifier.

+

phy - read the slot number from the phy identifier.

+

port - use the SAS port as the slot number.

+

id - use the scsi id as the slot number.

+

lun - use the scsi lun as the slot number.

+

ses - use the SCSI Enclosure Services (SES) enclosure + device slot number, as reported by sg_ses(8). This is intended + for use only on systems where bay is unsupported, noting that + port and id may be unstable across disk replacement.

+
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping.

+

+
	multipath     no
+	topology      sas_direct
+	phys_per_port 4
+	slot          bay
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         C
+	channel 86:00.0  0         D
+	# Custom mapping for Channel A
+	#    Linux      Mapped
+	#    Slot       Slot      Channel
+	slot 1          7         A
+	slot 2          10        A
+	slot 3          3         A
+	slot 4          6         A
+	# Default mapping for B, C, and D
+	slot 1          4
+	slot 2          2
+	slot 3          1
+	slot 4          3
+

A SAS-switch topology. Note that the channel keyword takes + only two arguments in this example.

+

+
	topology      sas_switch
+	#       SWITCH PORT  CHANNEL NAME
+	channel 1            A
+	channel 2            B
+	channel 3            C
+	channel 4            D
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+

+
	multipath yes
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         A
+	channel 86:00.0  0         B
+

A configuration with enclosure_symlinks enabled.

+

+
	multipath yes
+	enclosure_symlinks yes
+	#          PCI_ID      HBA PORT     CHANNEL NAME
+	channel    05:00.0     1            U
+	channel    05:00.0     0            L
+	channel    06:00.0     1            U
+	channel    06:00.0     0            L
+In addition to the disks symlinks, this configuration will create: +

+
	/dev/by-enclosure/enc-L0
+	/dev/by-enclosure/enc-L1
+	/dev/by-enclosure/enc-U0
+	/dev/by-enclosure/enc-U1
+

A configuration using device link aliases.

+

+
	#     by-vdev
+	#     name     fully qualified or base name of device link
+	alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+	alias d2       wwn-0x5000c5002def789e
+
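After editing vdev_id.conf, the udev rules normally have to be re-run before the new names appear. The sketch below uses standard udev tooling (udevadm) and is illustrative rather than taken from this page; the exact step can vary by distribution.

	udevadm trigger                 # re-run udev rules so the by-vdev links are created
	ls -l /dev/disk/by-vdev         # verify the aliased names (d1, d2) exist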
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/zfs-events.5.html b/man/v0.7/5/zfs-events.5.html new file mode 100644 index 000000000..c4e488bd6 --- /dev/null +++ b/man/v0.7/5/zfs-events.5.html @@ -0,0 +1,777 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)                    File Formats Manual                    ZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get full details of the events and what information is available about them.

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.
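As an illustrative way to inspect a single subclass with the commands shown above (the grep filter and context size are arbitrary and not part of zpool itself):

	zpool events -v | grep -A 20 'ereport.fs.zfs.checksum'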

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

delay

+
Issued when an I/O was slow to complete as defined by the + zio_delay_max module option.
+

+

config.sync

+
Issued every time a vdev change has been made to the pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool) has been generated.
+

+

vdev.unknown

+
Issued when the vdev is unknown, for example when trying to clear device errors on a vdev that has failed or been removed from the system/pool and is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, and ZFS notices that the device is no longer there. This is usually followed by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare is detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (for example, if the kernel has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.
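As a rough sketch of how this payload is typically consumed, the fragment below reads two of the variables listed below (pool and vdev_path, exported as ZEVENT_POOL and ZEVENT_VDEV_PATH per the naming rule above). Treat the script form, location and log path as illustrative assumptions rather than part of this manual.

	#!/bin/sh
	# Hypothetical zed-style handler: record which pool and vdev an event
	# refers to. ZEVENT_* variables are assumed to be exported by zed(8).
	echo "pool=${ZEVENT_POOL:-?} vdev=${ZEVENT_VDEV_PATH:-n/a}" \
	    >> /tmp/zed-event-payload.log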

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, + 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The block level for a given I/O.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time in ticks (HZ) required for the block layer to + service the I/O. Unlike zio_delta this does not include any vdev + queuing time and is therefore solely a measure of the block layer performance. + On most modern Linux systems HZ is defined as 1000 making a tick equivalent to + 1 millisecond.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value.
+

+

cksum_actual

+
The actual/current checksum value.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Checksum value is byte swapped.
+

+

bad_ranges

+
Checksum bad offset ranges.
+

+

bad_ranges_min_gap

+
Checksum allowed minimum gap.
+

+

bad_range_sets

+
For each checksum bad range, the number of bits set.
+

+

bad_range_clears

+
For each checksum bad range, the number of bits cleared.
+

+

bad_set_bits

+
Checksum array of bits set.
+

+

bad_cleared_bits

+
Checksum array of bits cleared.
+

+

bad_set_histogram

+
Checksum histogram of set bits by bit number in a 64-bit + word.
+

+

bad_cleared_histogram

+
Checksum histogram of cleared bits by bit number in a + 64-bit word.
+

+
+
+

+

The ZFS I/O pipeline comprises various stages, which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
	Stage                          Bit Mask      Operations
	ZIO_STAGE_OPEN                 0x00000001    RWFCI
	ZIO_STAGE_READ_BP_INIT         0x00000002    R----
	ZIO_STAGE_FREE_BP_INIT         0x00000004    --F--
	ZIO_STAGE_ISSUE_ASYNC          0x00000008    RWF--
	ZIO_STAGE_WRITE_BP_INIT        0x00000010    -W---
	ZIO_STAGE_CHECKSUM_GENERATE    0x00000020    -W---
	ZIO_STAGE_NOP_WRITE            0x00000040    -W---
	ZIO_STAGE_DDT_READ_START       0x00000080    R----
	ZIO_STAGE_DDT_READ_DONE        0x00000100    R----
	ZIO_STAGE_DDT_WRITE            0x00000200    -W---
	ZIO_STAGE_DDT_FREE             0x00000400    --F--
	ZIO_STAGE_GANG_ASSEMBLE        0x00000800    RWFC-
	ZIO_STAGE_GANG_ISSUE           0x00001000    RWFC-
	ZIO_STAGE_DVA_ALLOCATE         0x00002000    -W---
	ZIO_STAGE_DVA_FREE             0x00004000    --F--
	ZIO_STAGE_DVA_CLAIM            0x00008000    ---C-
	ZIO_STAGE_READY                0x00010000    RWFCI
	ZIO_STAGE_VDEV_IO_START        0x00020000    RW--I
	ZIO_STAGE_VDEV_IO_DONE         0x00040000    RW--I
	ZIO_STAGE_VDEV_IO_ASSESS       0x00080000    RW--I
	ZIO_STAGE_CHECKSUM_VERIFY      0x00100000    R----
	ZIO_STAGE_DONE                 0x00200000    RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
	Flag                        Bit Mask
	ZIO_FLAG_DONT_AGGREGATE     0x00000001
	ZIO_FLAG_IO_REPAIR          0x00000002
	ZIO_FLAG_SELF_HEAL          0x00000004
	ZIO_FLAG_RESILVER           0x00000008
	ZIO_FLAG_SCRUB              0x00000010
	ZIO_FLAG_SCAN_THREAD        0x00000020
	ZIO_FLAG_PHYSICAL           0x00000040
	ZIO_FLAG_CANFAIL            0x00000080
	ZIO_FLAG_SPECULATIVE        0x00000100
	ZIO_FLAG_CONFIG_WRITER      0x00000200
	ZIO_FLAG_DONT_RETRY         0x00000400
	ZIO_FLAG_DONT_CACHE         0x00000800
	ZIO_FLAG_NODATA             0x00001000
	ZIO_FLAG_INDUCE_DAMAGE      0x00002000
	ZIO_FLAG_IO_RETRY           0x00004000
	ZIO_FLAG_PROBE              0x00008000
	ZIO_FLAG_TRYHARD            0x00010000
	ZIO_FLAG_OPTIONAL           0x00020000
	ZIO_FLAG_DONT_QUEUE         0x00040000
	ZIO_FLAG_DONT_PROPAGATE     0x00080000
	ZIO_FLAG_IO_BYPASS          0x00100000
	ZIO_FLAG_IO_REWRITE         0x00200000
	ZIO_FLAG_RAW                0x00400000
	ZIO_FLAG_GANG_CHILD         0x00800000
	ZIO_FLAG_DDT_CHILD          0x01000000
	ZIO_FLAG_GODFATHER          0x02000000
	ZIO_FLAG_NOPWRITE           0x04000000
	ZIO_FLAG_REEXECUTED         0x08000000
	ZIO_FLAG_DELEGATED          0x10000000
	ZIO_FLAG_FASTWRITE          0x20000000
+
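As a worked example of reading these masks, a zio_flags value of 0x00000180 is the bitwise OR of ZIO_FLAG_CANFAIL (0x00000080) and ZIO_FLAG_SPECULATIVE (0x00000100); the same arithmetic applies to the zio_stage and zio_pipeline masks in the previous table. The check below is plain shell arithmetic, nothing ZFS-specific:

	printf '0x%08x\n' $(( 0x00000080 | 0x00000100 ))   # prints 0x00000180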
+
+
+ + + + + +
June 6, 2015
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/zfs-module-parameters.5.html b/man/v0.7/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..643096435 --- /dev/null +++ b/man/v0.7/5/zfs-module-parameters.5.html @@ -0,0 +1,1739 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)         File Formats Manual         ZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.
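These are ordinary Linux kernel module parameters. As a general sketch of how they are read and set (standard module-parameter handling, not specific to this page; not every parameter is writable after the module has loaded):

	# Read the current value of a parameter.
	cat /sys/module/zfs/parameters/zfs_txg_timeout
	# Change it at runtime (requires root).
	echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
	# Persist a setting across module reloads (file name is illustrative).
	echo "options zfs zfs_txg_timeout=10" >> /etc/modprobe.d/zfs.conf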

+

+
+

+

+

ignore_hole_birth (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. Useful if you suspect your + datasets are affected by a bug in hole_birth. +

Use 1 for on (default) and 0 for off.

+
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warm-up. When the L2ARC is cold the fill + interval will be set as fast as possible. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only applicable in related situations. +

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writing +

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
How far through the ARC lists to search for L2ARC + cacheable content, expressed as a multiplier of l2arc_write_max +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Scales l2arc_headroom by this percentage when + L2ARC contents are being successfully compressed before writing. A value of + 100 disables this feature. +

Default value: 200.

+
+

+

l2arc_noprefetch (int)

+
Do not write buffers to L2ARC if they were prefetched but + not used by applications +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Cold L2ARC devices will have l2arc_write_max + increased by this amount while they remain cold. +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval +

Default value: 8,388,608.

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_segment_weight_enabled (int)

+
Enable/disable segment-based metaslab selection. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_switch_threshold (int)

+
When using segment-based metaslab selection, continue + allocating from the active metaslab until zfs_metaslab_switch_threshold + worth of buckets have been exhausted. +

Default value: 2.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslabs_per_vdev (int)

+
When a vdev is added, it will be divided into + approximately (but no more than) this number of metaslabs. +

Default value: 200.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24.

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1.

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal is not performed. It can be toggled once the import has started to + stop or start the traversal.

+

Default value: 1.

+
+

+

spa_load_verify_maxinflight (int)

+
Maximum concurrent I/Os during the traversal performed + during an "extreme rewind" (-X) pool import. +

Default value: 10000.

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 1/(2^spa_slop_shift) of space in the pool (about 3.1% with the default spa_slop_shift of 5) to be consumed. This ensures that we don't run the pool completely out of space, due to unaccounted changes (e.g. to the MOS). It also limits the worst-case time to allocate space. If we have less than this amount of free space, most ZPL operations (e.g. write, create) will return ENOSPC.

Default value: 5.

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_max_distance (uint)

+
Max bytes to prefetch per stream (default 8MB). +

Default value: 8,388,608.

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_arc_dnode_limit (ulong)

+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling on the amount of dnode metadata, and defaults to 0, which means the limit is instead the percentage of ARC meta buffers given by zfs_arc_dnode_limit_percent.

See also zfs_arc_meta_prune which serves a similar purpose + but is used when the amount of metadata in the ARC exceeds + zfs_arc_meta_limit rather than in response to overall demand for + non-metadata.

+

+

Default value: 0.

+
+

+

zfs_arc_dnode_limit_percent (ulong)

+
Percentage that can be consumed by dnodes of ARC meta + buffers. +

See also zfs_arc_dnode_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

Default value: 10.

+
+

+

zfs_arc_dnode_reduce_percent (ulong)

+
Percentage of ARC dnodes to try to scan in response to + demand for non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit. +

+

Default value: 10% of the number of dnodes in the ARC.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number ARC headers to evict per sub-list before + proceeding to another sub-list. This batch-style operation prevents entire + sub-lists from being evicted at once but comes at a cost of additional + unlocking and locking. +

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The arc_grow_retry value (default 5) is + the number of seconds the ARC will wait before trying to resume growth after a + memory pressure event. +

Default value: 0.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10.

+
+

+

zfs_arc_max (ulong)

+
Maximum size of the ARC in bytes. If set to 0 then it will consume 1/2 of system RAM. This value must be at least 67108864 (64 megabytes).

This value can be changed dynamically with some caveats. It cannot + be set back to 0 while running and reducing it below the current ARC size + will not cause the ARC to shrink without memory pressure to induce + shrinking.

+

Default value: 0.
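For example, to cap the ARC at 4 GiB (4 x 1024^3 = 4294967296 bytes, which satisfies the 64 megabyte minimum above), one might write, using the parameter interface sketched earlier:

	echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max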

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the + ARC attempting the free buffers in order to stay below the + zfs_arc_meta_limit. This value should not need to be tuned but is + available to facilitate performance analysis. +

Default value: 4096.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers are allowed to consume in the ARC. When this limit is reached meta data buffers will be reclaimed even if the overall arc_c_max has not been reached. This value defaults to 0, which means that a percentage of the ARC, given by zfs_arc_meta_limit_percent, may be used for meta data.

This value may be changed dynamically, except that it cannot be set back to 0 to revert to the percentage-based limit; it must be set to an explicit value.

+

Default value: 0.

+
+

+

zfs_arc_meta_limit_percent (ulong)

+
Percentage of ARC buffers that can be used for meta data. +

See also zfs_arc_meta_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

+

Default value: 75.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_strategy (int)

+
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy). A value of 0 (META_ONLY) will evict only the ARC meta data buffers. A value of 1 (BALANCED) indicates that additional data buffers may be evicted if required in order to evict the required number of meta data buffers.

Default value: 1.

+
+

+

zfs_arc_min (ulong)

+
Minimum size of the ARC in bytes. If set to 0 then arc_c_min will default to consuming the larger of 32M or 1/32 of total system memory.

Default value: 0.

+
+

+

zfs_arc_min_prefetch_lifespan (int)

+
Minimum time prefetched blocks are locked in the ARC, + specified in jiffies. A value of 0 will default to 1 second. +

Default value: 0.

+
+

+

zfs_multilist_num_sublists (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

Default value: 4 or the number of online CPUs, whichever is + greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be overflowing if it exceeds the target size by 1/256th (about 0.4%) of the target size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.

+
+

+

+

zfs_arc_p_min_shift (int)

+
If set to a nonzero value, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating both the minimum and maximum arc_p.

Default value: 0.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
If set to a non zero value, this will update + arc_shrink_shift (default 7) with the new value. +

Default value: 0.

+
+

+

zfs_arc_pc_percent (uint)

+
Percent of pagecache to reclaim arc to +

This tunable allows ZFS arc to play more nicely with the kernel's + LRU pagecache. It can guarantee that the arc size won't collapse under + scanning pressure on the pagecache, yet still allows arc to be reclaimed + down to zfs_arc_min if necessary. This value is specified as percent of + pagecache size (as measured by NR_FILE_PAGES) where that percent may exceed + 100. This only operates during memory pressure/reclaim.

+

Default value: 0 (disabled).

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_checksums_per_second (int)

+
Rate limit checksum events to this many per second. Note + that this should not be set below the zed thresholds (currently 10 checksums + over 10 sec) or else zed may not trigger any action. +

Default value: 20

+
+

+

zfs_commit_timeout_pct (int)

+
This controls the amount of time that a ZIL block (lwb) + will remain "open" when it isn't "full", and it has a + thread waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly impacting + the latency of each individual transaction record (itx). +

Default value: 5%.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
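A short sketch of the workflow described above; the /sys path follows the usual module-parameter convention, while the /proc path is the one named in this entry:

	echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the debug log
	cat /proc/spl/kstat/zfs/dbgmsg                          # read the log
	echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the log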

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
This feature is currently unused. It is normally used for + controlling what reporting is available under /proc/spl/kstat/zfs. +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms milliseconds, a "slow spa_sync" + message is logged to the debug log (see zfs_dbgmsg_enable). If + zfs_deadman_enabled is set, all pending IO operations are also checked + and if any haven't completed within zfs_deadman_synctime_ms + milliseconds, a "SLOW IO" message is logged to the debug log and a + "delay" system event with the details of the hung IO is posted. +

Use 1 (default) to enable the slow IO check and 0 to + disable.

+
+

+

zfs_deadman_checktime_ms (int)

+
Once a pool sync operation has taken longer than + zfs_deadman_synctime_ms milliseconds, continue to check for slow + operations every zfs_deadman_checktime_ms milliseconds. +

Default value: 5,000.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and also the interval after which an IO operation is considered to + be "hung" if zfs_deadman_enabled is set. +

See zfs_deadman_enabled.

+

Default value: 1,000,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching dedup-ed blks +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.
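As a worked example of the guideline above: for storage able to sustain roughly 2,000 operations per second, 1,000,000,000 / 2,000 = 500,000, which is exactly the default, and that setting then handles workloads between roughly 200 and 20,000 operations per second smoothly.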

+
+

+

zfs_delays_per_second (int)

+
Rate limit IO delay events to this many per second. +

Default value: 20

+
+

+

zfs_delete_blocks (ulong)

+
This value is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call, at the expense of a longer delay before the freed space is available.

Default value: 20,480.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10 percent of all memory, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25.

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync (int)

+
Start syncing out a transaction group if there is at + least this much dirty data. +

Default value: 67,108,864.

+
+

+

zfs_fletcher_4_impl (string)

+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, scalar, + sse2, ssse3, avx2, avx512f, and + aarch64_neon. All of the selectors except fastest and + scalar require instruction set extensions to be available and will + only appear if ZFS detects that they are present at runtime. If multiple + implementations of fletcher 4 are available, the fastest will be + chosen using a micro benchmark. Selecting scalar results in the + original, CPU based calculation, being used. Selecting any option other than + fastest and scalar results in vector instructions from the + respective CPU instruction set being used.

+

Default value: fastest.

+
+

+

zfs_free_bpobj_enabled (int)

+
Enable/disable the processing of the free_bpobj object. +

Default value: 1.

+
+

+

zfs_free_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. It must be at + least the sum of each queue's min_active. See the section "ZFS I/O + SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_queue_depth_pct (int)

+
Maximum number of queued allocations per top-level vdev + expressed as a percentage of zfs_vdev_async_write_max_active which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. It allows for + dynamic allocation distribution when devices are imbalanced as fuller devices + will tend to be slower than empty devices. +

See also zio_dva_throttle_enabled.

+

Default value: 1000.

+
+

+

zfs_disable_dup_eviction (int)

+
Disable duplicate buffer eviction +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
	Value  Symbolic Name                 Description
	1      ZFS_DEBUG_DPRINTF             Enable dprintf entries in the debug log.
	2      ZFS_DEBUG_DBUF_VERIFY *       Enable extra dbuf verifications.
	4      ZFS_DEBUG_DNODE_VERIFY *      Enable extra dnode verifications.
	8      ZFS_DEBUG_SNAPNAMES           Enable snapshot name verification.
	16     ZFS_DEBUG_MODIFY              Check for illegally modified ARC buffers.
	32     ZFS_DEBUG_SPA                 Enable spa_dbgmsg entries in the debug log.
	64     ZFS_DEBUG_ZIO_FREE            Enable verification of block frees.
	128    ZFS_DEBUG_HISTOGRAM_VERIFY    Enable extra spacemap histogram verifications.
	256    ZFS_DEBUG_METASLAB_VERIFY     Verify space accounting on disk matches in-core range_trees.
	512    ZFS_DEBUG_SET_ERROR           Enable SET_ERROR and dprintf entries in the debug log.
+

* Requires debug build.

+

Default value: 0.
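For example, to enable both ZFS_DEBUG_DPRINTF (1) and ZFS_DEBUG_SET_ERROR (512), the values are OR'd together (1 | 512 = 513) and written to the parameter; the /sys path follows the usual module-parameter convention:

	echo 513 > /sys/module/zfs/parameters/zfs_flags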

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
During a zfs destroy operation using + feature@async_destroy a minimum of this much time will be spent working + on freeing blocks per txg. +

Default value: 1,000.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil. Larger blocks will be + treated as if the dataset being written to had the property setting + logbias=throughput. +

Default value: 32,768.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger IO, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+

+

zfs_mdcomp_disable (int)

+
Disable meta data compression +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations + if their fragmentation metric (measured as a percentage) is less than or equal + to this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also crossed + this threshold. +

Default value: 85.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be + eligible for allocations. The value is expressed as a percentage of free space + beyond which a metaslab group is always eligible for allocations. If a + metaslab group's free space is less than or equal to the threshold, the + allocator will avoid allocating to that group unless all groups in the pool + have reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of 0 disables the + feature and causes all metaslab groups to be eligible for allocations. +

This parameter allows one to deal with pools having heavily + imbalanced vdevs such as would be the case when a new vdev has been added. + Setting the threshold to a non-zero percentage will stop allocations from + being made to vdevs that aren't filled to the specified percentage and allow + lesser filled vdevs to acquire more allocations than they otherwise would + under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_multihost_history (int)

+
Historical statistics for the last N multihost updates + will be available in /proc/spl/kstat/zfs/<pool>/multihost +

Default value: 0.

+
+

+

zfs_multihost_interval (ulong)

+
Used to control the frequency of multihost writes which + are performed when the multihost pool property is on. This is one + factor used to determine the length of the activity check during import. +

The multihost write period is zfs_multihost_interval / + leaf-vdevs milliseconds. This means that on average a multihost write + will be issued for each leaf vdev every zfs_multihost_interval + milliseconds. In practice, the observed period can vary with the I/O load + and this observed value is the delay which is stored in the uberblock.

+

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval * + zfs_multihost_import_intervals. The activity check time may be further + extended if the value of mmp delay found in the best uberblock indicates + actual multihost updates happened at longer intervals than + zfs_multihost_interval. A minimum value of 100ms is + enforced.

+

Default value: 1000.
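As a worked example of the formulas above: with the default zfs_multihost_interval of 1000 ms and a pool with 20 leaf vdevs, a multihost write is issued on average every 1000 / 20 = 50 ms, and an importing host waits at least 1000 ms * zfs_multihost_import_intervals (10 by default) = 10 seconds for the activity check.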

+
+

+

zfs_multihost_import_intervals (uint)

+
Used to control the duration of the activity test on import. Smaller values of zfs_multihost_import_intervals will reduce the import time but increase the risk of failing to detect an active pool. The total activity check time is never allowed to drop below one second. A value of 0 is ignored and treated as if it were set to 1.

Default value: 10.

+
+

+

zfs_multihost_fail_intervals (uint)

+
Controls the behavior of the pool when multihost write + failures are detected. +

When zfs_multihost_fail_intervals = 0 then multihost write + failures are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the pool + or offlining a device.

+

When zfs_multihost_fail_intervals > 0 then sequential + multihost write failures will cause the pool to be suspended. This occurs + when zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds have passed since the last successful multihost write. This + guarantees the activity test will see multihost writes if the pool is + imported.

+

Default value: 5.
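With the defaults (zfs_multihost_fail_intervals = 5, zfs_multihost_interval = 1000 ms), this means the pool is suspended after 5 * 1000 ms = 5 seconds without a successful multihost write.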

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O. This results in scrubs not actually + scrubbing data and simply doing a metadata crawl of the pool instead. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set to disable block prefetching for scrubs. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flush operations on disks when writing. + Beware, this may cause corruption if disks re-order writes. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_dmu_offset_next_sync (int)

+
Enable forcing txg sync to find holes. When enabled + forces ZFS to act like prior versions when SEEK_HOLE or SEEK_DATA flags are + used, which when a dnode is dirty causes txg's to be synced so that this data + can be found. +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched during a + pool traversal (eg: zfs send or other data crawling operations) +

Default value: 52,428,800.

+
+

+

zfs_per_txg_dirty_frees_percent (ulong)

+
Tunable to control percentage of dirtied blocks from + frees in one TXG. After this threshold is crossed, additional dirty blocks + from frees wait until the next TXG. A value of zero will disable this + throttle. +

Default value: 30. A value of 0 disables this throttle.

+
+

+

+

+

zfs_prefetch_disable (int)

+
This tunable disables predictive prefetch. Note that it + leaves "prescient" prefetch (e.g. prefetch for zfs send) intact. + Unlike predictive prefetch, prescient prefetch never issues i/os that end up + not being needed, so it can't hurt performance. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historical statistics for the last N reads will be + available in /proc/spl/kstat/zfs/<pool>/reads +

Default value: 0 (no data is kept).

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_resilver_delay (int)

+
Number of ticks to delay prior to issuing a resilver I/O + operation when a non-resilver or non-scrub I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 2.

+
+

+

zfs_resilver_min_time_ms (int)

+
Resilvers are processed by the sync thread. While + resilvering it will spend at least this much time working on a resilver + between txg flushes. +

Default value: 3,000.

+
+

+

zfs_scan_ignore_errors (int)

+
If set to a nonzero value, remove the DTL (dirty time + list) upon completion of a pool scan (scrub) even if there were unrepairable + errors. It is intended to be used during pool repair or recovery to stop + resilvering when the pool is next imported. +

Default value: 0.

+
+

+

zfs_scan_idle (int)

+
Idle window in clock ticks. During a scrub or a resilver, + if a non-scrub or non-resilver I/O operation has occurred during this window, + the next scrub or resilver operation is delayed by, respectively + zfs_scrub_delay or zfs_resilver_delay ticks. +

Default value: 50.

+
+

+

zfs_scan_min_time_ms (int)

+
Scrubs are processed by the sync thread. While scrubbing + it will spend at least this much time working on a scrub between txg flushes. +

Default value: 1,000.

+
+

+

zfs_scrub_delay (int)

+
Number of ticks to delay prior to issuing a scrub I/O + operation when a non-scrub or non-resilver I/O operation has occurred within + the past zfs_scan_idle ticks. +

Default value: 4.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum + errors when sending data) +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_send_queue_length (int)

+
The maximum number of bytes allowed in the zfs + send queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_recv_queue_length (int)

+
+

The maximum number of bytes allowed in the zfs receive + queue. This value must be at least twice the maximum block size in use.

+

Default value: 16,777,216.

+
+

+

zfs_sync_pass_deferred_free (int)

+
Flushing of data to disk is done in passes. Defer frees + starting in this pass +

Default value: 2.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Don't compress starting in this pass +

Default value: 5.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new block pointers starting in this pass +

Default value: 2.

+
+

+

zfs_top_maxinflight (int)

+
Max concurrent I/Os per top-level vdev (mirrors or raidz + arrays) allowed during scrub or resilver operations. +

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historical statistics for the last N txgs will be + available in /proc/spl/kstat/zfs/<pool>/txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Flush dirty data to disk at least every N seconds + (maximum txg duration) +

Default value: 5.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to

Default value: 16 (effectively 65536).

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default 64k). +

Default value: 16384.

+
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled as it has been found to not be + helpful for performance and in some cases harmful.

+

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member + when an I/O immediately follows its predecessor on rotational vdevs for the + purpose of making decisions based on load. +

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member + when an I/O lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. I/Os within this that are not + immediately following the previous I/O are incremented by half. +

Default value: 5.

+
+

+

zfs_vdev_mirror_rotating_seek_offset (int)

+
The maximum distance for the last queued I/O in which the + balancing algorithm considers an I/O to have locality. See the section + "ZFS I/O SCHEDULER". +

Default value: 1048576.

+
+

+

zfs_vdev_mirror_non_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/Os do not immediately follow one another. +

Default value: 0.

+
+

+

zfs_vdev_mirror_non_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member + when an I/O lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. I/Os within this that are not + immediately following the previous I/O are incremented by half. +

Default value: 1.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O operations if the gap on-disk between + them is within this threshold. +

Default value: 32,768.

+
+

+

zfs_vdev_scheduler (charp)

+
Set the Linux I/O scheduler on whole disk vdevs to this + scheduler. Valid options are noop, cfq, bfq & deadline +

Default value: noop.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_vdev_raidz_impl (string)

+
Parameter for selecting raidz parity implementation to + use. +

Options marked (always) below may be selected on module load as + they are supported on all systems. The remaining options may only be set + after the module is loaded, as they are available only if the + implementations are compiled in and supported on the running system.

+

Once the module is loaded, the content of + /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options + with the currently selected one enclosed in []. Possible options are: +
+ fastest - (always) implementation selected using built-in benchmark +
+ original - (always) original raidz implementation +
+ scalar - (always) scalar raidz implementation +
+ sse2 - implementation using SSE2 instruction set (64bit x86 only) +
+ ssse3 - implementation using SSSE3 instruction set (64bit x86 only) +
+ avx2 - implementation using AVX2 instruction set (64bit x86 only) +
+ avx512f - implementation using AVX512F instruction set (64bit x86 only) +
+ avx512bw - implementation using AVX512F & AVX512BW instruction sets + (64bit x86 only) +
+ aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) +
+ aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 + bit ARMv8 only)

+

Default value: fastest.
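A short sketch of inspecting and selecting an implementation via the /sys path named above (selecting avx2 is only valid where that option is listed as available):

	cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl    # current selection shown in []
	echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl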

+
+

+

zfs_zevent_cols (int)

+
When zevents are logged to the console use this as the + word wrap width. +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length. A value of 0 will result in a + calculated value which increases with the number of CPUs in the system + (minimum 64 events). Events in the queue can be viewed with the zpool + events command. +

Default value: 0.

+
+

+

zil_replay_disable (int)

+
Disable intent logging replay. Can be disabled for + recovery from corrupted ZIL +

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_bulk (ulong)

+
Limit SLOG write size per commit executed with + synchronous priority. Any writes above that will be executed with lower + (asynchronous) priority to limit potential SLOG device abuse by single active + ZIL writer. +

Default value: 786,432.

+
+

+

zio_delay_max (int)

+
A zevent will be logged if a ZIO operation takes more + than N milliseconds to complete. Note that this is only a logging facility, + not a timeout on operations. +

Default value: 30,000.

+
+

+

zio_dva_throttle_enabled (int)

+
Throttle block allocations in the ZIO pipeline. This + allows for dynamic allocation distribution when devices are imbalanced. When + enabled, the maximum number of pending allocations per top-level vdev is + limited by zfs_vdev_queue_depth_pct. +

Default value: 1.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will + run a worker thread for IO. These workers are responsible for IO work such as + compression and checksum calculations. Fractional number of CPUs will be + rounded down. +

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes. This may slightly + improve startup time on systems with a very large number of zvols. +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol block devices +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Discard (aka TRIM) operations done on zvols will be done + in batches of this many blocks, where block size is determined by the + volblocksize property of a zvol. +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+

zvol_request_sync (uint)

+
When processing I/O requests for a zvol submit them + synchronously. This effectively limits the queue depth to 1 for each I/O + submitter. When set to 0 requests are handled asynchronously by a thread pool. + The number of requests which can be handled concurrently is controlled by + zvol_threads. +

Default value: 0.

+
+

+

zvol_threads (uint)

+
Max number of threads which can handle zvol I/O requests + concurrently. +

Default value: 32.

+
+

+

zvol_volmode (uint)

+
Defines zvol block devices behaviour when volmode + is set to default. Valid values are 1 (full), 2 (dev) and + 3 (none). +

Default value: 1.

+
+

+

zfs_qat_disable (int)

+
This tunable disables qat hardware acceleration for gzip + compression. It is available only if qat acceleration is compiled in and the + qat driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but will cause reads and writes to have higher + latency and lower throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
+        |              o---------| <-- zfs_vdev_async_write_max_active +
+   ^    |             /^         | +
+   |    |            / |         | +
+ active |           /  |         | +
+  I/O   |          /   |         | +
+ count  |         /    |         | +
+        |        /     |         | +
+        |-------o      |         | <-- zfs_vdev_async_write_min_active +
+       0|_______^______|_________| +
+        0%      |      |       100% of zfs_dirty_data_max +
+                |      | +
+                |      `-- zfs_vdev_async_write_active_max_dirty_percent +
+                `--------- zfs_vdev_async_write_active_min_dirty_percent +
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+

+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.
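As a worked example, assuming the default zfs_delay_scale of 500,000 (nanoseconds): at the midpoint of the curve the outstanding dirty data lies halfway between the delay threshold and zfs_dirty_data_max, so (dirty - min) equals (max - dirty) and min_time = zfs_delay_scale = 500,000 ns = 500us per transaction, i.e. roughly 2000 delayed operations per second, well below the 100 millisecond cap.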

+

+
delay
+
+ 10ms +-------------------------------------------------------------*+ +
+ | *| +
+ 9ms + *+ +
+ | *| +
+ 8ms + *+ +
+ | * | +
+ 7ms + * + +
+ | * | +
+ 6ms + * + +
+ | * | +
+ 5ms + * + +
+ | * | +
+ 4ms + * + +
+ | * | +
+ 3ms + * + +
+ | * | +
+ 2ms + (midpoint) * + +
+ | | ** | +
+ 1ms + v *** + +
+ | zfs_delay_scale ----------> ******** | +
+ 0 +-------------------------------------*********----------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
+100ms +-------------------------------------------------------------++
+
+ + + +
+ | | +
+ + *+ +
+ 10ms + *+ +
+ + ** + +
+ | (midpoint) ** | +
+ + | ** + +
+ 1ms + v **** + +
+ + zfs_delay_scale ----------> ***** + +
+ | **** | +
+ + **** + +
+100us + ** + +
+ + * + +
+ | * | +
+ + * + +
+ 10us + * + +
+ + + +
+ | | +
+ + + +
+ +--------------------------------------------------------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
October 28, 2017
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/5/zpool-features.5.html b/man/v0.7/5/zpool-features.5.html new file mode 100644 index 000000000..4e0e5b42a --- /dev/null +++ b/man/v0.7/5/zpool-features.5.html @@ -0,0 +1,771 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)File Formats ManualZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.
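For example (tank is a placeholder pool name), either of the following enables features:
# zpool upgrade tank                           # enable all supported features
# zpool set feature@hole_birth=enabled tank    # enable a single feature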

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a guid of the form + com.example:feature_name. The reverse DNS name ensures that the + feature's guid is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their guids. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its guid which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
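For example, assuming a pool named tank, the feature states can be listed with:
# zpool get all tank | grep feature@
# zpool get feature@async_destroy tank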

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).
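For example (pool name is a placeholder):
# zpool import -o readonly=on tank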

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_guid will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

async_destroy

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:async_destroy
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.
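For example (pool name is a placeholder), the remaining space can be watched while a background destroy is running:
# zpool get freeing tank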

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:empty_bpobj
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:filesystem_limits
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:lz4_compress
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the + administrator can turn on lz4 compression on any dataset on the pool + using the zfs(8) command. Please note that doing so will immediately + activate the lz4_compress feature on the underlying pool. Also, all + newly written metadata will be compressed with the lz4 algorithm. + Since this feature is not read-only compatible, this operation will render + the pool unimportable on systems without support for the lz4_compress + feature.

+

Booting off of lz4-compressed root pools is supported.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_histogram
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows ZFS to maintain more information about how + free space is organized within the pool. If this feature is enabled, + ZFS will set this feature to active when a new space map object is + created or an existing space map is upgraded to the new format. Once the + feature is active, it will remain in that state until the pool is + destroyed.

+

+
+

+

multi_vdev_crash_dump

+
+ + + + + + + + + + +
GUID com.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLE no
DEPENDENCIES none
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or + raidz configuration.

+

When the multi_vdev_crash_dump feature is set to + enabled, the administrator can use the dumpadm(1M) command to + configure a dump device on a pool comprised of multiple vdevs.

+

Under Linux this feature is registered for compatibility but not + used. New pools created under Linux will have the feature enabled but + will never transition to active. This functionality is not + required in order to support crash dumps under Linux. Existing pools where + this feature is active can be imported.

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:extensible_dataset
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:bookmarks
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.

+

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:enabled_txg
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:hole_birth
READ-ONLY COMPATIBLEno
DEPENDENCIESenabled_txg
+

This feature improves performance of incremental sends ("zfs + send -i") and receives for objects with many holes. The most common + case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:embedded_data
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible + blocks are stored in the block "pointer" itself (a misnomer in + this case, as it contains the compressed data, rather than a pointer to its + location on disk). Thus the space of the block (one sector, typically 512 + bytes or 4KB) is saved, and no additional i/o is needed to read and write + the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUIDorg.open-zfs:large_block
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_block feature allows the record size on a dataset + to be set larger than 128KB.

+

This feature becomes active once a recordsize + property has been set larger than 128KB, and will return to being + enabled once all filesystems that have ever had their recordsize + larger than 128KB are destroyed.
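For example, once the feature is enabled on the pool, a larger record size can be requested per dataset (names are placeholders):
# zfs set recordsize=1M tank/backups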

+
+

+

large_dnode

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:large_dnode
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_dnode feature allows the size of dnodes in a + dataset to be set larger than 512B.

+

This feature becomes active once a dataset contains an + object with a dnode larger than 512B, which occurs as a result of setting + the dnodesize dataset property to a value other than legacy. + The feature will return to being enabled once all filesystems that + have ever contained a dnode larger than 512B are destroyed. Large dnodes + allow more data to be stored in the bonus buffer, thus potentially improving + performance by avoiding the use of spill blocks.
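For example (dataset name is a placeholder):
# zfs set dnodesize=auto tank/home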

+
+

sha512

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:sha512
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic + of SHA-512 provides an approximate 50% performance boost over SHA-256 on + 64-bit hardware and is thus a good minimum-change replacement candidate for + systems where hash performance is important, but these systems cannot for + whatever reason utilize the faster skein and edonr + algorithms.

+

When the sha512 feature is set to enabled, the + administrator can turn on the sha512 checksum on any dataset using + zfs set checksum=sha512. This feature becomes + active once a checksum property has been set to sha512, + and will return to being enabled once all filesystems that have ever + had their checksum set to sha512 are destroyed.

+

Booting off of pools utilizing SHA-512/256 is supported.

+

+
+

+

skein

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:skein
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm that + was a finalist in the NIST SHA-3 competition. It provides a very high + security margin and high performance on 64-bit hardware (80% faster than + SHA-256). This implementation also utilizes the new salted checksumming + functionality in ZFS, which means that the checksum is pre-seeded with a + secret 256-bit random key (stored on the pool) before being fed the data + block to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the skein feature is set to enabled, the + administrator can turn on the skein checksum on any dataset using + zfs set checksum=skein. This feature becomes + active once a checksum property has been set to skein, + and will return to being enabled once all filesystems that have ever + had their checksum set to skein are destroyed.

+

Booting off of pools using skein is supported.

+

+
+

+

edonr

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:edonr
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Edon-R hash algorithm for + checksum, including for nopwrite (if compression is also enabled, an + overwrite of a block whose checksum matches the data being written will be + ignored). In an abundance of caution, Edon-R can not be used with dedup + (without verification).

+

Edon-R is a very high-performance hash algorithm that was part of + the NIST SHA-3 competition. It provides extremely high hash performance + (over 350% faster than SHA-256), but was not selected because of its + unsuitability as a general purpose secure hash algorithm. This + implementation utilizes the new salted checksumming functionality in ZFS, + which means that the checksum is pre-seeded with a secret 256-bit random key + (stored on the pool) before being fed the data block to be checksummed. Thus + the produced checksums are unique to a given pool.

+

When the edonr feature is set to enabled, the + administrator can turn on the edonr checksum on any dataset using + zfs set checksum=edonr. This feature becomes + active once a checksum property has been set to edonr, + and will return to being enabled once all filesystems that have ever + had their checksum set to edonr are destroyed.

+

Booting off of pools using edonr is supported.

+

+
+

+

userobj_accounting

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account for object usage + by user and group.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled. Each filesystem will be upgraded + automatically when remounted, or when new files are created under that + filesystem. The upgrade can also be started manually on filesystems by + running `zfs set version=current <pool/fs>`. The upgrade process runs + in the background and may take a while to complete for filesystems + containing a large number of files.

+

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
June 8, 2018
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/fsck.zfs.8.html b/man/v0.7/8/fsck.zfs.8.html new file mode 100644 index 000000000..7b33f31fb --- /dev/null +++ b/man/v0.7/8/fsck.zfs.8.html @@ -0,0 +1,216 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
fsck.zfs(8)System Administration Commandsfsck.zfs(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
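In practice a check therefore looks like the following (pool name is a placeholder):
# zpool scrub tank
# zpool status -v tank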

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool(8)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/index.html b/man/v0.7/8/index.html new file mode 100644 index 000000000..9684c7e0b --- /dev/null +++ b/man/v0.7/8/index.html @@ -0,0 +1,163 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

System Administration Commands (8)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/mount.zfs.8.html b/man/v0.7/8/mount.zfs.8.html new file mode 100644 index 000000000..c1cf77391 --- /dev/null +++ b/man/v0.7/8/mount.zfs.8.html @@ -0,0 +1,265 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
mount.zfs(8)System Administration Commandsmount.zfs(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
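A typical invocation, with placeholder names, looks like:
# mount.zfs -v tank/home /mnt/home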

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under + that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attribute-based extended attributes and, + if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/vdev_id.8.html b/man/v0.7/8/vdev_id.8.html new file mode 100644 index 000000000..d7927eefe --- /dev/null +++ b/man/v0.7/8/vdev_id.8.html @@ -0,0 +1,235 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
vdev_id(8)System Manager's Manualvdev_id(8)
+
+
+

+

vdev_id - generate user-friendly names for JBOD disks

+
+
+

+
vdev_id <-d dev> [-c config_file] [-g sas_direct|sas_switch]
+
+ [-m] [-p phys_per_port] +vdev_id -h
+
+
+

+

The vdev_id command is a udev helper which parses the file + /etc/zfs/vdev_id.conf(5) to map a physical path in a storage topology + to a channel name. The channel name is combined with a disk enclosure slot + number to create an alias that reflects the physical location of the drive. + This is particularly helpful when it comes to tasks like replacing failed + drives. Slot numbers may also be re-mapped in case the default numbering is + unsatisfactory. The drive aliases will be created as symbolic links in + /dev/disk/by-vdev.
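For example (device name is a placeholder; output depends on the local topology and configuration file), the mapping for a single disk can be tested and the resulting aliases inspected with:
# vdev_id -d sda
# ls -l /dev/disk/by-vdev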

+

The currently supported topologies are sas_direct and sas_switch. + A multipath mode is supported in which dm-mpath devices are handled by + examining the first-listed running component disk as reported by the + multipath(8) command. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing + udev links in the /dev hierarchy using the alias configuration file + keyword. See the vdev_id.conf(5) man page for details.

+

+
+
+

+
+
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+
This is the only mandatory argument. Specifies the name of a device in + /dev, e.g. "sda".
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+
+
+
Specifies that vdev_id(8) will handle only dm-multipath devices. If + set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zdb.8.html b/man/v0.7/8/zdb.8.html new file mode 100644 index 000000000..a9b05a798 --- /dev/null +++ b/man/v0.7/8/zdb.8.html @@ -0,0 +1,568 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's Manual (smm)ZDB(8)
+
+
+

+

zdbdisplay + zpool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhiLMPsvX] [-e + [-V] [-p + path ...]] [-I + inflight I/Os] [-o + var=value]... + [-t txg] + [-U cache] + [-x dumpdir] + [poolname [object ...]]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path ...]] [-U + cache] dataset + [object ...]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:...:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPX] + [-e [-V] + [-p path ...]] + [-t txg] + [-U cache] + poolname [vdev + [metaslab ...]]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path ...]] + [-U cache] + poolname + vdev:offset:size[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path ...]] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about + a ZFS pool useful for debugging and performs some amount of consistency + checking. It is not a general purpose tool and options (and facilities) + may change. This is neither a fsck(1M) nor an + fsdb(1M) utility.

+

The output of this command in general reflects the on-disk + structure of a ZFS pool, and is inherently unstable. The precise output of + most invocations is not documented, a knowledge of ZFS internals is + assumed.

+

If the dataset argument does not + contain any "/" or "@" + characters, it is interpreted as a pool name. The root dataset can be + specified as pool/ (pool name followed by a + slash).

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs are specified, display information about those + specific objects only.

+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:...:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+ device
+
Read the vdev labels from the specified device. + zdb -l will return 0 if + valid label was found, 1 if error occurred, and 2 if no valid labels were + found. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats.
+
+ device
+
Display every configuration, unique or not. +

If the -q option is also specified, + don't print the labels.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak tracing and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:size[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the size of the block to read) and, + optionally, flags (a set of flags, described + below).

+

+
+
+ offset
+
Print block pointer
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path ...]
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value ...
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 + rather than 1M.
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ ...
+
+
+
Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ ...
+
+
+
Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ ...
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
April 14, 2017Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zed.8.html b/man/v0.7/8/zed.8.html new file mode 100644 index 000000000..25431a992 --- /dev/null +++ b/man/v0.7/8/zed.8.html @@ -0,0 +1,377 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Administration CommandsZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-L] [-M] [-p pidfile] [-P + path] [-s statefile] [-v] [-V] + [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Custom $PATH for zedlets to use. Normally zedlets run in a locked-down + environment, with hardcoded paths to the ZFS commands ($ZFS, $ZPOOL, $ZED, + ...), and a hardcoded $PATH. This is done for security reasons. However, + the ZFS test suite uses a custom PATH for its ZFS commands, and passes it + to zed with -P. In short, -P is only to be used by the ZFS test suite; + never use it in production!
+
+
Write the daemon's state to the specified file.
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.
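For example (paths follow a typical packaging layout and may differ on a given system), a ZEDLET installed as scrub_finish-notify.sh matches the scrub_finish subclass and can be enabled by symlinking it into the enabled-zedlets directory:
# ln -s /usr/libexec/zfs/zed.d/scrub_finish-notify.sh /etc/zfs/zed.d/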

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+
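For example, a zevent nvpair named vdev_path is presented to a ZEDLET as the environment variable ZEVENT_VDEV_PATH.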

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@libexecdir@/zfs/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

There is no maximum timeout for ZEDLET execution. Consequently, a + misbehaving ZEDLET can delay the processing of subsequent zevents.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 1, 2013ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zfs.8.html b/man/v0.7/8/zfs.8.html new file mode 100644 index 000000000..8f22dc4f0 --- /dev/null +++ b/man/v0.7/8/zfs.8.html @@ -0,0 +1,3543 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's Manual (smm)ZFS(8)
+
+
+

+

zfsconfigures + ZFS file systems

+
+
+

+ + + + + +
zfs-?
+
+ + + + + +
zfscreate [-p] + [-o + property=value]... + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]... + -V size + volume
+
+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+ + + + + +
zfssnapshot [-r] + [-o property=value]... + filesystem@snapname|volume@snapname...
+
+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+ + + + + +
zfsclone [-p] + [-o + property=value]... + snapshot + filesystem|volume
+
+ + + + + +
zfspromote + clone-filesystem
+
+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename [-fp] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + filesystem|volume|snapshot|bookmark...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a | filesystem
+
+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Ov] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-f] + -a | + filesystem|mountpoint
+
+ + + + + +
zfsshare -a | + filesystem
+
+ + + + + +
zfsunshare -a | + filesystem|mountpoint
+
+ + + + + +
zfsbookmark snapshot + bookmark
+
+ + + + + +
zfssend [-DLPRcenpv] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-Lce] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfsreceive [-Fnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-Fnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsallow + filesystem|volume
+
+ + + + + +
zfsallow [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + -@setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-r] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+ + + + + +
zfsdiff [-FHt] + snapshot + snapshot|filesystem
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
+
pool/{filesystem,volume,snapshot}
+
+

where the maximum length of a dataset name is + MAXNAMELEN (256 bytes).

+

A dataset can be one of the following:

+
+
+
file system
A ZFS dataset of type filesystem can be mounted within the standard system namespace and behaves like other file systems. While ZFS file systems are designed to be POSIX compliant, known issues exist that prevent compliance in some cases. Applications that depend on standards conformance might fail due to non-standard behavior when checking file system free space.
+
+
volume
A logical volume exported as a raw or block device. This type of dataset should only be used under special circumstances. File systems are typically used in most environments.
+
+
snapshot
A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.
+
+
bookmark
Much like a snapshot, but without the hold on on-disk data. It can be used as the source of a send (but not for a receive). It is specified as filesystem#name or volume#name.
+
+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
Snapshots
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be + cloned or rolled back, visibility is determined by the + snapdev property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the snapdir property.

+
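For example, with a hypothetical file system tank/home mounted at /tank/home (names are illustrative), an earlier snapshot can be browsed read-only through the .zfs directory:

zfs snapshot tank/home@monday          # create a snapshot
ls /tank/home/.zfs/snapshot/monday     # browse its contents read-only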
Bookmarks
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the + filesystem in any way. From a storage standpoint a bookmark just provides a + way to reference when a snapshot was created as a distinct object. Bookmarks + are initially tied to a snapshot, not the filesystem or volume, and they + will survive if the snapshot itself is destroyed. Since they are very light + weight there's little incentive to destroy them.

+
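A minimal sketch with hypothetical pool and dataset names, showing a bookmark standing in for a destroyed snapshot as the source of an incremental send:

zfs bookmark tank/data@before tank/data#before              # create the bookmark
zfs destroy tank/data@before                                # the bookmark survives
zfs send -i tank/data#before tank/data@after | zfs receive pool2/data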
Clones
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is + cloned, it creates an implicit dependency between the parent and child. Even + though the clone is created somewhere else in the dataset hierarchy, the + original snapshot cannot be destroyed as long as a clone exists. The + origin property exposes this dependency, and the + destroy command lists any such dependencies, if they + exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.

+
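For example, with hypothetical names, the dependency described above and its reversal with promote look like this:

zfs snapshot tank/ws@base          # snapshot to clone from
zfs clone tank/ws@base tank/ws2    # tank/ws2 now depends on tank/ws@base
zfs promote tank/ws2               # tank/ws becomes a clone of tank/ws2
zfs destroy tank/ws                # the original file system can now be destroyed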
Mount Points
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.

+
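For example, using the hypothetical pool/home dataset from the paragraph above:

zfs set mountpoint=/export/stuff pool/home   # pool/home/user now inherits /export/stuff/user
zfs get -r mountpoint pool/home              # verify the inherited mount points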

A file system mountpoint property of + none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional + tools (mount, umount, + /etc/fstab). If a file system's mount point is set + to legacy, ZFS makes no attempt to manage the file system, + and the administrator is responsible for mounting and unmounting the file + system. Because pools must be imported before a legacy mount can succeed, + administrators should ensure that legacy mounts are only attempted after the + zpool import process finishes at boot time. For example, on machines using + systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for details.

+
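A sketch of a matching /etc/fstab entry for a hypothetical dataset tank/legacy whose mountpoint property is set to legacy:

# /etc/fstab (illustrative)
tank/legacy  /mnt/legacy  zfs  defaults,x-systemd.requires=zfs-import.target  0  0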
Deduplication
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the dedup property enabled, duplicate data blocks are + removed synchronously. The result is that only unique data is stored and + common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using compression as a less resource-intensive alternative to deduplication.

+
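For example, rather than enabling dedup, compression can be enabled per dataset (hypothetical names; lz4 requires the lz4_compress pool feature):

zfs set compression=lz4 tank/data    # lightweight alternative to dedup for many workloads
zfs get compressratio tank/data      # observe the achieved ratio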
PROPERTIES
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+
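For example, the following commands (with a hypothetical dataset) all set the same quota:

zfs set quota=1536M  tank/home/alice
zfs set quota=1.5g   tank/home/alice
zfs set quota=1.50GB tank/home/alice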

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its shortened column + name, avail.

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive -s, this opaque token can be provided to + zfs send -t to resume and complete the zfs + receive.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, + volume, or snapshot.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section) is space that is + referenced exclusively by this snapshot. If this snapshot is destroyed, + the amount of used space will be freed. Space that is + shared by multiple snapshots isn't accounted for in this metric. When a + snapshot is destroyed, space that was previously shared with this + snapshot can become unique to snapshots adjacent to it, thus changing + the used space of those snapshots. The used space of the latest snapshot + can also be affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not + take into account pending changes. Pending changes are generally + accounted for within a few seconds. Committing a change to a disk using + fsync(2) or O_SYNC does not + necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
userused@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du and + ls -s. See the + zfs userspace subcommand + for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
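For example, with hypothetical user and dataset names:

zfs get userused@alice tank/home   # space charged to user alice
zfs userspace tank/home            # per-user space and quota summary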
+
userobjused@user
+
The userobjused property is similar to + userused but instead it counts the number of objects + consumed by a user. This property counts all objects allocated on behalf + of the user, it may differ from the results of system tools such as + df -i. +

When the property xattr=on is set on a file system, additional objects will be created per-file to store extended attributes. These additional objects are reflected in the userobjused value and are counted against the user's userobjquota. When a file system is configured to use xattr=sa no additional internal objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
groupused@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
groupobjused@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
written@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which for + clones may be a snapshot in the origin's filesystem (or the origin of + the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the write_acl and write_owner permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + posix ACLs.

+
+
acltype=off|noacl|posixacl
+
Controls whether ACLs are enabled and if so what type of ACL to use. +
+
+
default, when a file system has the acltype property + set to off then ACLs are disabled.
+
+
an alias for off
+
+
indicates posix ACLs should be used. Posix ACLs are specific to Linux + and are not functional on other platforms. Posix ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+

To obtain the best performance when setting + posixacl users are strongly encouraged to set the + xattr=sa property. This will result in the posix ACL + being stored more efficiently on disk. But as a consequence of this all + new extended attributes will only be accessible from OpenZFS + implementations which support the xattr=sa property. + See the xattr property for more details.

+
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The values on and off are equivalent to the atime and noatime mount options. The default value is on. See also relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and edonr checksum algorithms require enabling the appropriate features on the pool. These algorithms are not supported by GRUB and should not be set on the bootfs filesystem when using GRUB to boot the system. Please see zpool-features(5) for more information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
compression=on|off|gzip|gzip-N|lz4|lzjb|zle
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the + current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(5) for details on ZFS feature flags and the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+
+
context=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
fscontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
defcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
rootcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
copies=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing top-level vdev. Do NOT create, for example, a two-disk striped pool and set copies=2 on some datasets thinking you have set up redundancy for them. When a disk fails you will not be able to import the pool and will have lost all of your data.

+
+
devices=on|off
+
Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
+
dnodesize=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy requires the + large_dnode pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the workload makes heavy + use of extended attributes. This may be applicable to SELinux-enabled + systems, Lustre servers, and Samba servers, for example. Literal values + are supported for cases where the optimal size is known in advance and + for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode feature, or if you + need to import this pool on a system that doesn't support the + large_dnode feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
exec=on|off
+
Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
+
filesystem_limit=count|none
+
Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(5)).
+
mountpoint=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section for more + information on how this property is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+
nbmand=on|off
+
Controls whether the file system should be mounted with + nbmand (Non Blocking mandatory locks). This is used for + SMB clients. Changes to this property only take effect when the file + system is umounted and remounted. See mount(8) for more + information on nbmand mounts. This property is not used + on Linux.
+
overlay=off|on
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux file + systems. For consistency with OpenZFS on other platforms overlay mounts + are off by default. Set to on to + enable overlay mounts.
+
primarycache=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.

+
+
snapshot_limit=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space consumption is identified by the userused@user property.

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace subcommand + for more information.

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The groupobjquota@group is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its + shortened column name, + .

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or + equal to 512 and less than or equal to 128 Kbytes. If the + large_blocks feature is enabled on the pool, the size + may be up to 1 Mbyte. See zpool-features(5) for + details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its + shortened column name, + .

+
+
redundant_metadata=all|most
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 100 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

This property can also be referred to by its + shortened column name, + .

+
+
relatime=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
secondarycache=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
setuid=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE.

Because SMB shares requires a resource name, a unique resource + name is constructed from the dataset name. The constructed name is a + copy of the dataset name except that the characters in the dataset name, + which would be invalid in the resource name, are replaced with + underscore (_) characters. Linux does not currently support additional + options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", ie. read and write permissions) and no guest access + (which means Samba must be able to authenticate a real user, system + passwd/shadow, LDAP or smbpasswd based) by default. This means that any + additional access control (disallow specific user specific access etc) + must be done on the underlying file system.

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are to be used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the default options:

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+
+
logbias=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
snapdev=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/<pool> are hidden or visible. The default value is hidden.
+
snapdir=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section. The default value + is hidden.
+
sync=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
version=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioning") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the reservation after the volume has been created. A + "sparse volume" is a volume where the reservation is less then + the volume size. Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the reservation.

+
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can be snapshotted, cloned, replicated, and so on, which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported either directory based or + system attribute based. +

The default value of on enables directory + based extended attributes. This style of extended attribute imposes no + practical limit on either the size or number of attributes which can be + set on a file. Although under Linux the getxattr(2) + and setxattr(2) system calls limit the maximum size to + 64K. This is the most compatible style of extended attribute and is + supported by all OpenZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the xattr=sa + feature.

+

The use of system attribute based xattrs is strongly + encouraged for users of SELinux or posix ACLs. Both of these features + heavily rely of extended attributes and benefit significantly from the + reduced access time.

+

The values on and off are equivalent to the xattr and noxattr mount options.

+
+
zoned=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on Linux. The default value is + off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
Temporary Mount Point Properties
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
    PROPERTY                MOUNT OPTION
+    atime                   atime/noatime
+    canmount                auto/noauto
+    devices                 dev/nodev
+    exec                    exec/noexec
+    readonly                ro/rw
+    relatime                relatime/norelatime
+    setuid                  suid/nosuid
+    xattr                   xattr/noxattr
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.

+
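A short sketch with a hypothetical file system:

zfs mount -o ro tank/archive     # temporary read-only mount
zfs get readonly tank/archive    # the SOURCE column reports "temporary"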
User Properties
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the chance + that two independently-developed packages use the same property name for + different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.

+
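For example, using a hypothetical reversed-DNS module name and dataset:

zfs set com.example:backup-policy=weekly tank/data   # annotate the dataset
zfs get com.example:backup-policy tank/data          # user properties are inherited by children
zfs inherit com.example:backup-policy tank/data      # clear the local value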
ZFS Volumes as Swap
+

+

ZFS volumes may be used as swap devices. After creating the volume + with the zfs create + -V command set up and enable the swap area using the + mkswap(8) and swapon(8) commands. Do not + swap to a file on a ZFS file system. A ZFS swap file configuration is not + supported.

+
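A minimal sketch, with an illustrative 4G size and pool name:

zfs create -V 4G tank/swap       # create the volume
mkswap /dev/zvol/tank/swap       # initialize it as swap
swapon /dev/zvol/tank/swap       # enable it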
+
SUBCOMMANDS
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs create + [-p] [-o + property=value]... + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent. +
+
-o property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
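For example (hypothetical names), creating a nested file system with properties set at creation time:

zfs create -p -o compression=on -o quota=10G tank/projects/web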
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]... + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically rounded up to the + nearest 128 Kbytes to ensure that the volume has an integral number of + blocks regardless of blocksize.

+
+
-b blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
-o property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + volsize in the + Native Properties section + for more information about sparse volumes.
+
+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Force an unmount of any file systems using the + unmount -f command. + This option has no effect on non-file systems or unmounted file + systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
The given snapshots are destroyed immediately if and only if the + zfs destroy command + without the -d option would have destroyed it. + Such immediate destruction would occur, for example, if the snapshot had + no clones and the user-initiated reference count were zero. +

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + @) should be specified when using a range or + comma-separated list to identify multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Defer snapshot deletion.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
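For example, previewing and then destroying an inclusive range of snapshots on a hypothetical file system:

zfs destroy -nv tank/home@monday%wednesday   # dry run: report what would be destroyed
zfs destroy -v  tank/home@monday%wednesday   # destroy the range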
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
zfs snapshot + [-r] [-o + property=value]... + filesystem@snapname|volume@snapname...
+
Creates snapshots with the given names. All previous modifications by + successful system calls to the file system are part of the snapshots. + Snapshots are taken atomically, so that all snapshots correspond to the + same moment in time. See the Snapshots + section for details. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
zfs rollback + [-Rfr] snapshot
+
Roll back the given dataset to a previous snapshot. When a dataset is + rolled back, all data that has changed since the snapshot is discarded, + and the dataset reverts to the state at the time of the snapshot. By + default, the command refuses to roll back to a snapshot other than the + most recent one. In order to do so, all intermediate snapshots and + bookmarks must be destroyed by specifying the -r + option. +

The -rR options do not recursively + destroy the child snapshots of a recursive snapshot. Only direct + snapshots of the specified filesystem are destroyed by either of these + options. To completely roll back a recursive snapshot, you must rollback + the individual child snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones + of those snapshots.
+
+
Used with the -R option to force an unmount of + any clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
zfs clone + [-p] [-o + property=value]... + snapshot + filesystem|volume
+
Creates a clone of the given snapshot. See the + Clones section for details. The target + dataset can be located anywhere in the ZFS hierarchy, and is created as + the same type as the original. +
+
+ -o property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. If + the target filesystem or volume already exists, the operation + completes successfully.
+
+
+
zfs promote + clone-filesystem
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot. This makes it possible to destroy the file + system that the clone was created from. The clone parent-child dependency + relationship is reversed, so that the origin file system becomes a clone + of the specified file system. +

The snapshot that was cloned, and any snapshots previous to + this snapshot, are now owned by the promoted clone. The space they use + moves from the origin file system to the promoted clone, so enough space + must be available to accommodate these snapshots. No new space is + consumed by this operation, but the space accounting is adjusted. The + promoted clone must not have any conflicting snapshot names of its own. + The rename subcommand can be used to rename any + conflicting snapshots.

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + [-fp] + filesystem|volume + filesystem|volume
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
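For instance (names hypothetical), a snapshot can be renamed by giving only its new short name as the second argument:
# zfs rename pool/home/bob@monday @monday-backup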
+
+
Force unmount any filesystems that need to be unmounted in the + process.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
zfs list + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
Lists the property information for the given datasets in tabular form. If + specified, you can list property information by the absolute pathname or + the relative pathname. By default, all file systems and volumes are + displayed. Snapshots are displayed if the listsnaps + property is on (the default is off). + The following fields are displayed, + name,used,available,referenced,mountpoint. +
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ -S property
+
Same as the -s option, but sorts by property + in descending order.
+
+ -d depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A + depth of 1 will display only + the dataset and its direct children.
+
+ -o property
+
A comma-separated list of properties to display. The property must be: +
    +
  • One of the properties described in the + Native Properties + section
  • +
  • A user property
  • +
  • The value name to display the dataset name
  • +
  • The value space to display space usage properties on file systems and volumes. This is a shortcut for specifying -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume syntax.
  • +
+
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command + line.
+
+ -s property
+
A property for sorting the output by column in ascending order based + on the value of the property. The property must be one of the + properties described in the + Properties section, or the + special value name to sort by the dataset name. + Multiple properties can be specified at one time using multiple + -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior + of zfs list is + preserved.

+
+
+ -t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all. For example, + specifying -t snapshot + displays only snapshots.
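As an illustrative combination of these options (pool name hypothetical), the following lists only the snapshots at or below pool/home, showing name and space used and sorting by space used:
# zfs list -r -t snapshot -o name,used -s used pool/home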
+
+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Sets the property or list of properties to the given value(s) for each dataset. Only some properties can be edited. See the Properties section for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section.
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + filesystem|volume|snapshot|bookmark...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source.  Can either be local, default,
+              temporary, inherited, or none (-).
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections.

+

The special value all can be used to display + all properties that apply to the given dataset's type (filesystem, + volume, snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ -d depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A depth of + 1 will display only the dataset and its direct + children.
+
+ -o field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ -s source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, and none. The default value is all sources.
+
+ -t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See the Properties + section for a listing of default values, and details on which properties + can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
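A short sketch (dataset name hypothetical): if a received compression value was later overridden locally, the received value can be restored with:
# zfs inherit -S compression pool/backup/fs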
+
+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] -a | + filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of the software. zfs + send streams generated from new snapshots of these + file systems cannot be accessed on systems running older versions of the + software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the + zpool upgrade + command.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.
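As a minimal illustration, the following upgrades every file system on all imported pools to the most recent version supported by the installed software:
# zfs upgrade -a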

+
+
+ -V version
+
Upgrade to the specified version. If the + -V flag is not specified, this command + upgrades to the most recent version. This option can only be used to + increase the version number, and only up to the most recent version + supported by this software.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ -S field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ -o field[,field]...
+
Display only the specified fields from the following set: + type, name, + used, quota. The default is to + display all fields.
+
+
Use exact (parsable) numeric output.
+
+ -s field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ -t type[,type]...
+
Print only the specified types from the following set: + all, posixuser, + smbuser, posixgroup, + smbgroup. The default is -t + posixuser,smbuser. The default can + be changed to include group types.
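For example (dataset name hypothetical), the following prints exact byte counts for each user, sorted by space consumed:
# zfs userspace -p -s used pool/home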
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + zfs userspace, except that + the default types to display are -t + posixgroup,smbgroup.
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Ov] [-o + options] -a | + filesystem
+
Mounts ZFS file systems. +
+
+
Perform an overlay mount. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Mount the specified filesystem.
+
+ -o options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + Temporary Mount + Point Properties section for details.
+
+
Report mount progress.
+
+
+
zfs unmount + [-f] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
Forcefully unmount the file system, even if it is currently in + use.
+
+
+
zfs share + -a | filesystem
+
Shares available ZFS file systems. +
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a | + filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
zfs bookmark + snapshot bookmark
+
Creates a bookmark of the given snapshot. Bookmarks mark the point in time + when the snapshot was created, and can be used as the incremental source + for a zfs send command. +

This feature must be enabled to be used. See zpool-features(5) for details on ZFS feature flags and the bookmarks feature.
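A sketch of typical usage (names hypothetical): create a bookmark, destroy the snapshot it refers to, and later use the bookmark as the incremental source of a send:
# zfs bookmark pool/data@snap1 pool/data#snap1
# zfs destroy pool/data@snap1
# zfs send -i pool/data#snap1 pool/data@snap2 | ssh host zfs receive poolB/data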

+
+
zfs send + [-DLPRcenpv] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ -D, --dedup
+
Generate a deduplicated stream. Blocks which would have been sent + multiple times in the send stream will only be sent once. The + receiving system must also support this feature to receive a + deduplicated stream. This flag can be used regardless of the dataset's + dedup property, but performance will be much better + if the filesystem uses a dedup-capable checksum (for example, + sha256).
+
+ -I snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
+ -L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ -P, --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ -R, --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed.
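For example (names hypothetical), an incremental replication stream of pool/home and its descendants can be sent and received with -F so that datasets deleted on the source are also removed on the target:
# zfs send -R -i pool/home@monday pool/home@tuesday | \
  ssh host zfs receive -F poolB/backup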

+
+
+ -e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. See zpool-features(5) for details on ZFS + feature flags and the embedded_data feature.
+
+ -c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ -i snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ -n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ -p, --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature.
+
+ -v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-Lce] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ -L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ -c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ -e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. See zpool-features(5) for details on ZFS + feature flags and the embedded_data feature.
+
+ -i snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
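A sketch of resuming an interrupted transfer (names hypothetical): query the token on the receiving side, then restart the send with it:
# zfs send -t $(ssh host zfs get -H -o value receive_resume_token poolB/received/fs) | \
  ssh host zfs receive -s poolB/received/fs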
+
zfs receive + [-Fnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-Fnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case because, even if origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ -o origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ -o property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked immediately before the + receive. When receiving a stream from zfs + send -R, causes the + property to be inherited by all descendant datasets, as through + zfs inherit + property was run on any descendant datasets that + have this property set on the sending system. +

Any editable property can be set at receive time. Set-once + properties bound to the received data, such as + normalization and + casesensitivity, cannot be set at receive time + even when the datasets are newly created by + zfs receive. + Additionally both settable properties version and + volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by + zfs send + -t token, where the + token is the value of the + receive_resume_token property of the filesystem or + volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(5) for details on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ -x property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions on set-once + and special properties apply equally to + -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume +
+ zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
-e|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ -g group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ -u user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume +
+ zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume +
+ zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, the permission may still be granted by an ancestor. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-r] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
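A combined sketch (names hypothetical): place a recursive hold, list it, and release it again so the snapshots can be destroyed:
# zfs hold -r keep pool/home@monday
# zfs holds -r pool/home@monday
# zfs release -r keep pool/home@monday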
+
+
+
zfs diff + [-FHt] snapshot + snapshot|filesystem
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are: +
+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+
+
+
Display an indication of the type of file, in a manner similar to the -F option of ls(1).
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+
+
Give more parsable tab-separated output, without header lines and + without arrows.
+
+
Display the path's inode change time as the first column of + output.
+
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if + an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+
+
Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
+
# zfs snapshot pool/home/bob@yesterday
+
+
+
Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named yesterday + of pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+
+
Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next command + explicitly enables compression for + pool/home/anne. +
+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+
+
Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if the listsnaps + property is on. The default is off. + See zpool(8) for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob. +
+
# zfs set quota=50G pool/home/bob
+
+
+
Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob. +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value.

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+ The following command lists all properties with local settings for + pool/home/bob. +
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots. +
+
# zfs rollback -r pool/home/anne@yesterday
+
+
+
Creating a ZFS Clone
+
The following command creates a writable file system whose initial contents are the same as pool/home/bob@yesterday.
+
# zfs clone pool/home/bob@yesterday pool/clone
+
+
+
Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
Inheriting ZFS Properties
+
The following command causes pool/home/bob and + pool/home/anne to inherit the checksum + property from their parent. +
+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+
+
Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.
+
# zfs send pool/fs@a | \
+  ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b | \
+  ssh host zfs receive poolB/received/fs
+
+
+
Using the zfs receive -d Option
+
The following command sends a full stream of poolA/fsA/fsB@snap to a remote machine, receiving it into poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the received snapshot's name is determined from the name of the sent snapshot. poolB must contain the file system poolB/received. If poolB/received/fsA does not exist, it is created as an empty file system.
+
# zfs send poolA/fsA/fsB@snap | \
+  ssh host zfs receive -d poolB/received
+
+
+
Setting User Properties
+
The following example sets the user-defined com.example:department property for a dataset.
+
# zfs set com.example:department=12345 tank/accounting
+
+
+
Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set sharenfs property options to enable rw access for a set of IP addresses and to enable root access for system neo on the tank/home file system.
+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

If you are using DNS for host name + resolution, specify the fully qualified hostname.

+
+
Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point + access:

+
+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+
+
Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Showing the differences between a snapshot and a + ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
Creating a bookmark
+
The following example creates a bookmark to a snapshot. This bookmark can then be used instead of a snapshot in send streams.
+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+
+
Setting sharesmb Property Options on a ZFS File + System
+
The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.
+
# smbmount //127.0.0.1/share_tmp /mnt/tmp \
+  -o user=workgroup/turbo,password=obrut,uid=1000
+
+

Minimal smb.conf(5) configuration required:

+

Samba will need to listen to 'localhost' (127.0.0.1) for the + ZFS utilities to communicate with Samba. This is the default behavior + for most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in + a number of ways, depending on if using the system password file, LDAP + or the Samba specific smbpasswd file. How to do this is outside the + scope of this manual. Please refer to the smb.conf(5) + man page for more information.

+

See the USERSHARE section of the + smb.conf(5) man page for all configuration options in + case you need to modify any options to the share afterwards. Do note + that any changes done with the net(8) command will be + undone if the share is ever unshared (such as at a reboot etc).

+
+
+
+
+

+

.

+
+
+

+

gzip(1), ssh(1), + zpool(8), + selinux(8), chmod(2), + stat(2), write(2), + fsync(2), attr(1), + acl(5), exports(5), + exportfs(8), net(8), + attributes(5)

+
+
+ + + + + +
January 5, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zgenhostid.8.html b/man/v0.7/8/zgenhostid.8.html new file mode 100644 index 000000000..f0df09fcf --- /dev/null +++ b/man/v0.7/8/zgenhostid.8.html @@ -0,0 +1,228 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's Manual (smm)ZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate and store a hostid in + /etc/hostid

+
+
+

+ + + + + +
zgenhostid[hostid]
+
+
+

+

If /etc/hostid does not exist, create it and + store a hostid in it. If the user provides [hostid] on + the command line, store that value. Otherwise, randomly generate a value to + store.

+

This emulates the genhostid(1) utility and is + provided for use on systems which do not include the utility.

+
+
+

+

[hostid] Specifies the value to be placed in /etc/hostid. It must be a number with a value between 1 and 2^32-1. This value should be unique among your systems. It must be expressed in hexadecimal and be exactly 8 digits long.

+
+
+

+
+
Generate a random hostid and store it
+
+
+
# zgenhostid
+
+
+
Record the libc-generated hostid in /etc/hostid
+
+
+
# zgenhostid $(hostid)
+
+
+
Record a custom hostid (0xdeadbeef) in /etc/hostid
+
+
+
# zgenhostid deadbeef
+
+
+
+
+
+

+

spl-module-parameters(5), + genhostid(1), hostid(1)

+
+
+ + + + + +
July 24, 2017Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zinject.8.html b/man/v0.7/8/zinject.8.html new file mode 100644 index 000000000..6240ee76e --- /dev/null +++ b/man/v0.7/8/zinject.8.html @@ -0,0 +1,320 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
zinject(8)System Administration Commandszinject(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkid [-f frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev -D latency:lanes + pool
+
+

Add an artificial delay to IO requests on a particular device, + such that the requests take a minimum of 'latency' milliseconds to + complete. Each delay has an associated number of 'lanes' which defines + the number of concurrent IO requests that can be processed.

+

For example, with a single lane delay of 10 ms (-D 10:1), the + device will only be able to service a single IO request at a time with + each request taking 10 ms to complete. So, if only a single request is + submitted every 10 ms, the average latency will be 10 ms; but if more + than one request is submitted every 10 ms, the average latency will be + more than 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D 10:2), then the device will be able to service two requests at a + time, each with a minimum latency of 10 ms. So, if two requests are + submitted every 10 ms, then the average latency will be 10 ms; but if + more than two requests are submitted every 10 ms, the average latency + will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + '-D 10:1', is roughly equivalent to a single invocation of '-D 10:2'. + This also means, one can specify multiple lanes with differing target + latencies. For example, an invocation of '-D 10:1' followed by '-D 25:2' + will create 3 lanes on the device; one lane with a latency of 10 ms and + two lanes with a 25 ms latency.
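For instance (the device and pool names are hypothetical), the following adds two 25 ms lanes to a vdev and later cancels all injection records:
# zinject -d sda -D 25:2 tank
# zinject -c all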

+

+
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-f + frequency] [-F] pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-e device_error] [-f + frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-e device_error] [-f + frequency] [-l level] [-r range] + [-amqu] pool
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, dtl for an ECHILD + error, io for an EIO error where reopening the device will succeed, + or nxio for an ENXIO error where reopening the device will fail. + For EIO and ENXIO, the "failed" reads or writes still occur. The + probe simply sets the error value reported by the I/O pipeline so it + appears the read or write failed.
+
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
+
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zpool.8.html b/man/v0.7/8/zpool.8.html new file mode 100644 index 000000000..76e880c3c --- /dev/null +++ b/man/v0.7/8/zpool.8.html @@ -0,0 +1,2223 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's Manual (smm)ZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?
+
+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev...
+
+ + + + + +
zpoolattach [-f] + [-o + property=value] + pool device new_device
+
+ + + + + +
zpoolclear pool + [device]
+
+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]... + [-o + feature@feature=value] + [-O + file-system-property=value]... + [-R root] + pool vdev...
+
+ + + + + +
zpooldestroy [-f] + pool
+
+ + + + + +
zpooldetach pool device
+
+ + + + + +
zpoolevents [-vHfc] + [pool]
+
+ + + + + +
zpoolexport [-a] + [-f] pool...
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + pool...
+
+ + + + + +
zpoolhistory [-il] + [pool]...
+
+ + + + + +
zpoolimport [-D] + [-c + cachefile|-d + dir]
+
+ + + + + +
zpoolimport -a + [-DfmN] [-F + [-n] [-T] + [-X]] [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root]
+
+ + + + + +
zpoolimport [-Dfm] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool [-t]]
+
+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
+ + + + + +
zpoollabelclear [-f] + device
+
+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+ + + + + +
zpoolreguid pool
+
+ + + + + +
zpoolreopen pool
+
+ + + + + +
zpoolremove pool + device...
+
+ + + + + +
zpoolreplace [-f] + [-o + property=value] + pool device + [new_device]
+
+ + + + + +
zpoolscrub [-s | + -p] pool...
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolsplit [-gLnP] + [-o + property=value]... + [-R root] + pool newpool [device]...
+
+ + + + + +
zpoolstatus [-c + SCRIPT] [-gLPvxD] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpoolsync [pool]...
+
+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool...
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with N disks of size X + can hold X bytes and can withstand (N-1) devices failing before data + integrity is compromised.
+
raidz, raidz1, raidz2, raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single-, double-, or triple-parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can + hold approximately (N-P)*X bytes and can withstand P device(s) failing + before data integrity is compromised. The minimum number of devices in a + raidz group is one more than the number of parity disks. The recommended + number is between 3 and 9 to help increase performance.

+
+
+
A special pseudo-vdev which keeps track of available hot spares for a + pool. For more information, see the Hot + Spares section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords mirror and + raidz are used to distinguish where a group ends and + another begins. For example, the following creates two root vdevs, each a + mirror of two disks:

+
+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as mirror or raidz device, + is potentially impacted by the state of its associated vdevs, or component + devices. A top-level vdev or component device is in one of the following + states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

If a device is removed and later re-attached to the system, ZFS + attempts to put the device online automatically. Device attach detection is + hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

An in-progress spare replacement can be canceled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+
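For example (continuing the pool from the example above, where sdc is the hot spare currently in use), the in-progress spare replacement could be canceled with:
# zpool detach pool sdc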

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
+
# zpool create pool sda sdb log sdc
+
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached, and + imported and exported as part of the larger pool. Mirrored log devices can + be removed by specifying the top-level mirror for the log.

+
+
+

+

Devices can be added to a storage pool as "cache devices". These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
+
# zpool create pool sda sdb cache sdc sdd
+
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is considered volatile, as is the + case with other system caches.

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage available within the pool. This property can also be + referred to by its shortened column name, + .
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. Uninitialized space consists of any space on an EFI labeled vdev which has not been brought online (e.g., using zpool online -e). This space occurs when a LUN is dynamically expanded.
+
+
The amount of fragmentation in the pool.
+
+
The amount of free space available in the pool.
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
+
+
A unique identifier for the pool.
+
+
Total size of the storage pool.
+
+
Information about unsupported features that are enabled on the pool. See + zpool-features(5) for details.
+
+
Amount of storage space used within the pool.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpool command does not. For non-full pools of a + reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+
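For example (pool name tank and mount path /mnt are assumptions), both import-time properties can be used together to import a pool read-only under an alternate root:
# zpool import -R /mnt -o readonly=on tank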

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the special value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however, it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is off. This property can also be referred to by its shortened column name, replace. Autoreplace can also be used with virtual disks (like device mapper) provided that you use the /dev/disk/by-vdev paths set up by vdev_id.conf. See the vdev_id(8) man page for more details. Autoreplace and autoonline require that the ZFS Event Daemon be configured and running. See the zed(8) man page for more details.
+
=|pool/dataset
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the special value + none creates a temporary pool that is never cached, and + the special value "" (empty string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=number
+
Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0 which causes no ditto copies to be created for deduplicated blocks. The minimum legal nonzero setting is 100.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during zpool import. When a pool is determined to be active it cannot be imported, even with the -f option. This property is intended to be used in failover configurations where multiple hosts have access to a pool on shared storage. When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs-module-parameters(5) man page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) and spl-module-parameters(5) for additional details. The default value is off.
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
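As a simple illustration (pool name tank assumed), a settable property can be changed after creation and then verified:
# zpool set autoexpand=on tank
# zpool get autoexpand tank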
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool add + [-fgLnP] [-o + property=value] + pool vdev...
+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the + Virtual Devices section. The + behavior of the -f option, and the device checks + performed are described in the zpool + create subcommand. +
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all + symbolic links. This can be used to look up the current block device + name regardless of the /dev/disk/ path used to open it.
+
+
Displays the configuration that would be used without actually adding + the vdevs. The actual pool creation can still + fail due to insufficient privileges or device sharing.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool attach + [-f] [-o + property=value] + pool device new_device
+
Attaches new_device to the existing + device. The existing device cannot be part of a + raidz configuration. If device is not currently part + of a mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part + of a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately. +
+
+
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
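For example (assuming pool tank currently consists of the single disk sda), attaching sdb converts the vdev into a two-way mirror:
# zpool attach tank sda sdb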
+
zpool clear + pool [device]
+
Clears device errors in a pool. If no arguments are specified, all device + errors within the pool are cleared. If one or more devices is specified, + only those errors associated with the specified device or devices are + cleared.
+
zpool create + [-dfn] [-m + mountpoint] [-o + property=value]... + [-o + feature@feature=value]... + [-O + file-system-property=value]... + [-R root] + [-t tname] + pool vdev...
+
Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, spare and log are reserved, as are names beginning with the pattern c[0-9]. The vdev specification is described in the Virtual Devices section.

The command verifies that each device specified is accessible + and not currently in use by another subsystem. There are some uses, such + as being currently mounted, or specified as the dedicated dump device, + that prevents a device from ever being used by ZFS. Other uses, such as + having a preexisting UFS file system, can be overridden with the + -f option.

+

The command also checks that the replication strategy for the + pool is consistent. An attempt to combine redundant and non-redundant + storage in a single pool, or to mix disks and files, results in an error + unless -f is specified. The use of differently + sized devices within a single raidz or mirror group is also flagged as + an error unless -f is specified.

+

Unless the -R option is specified, the + default mount point is + /pool. The mount point + must not exist or must be empty, or else the root dataset cannot be + mounted. This can be overridden with the -m + option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with the -o option. + See zpool-features(5) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is /pool or altroot/pool if altroot is specified. The mount point must be an absolute path, legacy, or none. For more information on dataset mount points, see zfs(8).
+
+
Displays the configuration that would be used without actually + creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set.
+
+ feature@feature=value
+
Sets the given pool feature. See the + zpool-features(5) section for a list of valid + features that can be set. Value can be either disabled or + enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the + pool. See the Properties section + of zfs(8) for a list of valid properties that can be + set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the on-disk name will be the name specified as the pool name. This will set the default cachefile property to none. This is intended to handle name space collisions when creating pools for other systems, such as virtual machines or physical machines whose pools live on network block devices.
+
+
+
zpool destroy + [-f] pool
+
Destroys the given pool, freeing up any devices for other use. This + command tries to unmount any active datasets before destroying the pool. +
+
+
Forces any active datasets contained within the pool to be + unmounted.
+
+
+
zpool detach + pool device
+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may be + re-added to the pool later on then consider the zpool + offline command instead.
+
zpool events + [-cfHv] [pool...]
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page. +
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
zpool export + [-a] [-f] + pool...
+
Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, + so that ZFS can label the disks with portable EFI labels. Otherwise, + disk drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, using the + unmount -f command. +

This command will forcefully export the pool even if it + has a shared spare that is currently being used. This may lead to + potential data corruption.

+
+
+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + pool...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the Properties + section for more information on the available pool properties.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool history + [-il] [pool]...
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified. +
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard + format includes, the user name, the hostname, and the zone in which + the operation was performed.
+
+
+
zpool import + [-D] [-c + cachefile|-d + dir]
+
Lists pools available to import. If the -d option + is not specified, this command searches for devices in + /dev. The -d option can be + specified multiple times, and all directories are searched. If the device + appears to be part of an exported pool, this command displays a summary of + the pool with the name of the pool, a numeric identifier, as well as the + vdev layout and current health of the device for each device or file. + Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir
+
Searches for devices or files in dir. The + -d option can be specified multiple + times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DfmN] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir
+
Searches for devices or files in dir. The + -d option can be specified multiple times. + This option is incompatible with the -c + option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dfm] [-F + [-n] [-t] + [-T] [-X]] + [-c + cachefile|-d + dir] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir
+
Searches for devices or files in dir. The + -d option can be specified multiple times. + This option is incompatible with the -c + option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set -o + cachefile=none when not explicitly specified.
+
+
+
zpool iostat + [[[-c SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
Displays I/O statistics for the given pools/vdevs. You can pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until ^C is pressed. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot, regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the size units (K, M, G, ...) printed in the report are in base 1024. To get the raw values, use the -p flag.
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + iostat output. Users can run any script found + in their ~/.zpool.d directory or from the + system /etc/zfs/zpool.d directory. Script + names containing the slash (/) character are not allowed. The default + search path can be overridden by setting the ZPOOL_SCRIPTS_PATH + environment variable. A privileged user can run + -c if they have the ZPOOL_SCRIPTS_AS_ROOT + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or + add the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script + name, it prints a list of all scripts. -c + also sets verbose mode + (-v).

+

Script output should be in the form of + "name=value". The column name is set to "name" + and the value is set to "value". Multiple lines can be + used to output multiple columns. The first line of output not in the + "name=value" format is displayed without a column title, + and no more output after that is displayed. This can be useful for + printing error messages. Blank or NULL values are printed as a '-' + to make output awk-able.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
+
+
Underlying path to the vdev (/dev/sd*). For use with device + mapper, multipath, or partitioned vdevs.
+
+
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Print request size histograms for the leaf ZIOs. This includes + histograms of individual ZIOs ( ind) and + aggregate ZIOs ( agg ). These stats can be + useful for seeing how well the ZFS IO aggregator is working. Do not + confuse these request size stats with the block layer requests; it's + possible ZIOs can be broken up before being sent to the block + device.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Omit statistics since boot. Normally the first line of output reports + the statistics since boot. This option suppresses that first line of + output.
+
+
Display latency histograms: +

total_wait: Total IO time (queuing + + disk IO time). disk_wait: Disk IO time (time + reading/writing the disk). syncq_wait: Amount + of time IO spent in synchronous priority queues. Does not include + disk time. asyncq_wait: Amount of time IO + spent in asynchronous priority queues. Does not include disk time. + scrub: Amount of time IO spent in scrub queue. + Does not include disk time.

+
+
+
Include average latency statistics: +

total_wait: Average total IO time + (queuing + disk IO time). disk_wait: Average + disk IO time (time reading/writing the disk). + syncq_wait: Average amount of time IO spent in + synchronous priority queues. Does not include disk time. + asyncq_wait: Average amount of time IO spent + in asynchronous priority queues. Does not include disk time. + scrub: Average queuing time in scrub queue. + Does not include disk time.

+
+
+
Include active queue statistics. Each priority queue has both pending + ( pend) and active ( + activ) IOs. Pending IOs are waiting to be issued + to the disk, and active IOs have been issued to disk and are waiting + for completion. These stats are broken out by priority queue: +

syncq_read/write: Current number of + entries in synchronous priority queues. + asyncq_read/write: Current number of entries + in asynchronous priority queues. scrubq_read: + Current number of entries in scrub queue.

+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
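For example (pool name tank assumed), average latency statistics broken out per vdev can be sampled every 5 seconds with:
# zpool iostat -lv tank 5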
+
zpool labelclear + [-f] device
+
Removes ZFS label information from the specified + device. The device must not be + part of an active pool configuration. +
+
+
Treat exported or foreign devices as inactive.
+
+
+
zpool list + [-HgLpPv] [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
Lists the given pools along with a health status and space usage. If no + pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until ^C is pressed. + If count is specified, the command exits after + count reports are printed. +
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + Properties section for a list of + valid properties. The default list is + + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+ u|d
+
Display a time stamp. Specify -u for a printed + representation of the internal representation of time. See + time(2). Specify -d for + standard date format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares or cache devices. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
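For example (pool tank and device sda assumed), a device can be taken offline until the next reboot and later brought back online:
# zpool offline -t tank sda
# zpool online tank sda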
+
zpool reguid + pool
+
Generates a new unique identifier for the pool. You must ensure that all + devices in this pool are online and healthy before performing this + action.
+
zpool reopen + pool
+
Reopen all the vdevs associated with the pool.
+
zpool remove + pool device...
+
Removes the specified device from the pool. This command currently only + supports removing hot spares, cache, and log devices. A mirrored log + device can be removed by specifying the top-level mirror for the log. + Non-log devices that are part of a mirrored configuration can be removed + using the zpool detach + command. Non-redundant and raidz devices cannot be removed from a + pool.
+
zpool replace + [-f] [-o + property=value] + pool device + [new_device]
+
Replaces old_device with + new_device. This is equivalent to attaching + new_device, waiting for it to resilver, and then + detaching old_device. +

The size of new_device must be greater + than or equal to the minimum size of all the devices in a mirror or + raidz configuration.

+

new_device is required if the pool is + not redundant. If new_device is not specified, it + defaults to old_device. This form of replacement + is useful after an existing disk has failed and has been physically + replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is + actually a different disk. ZFS recognizes this.

+
+
+
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool scrub + [-s | -p] + pool...
+
Begins a scrub or resumes a paused scrub. The scrub examines all data in + the specified pools to verify that it checksums correctly. For replicated + (mirror or raidz) devices, ZFS automatically repairs any damage discovered + during the scrub. The zpool + status command reports the progress of the scrub + and summarizes the results of the scrub upon completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be + out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is paused, the + zpool scrub resumes it. + If a resilver is in progress, ZFS does not allow a scrub to be started + until the resilver completes.

+
+
+
Stop scrubbing.
+
+
+
+
Pause scrubbing. Scrub progress is periodically synced to disk so if + the system is restarted or pool is exported during a paused scrub, the + scrub will resume from the place where it was last checkpointed to + disk. To resume a paused scrub issue zpool + scrub again.
+
+
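For example (pool name tank assumed), a scrub can be started, paused, and later resumed by issuing the command again:
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank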
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + Properties section for more + information on what properties can be set and acceptable values.
+
zpool split + [-gLnP] [-o + property=value]... + [-R root] pool + newpool [device ...]
+
Splits devices off pool creating + newpool. All vdevs in pool + must be mirrors and the pool must not be in the process of resilvering. At + the time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool. +

The optional device specification causes the specified device(s) to be included in the new pool and, should any devices remain unspecified, the last device in each mirror is used as it would be by default.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Do dry run, do not actually perform the split. Print out the expected + configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+ property=value
+
Sets the specified property for newpool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Set altroot for newpool to + root and automatically import it.
+
+
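For example (assuming tank consists entirely of two-way mirrors), the pool can be split into a new pool named tank2 and imported under /mnt:
# zpool split -R /mnt tank tank2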
+
zpool status + [-c + [SCRIPT1[,SCRIPT2]...]] + [-gLPvxD] [-T + u|d] [pool]... + [interval [count]]
+
Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in + the system is displayed. For more information on pool and device health, + see the Device Failure + and Recovery section. +

If a scrub or resilver is in progress, this command reports + the percentage done and the estimated time to completion. Both of these + are only approximate, because the amount of data in the pool and the + other workloads on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + status output. See the + -c option of zpool + iostat for complete details.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in + the pool) block counts and sizes by reference count.
+
+ u|d
+
Display a time stamp. Specify -u for a printed + representation of the internal representation of time. See + time(2). Specify -d for + standard date format. See date(1).
+
+
Displays verbose data error information, printing out a complete list + of all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are + otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
+
+
+
zpool sync + [pool ...]
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all pools on the system. Otherwise, + it will sync only the specified pool(s).
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools.
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by the current software. See + zpool-features(5) for a description of feature flags + features supported by the current software.
+
zpool upgrade + [-V version] + -a|pool...
+
Enables all supported features on the given pool. Once this is done, the pool will no longer be accessible on systems that do not support feature flags. See zpool-features(5) for details on compatibility with systems that support feature flags, but do not support all features enabled on the pool.
+
+
Enables all supported features on all pools.
+
+ version
+
Upgrade to the specified legacy version. If the + -V flag is specified, no features will be + enabled on the pool. This option can only be used to increase the + version number up to the last supported legacy version number.
+
+
+
+
+
+
+

+

The following exit values are returned:

+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+

+
+
Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks. +
+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
+
Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks. +
+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+
+
Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions. +
+
# zpool create tank sda1 sdb2
+
+
+
Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
+
# zpool create tank /path/to/file/a /path/to/file/b
+
+
+
Adding a Mirror to a ZFS Storage Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool. +
+
# zpool add tank mirror sda sdb
+
+
+
Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device. The results from this command are similar to the following:
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within. +
+
# zpool destroy -f tank
+
+
+
Exporting a ZFS Storage Pool
+
The following command exports the devices in pool tank + so that they can be relocated or later imported. +
+
# zpool export tank
+
+
+
Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software. +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
+
# zpool create tank mirror sda sdb spare sdc
+
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
+
# zpool replace tank sda sdd
+
+

Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail. The hot spare can be permanently removed from the pool using the following command:

+
+
# zpool remove tank sdc
+
+
+
Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+  sde sdf
+
+
+
Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
+
# zpool add pool cache sdc sdd
+
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take + over an hour for them to fill. Capacity and reads can be monitored using + the iostat option as follows:

+
+
# zpool iostat -v pool 5
+
+
+
Removing a Mirrored Log Device
+
The following command removes the mirrored log device + mirror-2. Given this configuration: +
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
+
# zpool remove tank mirror-2
+
+
+
Displaying expanded space on a + device
+
The following command displays the detailed information for the pool data. This pool is comprised of a single raidz vdev where one of its devices increased its capacity by 10GB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
Adding output columns
+
Additional columns can be added to the zpool status and zpool iostat output with the -c option.
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc slaves
+   capacity operations bandwidth
+   pool       alloc free  read  write read  write slaves
+   ---------- ----- ----- ----- ----- ----- ----- ---------
+   tank       20.4G 7.23T 26    152   20.7M 21.6M
+   mirror     20.4G 7.23T 26    152   20.7M 21.6M
+   U1         -     -     0     31    1.46K 20.6M sdb sdff
+   U10        -     -     0     1     3.77K 13.3K sdas sdgw
+   U11        -     -     0     1     288K  13.3K sdat sdgx
+   U12        -     -     0     1     78.4K 13.3K sdau sdgy
+   U13        -     -     0     1     128K  13.3K sdav sdgz
+   U14        -     -     0     1     63.2K 13.3K sdfk sdg
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running
+
+
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
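For example, ZPOOL_IMPORT_PATH can be set for a single import so that stable device names are preferred (the pool name is illustrative):

# ZPOOL_IMPORT_PATH=/dev/disk/by-vdev:/dev/disk/by-id zpool import tank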
+
+
+
+
Cause zpool subcommands to output vdev guids by default. + This behavior is identical to the zpool status + -g command line option.
+
+
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the zpool + status -L command line option.
+
+
+
+
Cause zpool subcommands to output full vdev path names by default. This behavior is identical to the zpool status -P command line option.
+
+
+
+
Older ZFS on Linux implementations had issues when attempting to display + pool config VDEV names if a devid NVP value is present + in the pool's config. +

For example, a pool that originated on the illumos platform would have a devid value in the config, and zpool status would fail when listing the config. This would also be true for future Linux-based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool add by setting + ZFS_VDEV_DEVID_OPT_OUT.
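As a minimal sketch (any non-empty value is assumed to suffice; the pool name is illustrative):

# ZFS_VDEV_DEVID_OPT_OUT=YES zpool import tank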

+
+
+
+
+
Allow a privileged user to run the zpool + status/iostat with the -c option. Normally, + only unprivileged users are allowed to run + -c.
+
+
+
+
The search path for scripts when running zpool + status/iostat with the -c option. This is a + colon-separated list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
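As a sketch, ZPOOL_SCRIPTS_PATH can point at a custom script directory; the script named here (smart) is only an example and must exist in that directory:

# ZPOOL_SCRIPTS_PATH=/usr/local/zpool.d zpool status -c smart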
+
+
+
+
Allow a user to run zpool status/iostat with the + -c option. If + ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the + user is allowed to run zpool status/iostat + -c.
+
+
+
+

+

+
+
+

+

zed(8), zfs(8), + zfs-events(5), zfs-module-parameters(5), + zpool-features(5)

+
+
+ + + + + +
April 27, 2018Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/8/zstreamdump.8.html b/man/v0.7/8/zstreamdump.8.html new file mode 100644 index 000000000..6f692c3f7 --- /dev/null +++ b/man/v0.7/8/zstreamdump.8.html @@ -0,0 +1,198 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
zstreamdump(8)System Administration Commandszstreamdump(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs send command, then displays headers and some statistics from that output. See zfs(8).
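A typical invocation pipes a send stream into the utility (the dataset and snapshot names are illustrative):

# zfs send tank/fs@snap | zstreamdump -v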

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
29 Aug 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.7/index.html b/man/v0.7/index.html new file mode 100644 index 000000000..8c5582154 --- /dev/null +++ b/man/v0.7/index.html @@ -0,0 +1,143 @@ + + + + + + + v0.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/cstyle.1.html b/man/v0.8/1/cstyle.1.html new file mode 100644 index 000000000..a750d879b --- /dev/null +++ b/man/v0.8/1/cstyle.1.html @@ -0,0 +1,285 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
cstyle(1)General Commands Manualcstyle(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common + stylistic errors. It attempts to check for the cstyle documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that + there is much in that document that cannot be checked for; just + because your code is cstyle(1) clean does not mean that you've + followed Sun's C style. Caveat emptor.
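A usage sketch, running the pickier putback-level checks on one file (the path is illustrative):

cstyle -pP module/zfs/arc.c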

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that knows something about how C is laid out, and can match parentheses, etc. over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parentheses will confuse the checker for that line. To fix this, you'll need to make sure that each branch of the #if statement has balanced parentheses.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional explanation.

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
28 March 2005
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/index.html b/man/v0.8/1/index.html new file mode 100644 index 000000000..57c235116 --- /dev/null +++ b/man/v0.8/1/index.html @@ -0,0 +1,153 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/raidz_test.1.html b/man/v0.8/1/raidz_test.1.html new file mode 100644 index 000000000..f61ce98d8 --- /dev/null +++ b/man/v0.8/1/raidz_test.1.html @@ -0,0 +1,260 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
raidz_test(1)User Commandsraidz_test(1)
+
+

+
+

+

raidz_test - raidz implementation verification and + benchmarking tool

+
+
+

+

raidz_test <options>

+
+
+

+

This manual page documents briefly the raidz_test + command.

+

The purpose of this tool is to run all supported raidz implementations and verify the results of all methods. The tool also contains a parameter sweep option in which all parameters affecting a RAIDZ block are verified (such as ashift size, data offset, and data size). The tool also supports a benchmarking mode using the -B option.
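Two usage sketches (the values are illustrative): verify all implementations against a 4K-sector layout, then run the benchmark mode:

raidz_test -a 12 -d 8 -s 19 -v
raidz_test -B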

+
+
+

+

-h

+
+
+
Print a help summary.
+
+

-a ashift (default: 9)

+
+
+
Ashift value.
+
+

-o zio_off_shift (default: 0)

+
+
+
Zio offset for raidz block. Offset value is 1 << + (zio_off_shift)
+
+

-d raidz_data_disks (default: 8)

+
+
+
Number of raidz data disks to use. Additional disks for parity will be + used during testing.
+
+

-s zio_size_shift (default: 19)

+
+
+
Size of data for raidz block. Size is 1 << (zio_size_shift).
+
+

-S(weep)

+
+
+
Sweep parameter space while verifying the raidz implementations. This option will exhaust most of the valid values for the -a, -o, -d and -s options. Runtime using this option will be long.
+
+

-t(imeout)

+
+
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
+

-B(enchmark)

+
+
+
This option starts the benchmark mode. All implementations are benchmarked using increasing per disk data size. Results are given as throughput per disk, measured in MiB/s.
+
+

-v(erbose)

+
+
+
Increase verbosity.
+
+

-T(est the test)

+
+
+
Debugging option. When this option is specified, the tool is supposed to fail all tests. This is to check if the tests would properly verify bit-exactness.
+
+

-D(ebug)

+
+
+
Debugging option. Specify to attach gdb when SIGSEGV or SIGABRT are + received.
+
+

+

+
+
+

+

ztest (1)

+
+
+

+

vdev_raidz, created for ZFS on Linux by Gvozden + Nešković <neskovic@gmail.com>

+
+
+ + + + + +
2016ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/zhack.1.html b/man/v0.8/1/zhack.1.html new file mode 100644 index 000000000..70a012784 --- /dev/null +++ b/man/v0.8/1/zhack.1.html @@ -0,0 +1,252 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
zhack(1)User Commandszhack(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zfs(8), zpool-features(5), ztest(1)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/ztest.1.html b/man/v0.8/1/ztest.1.html new file mode 100644 index 000000000..338ed1d86 --- /dev/null +++ b/man/v0.8/1/ztest.1.html @@ -0,0 +1,349 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ztest(1)User Commandsztest(1)
+
+

+
+

+

ztest - ZFS unit test written by the ZFS Developers

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for five minutes and uses block files (stored in /tmp) to create pools rather than using physical disks. Block files afford ztest its flexibility to play around with zpool components without requiring large hardware configurations. However, storing the block files in /tmp may not work for you if you have a small tmp directory.

+

By default ztest is non-verbose. This is why entering the command above will result in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many + ztest.* files lying around. Once the run completes you can safely remove + these files. Note that you shouldn't remove these files during a run. You + can re-use these files in your next ztest run by using the -E + option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+

-G

+
+
+
Dump zfs_dbgmsg buffer before exiting.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so simply use the + -T option and specify the runlength in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Use id instead of the SPL hostid to identify this host. Intended + for use with ztest, but this environment variable will affect any utility + which uses libzpool, including zpool(8). Since the kernel is + unaware of this setting results with utilities other than ztest are + undefined.
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. This value defaults to + 32K which is double the default 16K Linux kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+

+

spl-module-parameters(5), zpool(8), zfs(8), zdb(8)

+
+
+

+

This manual page was transferred to asciidoc by Michael + Gebetsroither <gebi@grml.org> from + http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
2009 NOV 01ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/1/zvol_wait.1.html b/man/v0.8/1/zvol_wait.1.html new file mode 100644 index 000000000..884d0219c --- /dev/null +++ b/man/v0.8/1/zvol_wait.1.html @@ -0,0 +1,191 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands Manual (smm)ZVOL_WAIT(1)
+
+
+

+

zvol_wait - Wait for ZFS volume links in /dev/zvol to be created.

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, ZFS will register each ZFS volume (zvol) as a disk device with the system. As the disks are registered, udev(7) will asynchronously create symlinks under /dev/zvol using the zvol's name. zvol_wait will wait for all those symlinks to be created before returning.
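A typical use is to run it right after importing a pool whose volumes are about to be accessed (the pool name is illustrative):

# zpool import tank
# zvol_wait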

+
+
+

+

udev(7)

+
+
+ + + + + +
July 5, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/index.html b/man/v0.8/5/index.html new file mode 100644 index 000000000..cf6dc050a --- /dev/null +++ b/man/v0.8/5/index.html @@ -0,0 +1,153 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/spl-module-parameters.5.html b/man/v0.8/5/spl-module-parameters.5.html new file mode 100644 index 000000000..71eff4f62 --- /dev/null +++ b/man/v0.8/5/spl-module-parameters.5.html @@ -0,0 +1,387 @@ + + + + + + + spl-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

spl-module-parameters.5

+
+ + + + + +
SPL-MODULE-PARAMETERS(5)File Formats ManualSPL-MODULE-PARAMETERS(5)
+
+
+

+

spl-module-parameters - SPL module parameters

+
+
+

+

Description of the different parameters to the SPL module.

+

+
+

+

+

spl_kmem_cache_expire (uint)

+
Cache expiration is part of default Illumos cache behavior. The idea is that objects in magazines which have not been recently accessed should be returned to the slabs periodically. This is known as cache aging and, when enabled, objects will typically be returned after 15 seconds.

On the other hand Linux slabs are designed to never move objects + back to the slabs unless there is memory pressure. This is possible because + under Linux the cache will be notified when memory is low and objects can be + released.

+

By default only the Linux method is enabled. It has been shown to improve responsiveness on low memory systems and not negatively impact the performance of systems with more memory. This policy may be changed by setting the spl_kmem_cache_expire bit mask as follows; both policies may be enabled concurrently.

+

0x01 - Aging (Illumos), 0x02 - Low memory (Linux)

+

Default value: 0x02
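As a sketch, both policies could be enabled persistently by setting the module option in a modprobe configuration file (the file name is illustrative):

# echo "options spl spl_kmem_cache_expire=0x03" >> /etc/modprobe.d/spl.conf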

+
+

+

spl_kmem_cache_kmem_threads (uint)

+
The number of threads created for the spl_kmem_cache task + queue. This task queue is responsible for allocating new slabs for use by the + kmem caches. For the majority of systems and workloads only a small number of + threads are required. +

Default value: 4

+
+

+

spl_kmem_cache_reclaim (uint)

+
When this is set it prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting this will increase the likelihood of out-of-memory events on a memory-constrained system.

Default value: 0

+
+

+

spl_kmem_cache_obj_per_slab (uint)

+
The preferred number of objects per slab in the cache. In general, a larger value will increase the cache's memory footprint while decreasing the time required to perform an allocation. Conversely, a smaller value will minimize the footprint and improve cache reclaim time but individual allocations may take longer.

Default value: 8

+
+

+

spl_kmem_cache_obj_per_slab_min (uint)

+
The minimum number of objects allowed per slab. Normally + slabs will contain spl_kmem_cache_obj_per_slab objects but for caches + that contain very large objects it's desirable to only have a few, or even + just one, object per slab. +

Default value: 1

+
+

+

spl_kmem_cache_max_size (uint)

+
The maximum size of a kmem cache slab in MiB. This effectively limits the maximum cache object size to spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab. Caches may not be created with objects sized larger than this limit.

Default value: 32 (64-bit) or 4 (32-bit)

+
+

+

spl_kmem_cache_slab_limit (uint)

+
For small objects the Linux slab allocator should be used + to make the most efficient use of the memory. However, large objects are not + supported by the Linux slab and therefore the SPL implementation is preferred. + This value is used to determine the cutoff between a small and large object. +

Objects of spl_kmem_cache_slab_limit or smaller will be + allocated using the Linux slab allocator, large objects use the SPL + allocator. A cutoff of 16K was determined to be optimal for architectures + using 4K pages.

+

Default value: 16,384

+
+

+

spl_kmem_cache_kmem_limit (uint)

+
Depending on the size of a cache object it may be backed + by kmalloc()'d or vmalloc()'d memory. This is because the size of the required + allocation greatly impacts the best way to allocate the memory. +

When objects are small and only a small number of memory pages + need to be allocated, ideally just one, then kmalloc() is very efficient. + However, when allocating multiple pages with kmalloc() it gets increasingly + expensive because the pages must be physically contiguous.

+

For this reason we shift to vmalloc() for slabs of large objects, which removes the need for contiguous pages. We cannot use vmalloc() in all cases because there is significant locking overhead involved. This function takes a single global lock over the entire virtual address range which serializes all allocations. Using slightly different allocation functions for small and large objects allows us to handle a wide range of object sizes.

+

The spl_kmem_cache_kmem_limit value is used to determine + this cutoff size. One quarter the PAGE_SIZE is used as the default value + because spl_kmem_cache_obj_per_slab defaults to 16. This means that + at most we will need to allocate four contiguous pages.

+

Default value: PAGE_SIZE/4

+
+

+

spl_kmem_alloc_warn (uint)

+
As a general rule kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+

Default value: 32,768

+
+

+

spl_kmem_alloc_max (uint)

+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. Kmem_alloc() allocations larger than this maximum will quickly fail. Vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.

Default value: KMALLOC_MAX_SIZE/4

+
+

+

spl_kmem_cache_magazine_size (uint)

+
Cache magazines are an optimization designed to minimize + the cost of allocating memory. They do this by keeping a per-cpu cache of + recently freed objects, which can then be reallocated without taking a lock. + This can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being immediately + released this may not be ideal for low memory machines. +

For this reason spl_kmem_cache_magazine_size can be used to set a maximum magazine size. When this value is set to 0 the magazine size will be automatically determined based on the object size. Otherwise magazines will be limited to 2-256 objects per magazine (i.e., per CPU). Magazines may never be entirely disabled in this implementation.

+

Default value: 0

+
+

+

spl_hostid (ulong)

+
The system hostid; when set, this can be used to uniquely identify a system. By default this value is set to zero which indicates the hostid is disabled. It can be explicitly enabled by placing a unique non-zero value in /etc/hostid.

Default value: 0

+
+

+

spl_hostid_path (charp)

+
The expected path to locate the system hostid when + specified. This value may be overridden for non-standard configurations. +

Default value: /etc/hostid

+
+

+

spl_panic_halt (uint)

+
Cause a kernel panic on assertion failures. When not + enabled, the thread is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+

Default value: 0

+
+

+

spl_taskq_kick (uint)

+
Kick stuck taskqs to spawn threads. When a non-zero value is written to it, it will scan all the taskqs. If any of them have a pending task more than 5 seconds old, it will kick it to spawn more threads. This can be used if you find that a rare deadlock occurs because one or more taskqs didn't spawn a thread when they should have.

Default value: 0
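For example, a stuck taskq can be kicked at runtime by writing a non-zero value to the parameter (assuming the module is loaded):

# echo 1 > /sys/module/spl/parameters/spl_taskq_kick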

+
+

+

spl_taskq_thread_bind (int)

+
Bind taskq threads to specific CPUs. When enabled all + taskq threads will be distributed evenly over the available CPUs. By default, + this behavior is disabled to allow the Linux scheduler the maximum flexibility + to determine where a thread should run. +

Default value: 0

+
+

+

spl_taskq_thread_dynamic (int)

+
Allow dynamic taskqs. When enabled taskqs which set the + TASKQ_DYNAMIC flag will by default create only a single thread. New threads + will be created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will be + promptly destroyed. By default this behavior is enabled but it can be disabled + to aid performance analysis or troubleshooting. +

Default value: 1

+
+

+

spl_taskq_thread_priority (int)

+
Allow newly created taskq threads to set a non-default + scheduler priority. When enabled the priority specified when a taskq is + created will be applied to all threads created by that taskq. When disabled + all threads will use the default Linux kernel thread priority. By default, + this behavior is enabled. +

Default value: 1

+
+

+

spl_taskq_thread_sequential (int)

+
The number of items a taskq worker thread must handle + without interruption before requesting a new worker thread be spawned. This is + used to control how quickly taskqs ramp up the number of threads processing + the queue. Because Linux thread creation and destruction are relatively + inexpensive a small default value has been selected. This means that normally + threads will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for some + configurations. +

Default value: 4

+
+

+

spl_max_show_tasks (uint)

+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/{taskq,taskq-all}. Write 0 to turn off the limit. The proc file will walk the lists with the lock held, so reading it could cause a lockup if the list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.

Default value: 512

+
+
+
+
+ + + + + +
October 28, 2017
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/vdev_id.conf.5.html b/man/v0.8/5/vdev_id.conf.5.html new file mode 100644 index 000000000..92f44cc16 --- /dev/null +++ b/man/v0.8/5/vdev_id.conf.5.html @@ -0,0 +1,345 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
vdev_id.conf(5)File Formats Manualvdev_id.conf(5)
+
+
+

+

vdev_id.conf - Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format consisting of a + keyword followed by one or more values on a single line. Any line not + beginning with a recognized keyword is ignored. Comments may optionally + begin with a hash character.

+

The following keywords and values are used.

+
+
+
Maps a device link in the /dev directory hierarchy to a new device name. + The udev rule defining the device link must have run prior to + vdev_id(8). A defined alias takes precedence over a + topology-derived name, but the two naming methods can otherwise coexist. + For example, one might name drives in a JBOD with the sas_direct topology + while naming an internal L2ARC device with an alias. +

name - the name of the link to the device that will be created in /dev/disk/by-vdev.

+

devlink - the name of the device link that has already + been defined by udev. This may be an absolute path or the base + filename.

+

+
+
+
Maps a physical path to a channel name (typically representing a single + disk enclosure). +

+
+ +
Additionally create /dev/by-enclosure symlinks to the disk enclosure sg + devices using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form of: +

/dev/by-enclosure/<prefix>-<channel><num>

+

Defaults to "enc" if not specified.

+
+
+
pci_slot - specifies the PCI slot of the HBA hosting the disk enclosure being mapped, as found in the output of lspci(8). This argument is not used in sas_switch mode.

port - specifies the numeric identifier of the HBA or + SAS switch port connected to the disk enclosure being mapped.

+

name - specifies the name of the channel.

+

+
+
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is specified then + the mapping is only applied to slots in the named channel, otherwise the + mapping is applied to all channels. The first-specified slot rule + that can match a slot takes precedence. Therefore a channel-specific + mapping for a given slot should generally appear before a generic mapping + for the same slot. In this way a custom mapping may be applied to a + particular channel and a default mapping applied to the others. +

+
+
+
Specifies whether vdev_id(8) will handle only dm-multipath devices. + If set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+

+
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4. +

+
+
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay. +

bay - read the slot number from the bay identifier.

+

phy - read the slot number from the phy identifier.

+

port - use the SAS port as the slot number.

+

id - use the scsi id as the slot number.

+

lun - use the scsi lun as the slot number.

+

ses - use the SCSI Enclosure Services (SES) enclosure + device slot number, as reported by sg_ses(8). This is intended + for use only on systems where bay is unsupported, noting that + port and id may be unstable across disk replacement.

+
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping.

+

+
	multipath     no
+	topology      sas_direct
+	phys_per_port 4
+	slot          bay
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         C
+	channel 86:00.0  0         D
+	# Custom mapping for Channel A
+	#    Linux      Mapped
+	#    Slot       Slot      Channel
+	slot 1          7         A
+	slot 2          10        A
+	slot 3          3         A
+	slot 4          6         A
+	# Default mapping for B, C, and D
+	slot 1          4
+	slot 2          2
+	slot 3          1
+	slot 4          3
+

A SAS-switch topology. Note that the channel keyword takes + only two arguments in this example.

+

+
	topology      sas_switch
+	#       SWITCH PORT  CHANNEL NAME
+	channel 1            A
+	channel 2            B
+	channel 3            C
+	channel 4            D
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+

+
	multipath yes
+	#       PCI_SLOT HBA PORT  CHANNEL NAME
+	channel 85:00.0  1         A
+	channel 85:00.0  0         B
+	channel 86:00.0  1         A
+	channel 86:00.0  0         B
+

A configuration with enclosure_symlinks enabled.

+

+
	multipath yes
+	enclosure_symlinks yes
+	#          PCI_ID      HBA PORT     CHANNEL NAME
+	channel    05:00.0     1            U
+	channel    05:00.0     0            L
+	channel    06:00.0     1            U
+	channel    06:00.0     0            L
+In addition to the disks symlinks, this configuration will create: +

+
	/dev/by-enclosure/enc-L0
+	/dev/by-enclosure/enc-L1
+	/dev/by-enclosure/enc-U0
+	/dev/by-enclosure/enc-U1
+

A configuration using device link aliases.

+

+
	#     by-vdev
+	#     name     fully qualified or base name of device link
+	alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+	alias d2       wwn-0x5000c5002def789e
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/zfs-events.5.html b/man/v0.8/5/zfs-events.5.html new file mode 100644 index 000000000..dd352c0bf --- /dev/null +++ b/man/v0.8/5/zfs-events.5.html @@ -0,0 +1,848 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)File Formats ManualZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get a full detail of the events and what information is + available about it.

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.
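For example, events can be watched as they are generated with the follow option; each event's class field carries the full name, e.g. ereport.fs.zfs.checksum:

# zpool events -f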

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

deadman

+
Issued when an I/O is determined to be "hung"; this can be caused by lost completion events due to flaky hardware or drivers. See the zfs_deadman_failmode module option description for additional information regarding "hung" I/O detection and configuration.
+

+

delay

+
Issued when a completed I/O exceeds the maximum allowed + time specified by the zio_slow_io_ms module option. This can be an + indicator of problems with the underlying storage device. The number of delay + events is ratelimited by the zfs_slow_io_events_per_second module + parameter.
+

+

config.sync

+
Issued every time a vdev change has been made to the pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+

+

vdev.unknown

+
Issued when the vdev is unknown. Such as trying to clear device errors on a vdev that has failed or been kicked from the system/pool and is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data has been detected on a vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if a vdev has been kicked from the system outside of ZFS (such as when the kernel has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

scrub.abort

+
Issued when a scrub is aborted on a pool.
+

+

scrub.resume

+
Issued when a scrub is resumed on a pool.
+

+

scrub.paused

+
Issued when a scrub is paused on a pool.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. + See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, + 3=tryimport, 4=recover 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, + 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The indirect level for the block. Level 0 is the lowest + level and includes data blocks. Values > 0 indicate metadata blocks at the + appropriate level.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O. The + errno is compatible with errno(3) with the value for EBADE (0x34) used + to indicate ZFS checksum error.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time elapsed (in nanoseconds) waiting for the block + layer to complete the I/O. Unlike zio_delta this does not include any + vdev queuing time and is therefore solely a measure of the block layer + performance.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value for the block.
+

+

cksum_actual

+
The actual checksum value for an errant block.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Whether or not the data is byteswapped.
+

+

bad_ranges

+
[start, end) pairs of corruption offsets. Offsets are + always aligned on a 64-bit boundary, and can include some gaps of + non-corruption. (See bad_ranges_min_gap)
+

+

bad_ranges_min_gap

+
In order to bound the size of the bad_ranges + array, gaps of non-corruption less than or equal to bad_ranges_min_gap + bytes have been merged with adjacent corruption. Always at least 8 bytes, + since corruption is detected on a 64-bit word basis.
+

+

bad_range_sets

+
This array has one element per range in + bad_ranges. Each element contains the count of bits in that range which + were clear in the good data and set in the bad data.
+

+

bad_range_clears

+
This array has one element per range in + bad_ranges. Each element contains the count of bits for that range + which were set in the good data and clear in the bad data.
+

+

bad_set_bits

+
If this field exists, it is an array of: (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+

+

bad_cleared_bits

+
Like bad_set_bits, but contains: (good data & + ~(bad data)); that is, the bits set in the good data which are cleared in the + bad data.
+

+

bad_set_histogram

+
If this field exists, it is an array of counters. Each entry counts bits set in a particular bit of a big-endian uint64 type. The first entry counts bits set in the high-order bit of the first byte, the 9th byte, etc, and the last entry counts bits set in the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+

+

bad_cleared_histogram

+
If this field exists, it is an array of counters. Each entry counts bit clears in a particular bit of a big-endian uint64 type. The first entry counts bit clears of the high-order bit of the first byte, the 9th byte, etc, and the last entry counts clears of the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+

+
+
+

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_DDT_READ_START0x00000200R----
ZIO_STAGE_DDT_READ_DONE0x00000400R----
ZIO_STAGE_DDT_WRITE0x00000800-W---
ZIO_STAGE_DDT_FREE0x00001000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00002000RWFC-
ZIO_STAGE_GANG_ISSUE0x00004000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00008000-W---
ZIO_STAGE_DVA_ALLOCATE0x00010000-W---
ZIO_STAGE_DVA_FREE0x00020000--F--
ZIO_STAGE_DVA_CLAIM0x00040000---C-
ZIO_STAGE_READY0x00080000RWFCI
ZIO_STAGE_VDEV_IO_START0x00100000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00200000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00400000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x00800000R----
ZIO_STAGE_DONE0x01000000RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe + its function and are used to govern its behavior. These flags will be set in + an event as an zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_DONT_CACHE0x00000800
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+
+ + + + + +
October 24, 2018
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/zfs-module-parameters.5.html b/man/v0.8/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..6968334c5 --- /dev/null +++ b/man/v0.8/5/zfs-module-parameters.5.html @@ -0,0 +1,2268 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)File Formats ManualZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.

+

+
+

+

+

dbuf_cache_max_bytes (ulong)

+
Maximum size in bytes of the dbuf cache. When 0 + this value will default to 1/2^dbuf_cache_shift (1/32) of the target + ARC size, otherwise the provided value in bytes will be used. The behavior of + the dbuf cache and its associated settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat. +

Default value: 0.
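As a sketch, the dbuf cache behavior can be observed and the limit raised at runtime (the value is illustrative and assumes the parameter is writable on the running system):

# cat /proc/spl/kstat/zfs/dbufstats
# echo 104857600 > /sys/module/zfs/parameters/dbuf_cache_max_bytes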

+
+

+

dbuf_metadata_cache_max_bytes (ulong)

+
Maximum size in bytes of the metadata dbuf cache. When 0 this value will default to 1/2^dbuf_metadata_cache_shift (1/64) of the target ARC size, otherwise the provided value in bytes will be used. The behavior of the metadata dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.

Default value: 0.

+
+

+

dbuf_cache_hiwater_pct (uint)

+
The percentage over dbuf_cache_max_bytes when + dbufs must be evicted directly. +

Default value: 10%.

+
+

+

dbuf_cache_lowater_pct (uint)

+
The percentage below dbuf_cache_max_bytes when the + evict thread stops evicting dbufs. +

Default value: 10%.

+
+

+

dbuf_cache_shift (int)

+
Set the size of the dbuf cache, + dbuf_cache_max_bytes, to a log2 fraction of the target arc size. +

Default value: 5.

+
+

+

dbuf_metadata_cache_shift (int)

+
Set the size of the dbuf metadata cache, + dbuf_metadata_cache_max_bytes, to a log2 fraction of the target arc + size. +

Default value: 6.

+
+

+

dmu_prefetch_max (int)

+
Limit the amount of data (in bytes) that can be prefetched with one call. This helps to limit the amount of memory that can be used by prefetching.

Default value: 134,217,728 (128MB).

+
+

+

ignore_hole_birth (int)

+
This is an alias for + send_holes_without_birth_time.
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warm-up. When the L2ARC is cold the fill + interval will be set as fast as possible. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only applicable in related situations. +

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writing +

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
How far through the ARC lists to search for L2ARC + cacheable content, expressed as a multiplier of l2arc_write_max +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Scales l2arc_headroom by this percentage when + L2ARC contents are being successfully compressed before writing. A value of + 100 disables this feature. +

Default value: 200%.

+
+

+

l2arc_noprefetch (int)

+
Do not write buffers to L2ARC if they were prefetched but + not used by applications +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Cold L2ARC devices will have l2arc_write_max + increased by this amount while they remain cold. +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval +

Default value: 8,388,608.

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_force_ganging (ulong)

+
Make some blocks above a certain size be gang blocks. + This option is used by the test suite to facilitate testing. +

Default value: 16,777,217.

+
+

+

zfs_metaslab_segment_weight_enabled (int)

+
Enable/disable segment-based metaslab selection. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_switch_threshold (int)

+
When using segment-based metaslab selection, continue + allocating from the active metaslab until zfs_metaslab_switch_threshold + worth of buckets have been exhausted. +

Default value: 2.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_df_max_search (int)

+
Maximum distance to search forward from the last offset. + Without this limit, fragmented pools can see >100,000 iterations and + metaslab_block_picker() becomes the performance limiting factor on + high-performance storage. +

With the default setting of 16MB, we typically see less than 500 + iterations, even with very fragmented, ashift=9 pools. The maximum number of + iterations possible is: metaslab_df_max_search / (2 * + (1<<ashift)). With the default setting of 16MB this is 16*1024 + (with ashift=9) or 2048 (with ashift=12).

+

Default value: 16,777,216 (16MB)
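A quick check of the iteration bound described above, using plain shell arithmetic to restate the formula:

max_search=$((16 * 1024 * 1024))          # metaslab_df_max_search default
echo $((max_search / (2 * (1 << 9))))     # 16384 iterations with ashift=9
echo $((max_search / (2 * (1 << 12))))    # 2048 iterations with ashift=12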

+
+

+

metaslab_df_use_largest_segment (int)

+
If we are not searching forward (due to + metaslab_df_max_search, metaslab_df_free_pct, or metaslab_df_alloc_threshold), + this tunable controls what segment is used. If it is set, we will use the + largest free segment. If it is not set, we will use a segment of exactly the + requested size (or larger). +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_vdev_default_ms_count (int)

+
When a vdev is added target this number of metaslabs per + top-level vdev. +

Default value: 200.

+
+

+

zfs_vdev_min_ms_count (int)

+
Minimum number of metaslabs to create in a top-level + vdev. +

Default value: 16.

+
+

+

vdev_ms_count_limit (int)

+
Practical upper limit of total metaslabs per top-level + vdev. +

Default value: 131,072.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

send_holes_without_birth_time (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. This is useful if you suspect + your datasets are affected by a bug in hole_birth. +

Use 1 for on (default) and 0 for off.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24.

+
+

+

spa_load_print_vdev_tree (int)

+
Whether to print the vdev tree in the debugging message + buffer during pool import. Use 0 to disable and 1 to enable. +

Default value: 0.

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1.

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal is not performed. It can be toggled once the import has started to + stop or start the traversal.

+

Default value: 1.

+
+

+

spa_load_verify_shift (int)

+
Sets the maximum number of bytes to consume during pool + import to the log2 fraction of the target arc size. +

Default value: 4.

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5.
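For example, a rough estimate of the reserved slop space for a hypothetical 10 TiB pool with the default spa_slop_shift of 5:

pool_bytes=$((10 * 1024 ** 4))
echo $((pool_bytes >> 5))    # 343597383680 bytes, roughly 320 GiB held in reserve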

+
+

+

vdev_removal_max_span (int)

+
During top-level vdev removal, chunks of data are copied + from the vdev which may include free space in order to trade bandwidth for + IOPS. This parameter determines the maximum span of free space (in bytes) + which will be included as "unnecessary" data in a chunk of copied + data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept when doing + regular reads (but there's no reason it has to be the same).

+

Default value: 32,768.

+
+

+

zap_iterate_prefetch (int)

+
If this is set, when we start iterating over a ZAP + object, zfs will prefetch the entire object (all leaf blocks). However, this + is limited by dmu_prefetch_max. +

Use 1 for on (default) and 0 for off.

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_max_distance (uint)

+
Max bytes to prefetch per stream (default 8MB). +

Default value: 8,388,608.

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_abd_scatter_min_size (uint)

+
This is the minimum allocation size that will use scatter + (page-based) ABD's. Smaller allocations will use linear ABD's. +

Default value: 1536 (512B and 1KB allocations will be + linear).

+
+

+

zfs_arc_dnode_limit (ulong)

+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling to the amount of dnode metadata, and defaults to 0, which indicates that a percentage of the ARC meta buffers, determined by zfs_arc_dnode_limit_percent, may be used for dnodes.

See also zfs_arc_meta_prune which serves a similar purpose + but is used when the amount of metadata in the ARC exceeds + zfs_arc_meta_limit rather than in response to overall demand for + non-metadata.

+

+

Default value: 0.

+
+

+

zfs_arc_dnode_limit_percent (ulong)

+
Percentage that can be consumed by dnodes of ARC meta + buffers. +

See also zfs_arc_dnode_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

Default value: 10%.

+
+

+

zfs_arc_dnode_reduce_percent (ulong)

+
Percentage of ARC dnodes to try to scan in response to + demand for non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit. +

+

Default value: 10% of the number of dnodes in the ARC.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The arc_grow_retry value (default 5) is + the number of seconds the ARC will wait before trying to resume growth after a + memory pressure event. +

Default value: 0.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10%.

+
+

+

zfs_arc_max (ulong)

+
Max arc size of ARC in bytes. If set to 0 then it will + consume 1/2 of system RAM. This value must be at least 67108864 (64 + megabytes). +

This value can be changed dynamically with some caveats. It cannot + be set back to 0 while running and reducing it below the current ARC size + will not cause the ARC to shrink without memory pressure to induce + shrinking.

+

Default value: 0.
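A minimal sketch of how this is commonly tuned on Linux (assuming the standard /sys/module/zfs/parameters path and an /etc/modprobe.d/zfs.conf options file for persistence); the 4 GiB cap is only an example:

echo $((4 * 1024 ** 3)) > /sys/module/zfs/parameters/zfs_arc_max
echo "options zfs zfs_arc_max=$((4 * 1024 ** 3))" >> /etc/modprobe.d/zfs.conf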

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the + ARC attempting the free buffers in order to stay below the + zfs_arc_meta_limit. This value should not need to be tuned but is + available to facilitate performance analysis. +

Default value: 4096.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers are allowed to consume in the ARC. When this limit is reached meta data buffers will be reclaimed even if the overall arc_c_max has not been reached. This value defaults to 0, which indicates that a percentage of the ARC, determined by zfs_arc_meta_limit_percent, may be used for meta data.

This value may be changed dynamically, except that it cannot be set back to 0 (the percentage-based default); it must be set to an explicit value.

+

Default value: 0.

+
+

+

zfs_arc_meta_limit_percent (ulong)

+
Percentage of ARC buffers that can be used for meta data. +

See also zfs_arc_meta_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

+

Default value: 75%.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_strategy (int)

+
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy). A value of 0 (META_ONLY) will evict only the ARC meta data buffers. A value of 1 (BALANCED) indicates that additional data buffers may be evicted if required in order to evict the necessary number of meta data buffers.

Default value: 1.

+
+

+

zfs_arc_min (ulong)

+
Min arc size of ARC in bytes. If set to 0 then arc_c_min + will default to consuming the larger of 32M or 1/32 of total system memory. +

Default value: 0.

+
+

+

zfs_arc_min_prefetch_ms (int)

+
Minimum time prefetched blocks are locked in the ARC, + specified in ms. A value of 0 will default to 1000 ms. +

Default value: 0.

+
+

+

zfs_arc_min_prescient_prefetch_ms (int)

+
Minimum time "prescient prefetched" blocks are + locked in the ARC, specified in ms. These blocks are meant to be prefetched + fairly aggressively ahead of the code that may use them. A value of 0 + will default to 6000 ms. +

Default value: 0.

+
+

+

zfs_max_missing_tvds (int)

+
Number of missing top-level vdevs which will be allowed + during pool import (only in read-only mode). +

Default value: 0

+
+

+

zfs_multilist_num_sublists (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

Default value: 4 or the number of online CPUs, whichever is + greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be + overflowing if it exceeds the target size by 1/256th (0.3%) of the target + size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.
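Restating the formula above for a hypothetical 8 GiB ARC target with the default shift of 8:

arc_c=$((8 * 1024 ** 3))
echo $((arc_c >> 8))    # 33554432 bytes (32 MiB), i.e. 1/256 of arc_c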

+
+

+

+

zfs_arc_p_min_shift (int)

+
If set to a non zero value, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating both the minimum and maximum arc_p.

Default value: 0.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
If set to a non zero value, this will update + arc_shrink_shift (default 7) with the new value. +

Default value: 0.

+
+

+

zfs_arc_pc_percent (uint)

+
Percent of pagecache to reclaim arc to +

This tunable allows ZFS arc to play more nicely with the kernel's + LRU pagecache. It can guarantee that the arc size won't collapse under + scanning pressure on the pagecache, yet still allows arc to be reclaimed + down to zfs_arc_min if necessary. This value is specified as percent of + pagecache size (as measured by NR_FILE_PAGES) where that percent may exceed + 100. This only operates during memory pressure/reclaim.

+

Default value: 0% (disabled).

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_checksums_per_second (int)

+
Rate limit checksum events to this many per second. Note + that this should not be set below the zed thresholds (currently 10 checksums + over 10 sec) or else zed may not trigger any action. +

Default value: 20

+
+

+

zfs_commit_timeout_pct (int)

+
This controls the amount of time that a ZIL block (lwb) + will remain "open" when it isn't "full", and it has a + thread waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly impacting + the latency of each individual transaction record (itx). +

Default value: 5%.

+
+

+

zfs_condense_indirect_vdevs_enable (int)

+
Enable condensing indirect vdev mappings. When set to a + non-zero value, attempt to condense indirect vdev mappings if the mapping uses + more than zfs_condense_min_mapping_bytes bytes of memory and if the + obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The condensing process + is an attempt to save memory by removing obsolete mappings. +

Default value: 1.

+
+

+

zfs_condense_max_obsolete_bytes (ulong)

+
Only attempt to condense indirect vdev mappings if the on-disk size of the obsolete space map object is greater than this number of bytes (see zfs_condense_indirect_vdevs_enable).

Default value: 1,073,741,824.

+
+

+

zfs_condense_min_mapping_bytes (ulong)

+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable). +

Default value: 131,072.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
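A short sketch of the workflow described above, using the paths documented in this entry:

echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
cat /proc/spl/kstat/zfs/dbgmsg        # read the debug log
echo 0 > /proc/spl/kstat/zfs/dbgmsg   # writing 0 clears the log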

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
This feature is currently unused. It is normally used for + controlling what reporting is available under /proc/spl/kstat/zfs. +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms milliseconds, or when an individual I/O takes + longer than zfs_deadman_ziotime_ms milliseconds, then the operation is + considered to be "hung". If zfs_deadman_enabled is set then + the deadman behavior is invoked as described by the + zfs_deadman_failmode module option. By default the deadman is enabled + and configured to wait which results in "hung" I/Os only + being logged. The deadman is automatically disabled when a pool gets + suspended. +

Default value: 1.

+
+

+

zfs_deadman_failmode (charp)

+
Controls the failure behavior when the deadman detects a + "hung" I/O. Valid values are wait, continue, and + panic. +

wait - Wait for a "hung" I/O to complete. For + each "hung" I/O a "deadman" event will be posted + describing that I/O.

+

continue - Attempt to recover from a "hung" I/O + by re-dispatching it to the I/O pipeline if possible.

+

panic - Panic the system. This can be used to facilitate an + automatic fail-over to a properly configured fail-over partner.

+

Default value: wait.

+
+

+

zfs_deadman_checktime_ms (int)

+
Check time in milliseconds. This defines the frequency at + which we check for hung I/O and potentially invoke the + zfs_deadman_failmode behavior. +

Default value: 60,000.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and also the interval after which a pool sync operation is + considered to be "hung". Once this limit is exceeded the deadman + will be invoked every zfs_deadman_checktime_ms milliseconds until the + pool sync completes. +

Default value: 600,000.

+
+

+

zfs_deadman_ziotime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and an individual I/O operation is considered to be + "hung". As long as the I/O remains "hung" the deadman will + be invoked every zfs_deadman_checktime_ms milliseconds until the I/O + completes. +

Default value: 300,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching dedup-ed blks +

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60%.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.
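Applying the rule of thumb above to a pool expected to sustain roughly 2,000 operations per second (a purely illustrative figure):

echo $((1000000000 / 2000))    # 500000, which matches the default zfs_delay_scale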

+
+

+

zfs_slow_io_events_per_second (int)

+
Rate limit delay zevents (which report slow I/Os) to this + many per second. +

Default value: 20

+
+

+

zfs_unlink_suspend_progress (uint)

+
When enabled, files will not be asynchronously removed + from the list of pending unlinks and the space they consume will be leaked. + Once this option has been disabled and the dataset is remounted, the pending + unlinks will be processed and the freed space returned to the pool. This + option is used by the test suite to facilitate testing. +

Uses 0 (default) to allow progress and 1 to pause + progress.

+
+

+

zfs_delete_blocks (ulong)

+
This is used to define a large file for the purposes of delete. Files containing more than zfs_delete_blocks will be deleted asynchronously while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call at the expense of a longer delay before the freed space is available.

Default value: 20,480.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10% of physical RAM, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25%.

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_sync_percent (int)

+
Start syncing out a transaction group if there's at least + this much dirty data as a percentage of zfs_dirty_data_max. This should + be less than zfs_vdev_async_write_active_min_dirty_percent. +

Default value: 20% of zfs_dirty_data_max.

+
+

+

zfs_fletcher_4_impl (string)

+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, scalar, + sse2, ssse3, avx2, avx512f, and + aarch64_neon. All of the selectors except fastest and + scalar require instruction set extensions to be available and will + only appear if ZFS detects that they are present at runtime. If multiple + implementations of fletcher 4 are available, the fastest will be + chosen using a micro benchmark. Selecting scalar results in the + original, CPU based calculation, being used. Selecting any option other than + fastest and scalar results in vector instructions from the + respective CPU instruction set being used.

+

Default value: fastest.
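As a sketch (assuming the parameter is exposed like the other module parameters, with the selected implementation bracketed in the same way documented below for zfs_vdev_raidz_impl):

cat /sys/module/zfs/parameters/zfs_fletcher_4_impl    # list available selectors
echo avx2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl   # only if avx2 is listed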

+
+

+

zfs_free_bpobj_enabled (int)

+
Enable/disable the processing of the free_bpobj object. +

Default value: 1.

+
+

+

zfs_async_block_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_override_estimate_recordsize (ulong)

+
Record size calculation override for zfs send estimates. +

Default value: 0.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60%.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30%.

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_initializing_max_active (int)

+
Maximum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_initializing_min_active (int)

+
Minimum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. It must be at + least the sum of each queue's min_active. See the section "ZFS I/O + SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_removal_max_active (int)

+
Maximum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_removal_min_active (int)

+
Minimum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_trim_max_active (int)

+
Maximum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_trim_min_active (int)

+
Minimum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_queue_depth_pct (int)

+
Maximum number of queued allocations per top-level vdev + expressed as a percentage of zfs_vdev_async_write_max_active which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. It allows for + dynamic allocation distribution when devices are imbalanced as fuller devices + will tend to be slower than empty devices. +

See also zio_dva_throttle_enabled.

+

Default value: 1000%.

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Symbolic Name                 Description
1       ZFS_DEBUG_DPRINTF             Enable dprintf entries in the debug log.
2       ZFS_DEBUG_DBUF_VERIFY *       Enable extra dbuf verifications.
4       ZFS_DEBUG_DNODE_VERIFY *      Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES           Enable snapshot name verification.
16      ZFS_DEBUG_MODIFY              Check for illegally modified ARC buffers.
64      ZFS_DEBUG_ZIO_FREE            Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY    Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY     Verify space accounting on disk matches in-core range_trees.
512     ZFS_DEBUG_SET_ERROR           Enable SET_ERROR and dprintf entries in the debug log.
1024    ZFS_DEBUG_INDIRECT_REMAP      Verify split blocks created by device removal.
2048    ZFS_DEBUG_TRIM                Verify TRIM ranges are always within the allocatable range tree.
+

* Requires debug build.

+

Default value: 0.
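For example, since the values are bitwise-or'd together, enabling snapshot name verification (8) together with ARC modify checks (16) means writing 24 (assuming the standard /sys/module/zfs/parameters path):

echo $((8 | 16)) > /sys/module/zfs/parameters/zfs_flags    # writes 24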

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
During a zfs destroy operation using + feature@async_destroy a minimum of this much time will be spent working + on freeing blocks per txg. +

Default value: 1,000.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil. Larger blocks will be + treated as if the dataset being written to had the property setting + logbias=throughput. +

Default value: 32,768.

+
+

+

zfs_initialize_value (ulong)

+
Pattern written to vdev free space by zpool + initialize. +

Default value: 16,045,690,984,833,335,022 + (0xdeadbeefdeadbeee).

+
+

+

zfs_lua_max_instrlimit (ulong)

+
The maximum execution time limit that can be set for a + ZFS channel program, specified as a number of Lua instructions. +

Default value: 100,000,000.

+
+

+

zfs_lua_max_memlimit (ulong)

+
The maximum memory limit that can be set for a ZFS + channel program, specified in bytes. +

Default value: 104,857,600.

+
+

+

zfs_max_dataset_nesting (int)

+
The maximum depth of nested datasets. This value can be + tuned temporarily to fix existing datasets that exceed the predefined limit. +

Default value: 50.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger I/O, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations + if their fragmentation metric (measured as a percentage) is less than or equal + to this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also crossed + this threshold. +

Default value: 95.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be + eligible for allocations. The value is expressed as a percentage of free space + beyond which a metaslab group is always eligible for allocations. If a + metaslab group's free space is less than or equal to the threshold, the + allocator will avoid allocating to that group unless all groups in the pool + have reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of 0 disables the + feature and causes all metaslab groups to be eligible for allocations. +

This parameter allows one to deal with pools having heavily + imbalanced vdevs such as would be the case when a new vdev has been added. + Setting the threshold to a non-zero percentage will stop allocations from + being made to vdevs that aren't filled to the specified percentage and allow + lesser filled vdevs to acquire more allocations than they otherwise would + under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_ddt_data_is_special (int)

+
If enabled, ZFS will place DDT data into the special + allocation class. +

Default value: 1.

+
+

+

zfs_user_indirect_is_special (int)

+
If enabled, ZFS will place user data (both file and zvol) + indirect blocks into the special allocation class. +

Default value: 1.

+
+

+

zfs_multihost_history (int)

+
Historical statistics for the last N multihost updates + will be available in /proc/spl/kstat/zfs/<pool>/multihost +

Default value: 0.

+
+

+

zfs_multihost_interval (ulong)

+
Used to control the frequency of multihost writes which + are performed when the multihost pool property is on. This is one + factor used to determine the length of the activity check during import. +

The multihost write period is zfs_multihost_interval / + leaf-vdevs milliseconds. On average a multihost write will be issued for + each leaf vdev every zfs_multihost_interval milliseconds. In + practice, the observed period can vary with the I/O load and this observed + value is the delay which is stored in the uberblock.

+

Default value: 1000.

+
+

+

zfs_multihost_import_intervals (uint)

+
Used to control the duration of the activity test on + import. Smaller values of zfs_multihost_import_intervals will reduce + the import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval * + zfs_multihost_import_intervals, or the same product computed on the host + which last had the pool imported (whichever is greater). The activity check + time may be further extended if the value of mmp delay found in the best + uberblock indicates actual multihost updates happened at longer intervals + than zfs_multihost_interval. A minimum value of 100ms is + enforced.

+

A value of 0 is ignored and treated as if it was set to 1.

+

Default value: 20.
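A worked example of the two products described above, using the default values and a hypothetical pool with 8 leaf vdevs:

interval_ms=1000; import_intervals=20; leaves=8
echo $((interval_ms / leaves))             # per-leaf write period: 125 ms
echo $((interval_ms * import_intervals))   # minimum activity check: 20000 ms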

+
+

+

zfs_multihost_fail_intervals (uint)

+
Controls the behavior of the pool when multihost write + failures or delays are detected. +

When zfs_multihost_fail_intervals = 0, multihost write + failures or delays are ignored. The failures will still be reported to the + ZED which depending on its configuration may take action such as suspending + the pool or offlining a device.

+

+

When zfs_multihost_fail_intervals > 0, the pool will be + suspended if zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds pass without a successful mmp write. This guarantees the + activity test will see mmp writes if the pool is imported. A value of 1 is + ignored and treated as if it was set to 2. This is necessary to prevent the + pool from being suspended due to normal, small I/O latency variations.

+

+

Default value: 10.

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O. This results in scrubs not actually + scrubbing data and simply doing a metadata crawl of the pool instead. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set to disable block prefetching for scrubs. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flush operations on disks when writing. + Setting this will cause pool corruption on power loss if a volatile + out-of-order write cache is enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_dmu_offset_next_sync (int)

+
Enable forcing txg sync to find holes. When enabled, this forces ZFS to act like prior versions when SEEK_HOLE or SEEK_DATA flags are used: if a dnode is dirty, txgs are synced so that the hole and data information can be found.

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched during a + pool traversal (eg: zfs send or other data crawling operations) +

Default value: 52,428,800.

+
+

+

zfs_per_txg_dirty_frees_percent (ulong)

+
Tunable to control percentage of dirtied indirect blocks + from frees allowed into one TXG. After this threshold is crossed, additional + frees will wait until the next TXG. A value of zero will disable this + throttle. +

Default value: 5, set to 0 to disable.

+
+

+

zfs_prefetch_disable (int)

+
This tunable disables predictive prefetch. Note that it + leaves "prescient" prefetch (e.g. prefetch for zfs send) intact. + Unlike predictive prefetch, prescient prefetch never issues i/os that end up + not being needed, so it can't hurt performance. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_checksum_disable (int)

+
This tunable disables qat hardware acceleration for + sha256 checksums. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_compress_disable (int)

+
This tunable disables qat hardware acceleration for gzip + compression. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_encrypt_disable (int)

+
This tunable disables qat hardware acceleration for + AES-GCM encryption. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historical statistics for the last N reads will be + available in /proc/spl/kstat/zfs/<pool>/reads +

Default value: 0 (no data is kept).

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_reconstruct_indirect_combinations_max (int)

+
If an indirect split block contains more than this many + possible unique combinations when being reconstructed, consider it too + computationally expensive to check them all. Instead, try at most + zfs_reconstruct_indirect_combinations_max randomly-selected + combinations each time the block is accessed. This allows all segment copies + to participate fairly in the reconstruction when all combinations cannot be + checked and prevents repeated use of one bad copy. +

Default value: 4096.

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_removal_ignore_errors (int)

+
+

Ignore hard IO errors during device removal. When set, if a device + encounters a hard IO error during the removal process the removal will not + be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is not recommended. This should only be used as a + last resort when the pool cannot be returned to a healthy state prior to + removing the device.

+

Default value: 0.

+
+

+

zfs_removal_suspend_progress (int)

+
+

This is used by the test suite so that it can ensure that certain + actions happen while in the middle of a removal.

+

Default value: 0.

+
+

+

zfs_remove_max_segment (int)

+
+

The largest contiguous segment that we will attempt to allocate + when removing a device. This can be no larger than 16MB. If there is a + performance problem with attempting to allocate large blocks, consider + decreasing this.

+

Default value: 16,777,216 (16MB).

+
+

+

zfs_resilver_min_time_ms (int)

+
Resilvers are processed by the sync thread. While + resilvering it will spend at least this much time working on a resilver + between txg flushes. +

Default value: 3,000.

+
+

+

zfs_scan_ignore_errors (int)

+
If set to a nonzero value, remove the DTL (dirty time + list) upon completion of a pool scan (scrub) even if there were unrepairable + errors. It is intended to be used during pool repair or recovery to stop + resilvering when the pool is next imported. +

Default value: 0.

+
+

+

zfs_scrub_min_time_ms (int)

+
Scrubs are processed by the sync thread. While scrubbing + it will spend at least this much time working on a scrub between txg flushes. +

Default value: 1,000.

+
+

+

zfs_scan_checkpoint_intval (int)

+
To preserve progress across reboots, the sequential scan algorithm periodically needs to stop metadata scanning and issue all the verification I/Os to disk. The frequency of this flushing is determined by the zfs_scan_checkpoint_intval tunable.

Default value: 7200 seconds (every 2 hours).

+
+

+

zfs_scan_fill_weight (int)

+
This tunable affects how scrub and resilver I/O segments are ordered. A higher number indicates that we care more about how filled in a segment is, while a lower number indicates we care more about the size of the extent without considering the gaps within a segment. This value is only tunable upon module insertion. Changing the value afterwards will have no effect on scrub or resilver performance.

Default value: 3.

+
+

+

zfs_scan_issue_strategy (int)

+
Determines the order that data will be verified while + scrubbing or resilvering. If set to 1, data will be verified as + sequentially as possible, given the amount of memory reserved for scrubbing + (see zfs_scan_mem_lim_fact). This may improve scrub performance if the + pool's data is very fragmented. If set to 2, the largest + mostly-contiguous chunk of found data will be verified first. By deferring + scrubbing of small segments, we may later find adjacent data to coalesce and + increase the segment size. If set to 0, zfs will use strategy 1 + during normal verification and strategy 2 while taking a checkpoint. +

Default value: 0.

+
+

+

zfs_scan_legacy (int)

+
A value of 0 indicates that scrubs and resilvers will + gather metadata in memory before issuing sequential I/O. A value of 1 + indicates that the legacy algorithm will be used where I/O is initiated as + soon as it is discovered. Changing this value to 0 will not affect scrubs or + resilvers that are already in progress. +

Default value: 0.

+
+

+

zfs_scan_max_ext_gap (int)

+
Indicates the largest gap in bytes between scrub / + resilver I/Os that will still be considered sequential for sorting purposes. + Changing this value will not affect scrubs or resilvers that are already in + progress. +

Default value: 2097152 (2 MB).

+
+

+

zfs_scan_mem_lim_fact (int)

+
Maximum fraction of RAM used for I/O sorting by + sequential scan algorithm. This tunable determines the hard limit for I/O + sorting memory usage. When the hard limit is reached we stop scanning metadata + and start issuing data verification I/O. This is done until we get below the + soft limit. +

Default value: 20 which is 5% of RAM (1/20).

+
+

+

zfs_scan_mem_lim_soft_fact (int)

+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below no action is taken. When we cross this limit from above it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.

Default value: 20 which is 5% of the hard limit (1/20).

+
+

+

zfs_scan_vdev_limit (int)

+
Maximum amount of data that can be concurrently issued at + once for scrubs and resilvers per leaf device, given in bytes. +

Default value: 41943040.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum + errors when sending data) +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_send_unmodified_spill_blocks (int)

+
Include unmodified spill blocks in the send stream. Under + certain circumstances previous versions of ZFS could incorrectly remove the + spill block from an existing object. Including unmodified copies of the spill + blocks creates a backwards compatible stream which will recreate a spill block + if it was incorrectly removed. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_send_queue_length (int)

+
The maximum number of bytes allowed in the zfs + send queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_recv_queue_length (int)

+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_sync_pass_deferred_free (int)

+
Flushing of data to disk is done in passes. Defer frees + starting in this pass +

Default value: 2.

+
+

+

zfs_spa_discard_memory_limit (int)

+
Maximum memory used for prefetching a checkpoint's space + map on each vdev while discarding the checkpoint. +

Default value: 16,777,216.

+
+

+

zfs_special_class_metadata_reserve_pct (int)

+
Only allow small data blocks to be allocated on the + special and dedup vdev types when the available free space percentage on these + vdevs exceeds this value. This ensures reserved space is available for pool + meta data as the special vdevs approach capacity. +

Default value: 25.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Starting in this sync pass, we disable compression + (including of metadata). With the default setting, in practice, we don't have + this many sync passes, so this has no effect. +

The original intent was that disabling compression would help the sync passes to converge. However, in practice disabling compression increases the average number of sync passes, because when we turn compression off, a lot of blocks' sizes will change and thus we have to re-allocate (not overwrite) them. It also increases the number of 128KB allocations (e.g. for indirect blocks and spacemaps) because these will not be compressed. The 128K allocations are especially detrimental to performance on highly fragmented systems, which may have very few free segments of this size, and may need to load new metaslabs to satisfy 128K allocations.

+

Default value: 8.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new block pointers starting in this pass +

Default value: 2.

+
+

+

zfs_sync_taskq_batch_pct (int)

+
This controls the number of threads used by the + dp_sync_taskq. The default value of 75% will create a maximum of one thread + per cpu. +

Default value: 75%.

+
+

+

zfs_trim_extent_bytes_max (unsigned int)

+
Maximum size of TRIM commands. Ranges larger than this will be split into chunks no larger than zfs_trim_extent_bytes_max bytes before being issued to the device.

Default value: 134,217,728.

+
+

+

zfs_trim_extent_bytes_min (unsigned int)

+
Minimum size of TRIM commands. TRIM ranges smaller than this will be skipped unless they're part of a larger range which was broken into chunks. This is done because it's common for these small TRIMs to negatively impact overall performance. This value can be set to 0 to TRIM all unallocated space.

Default value: 32,768.

+
+

+

zfs_trim_metaslab_skip (unsigned int)

+
Skip uninitialized metaslabs during the TRIM process. This option is useful for pools constructed from large thinly-provisioned devices where TRIM operations are slow. As a pool ages, an increasing fraction of the pool's metaslabs will be initialized, progressively degrading the usefulness of this option. This setting is stored when starting a manual TRIM and will persist for the duration of the requested TRIM.

Default value: 0.

+
+

+

zfs_trim_queue_limit (unsigned int)

+
Maximum number of queued TRIMs outstanding per leaf vdev. + The number of concurrent TRIM commands issued to the device is controlled by + the zfs_vdev_trim_min_active and zfs_vdev_trim_max_active module + options. +

Default value: 10.

+
+

+

zfs_trim_txg_batch (unsigned int)

+
The number of transaction groups worth of frees which + should be aggregated before TRIM operations are issued to the device. This + setting represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available for + use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default value of 32 was determined to be a reasonable compromise.

+

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historical statistics for the last N txgs will be + available in /proc/spl/kstat/zfs/<pool>/txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Flush dirty data to disk at least every N seconds + (maximum txg duration) +

Default value: 5.

+
+

+

zfs_vdev_aggregate_trim (int)

+
Allow TRIM I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will have already been aggregated by the metaslab. This option is provided for debugging and performance analysis.

Default value: 0.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 1,048,576.

+
+

+

zfs_vdev_aggregation_limit_non_rotating (int)

+
Max vdev I/O aggregation size for non-rotating media +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to

Default value: 16 (effectively 65536).

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default 64k). +

Default value: 16384.

+
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled as it has been found to not be + helpful for performance and in some cases harmful.

+

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_inc (int)

+
A number by which the balancing algorithm increments the load calculation, for the purpose of selecting the least busy mirror member, when an I/O immediately follows its predecessor on rotational vdevs.

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this window that do not immediately follow the previous I/O are incremented by half this value.

Default value: 5.

+
+

+

zfs_vdev_mirror_rotating_seek_offset (int)

+
The maximum distance for the last queued I/O in which the + balancing algorithm considers an I/O to have locality. See the section + "ZFS I/O SCHEDULER". +

Default value: 1048576.

+
+

+

zfs_vdev_mirror_non_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/Os do not immediately follow one another. +

Default value: 0.

+
+

+

zfs_vdev_mirror_non_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the load calculation for the purpose of selecting the least busy mirror member when an I/O lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. I/Os within this window that do not immediately follow the previous I/O are incremented by half this value.

Default value: 1.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O operations if the gap on-disk between + them is within this threshold. +

Default value: 32,768.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O operations if the gap on-disk between them is within this threshold.

Default value: 4,096.

+
+

+

zfs_vdev_raidz_impl (string)

+
Parameter for selecting the raidz parity implementation to use.

Options marked (always) below may be selected on module load as + they are supported on all systems. The remaining options may only be set + after the module is loaded, as they are available only if the + implementations are compiled in and supported on the running system.

+

Once the module is loaded, the content of + /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options + with the currently selected one enclosed in []. Possible options are: +
+ fastest - (always) implementation selected using built-in benchmark +
+ original - (always) original raidz implementation +
+ scalar - (always) scalar raidz implementation +
+ sse2 - implementation using SSE2 instruction set (64bit x86 only) +
+ ssse3 - implementation using SSSE3 instruction set (64bit x86 only) +
+ avx2 - implementation using AVX2 instruction set (64bit x86 only) +
+ avx512f - implementation using AVX512F instruction set (64bit x86 only) +
+ avx512bw - implementation using AVX512F & AVX512BW instruction sets + (64bit x86 only) +
+ aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) +
+ aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 + bit ARMv8 only)

+

Default value: fastest.
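For example, the parameter can be inspected and changed at runtime as sketched below; the cat output lists the compiled-in options with the current selection in brackets, and avx2 is only assumed to be supported on the running CPU:
# cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl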

+
+

+

zfs_zevent_cols (int)

+
When zevents are logged to the console, use this as the word wrap width.

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length. A value of 0 will result in a + calculated value which increases with the number of CPUs in the system + (minimum 64 events). Events in the queue can be viewed with the zpool + events command. +

Default value: 0.
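The queued events referred to above can be examined with the zpool events command, for example:
# zpool events
# zpool events -v      # include the full nvpair payload of each event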

+
+

+

zfs_zil_clean_taskq_maxalloc (int)

+
The maximum number of taskq entries that are allowed to + be cached. When this limit is exceeded transaction records (itxs) will be + cleaned synchronously. +

Default value: 1048576.

+
+

+

zfs_zil_clean_taskq_minalloc (int)

+
The number of taskq entries that are pre-populated when + the taskq is first created and are immediately available for use. +

Default value: 1024.

+
+

+

zfs_zil_clean_taskq_nthr_pct (int)

+
This controls the number of threads used by the dp_zil_clean_taskq. The default value of 100% will create a maximum of one thread per CPU.

Default value: 100%.

+
+

+

zil_maxblocksize (int)

+
This sets the maximum block size used by the ZIL. On very + fragmented pools, lowering this (typically to 36KB) can improve performance. +

Default value: 131072 (128KB).

+
+

+

zil_nocacheflush (int)

+
Disable the cache flush commands that are normally sent + to the disk(s) by the ZIL after an LWB write has completed. Setting this will + cause ZIL corruption on power loss if a volatile out-of-order write cache is + enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zil_replay_disable (int)

+
Disable intent log (ZIL) replay. Replay can be disabled to recover a pool with a corrupted ZIL.

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_bulk (ulong)

+
Limit SLOG write size per commit executed with synchronous priority. Any writes above that limit will be executed with lower (asynchronous) priority to limit potential SLOG device abuse by a single active ZIL writer.

Default value: 786,432.

+
+

+

zio_deadman_log_all (int)

+
If non-zero, the zio deadman will produce debugging + messages (see zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to gain + diagnostic information for hang conditions which don't involve a mutex or + other locking primitive; typically conditions in which a thread in the zio + pipeline is looping indefinitely. +

Default value: 0.

+
+

+

zio_decompress_fail_fraction (int)

+
If non-zero, this value represents the denominator of the + probability that zfs should induce a decompression failure. For instance, for + a 5% decompression failure rate, this value should be set to 20. +

Default value: 0.

+
+

+

zio_slow_io_ms (int)

+
An I/O operation taking more than zio_slow_io_ms milliseconds to complete is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow I/O counters can be seen with "zpool status -s".

+

Default value: 30,000.
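As a sketch, the per-vdev slow I/O counters can be inspected and the threshold lowered at runtime; the pool name tank and the 10-second value are placeholders:
# zpool status -s tank
# echo 10000 > /sys/module/zfs/parameters/zio_slow_io_ms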

+
+

+

zio_dva_throttle_enabled (int)

+
Throttle block allocations in the I/O pipeline. This + allows for dynamic allocation distribution when devices are imbalanced. When + enabled, the maximum number of pending allocations per top-level vdev is + limited by zfs_vdev_queue_depth_pct. +

Default value: 1.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread for I/O. These workers are responsible for I/O work such as compression and checksum calculations. A fractional number of CPUs will be rounded down.

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes. This may slightly + improve startup time on systems with a very large number of zvols. +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol block devices +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Discard (aka TRIM) operations done on zvols will be done + in batches of this many blocks, where block size is determined by the + volblocksize property of a zvol. +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+

zvol_request_sync (uint)

+
When processing I/O requests for a zvol, submit them synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0, requests are handled asynchronously by a thread pool. The number of requests which can be handled concurrently is controlled by zvol_threads.

Default value: 0.

+
+

+

zvol_threads (uint)

+
Max number of threads which can handle zvol I/O requests + concurrently. +

Default value: 32.

+
+

+

zvol_volmode (uint)

+
Defines the behaviour of zvol block devices when the volmode property is set to default. Valid values are 1 (full), 2 (dev) and 3 (none).

Default value: 1.
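This module parameter only defines what default means; the per-dataset volmode property can still be set explicitly, as in this sketch (the dataset name is a placeholder):
# zfs set volmode=dev tank/vol       # expose only the main device node, no partitions
# zfs set volmode=default tank/vol   # fall back to the zvol_volmode module setting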

+
+

+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.
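For example, the per-class limits are themselves module parameters and can be inspected or adjusted at runtime; the value written below is illustrative only:
# grep . /sys/module/zfs/parameters/zfs_vdev_*_max_active
# echo 3 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active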

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
        |              o---------| <-- zfs_vdev_async_write_max_active
   ^    |             /^         |
   |    |            / |         |
 active |           /  |         |
  I/O   |          /   |         |
 count  |         /    |         |
        |        /     |         |
        |-------o      |         | <-- zfs_vdev_async_write_min_active
       0|_______^______|_________|
        0%      |      |                100% of zfs_dirty_data_max
                |      |
                |      `-- zfs_vdev_async_write_active_max_dirty_percent
                `--------- zfs_vdev_async_write_active_min_dirty_percent
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+

+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.
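A worked example of the formula above, assuming the default zfs_delay_scale of 500,000 ns and the default zfs_delay_min_dirty_percent of 60%: at the midpoint of the curve the dirty data sits halfway between the 60% threshold and zfs_dirty_data_max, i.e. at 80% of the limit, so
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
         = 500,000 ns * (80% - 60%) / (100% - 80%)
         = 500,000 ns = 500us   (roughly 2000 IOPS, matching the midpoint shown in the curve below)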

+

+
delay
+
+ 10ms +-------------------------------------------------------------*+ +
+ | *| +
+ 9ms + *+ +
+ | *| +
+ 8ms + *+ +
+ | * | +
+ 7ms + * + +
+ | * | +
+ 6ms + * + +
+ | * | +
+ 5ms + * + +
+ | * | +
+ 4ms + * + +
+ | * | +
+ 3ms + * + +
+ | * | +
+ 2ms + (midpoint) * + +
+ | | ** | +
+ 1ms + v *** + +
+ | zfs_delay_scale ----------> ******** | +
+ 0 +-------------------------------------*********----------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
+100ms +-------------------------------------------------------------++
+
+ + + +
+ | | +
+ + *+ +
+ 10ms + *+ +
+ + ** + +
+ | (midpoint) ** | +
+ + | ** + +
+ 1ms + v **** + +
+ + zfs_delay_scale ----------> ***** + +
+ | **** | +
+ + **** + +100us + ** + +
+ + * + +
+ | * | +
+ + * + +
+ 10us + * + +
+ + + +
+ | | +
+ + + +
+ +--------------------------------------------------------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.

+
+
+ + + + + +
February 15, 2019
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/5/zpool-features.5.html b/man/v0.8/5/zpool-features.5.html new file mode 100644 index 000000000..18a5513a2 --- /dev/null +++ b/man/v0.8/5/zpool-features.5.html @@ -0,0 +1,1005 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)File Formats ManualZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature_name. The reverse DNS name ensures that the + feature's GUID is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their GUIDs. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
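As an illustrative sketch (the pool name tank is a placeholder), the feature states described above can be queried and individual features enabled with zpool:
# zpool get all tank | grep feature@
# zpool set feature@bookmarks=enabled tank
# zpool upgrade tank       # enable all features supported by this software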

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_name will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

+

allocation_classes

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:allocation_classes
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables support for separate allocation classes.

+

This feature becomes active when a dedicated allocation + class vdev (dedup or special) is created with the zpool create or + zpool add subcommands. With device removal, it can be returned to the + enabled state if all the dedicated allocation class vdevs are + removed.

+
+

+

async_destroy

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:async_destroy
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:bookmarks
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.

+
+

+

bookmark_v2

+
+ + + + + + + + + + + + + +
GUIDcom.datto:bookmark_v2
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark, extensible_dataset
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 bookmark is created + and will be returned to the enabled state when all v2 bookmarks are + destroyed.

+
+

+

device_removal

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:device_removal
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature enables the zpool remove subcommand to remove + top-level vdevs, evacuating them to reduce the total size of the pool.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

edonr

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:edonr
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Edon-R hash algorithm for + checksum, including for nopwrite (if compression is also enabled, an + overwrite of a block whose checksum matches the data being written will be + ignored). In an abundance of caution, Edon-R requires verification when used + with dedup: zfs set dedup=edonr,verify. See zfs(8).

+

Edon-R is a very high-performance hash algorithm that was part of + the NIST SHA-3 competition. It provides extremely high hash performance + (over 350% faster than SHA-256), but was not selected because of its + unsuitability as a general purpose secure hash algorithm. This + implementation utilizes the new salted checksumming functionality in ZFS, + which means that the checksum is pre-seeded with a secret 256-bit random key + (stored on the pool) before being fed the data block to be checksummed. Thus + the produced checksums are unique to a given pool.

+

When the edonr feature is set to enabled, the administrator can turn on the edonr checksum on any dataset using zfs set checksum=edonr. See zfs(8). This feature becomes active once a checksum property has been set to edonr, and will return to being enabled once all filesystems that have ever had their checksum set to edonr are destroyed.

+

The edonr feature is not supported by GRUB and must not be + used on the pool if GRUB needs to access the pool (e.g. for /boot).

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:embedded_data
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible + blocks are stored in the block "pointer" itself (a misnomer in + this case, as it contains the compressed data, rather than a pointer to its + location on disk). Thus the space of the block (one sector, typically 512 + bytes or 4KB) is saved, and no additional i/o is needed to read and write + the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:empty_bpobj
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:enabled_txg
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

encryption

+
+ + + + + + + + + + + + + +
GUIDcom.datto:encryption
READ-ONLY COMPATIBLEno
DEPENDENCIESbookmark_v2, extensible_dataset
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an encrypted dataset is + created and will be returned to the enabled state when all datasets + that use this feature are destroyed.
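For example, the feature can be enabled and an encrypted dataset created as follows; the pool and dataset names, and the choice of a passphrase key format, are only illustrative:
# zpool set feature@encryption=enabled tank
# zfs create -o encryption=on -o keyformat=passphrase tank/secure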

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:extensible_dataset
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:filesystem_limits
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:hole_birth
READ-ONLY COMPATIBLEno
DEPENDENCIESenabled_txg
+

This feature has/had bugs, the result of which is that, if you do + a zfs send -i (or -R, since it uses -i) from an + affected dataset, the receiver will not see any checksum or other errors, + but the resulting destination snapshot will not match the source. Its use by + zfs send -i has been disabled by default. See the + send_holes_without_birth_time module parameter in + zfs-module-parameters(5).

+

This feature improves performance of incremental sends (zfs + send -i) and receives for objects with many holes. The most common case + of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUIDorg.open-zfs:large_blocks
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_blocks feature allows the record size on a dataset to be set larger than 128KB.

+

This feature becomes active once a dataset contains a file + with a block size larger than 128KB, and will return to being enabled + once all filesystems that have ever had their recordsize larger than 128KB + are destroyed.

+
+

+

large_dnode

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:large_dnode
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

The large_dnode feature allows the size of dnodes in a + dataset to be set larger than 512B.

+

This feature becomes active once a dataset contains an + object with a dnode larger than 512B, which occurs as a result of setting + the dnodesize dataset property to a value other than legacy. + The feature will return to being enabled once all filesystems that + have ever contained a dnode larger than 512B are destroyed. Large dnodes + allow more data to be stored in the bonus buffer, thus potentially improving + performance by avoiding the use of spill blocks.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:lz4_compress
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the + administrator can turn on lz4 compression on any dataset on the pool + using the zfs(8) command. Please note that doing so will immediately + activate the lz4_compress feature on the underlying pool using the + zfs(8) command. Also, all newly written metadata will be compressed with + lz4 algorithm. Since this feature is not read-only compatible, this + operation will render the pool unimportable on systems without support for + the lz4_compress feature.

+

Booting off of lz4-compressed root pools is supported.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

multi_vdev_crash_dump

+
+ + + + + + + + + + + + + +
GUIDcom.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLEno
DEPENDENCIESnone
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or + raidz configuration.

+

When the multi_vdev_crash_dump feature is set to + enabled, the administrator can use the dumpadm(1M) command to + configure a dump device on a pool comprised of multiple vdevs.

+

Under Linux this feature is registered for compatibility but not + used. New pools created under Linux will have the feature enabled but + will never transition to active. This functionality is not + required in order to support crash dumps under Linux. Existing pools where + this feature is active can be imported.

+
+

+

obsolete_counts

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:obsolete_counts
READ-ONLY COMPATIBLEyes
DEPENDENCIESdevice_removal
+

This feature is an enhancement of device_removal, which will over + time reduce the memory used to track removed devices. When indirect blocks + are freed or remapped, we note that their part of the indirect mapping is + "obsolete", i.e. no longer needed.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

project_quota

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:project_quota
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account space and object usage information against a project identifier (ID).

+

The project ID is a new object-based attribute. When upgrading an existing filesystem, objects without a project ID attribute will be assigned a zero project ID. After this feature is enabled, a newly created object will inherit its parent directory's project ID if the parent's inherit flag is set (via chattr +/-P or zfs project [-s|-C]). Otherwise, the new object's project ID will be set to zero. An object's project ID can be changed at any time by the owner (or a privileged user) via chattr -p $prjid or zfs project -p $prjid.

+

This feature will become active as soon as it is enabled and will never return to being disabled. Each filesystem will be upgraded automatically when remounted or when a new file is created under that filesystem. The upgrade can also be triggered on filesystems via `zfs set version=current <pool/fs>`. The upgrade process runs in the background and may take a while to complete for filesystems containing a large number of files.

+
+

+

resilver_defer

+
+ + + + + + + + + + + + + +
GUIDcom.datto:resilver_defer
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows zfs to postpone new resilvers if an existing + one is already in progress. Without this feature, any new resilvers will + cause the currently running one to be immediately restarted from the + beginning.

+

This feature becomes active once a resilver has been + deferred, and returns to being enabled when the deferred resilver + begins.

+
+

+

sha512

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:sha512
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic + of SHA-512 provides an approximate 50% performance boost over SHA-256 on + 64-bit hardware and is thus a good minimum-change replacement candidate for + systems where hash performance is important, but these systems cannot for + whatever reason utilize the faster skein and edonr + algorithms.

+

When the sha512 feature is set to enabled, the + administrator can turn on the sha512 checksum on any dataset using + zfs set checksum=sha512. See zfs(8). This feature becomes + active once a checksum property has been set to sha512, + and will return to being enabled once all filesystems that have ever + had their checksum set to sha512 are destroyed.

+

The sha512 feature is not supported by GRUB and must not be + used on the pool if GRUB needs to access the pool (e.g. for /boot).

+
+

+

skein

+
+ + + + + + + + + + + + + +
GUIDorg.illumos:skein
READ-ONLY COMPATIBLEno
DEPENDENCIESextensible_dataset
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm that + was a finalist in the NIST SHA-3 competition. It provides a very high + security margin and high performance on 64-bit hardware (80% faster than + SHA-256). This implementation also utilizes the new salted checksumming + functionality in ZFS, which means that the checksum is pre-seeded with a + secret 256-bit random key (stored on the pool) before being fed the data + block to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the skein feature is set to enabled, the + administrator can turn on the skein checksum on any dataset using + zfs set checksum=skein. See zfs(8). This feature becomes + active once a checksum property has been set to skein, + and will return to being enabled once all filesystems that have ever + had their checksum set to skein are destroyed.

+

The skein feature is not supported by GRUB and must not be + used on the pool if GRUB needs to access the pool (e.g. for /boot).

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_histogram
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+
+

+

spacemap_v2

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:spacemap_v2
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the use of the new space map encoding which + consists of two words (instead of one) whenever it is advantageous. The new + encoding allows space maps to represent large regions of space more + efficiently on-disk while also increasing their maximum addressable + offset.

+

This feature becomes active once it is enabled, and + never returns back to being enabled.

+
+

+

userobj_accounting

+
+ + + + + + + + + + + + + +
GUIDorg.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLEyes
DEPENDENCIESextensible_dataset
+

This feature allows administrators to account the object usage + information by user and group.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled. Each filesystem will be upgraded + automatically when remounted, or when new files are created under that + filesystem. The upgrade can also be started manually on filesystems by + running `zfs set version=current <pool/fs>`. The upgrade process runs + in the background and may take a while to complete for filesystems + containing a large number of files.

+
+

+

zpool_checkpoint

+
+ + + + + + + + + + + + + +
GUIDcom.delphix:zpool_checkpoint
READ-ONLY COMPATIBLEyes
DEPENDENCIESnone
+

This feature enables the zpool checkpoint subcommand that + can checkpoint the state of the pool at the time it was issued and later + rewind back to it or discard it.

+

This feature becomes active when the zpool + checkpoint subcommand is used to checkpoint the pool. The feature will + only return back to being enabled when the pool is rewound or the + checkpoint has been discarded.
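A minimal usage sketch (the pool name is a placeholder):
# zpool checkpoint tank                       # take a checkpoint
# zpool export tank
# zpool import --rewind-to-checkpoint tank    # roll the pool back to the checkpoint
# zpool checkpoint -d tank                    # or, instead of rewinding, discard it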

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
June 8, 2018
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/fsck.zfs.8.html b/man/v0.8/8/fsck.zfs.8.html new file mode 100644 index 000000000..e986fabcd --- /dev/null +++ b/man/v0.8/8/fsck.zfs.8.html @@ -0,0 +1,219 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
fsck.zfs(8)System Administration Commandsfsck.zfs(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
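In other words, the closest equivalent of an fsck for ZFS is a scrub of the pool containing the dataset, for example (pool name assumed):
# zpool scrub tank
# zpool status tank     # shows scrub progress and any errors found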

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool(8)

+
+
+ + + + + +
2013 MAR 16ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/index.html b/man/v0.8/8/index.html new file mode 100644 index 000000000..7e8876c58 --- /dev/null +++ b/man/v0.8/8/index.html @@ -0,0 +1,169 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/mount.zfs.8.html b/man/v0.8/8/mount.zfs.8.html new file mode 100644 index 000000000..9e972bf69 --- /dev/null +++ b/man/v0.8/8/mount.zfs.8.html @@ -0,0 +1,268 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
mount.zfs(8)System Administration Commandsmount.zfs(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
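A hedged example of direct invocation for a dataset whose mountpoint property is set to legacy (names are placeholders); normally mount(8) or the zfs(8) command invokes this helper for you:
# mount.zfs tank/home /mnt/home
# mount -t zfs tank/home /mnt/home     # the usual route via mount(8), which calls mount.zfs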

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under + that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attributed-based extended attributes and, + if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/vdev_id.8.html b/man/v0.8/8/vdev_id.8.html new file mode 100644 index 000000000..7771ea214 --- /dev/null +++ b/man/v0.8/8/vdev_id.8.html @@ -0,0 +1,238 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
vdev_id(8)System Manager's Manualvdev_id(8)
+
+
+

+

vdev_id - generate user-friendly names for JBOD disks

+
+
+

+
vdev_id <-d dev> [-c config_file] [-g sas_direct|sas_switch]
+
+ [-m] [-p phys_per_port] +vdev_id -h
+
+
+

+

The vdev_id command is a udev helper which parses the file + /etc/zfs/vdev_id.conf(5) to map a physical path in a storage topology + to a channel name. The channel name is combined with a disk enclosure slot + number to create an alias that reflects the physical location of the drive. + This is particularly helpful when it comes to tasks like replacing failed + drives. Slot numbers may also be re-mapped in case the default numbering is + unsatisfactory. The drive aliases will be created as symbolic links in + /dev/disk/by-vdev.

+

The currently supported topologies are sas_direct and sas_switch. + A multipath mode is supported in which dm-mpath devices are handled by + examining the first-listed running component disk as reported by the + multipath(8) command. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating aliases based on existing + udev links in the /dev hierarchy using the alias configuration file + keyword. See the vdev_id.conf(5) man page for details.

+

+
+
+

+
+
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+
This is the only mandatory argument. Specifies the name of a device in /dev, e.g. "sda".
+
+
Identifies a physical topology that governs how physical paths are mapped + to channels. +

sas_direct - in this mode a channel is uniquely + identified by a PCI slot and a HBA port number

+

sas_switch - in this mode a channel is uniquely + identified by a SAS switch port number

+
+
+
Specifies that vdev_id(8) will handle only dm-multipath devices. If + set to "yes" then vdev_id(8) will examine the first + running component disk of a dm-multipath device as listed by the + multipath(8) command to determine the physical path.
+
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to determine + which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zdb.8.html b/man/v0.8/8/zdb.8.html new file mode 100644 index 000000000..86944e180 --- /dev/null +++ b/man/v0.8/8/zdb.8.html @@ -0,0 +1,581 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's Manual (smm)ZDB(8)
+
+
+

+

zdbdisplay + zpool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMPsvXY] [-e + [-V] [-p + path ...]] [-I + inflight I/Os] [-o + var=value]... + [-t txg] + [-U cache] + [-x dumpdir] + [poolname [object ...]]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path ...]] [-U + cache] dataset + [object ...]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:...:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path ...]] + [-t txg] + [-U cache] + poolname [vdev + [metaslab ...]]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path ...]] + [-U cache] + poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path ...]] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. This is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as pool/ (pool name followed by a slash).

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs are specified, display information about those + specific objects only.

+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:...:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
+ device
+
Read the vdev labels from the specified device. zdb -l will return 0 if a valid label was found, 1 if an error occurred, and 2 if no valid labels were found. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats.
+
+ device
+
Display every configuration, unique or not. +

If the -q option is also specified, + don't print the labels.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path ...]
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value ...
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, eg. 1000000 + rather than 1M.
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ ...
+
+
+
Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ ...
+
+
+
Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ ...
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
April 14, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zed.8.html b/man/v0.8/8/zed.8.html new file mode 100644 index 000000000..25d4508f2 --- /dev/null +++ b/man/v0.8/8/zed.8.html @@ -0,0 +1,380 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)    System Administration Commands    ZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-L] [-M] [-p pidfile] [-P + path] [-s statefile] [-v] [-V] + [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Custom $PATH for zedlets to use. Normally zedlets run in a locked-down + environment, with hardcoded paths to the ZFS commands ($ZFS, $ZPOOL, $ZED, + ...), and a hardcoded $PATH. This is done for security reasons. However, + the ZFS test suite uses a custom PATH for its ZFS commands, and passes it + to zed with -P. In short, -P is only to be used by the ZFS test suite; + never use it in production!
+
+
Write the daemon's state to the specified file.
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.
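For example, recent zevents and their nvpairs can be inspected, or followed as they arrive:
# zpool events -v
# zpool events -f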

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.
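As an illustration, an installed ZEDLET (all-syslog.sh is used here as an example) can be enabled by symlinking or copying it into the enabled-zedlets directory; the paths use the same placeholders as the defaults listed later in this page:
ln -s "@zfsexecdir@/zed.d/all-syslog.sh" "@sysconfdir@/zfs/zed.d/"
# or, to keep local modifications, copy it with the required ownership and mode
install -o root -g root -m 0755 "@zfsexecdir@/zed.d/all-syslog.sh" "@sysconfdir@/zfs/zed.d/"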

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.
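A minimal ZEDLET sketch using the environment variables described above (illustrative only; the exact nvpairs available depend on the zevent class):
#!/bin/sh
# log a one-line summary of every zevent this ZEDLET is enabled for
echo "zevent ${ZEVENT_EID}: class=${ZEVENT_CLASS} subclass=${ZEVENT_SUBCLASS}" \
    | logger -t zed -p daemon.notice
exit 0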

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

There is no maximum timeout for ZEDLET execution. Consequently, a + misbehaving ZEDLET can delay the processing of subsequent zevents.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 1, 2013    ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfs-mount-generator.8.html b/man/v0.8/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..d16ec4efe --- /dev/null +++ b/man/v0.8/8/zfs-mount-generator.8.html @@ -0,0 +1,324 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)    zfs-mount-generator    ZFS-MOUNT-GENERATOR(8)
+
+

+

+
+

+

zfs-mount-generator - generates systemd mount units for ZFS

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+

+
+
+

+

zfs-mount-generator implements the Generators Specification + of systemd(1), and is called during early boot to generate + systemd.mount(5) units for automatically mounted datasets. Mount + ordering and dependencies are created for all tracked pools (see below).

+

+
+

+

If the dataset is an encryption root, a service that loads the + associated key (either from file or through a systemd-ask-password(1) + prompt) will be created. This service RequiresMountsFor the path of + the key (if file-based) and also copies the mount unit's After, + Before and Requires. All mount units of encrypted datasets add + the key-load service for their encryption root to their Wants and + After. The service will not be Wanted or Required by + local-fs.target directly, and so will only be started manually or as + a dependency of a started mount unit.

+

+
+
+

+

mount unit's Before -> key-load service (if any) -> + mount unit -> mount unit's After

+

It is worth noting that when a mount unit is activated, it activates all available mount units for parent paths to its mountpoint, i.e. activating the mount unit for /tmp/foo/1/2/3 automatically activates all available mount units for /tmp, /tmp/foo, /tmp/foo/1, and /tmp/foo/1/2. This is true for any combination of mount units from any sources, not just ZFS.

+

+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of the command

+

+
zfs list -H -o + name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore +

+
+

for datasets that should be mounted by systemd, should be kept + separate from the pool, at

+

+
@sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

The cache file, if writeable, will be kept synchronized with the + pool state by the ZEDLET

+

+
history_event-zfs-list-cacher.sh .
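As an illustration, the cache file can also be seeded by hand by running the zfs list invocation shown above against a single pool and redirecting its output (the pool name is a placeholder):
zfs list -H -o name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore -r tank > @sysconfdir@/zfs/zfs-list.cache/tank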
+
+
+

+

The behavior of the generator script can be influenced by the + following dataset properties:

+

+
+
+
If a dataset has mountpoint set and canmount is not + off, a mount unit will be generated. Additionally, if + canmount is on, local-fs.target will gain a + dependency on the mount unit. +

This behavior is equal to the auto and noauto + legacy mount options, see systemd.mount(5).

+

Encryption roots always generate a key-load service, even for + canmount=off.

+
+
+
Space-separated list of mountpoints to require to be mounted for this + mount unit
+
+
The mount unit and associated key-load service will be ordered before this + space-separated list of units.
+
+
The mount unit and associated key-load service will be ordered after this + space-separated list of units.
+
+
Space-separated list of units that will gain a Wants dependency on + this mount unit. Setting this property implies noauto.
+
+
Space-separated list of units that will gain a Requires dependency + on this mount unit. Setting this property implies noauto.
+
+
Toggles between a Wants and Requires type of dependency + between the mount unit and local-fs.target, if noauto isn't + set or implied. +

on: Mount will be WantedBy local-fs.target

+

off: Mount will be Before and RequiredBy + local-fs.target

+

unset: Mount will be Before and WantedBy + local-fs.target

+
+
+
If set to on, do not generate a mount unit for this dataset. +

+
+
+
+See also systemd.mount(5) +
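For example, to attach a hypothetical service to a dataset's mount unit and to exclude another dataset from unit generation (dataset and unit names are placeholders):
zfs set org.openzfs.systemd:wanted-by="my-app.service" tank/srv/app
zfs set org.openzfs.systemd:ignore=on tank/scratch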

+
+
+
+

+

To begin, enable tracking for the pool:

+

+
touch + @sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

Then, enable the tracking ZEDLET:

+

+
ln -s + "@zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh" + "@sysconfdir@/zfs/zed.d" +

systemctl enable zfs-zed.service

+

systemctl restart zfs-zed.service

+
+

Force the running of the ZEDLET by setting a monitored property, + e.g. canmount, for at least one dataset in the pool:

+

+
zfs set canmount=on DATASET
+

This forces an update to the stale cache file.

+

To test the generator output, run

+

+
@systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator . .
+

This will generate units and dependencies in + /tmp/zfs-mount-generator for you to inspect them. The second and + third argument are ignored.

+

If you're satisfied with the generated units, instruct systemd to + re-run all generators:

+

+
systemctl daemon-reload
+

+

+
+
+

+

zfs(5) zfs-events(5) zed(8) zpool(5) + systemd(1) systemd.target(5) systemd.special(7) + systemd.mount(7)

+
+
+ + + + + +
2020-01-19    ZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfs-program.8.html b/man/v0.8/8/zfs-program.8.html new file mode 100644 index 000000000..a35b1c49c --- /dev/null +++ b/man/v0.8/8/zfs-program.8.html @@ -0,0 +1,693 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)    System Manager's Manual    ZFS-PROGRAM(8)
+
+
+

+

zfs program — + executes ZFS channel programs

+
+
+

+

zfs program [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script

+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at:

+ http://www.lua.org/manual/5.2/

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error; the details of that error are printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program + cannot change on-disk state by calling functions from the zfs.sync + submodule. The program can be used to gather information such as + properties and determining if changes would succeed (zfs.check.*). Without + this flag, all pending changes must be synced to disk before a channel + program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MB, and can be set to a maximum of 100 MB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.
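For example, a channel program can be run read-only first, with explicit instruction and memory limits (script path, limit values, pool name and argument are placeholders):
# zfs program -n -t 20000000 -m 33554432 rpool ./cleanup.lua rpool/scratch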

+
+
+

+

A channel program can be invoked either from the command line, or via a library call to lzc_channel_program().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libZFS interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libZFS interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
+
return ret0, ret1, ret2, ...
+
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
+
error: "error string, including Lua stack trace"
+
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

+ ZFS API functions do not generate Fatal Errors when correctly invoked; they return an error code and the channel program continues executing. See the ZFS API section below for function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libZFS interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+
+
string -> string
+number -> int64
+boolean -> boolean_value
+nil -> boolean (no value)
+table -> nvlist
+
+

Likewise, table keys are replaced by string equivalents as + follows:

+
+
string -> no change
+number -> signed decimal string ("%lld")
+boolean -> "true" | "false"
+
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+
+
assert                  rawlen
+collectgarbage          rawget
+error                   rawset
+getmetatable            select
+ipairs                  setmetatable
+next                    tonumber
+pairs                   tostring
+rawequal                type
+
+

All functions in the coroutine, string, and table built-in submodules are also available. A complete list and documentation of these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+
+
dofile
+loadfile
+load
+pcall
+print
+xpcall
+
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
+
zfs.sync.destroy("rpool@snap")
+
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
+
zfs.sync.destroy({[1]="rpool@snap", defer=true})
+
+

The Lua language allows curly braces to be used in place of + parenthesis as syntactic sugar for this calling convention:

+
+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
+
errno = zfs.sync.promote(dataset)
+
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+
+
EPERM     ECHILD      ENODEV      ENOSPC
+ENOENT    EAGAIN      ENOTDIR     ESPIPE
+ESRCH     ENOMEM      EISDIR      EROFS
+EINTR     EACCES      EINVAL      EMLINK
+EIO       EFAULT      ENFILE      EPIPE
+ENXIO     ENOTBLK     EMFILE      EDOM
+E2BIG     EBUSY       ENOTTY      ERANGE
+ENOEXEC   EEXIST      ETXTBSY     EDQUOT
+EBADF     EXDEV       EFBIG
+
+
+
+

+

For detailed descriptions of the exact behavior of any zfs administrative operations, see the main zfs(8) manual page.

+
+
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running: +
+
  dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
+
+

msg (string)

+
Debug message to be printed.
+
+
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns false, but + zfs.exists("somepool/fs_that_may_exist") will error. +

dataset (string)

+
Dataset to check for existence. Must be in the + target pool.
+
+
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like guid) may wrap around and appear negative. +

dataset (string)

+
Filesystem or snapshot path to retrieve properties + from.
+

property (string)

+
Name of property to retrieve. All filesystem, + snapshot and volume properties are supported except for 'mounted' and + 'iscsioptions.' Also supports the 'written@snap' and 'written#bookmark' + properties and the '<user|group><quota|used>@id' properties, + though the id must be in numeric form.
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

dataset (string)

+
Filesystem or snapshot to be destroyed.
+

[optional] defer (boolean)

+
Valid only for destroying snapshots. If set to + true, and the snapshot has holds or clones, allows the snapshot to be + marked for deferred deletion rather than failing.
+
+
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

dataset (string)

+
Clone to be promoted.
+
+
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

filesystem (string)

+
Filesystem to rollback.
+
+
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

dataset (string)

+
Name of snapshot to create.
+
+
+
+
+
For each function in the zfs.sync submodule, there is a corresponding + zfs.check function which performs a "dry run" of the same + operation. Each takes the same arguments as its zfs.sync counterpart and + returns 0 if the operation would succeed, or a non-zero error code if it + would fail, along with any other error details. That is, each has the same + behavior as the corresponding sync function except for actually executing + the requested change. For example, + + returns 0 if + + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
+
 
+
+
 
+
+
 
+
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
+
Iterate through all clones of the given snapshot. +

snapshot (string)

+
Must be a valid snapshot path in the current + pool.
+
+
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all user properties for the given dataset. +

dataset (string)

+
Must be a valid filesystem, snapshot, or + volume.
+
+
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

dataset (string)

+
Must be a valid filesystem, snapshot or + volume.
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
February 26, 2019    Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfs.8.html b/man/v0.8/8/zfs.8.html new file mode 100644 index 000000000..4ece7c64d --- /dev/null +++ b/man/v0.8/8/zfs.8.html @@ -0,0 +1,4308 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)    System Manager's Manual    ZFS(8)
+
+
+

+

zfs - configures ZFS file systems

+
+
+

+ + + + + +
zfs -?V
zfs create [-p] [-o property=value]... filesystem
zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
zfs destroy [-Rfnprv] filesystem|volume
zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
zfs destroy filesystem|volume#bookmark
zfs snapshot [-r] [-o property=value]... filesystem@snapname|volume@snapname...
zfs rollback [-Rfr] snapshot
zfs clone [-p] [-o property=value]... snapshot filesystem|volume
zfs promote clone-filesystem
zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
zfs rename [-fp] filesystem|volume filesystem|volume
zfs rename -r snapshot snapshot
zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]... [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
zfs set property=value [property=value]... filesystem|volume|snapshot...
zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...] [-t type[,type]...] all | property[,property]... [filesystem|volume|snapshot|bookmark]...
zfs inherit [-rS] property filesystem|volume|snapshot...
zfs upgrade
zfs upgrade -v
zfs upgrade [-r] [-V version] -a | filesystem
zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]... [-t type[,type]...] filesystem|snapshot
zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]... [-t type[,type]...] filesystem|snapshot
zfs projectspace [-Hp] [-o field[,field]...] [-s field]... [-S field]... filesystem|snapshot
zfs project [-d|-r] file|directory...
zfs project -C [-kr] file|directory...
zfs project -c [-0] [-d|-r] [-p id] file|directory...
zfs project [-p id] [-rs] file|directory...
zfs mount
zfs mount [-Olv] [-o options] -a | filesystem
zfs unmount [-f] -a | filesystem|mountpoint
zfs share -a | filesystem
zfs unshare -a | filesystem|mountpoint
zfs bookmark snapshot bookmark
zfs send [-DLPRbcehnpvw] [[-I|-i] snapshot] snapshot
zfs send [-LPcenvw] [-i snapshot|bookmark] filesystem|volume|snapshot
zfs send [-Penv] -t receive_resume_token
zfs receive [-Fhnsuv] [-o origin=snapshot] [-o property=value] [-x property] filesystem|volume|snapshot
zfs receive [-Fhnsuv] [-d|-e] [-o origin=snapshot] [-o property=value] [-x property] filesystem
zfs receive -A filesystem|volume
zfs allow filesystem|volume
zfs allow [-dglu] user|group[,user|group]... perm|@setname[,perm|@setname]... filesystem|volume
zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]... filesystem|volume
zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
zfs unallow [-dglru] user|group[,user|group]... [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...] filesystem|volume
zfs hold [-r] tag snapshot...
zfs holds [-rH] snapshot...
zfs release [-r] tag snapshot...
zfs diff [-FHt] snapshot snapshot|filesystem
zfs program [-jn] [-t instruction-limit] [-m memory-limit] pool script [--] arg1 ...
zfs load-key [-nr] [-L keylocation] -a | filesystem
zfs unload-key [-r] -a | filesystem
zfs change-key [-l] [-o keylocation=value] [-o keyformat=value] [-o pbkdf2iters=value] filesystem
zfs change-key -i [-l] filesystem
zfs version
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
+
pool/{filesystem,volume,snapshot}
+
+

where the maximum length of a dataset name is + MAXNAMELEN (256 bytes) and the maximum amount of + nesting allowed in a path is 50 levels deep.

+

A dataset can be one of the following:

+
+
+
A ZFS dataset of type filesystem can be mounted within + the standard system namespace and behaves like other file systems. While + ZFS file systems are designed to be POSIX compliant, known issues exist + that prevent compliance in some cases. Applications that depend on + standards conformance might fail due to non-standard behavior when + checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; visibility is determined by the snapdev property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the snapdir property.
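For example (dataset, mountpoint and snapshot names are placeholders):
# zfs set snapdir=visible pool/home
# ls /pool/home/.zfs/snapshot/monday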

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks cannot be accessed through the filesystem in any way. From a storage standpoint a bookmark just provides a way to reference when a snapshot was created as a distinct object. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is + cloned, it creates an implicit dependency between the parent and child. Even + though the clone is created somewhere else in the dataset hierarchy, the + original snapshot cannot be destroyed as long as a clone exists. The + origin property exposes this dependency, and the + destroy command lists any such dependencies, if they + exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.
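Continuing the example above (dataset names are illustrative):
# zfs set mountpoint=/export/stuff pool/home
# zfs get -r -o name,value mountpoint pool/home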

+

A file system mountpoint property of + none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional + tools (mount, umount, + /etc/fstab). If a file system's mount point is set + to legacy, ZFS makes no attempt to manage the file system, + and the administrator is responsible for mounting and unmounting the file + system. Because pools must be imported before a legacy mount can succeed, + administrators should ensure that legacy mounts are only attempted after the + zpool import process finishes at boot time. For example, on machines using + systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for details.
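An illustrative /etc/fstab entry for a legacy-mounted dataset (names are placeholders):
pool/legacyfs  /mnt/legacy  zfs  defaults,x-systemd.requires=zfs-import.target  0 0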

+
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the dedup property enabled, duplicate data blocks are + removed synchronously. The result is that only unique data is stored and + common components are shared among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using compression as a less resource-intensive alternative to deduplication.
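As a rough illustration of the sizing guideline above, a pool expected to hold 8 TiB of deduplicated data would call for at least 8 x 1.25 GiB = 10 GiB of RAM for the deduplication tables, and the expected benefit can be estimated beforehand (pool name is a placeholder):
# zdb -S tank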

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its shortened column + name, avail.

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible + values are none, available, and + unavailable. See zfs + load-key and zfs + unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lrefer.

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lused.

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's guid, the objsetid of a dataset is not transferred to other pools when the snapshot is copied with a send/receive operation. The objsetid can be reused (for a new dataset) after the dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive -s, this opaque token can be provided to + zfs send -t to resume and complete the zfs + receive.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its shortened column name, refer.

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, + volume, or snapshot.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section) is space that is + referenced exclusively by this snapshot. If this snapshot is destroyed, + the amount of used space will be freed. Space that is + shared by multiple snapshots isn't accounted for in this metric. When a + snapshot is destroyed, space that was previously shared with this + snapshot can become unique to snapshots adjacent to it, thus changing + the used space of those snapshots. The used space of the latest snapshot + can also be affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not + take into account pending changes. Pending changes are generally + accounted for within a few seconds. Committing a change to a disk using + fsync(2) or O_SYNC does not + necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
userused@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du and + ls -s. See the + zfs userspace subcommand + for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
userobjused@user
+
The userobjused property is similar to userused but instead it counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user; it may differ from the results of system tools such as df -i.

When the property xattr=on is set on a file + system additional objects will be created per-file to store extended + attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal objects are normally + required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
groupused@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
groupobjused@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
projectused@project
+
The amount of space consumed by the specified project in this dataset. + Project is identified via the project identifier (ID) that is object-based + numeral attribute. An object can inherit the project ID from its parent + object (if the parent has the flag of inherit project ID that can be set + and changed via chattr + -/+P or zfs project + -s) when being created. The privileged user can + set and change object's project ID via chattr + -p or zfs project + -s anytime. Space is charged to the project of + each file, as displayed by lsattr + -p or zfs project. See the + userused@user property for more + information. +

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.
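For example (project ID, dataset and path are placeholders):
# zfs project -s -p 42 -r /tank/fs/build
# zfs project -d /tank/fs/build
# zfs get projectused@42 tank/fs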

+
+
projectobjused@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property xattr=on is set + on a fileset, ZFS will create additional objects per-file to store + extended attributes. These additional objects are reflected in the + projectobjused value and are counted against the + project's projectobjquota. When a filesystem is + configured to use xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its shortened column name, volblock.

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
written@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which for + clones may be a snapshot in the origin's filesystem (or the origin of + the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the write_acl and write_owner permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
acltype=off|noacl|posixacl
+
Controls whether ACLs are enabled and if so what type of ACL to use. +
+
+
default, when a file system has the acltype property + set to off then ACLs are disabled.
+
+
an alias for off
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+

To obtain the best performance when setting + posixacl users are strongly encouraged to set the + xattr=sa property. This will result in the POSIX ACL + being stored more efficiently on disk. But as a consequence, all new + extended attributes will only be accessible from OpenZFS implementations + which support the xattr=sa property. See the + xattr property for more details.
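For example (the dataset name is a placeholder):
# zfs set acltype=posixacl xattr=sa tank/home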

+
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The values on and off are equivalent to the atime and noatime mount options. The default value is on. See also relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.
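A sketch of the inheritance-only pattern described above (names and properties are illustrative):
zfs create -o canmount=off -o mountpoint=/data -o compression=lz4 tank/data
zfs create tank/data/projects   # mounts at /data/projects and inherits compression
Here tank/data is never mounted itself; it exists only to carry property values for its children.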

+
+
=on|off||fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and + edonr checksum algorithms require enabling the + appropriate features on the pool. These pool features are not supported + by GRUB and must not be used on the pool if GRUB needs to access the + pool (e.g. for /boot).

+

Please see zpool-features(5) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
=on|off|gzip|gzip-N|lz4|lzjb|zle
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the current default compression algorithm should be used. The default balances compression and decompression speed with compression ratio, and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the lz4_compress feature is enabled, lz4.

+

The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(5) for details on ZFS feature flags and the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its shortened column name compress. Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example 8k + blocks on disks with 4k disk sectors must compress to 1/2 or less of + their original size.
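As a brief illustration, compression can be enabled and the achieved ratio inspected afterwards (the dataset name is an assumption):
zfs set compression=lz4 tank/logs
zfs get compression,compressratio tank/logs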

+
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
=1||3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing top-level vdev. Do NOT create, for example, a two-disk striped pool and set copies=2 on some datasets thinking you have set up redundancy for them. When a disk fails you will not be able to import the pool and will have lost all of your data.

+

Encrypted datasets may not have + copies=3 since the implementation + stores some encryption metadata where the third copy would normally + be.
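For instance, extra copies are best requested at creation time, as noted above (names illustrative):
zfs create -o copies=2 tank/important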

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
+
=off|on|verify||||
+
Configures deduplication for a dataset. The default value is off. The default deduplication checksum is sha256 (this may change in the future). When dedup is enabled, the checksum defined here overrides the checksum property. Setting the value to verify has the same effect as the setting sha256,verify.

If set to verify, ZFS will do a byte-to-byte comparison of two blocks that have the same signature, to make sure the block contents are identical. Specifying verify is mandatory for the edonr algorithm.

+

Unless necessary, deduplication should NOT be enabled on a + system. See Deduplication + above.
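If deduplication is genuinely required, a minimal sketch of enabling it with verification on a single, carefully chosen dataset might look like this (dataset name assumed):
zfs set dedup=sha256,verify tank/dedup-only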

+
+
=legacy|auto|||||
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy requires the + large_dnode pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the workload makes heavy + use of extended attributes. This may be applicable to SELinux-enabled + systems, Lustre servers, and Samba servers, for example. Literal values + are supported for cases where the optimal size is known in advance and + for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode feature, or if you + need to import this pool on a system that doesn't support the + large_dnode feature.

+

This property can also be referred to by its + shortened column name, + .
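For example, on an SELinux-enabled or Samba server workload, and assuming the large_dnode feature is enabled on the pool, one might set (names illustrative):
zfs set dnodesize=auto tank/samba
zfs set xattr=sa tank/samba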

+
+
=off|on||||||aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section.

+
+
=||passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
+
# dd if=/dev/urandom of=/path/to/output/key bs=32 count=1
+
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.
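Putting the above together, an encrypted dataset using a raw key file could be created roughly as follows; the key path and dataset name are assumptions, not requirements:
dd if=/dev/urandom of=/root/tank-secure.key bs=32 count=1
zfs create -o encryption=on -o keyformat=raw \
    -o keylocation=file:///root/tank-secure.key tank/secure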

+
+
=prompt|
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is + +

Even though the encryption suite cannot be changed after + dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via + STDIN, but users should be careful not to place keys which should be + kept secret on the command line. If a file URI is selected, the key will + be loaded from the specified absolute file path.

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a + passphrase encryption key should be run through when + processing it into an encryption key. This property is only defined when + encryption is enabled and a keyformat of passphrase is + selected. The goal of PBKDF2 is to significantly increase the + computational difficulty needed to brute force a user's passphrase. This + is accomplished by forcing the attacker to run each passphrase through a + computationally expensive hashing function many times before they arrive + at the resulting key. A user who actually knows the passphrase will only + have to pay this cost once. As CPUs become better at processing, this + number should be raised to ensure that a brute force attack is still not + possible. The current default is + + and the minimum is + . + This property may be changed with zfs + change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(5)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512B up to 1M. The default size is 0 which means no + small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpool(8) for more details on + the special allocation class.
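Assuming a special vdev has already been added to the pool, small blocks could be routed to it like this (the 32K threshold and dataset name are illustrative):
zfs set special_small_blocks=32K tank/db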

+
+
=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section for more + information on how this property is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+
=on|off
+
Controls whether the file system should be mounted with nbmand (Non Blocking mandatory locks). This is used for SMB clients. Changes to this property only take effect when the file system is unmounted and remounted. See mount(8) for more information on nbmand mounts. This property is not used on Linux.
+
=off|on
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux file + systems. For consistency with OpenZFS on other platforms overlay mounts + are off by default. Set to on to + enable overlay mounts.
+
=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
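For example (names and size illustrative):
zfs set quota=100G tank/home
zfs get quota,used,available tank/home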

+
+
=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+
user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + user + property. +

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace subcommand + for more information.

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.
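A per-user quota could be set and then inspected like this (user and dataset names assumed):
zfs set userquota@alice=10G tank/home
zfs userspace tank/home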

+
+
user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
group=size|none
+
The groupobjquota@group property is similar to groupquota, but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
project=size|none
+
The projectobjquota is similar to projectquota, but it limits the number of objects a project can consume. Please refer to userobjused for more information about how objects are counted.
+
=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its + shortened column name, + .

+
+
=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or + equal to 512 and less than or equal to 128 Kbytes. If the + large_blocks feature is enabled on the pool, the size + may be up to 1 Mbyte. See zpool-features(5) for + details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its + shortened column name, + .
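For a database that performs fixed 16K random I/O, the record size might be matched to the database page size (values and names are illustrative):
zfs set recordsize=16K tank/postgres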

+
+
=all|most
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 100 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

The default value is all.

+
+
=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its + shortened column name, + .

+
+
=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
+
=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE.

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name, except that any characters in the dataset name which would be invalid in the resource name are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user via system passwd/shadow, LDAP, or smbpasswd) by default. This means that any additional access control (disallowing access for specific users, etc.) must be done on the underlying file system.
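A minimal sketch, assuming Samba is already configured for usershares and the dataset name is illustrative:
zfs set sharesmb=on tank/media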

+
+
=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are to be used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the default options:

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.
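For instance, assuming the NFS server is configured, a dataset could be exported with the default options and explicitly shared (names illustrative):
zfs set sharenfs=on tank/export
zfs share tank/export
Custom export options, when needed, are supplied as the property value and passed through to exportfs(8) as described above.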

+
+
=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
=hidden|visible
+
Controls whether the volume snapshot devices under + + are hidden or visible. The default value is hidden.
+
=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section. The default value + is hidden.
+
=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
=N|
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse + volume" (also known as "thin provisioned") can be created + by specifying the -s option to the + zfs create + -V command, or by changing the value of the + refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the + + A volume that is not sparse is said to be "thick provisioned". + A sparse volume can become thick provisioned by setting + refreservation to auto.
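For example, a thick-provisioned and a sparse ("thin provisioned") volume could be created as follows (names and sizes are illustrative):
zfs create -V 50G tank/vol-thick
zfs create -s -V 50G tank/vol-sparse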

+
+
=default + | + + | + + | + | +
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can be snapshotted, cloned, replicated, etc., which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported either directory based or + system attribute based. +

The default value of on enables directory based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all OpenZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the xattr=sa + feature.

+

The use of system attribute based xattrs is strongly encouraged for users of SELinux or POSIX ACLs. Both of these features rely heavily on extended attributes and benefit significantly from the reduced access time.

+

The values on and off are equivalent to the xattr and noxattr mount options.

+
+
=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on Linux. The default value is + off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
=sensitive||mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
=none||||
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
=on|off
+
Indicates whether the file system should reject file names that include + characters that are not present in the + + character code set. If this property is explicitly set to + off, the normalization property must either not be + explicitly set or be set to none. The default value for + the utf8only property is off. This + property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
    PROPERTY                MOUNT OPTION
+    atime                   atime/noatime
+    canmount                auto/noauto
+    devices                 dev/nodev
+    exec                    exec/noexec
+    readonly                ro/rw
+    relatime                relatime/norelatime
+    setuid                  suid/nosuid
+    xattr                   xattr/noxattr
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
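For example, a file system might be temporarily mounted read-only without changing the stored property (dataset name assumed); the property source will then show as "temporary":
zfs mount -o ro tank/data
zfs get -o property,value,source readonly tank/data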

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the chance + that two independently-developed packages use the same property name for + different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
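A short sketch of setting, reading, and clearing a user property; the com.example:backup-policy name is made up for illustration:
zfs set com.example:backup-policy=weekly tank/data
zfs get com.example:backup-policy tank/data
zfs inherit com.example:backup-policy tank/data   # removed entirely if no parent defines it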

+
+
+

+

ZFS volumes may be used as swap devices. After creating the volume with the zfs create -V command, set up and enable the swap area using the mkswap(8) and swapon(8) commands. Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.
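A commonly suggested sketch for a swap zvol follows; the size, block size, and tuning properties shown are assumptions rather than requirements:
zfs create -V 4G -b $(getconf PAGESIZE) -o logbias=throughput \
    -o sync=always -o primarycache=metadata tank/swap
mkswap -f /dev/zvol/tank/swap
swapon /dev/zvol/tank/swap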

+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and userused / groupused data. + ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires specifying the + encryption and keyformat properties at + creation time, along with an optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + encryptionroot property.
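As an illustration of the inheritance behavior described above (all names hypothetical):
zfs create -o encryption=on -o keyformat=passphrase tank/enc   # new encryption root
zfs create tank/enc/child                                      # inherits its key from tank/enc
zfs change-key -o keyformat=passphrase tank/enc/child          # child becomes its own encryption root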

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data + cannot be embedded via the embedded_data feature. + Encrypted datasets may not have copies=3 + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
An alias for the zfs + version subcommand.
+
zfs create + [-p] [-o + property=value]... + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]... + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block + device in /dev/zvol/path, where + is the name + of the volume in the ZFS namespace. The size represents the logical size + as exported by the device. By default, a reservation of equal size is + created. +

size is automatically rounded up to the + nearest 128 Kbytes to ensure that the volume has an integral number of + blocks regardless of blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + volsize in the + Native Properties section + for more information about sparse volumes.
+
+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Force an unmount of any file systems using the + unmount -f command. + This option has no effect on non-file systems or unmounted file + systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
The given snapshots are destroyed immediately if and only if the + zfs destroy command + without the -d option would have destroyed it. + Such immediate destruction would occur, for example, if the snapshot had + no clones and the user-initiated reference count were zero. +

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + @) should be specified when using a range or + comma-separated list to identify multiple snapshots.
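For example, a dry run over an inclusive range of snapshots (names illustrative) can show what would be reclaimed before anything is destroyed:
zfs destroy -nv tank/home@snap1%snap3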

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
zfs snapshot + [-r] [-o + property=value]... + filesystem@snapname|volume@snapname...
+
Creates snapshots with the given names. All previous modifications by + successful system calls to the file system are part of the snapshots. + Snapshots are taken atomically, so that all snapshots correspond to the + same moment in time. zfs + snap can be used as an alias for + zfs snapshot. See the + Snapshots section for details. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
zfs rollback + [-Rfr] snapshot
+
Roll back the given dataset to a previous snapshot. When a dataset is + rolled back, all data that has changed since the snapshot is discarded, + and the dataset reverts to the state at the time of the snapshot. By + default, the command refuses to roll back to a snapshot other than the + most recent one. In order to do so, all intermediate snapshots and + bookmarks must be destroyed by specifying the -r + option. +

The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only direct snapshots of the specified filesystem are destroyed by either of these options. To completely roll back a recursive snapshot, you must roll back the individual child snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones + of those snapshots.
+
+
Used with the -R option to force an unmount of + any clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
zfs clone + [-p] [-o + property=value]... + snapshot + filesystem|volume
+
Creates a clone of the given snapshot. See the + Clones section for details. The target + dataset can be located anywhere in the ZFS hierarchy, and is created as + the same type as the original. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. If + the target filesystem or volume already exists, the operation + completes successfully.
+
+
+
zfs promote + clone-filesystem
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot. This makes it possible to destroy the file + system that the clone was created from. The clone parent-child dependency + relationship is reversed, so that the origin file system becomes a clone + of the specified file system. +

The snapshot that was cloned, and any snapshots previous to + this snapshot, are now owned by the promoted clone. The space they use + moves from the origin file system to the promoted clone, so enough space + must be available to accommodate these snapshots. No new space is + consumed by this operation, but the space accounting is adjusted. The + promoted clone must not have any conflicting snapshot names of its own. + The rename subcommand can be used to rename any + conflicting snapshots.

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + [-fp] + filesystem|volume + filesystem|volume
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any filesystems that need to be unmounted in the + process.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
zfs list + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
Lists the property information for the given datasets in tabular form. If + specified, you can list property information by the absolute pathname or + the relative pathname. By default, all file systems and volumes are + displayed. Snapshots are displayed if the listsnaps + property is on (the default is off). + The following fields are displayed: name, + used, available, + referenced, mountpoint. +
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ property
+
Same as the -s option, but sorts by property + in descending order.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A + depth of 1 will display only + the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: +
    +
  • One of the properties described in the + Native Properties + section
  • +
  • A user property
  • +
  • The value name to display the dataset name
  • +
  • The value space to display space usage properties on file systems and volumes. This is a shortcut for specifying -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume syntax.
  • +
+
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command + line.
+
+ property
+
A property for sorting the output by column in ascending order based + on the value of the property. The property must be one of the + properties described in the + Properties section or the value + name to sort by the dataset name. Multiple + properties can be specified at one time using multiple + -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior + of zfs list is + preserved.

+
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all. For example, + specifying -t snapshot + displays only snapshots.
+
+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Sets the property or list of properties to the given value(s) for each dataset. Only some properties can be edited. See the Properties section for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section.
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections.

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the + recursion to depth. A depth of + 1 will display only the dataset and its direct + children.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, and none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See the Properties + section for a listing of default values, and details on which properties + can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
+
+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] -a | + filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of the software. zfs + send streams generated from new snapshots of these + file systems cannot be accessed on systems running older versions of the + software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the + zpool upgrade + command.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to the specified version. If the + -V flag is not specified, this command + upgrades to the most recent version. This option can only be used to + increase the version number, and only up to the most recent version + supported by this software.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: + type, name, + used, quota. The default is to + display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: + all, posixuser, + smbuser, posixgroup, + smbgroup. The default is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + zfs userspace, except that + the default types to display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to zfs userspace, except that project identifiers are numeric rather than names. The -i option (SID to POSIX ID translation), the -n option (numeric IDs), and the -t option (types) are therefore not needed.
+
zfs project + [-d|-r] + file|directory...
+
List the project identifier (ID) and inherit flag of the specified files or directories.
+
+
Show the project ID and inherit flag of the directory itself, rather than of its children. This overrides a previously specified -r option.
+
+
Show recursively on subdirectories. This overrides a previously specified -d option.
+
+
+
zfs project + -C [-kr] + file|directory...
+
Clear project inherit flag and/or ID on the file(s) or directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID is reset to zero.
+
+
Clear on subdirectories recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory...
+
Check the project ID and inherit flag on the specified files or directories, reporting entries that lack the project inherit flag or whose project IDs differ from the value given via the -p option or, if -p is not given, from the target directory's project ID.
+
+
Print file names terminated by a NUL character instead of the default newline, as with "find -print0".
+
+
Check the project ID and inherit flag of the directory itself, rather than of its children. This overrides a previously specified -r option.
+
+
Specify the reference ID against which the project IDs of the target files or directories are compared. If not specified, the project ID of the target (top) directory is used as the reference.
+
+
Check recursively on subdirectories. This overrides a previously specified -d option.
+
+
+
zfs project + [-p id] + [-rs] + file|directory...
+
Set project ID and/or inherit flag on the file(s) or directories. +
+
+
Set the project ID of the specified files or directories to the given value.
+
+
Set on subdirectories recursively.
+
+
Set the project inherit flag on the given files or directories. This is usually combined with the -r option to set up a tree quota on a directory. When setting up a tree quota, the directory's project ID is applied to all of its descendants by default, unless a project ID is specified explicitly via the -p option.
+
+
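As a sketch of a tree-quota workflow (the dataset, mount path, and project ID are hypothetical), a directory can be assigned a project ID with the inherit flag set recursively, after which a project quota can be applied and inspected:
# zfs project -p 42 -r -s /tank/data/projA
# zfs set projectquota@42=10G tank/data
# zfs projectspace tank/data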
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Olv] [-o + options] -a | + filesystem
+
Mount a ZFS file system on the path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the file system should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + Temporary Mount + Point Properties section for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has a + keylocation of prompt this will + cause the terminal to interactively block after asking for the + key.
+
+
Report mount progress.
+
+
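For example (hypothetical dataset names), a file system can be mounted read-only for the duration of the mount, or an encrypted file system can have its key loaded and be mounted in one step:
# zfs mount -o ro tank/home
# zfs mount -l tank/secure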
+
zfs unmount + [-f] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
Forcefully unmount the file system, even if it is currently in + use.
+
+
+
zfs share + -a | filesystem
+
Shares available ZFS file systems. +
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a | + filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
zfs bookmark + snapshot bookmark
+
Creates a bookmark of the given snapshot. Bookmarks mark the point in time + when the snapshot was created, and can be used as the incremental source + for a zfs send command. +

This feature must be enabled to be used. See zpool-features(5) for details on ZFS feature flags and the bookmarks feature.

+
+
zfs send + [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ --dedup
+
Generate a deduplicated stream. Deduplicated send is deprecated and + will be removed in a future release. (In the future, the flag will + be accepted but a regular, non-deduplicated stream will be generated.) + Blocks which would have been sent multiple times in the send stream + will only be sent once. The receiving system must also support this + feature to receive a deduplicated stream. This flag can be used + regardless of the dataset's dedup property, but + performance will be much better if the filesystem uses a dedup-capable + checksum (for example, sha256).
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --holds
+
Generate a stream package that includes any snapshot holds (created with the zfs hold command), and indicate to zfs receive that the holds should be applied to the dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
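As a sketch (host, pool, and snapshot names are hypothetical), a full replication stream followed by an incremental replication stream might be sent to another system as follows:
# zfs send -R pool/home@monday | ssh host zfs receive -F poolB/backup/home
# zfs send -R -i pool/home@monday pool/home@tuesday | ssh host zfs receive -F poolB/backup/home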
+
zfs send + [-LPcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
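For example (hypothetical names), an incremental stream whose source is a bookmark rather than a snapshot can be generated as follows:
# zfs send -i pool/fs#monday pool/fs@tuesday | ssh host zfs receive poolB/backup/fs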
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
+
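As a sketch (hypothetical host and dataset names), an interrupted receive that was started with zfs receive -s can be resumed by reading the receive_resume_token property on the receiving side and feeding it to zfs send -t:
# zfs send -t "$(ssh host zfs get -H -o value receive_resume_token poolB/backup/fs)" | ssh host zfs receive -s poolB/backup/fs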
zfs receive + [-Fhnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-Fhnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send and receive + deduplicated send streams is deprecated. In the future, the ability + to receive a deduplicated send stream with zfs + receive will be removed. However, in the future, + a utility will be provided to convert a deduplicated send stream to a + regular (non-deduplicated) stream. This future utility will require that + the send stream be located in a seek-able file, rather than provided by + a pipe.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case because, even though origin is a read-only property and cannot be set, it is allowed to receive the send stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w ) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using stdin for the send stream. Instead, the + property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as if zfs inherit property was run on any descendant datasets that have this property set on the sending system.

Any editable property can be set at receive time. Set-once + properties bound to the received data, such as + normalization and + casesensitivity, cannot be set at receive time + even when the datasets are newly created by + zfs receive. + Additionally both settable properties version and + volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+
+

Note that [-o + keylocation=prompt] may + not be specified here, since stdin is already being utilized for the + send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying [-x + encryption] to force the property to be + inherited. Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by + zfs send + -t token, where the + token is the value of the + receive_resume_token property of the filesystem or + volume which is received into.

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(5) for details on ZFS feature flags.

+
+
+
The file system associated with the received stream is not mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
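Putting several of these options together (hypothetical names), an incremental replication stream can be received with a forced rollback, the pool-name prefix dropped, and the mountpoint property excluded so that local settings are preserved:
# zfs send -R -i pool/home@monday pool/home@tuesday | ssh host zfs receive -F -d -x mountpoint poolB/backup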
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability.
+                                Must also have the 'refreservation' ability to
+                                create a non-sparse volume.
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+load-key         subcommand     Allows loading and unloading of encryption key
+                                (see 'zfs load-key' and 'zfs unload-key').
+change-key       subcommand     Allows changing an encryption key via
+                                'zfs change-key'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+projectobjquota  other          Allows accessing any projectobjquota@...
+                                property
+projectquota     other          Allows accessing any projectquota@... property
+projectobjused   other          Allows reading any projectobjused@... property
+projectused      other          Allows reading any projectused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so permissions granted by other means (for example, by an ancestor) remain in effect. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
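For example (hypothetical names), a recursive hold can be placed on a snapshot tree, listed, and later released:
# zfs hold -r keep pool/home@yesterday
# zfs holds -r pool/home@yesterday
# zfs release -r keep pool/home@yesterday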
+
zfs holds + [-rH] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
zfs diff + [-FHt] snapshot + snapshot|filesystem
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are: +
+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+
+
+
Display an indication of the type of file, in a manner similar to the -F option of ls(1).
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+
+
Give more parsable tab-separated output, without header lines and + without arrows.
+
+
Display the path's inode change time as the first column of + output.
+
+
+
zfs program + [-jn] [-t + instruction-limit] [-m + memory-limit] pool script [--] + arg1 ...
+
Executes script as a ZFS channel program on + pool. The ZFS channel program interface allows ZFS + administrative operations to be run programmatically via a Lua script. The + entire script is executed atomically, with no other administrative + operations taking effect concurrently. A library of ZFS calls is made + available to channel program scripts. Channel programs may only be run + with root privileges. +

For full documentation of the ZFS channel program interface, + see the manual page for zfs-program(8).

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error; the details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program cannot change on-disk state by calling functions from the zfs.sync submodule. The program can be used to gather information such as properties and to determine whether changes would succeed (zfs.check.*). Without this flag, all pending changes must be synced to disk before a channel program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. + The default memory limit is 10 MB, and can be set to a maximum of 100 + MB. +

All remaining argument strings are passed directly to the + channel program as arguments. See zfs-program(8) + for more information.

+
+
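As a minimal sketch (the pool and dataset names are hypothetical; see zfs-program(8) for the authoritative Lua interface), a read-only channel program that looks up a property could be written and run as follows:
# cat > get_compression.lua <<'EOF'
-- Look up the compression property of a (hypothetical) dataset.
-- zfs.get_prop returns the property value and the source it came from.
ans, setpoint = zfs.get_prop("tank/home", "compression")
return ans
EOF
# zfs program -n tank get_compression.lua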
+
+
zfs load-key + [-nr] [-L + keylocation] -a | + filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset. Once the + key is loaded the keystatus property will become + available. +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
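For example (hypothetical dataset names and key location), keys can be loaded recursively for a whole pool, or for a single file system from an explicit location:
# zfs load-key -r tank
# zfs load-key -L file:///path/to/keyfile tank/secure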
+
zfs unload-key + [-r] -a | + filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + unavailable. +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Allows a user to change the encryption key used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
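As a sketch (hypothetical dataset names), the key of an encryption root can be rewrapped with a new passphrase and iteration count, or a child encryption root can be made to inherit its parent's key:
# zfs change-key -o keyformat=passphrase -o pbkdf2iters=1000000 tank/secure
# zfs change-key -i tank/secure/child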
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+
+

+

The zfs utility exits 0 on success, 1 if + an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+
+
Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
+
# zfs snapshot pool/home/bob@yesterday
+
+
+
Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named yesterday + of pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+
+
Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next command + explicitly enables compression for + pool/home/anne. +
+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+
+
Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if the listsnaps + property is on. The default is off. + See zpool(8) for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob. +
+
# zfs set quota=50G pool/home/bob
+
+
+
Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob. +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value.

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+ The following command lists all properties with local settings for + pool/home/bob. +
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots. +
+
# zfs rollback -r pool/home/anne@yesterday
+
+
+
Creating a ZFS Clone
+
The following command creates a writable file system whose initial contents are the same as pool/home/bob@yesterday.
+
# zfs clone pool/home/bob@yesterday pool/clone
+
+
+
Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
Inheriting ZFS Properties
+
The following command causes pool/home/bob and + pool/home/anne to inherit the checksum + property from their parent. +
+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+
+
Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.
+
# zfs send pool/fs@a | \
+  ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b | \
+  ssh host zfs receive poolB/received/fs
+
+
+
Using the zfs receive -d Option
+
The following command sends a full stream of poolA/fsA/fsB@snap to a remote machine, receiving it into poolB/received/fsA/fsB@snap. The fsA/fsB@snap portion of the received snapshot's name is determined from the name of the sent snapshot. poolB must contain the file system poolB/received. If poolB/received/fsA does not exist, it is created as an empty file system.
+
# zfs send poolA/fsA/fsB@snap | \
+  ssh host zfs receive -d poolB/received
+
+
+
Setting User Properties
+
The following example sets the user-defined com.example:department property for a dataset.
+
# zfs set com.example:department=12345 tank/accounting
+
+
+
Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set sharenfs property options to enable rw access for a set of IP addresses and to enable root access for system "neo" on the tank/home file system.
+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

If you are using DNS for host name + resolution, specify the fully qualified hostname.

+
+
Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point + access:

+
+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+
+
Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to allow anyone in the group staff to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Showing the differences between a snapshot and a + ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
Creating a bookmark
+
The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot in send streams.
+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+
+
Setting sharesmb Property Options on a ZFS File + System
+
The following example shows how to share an SMB file system through ZFS. Note that a user and their password must be given.
+
# smbmount //127.0.0.1/share_tmp /mnt/tmp \
+  -o user=workgroup/turbo,password=obrut,uid=1000
+
+

Minimal /etc/samba/smb.conf configuration required:

+

Samba will need to listen to 'localhost' (127.0.0.1) for the + ZFS utilities to communicate with Samba. This is the default behavior + for most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in + a number of ways, depending on if using the system password file, LDAP + or the Samba specific smbpasswd file. How to do this is outside the + scope of this manual. Please refer to the smb.conf(5) + man page for more information.

+

See the USERSHARE section of the + smb.conf(5) man page for all configuration options in + case you need to modify any options to the share afterwards. Do note + that any changes done with the net(8) command will be + undone if the share is ever unshared (such as at a reboot etc).

+
+
+
+
+

+


+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-program(8), zpool(8)

+
+
+ + + + + +
April 30, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zfsprops.8.html b/man/v0.8/8/zfsprops.8.html new file mode 100644 index 000000000..781142c8e --- /dev/null +++ b/man/v0.8/8/zfsprops.8.html @@ -0,0 +1,167 @@ + + + + + + + zfsprops.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfsprops.8

+
+ + + + + +
()()
+
+ + + + + +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zgenhostid.8.html b/man/v0.8/8/zgenhostid.8.html new file mode 100644 index 000000000..071a142d6 --- /dev/null +++ b/man/v0.8/8/zgenhostid.8.html @@ -0,0 +1,231 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's Manual (smm)ZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate and store a hostid in + /etc/hostid

+
+
+

+ + + + + +
zgenhostid[hostid]
+
+
+

+

If /etc/hostid does not exist, create it and + store a hostid in it. If the user provides [hostid] on + the command line, store that value. Otherwise, randomly generate a value to + store.

+

This emulates the genhostid(1) utility and is + provided for use on systems which do not include the utility.

+
+
+

+

[hostid] Specifies the value to be placed in /etc/hostid. It must be a number with a value between 1 and 2^32-1. This value should be unique among your systems. It must be expressed in hexadecimal and be exactly 8 digits long.

+
+
+

+
+
Generate a random hostid and store it
+
+
+
# zgenhostid
+
+
+
Record the libc-generated hostid in /etc/hostid
+
+
+
# zgenhostid $(hostid)
+
+
+
Record a custom hostid (0xdeadbeef) in /etc/hostid
+
+
+
# zgenhostid deadbeef
+
+
+
+
+
+

+

genhostid(1), hostid(1), + spl-module-parameters(5)

+
+
+ + + + + +
September 16, 2017Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zinject.8.html b/man/v0.8/8/zinject.8.html new file mode 100644 index 000000000..6b2fac990 --- /dev/null +++ b/man/v0.8/8/zinject.8.html @@ -0,0 +1,332 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
zinject(8)System Administration Commandszinject(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkd [-f + frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev -D latency:lanes + pool
+
+

Add an artificial delay to IO requests on a particular device, + such that the requests take a minimum of 'latency' milliseconds to + complete. Each delay has an associated number of 'lanes' which defines + the number of concurrent IO requests that can be processed.

+

For example, with a single lane delay of 10 ms (-D 10:1), the + device will only be able to service a single IO request at a time with + each request taking 10 ms to complete. So, if only a single request is + submitted every 10 ms, the average latency will be 10 ms; but if more + than one request is submitted every 10 ms, the average latency will be + more than 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D 10:2), then the device will be able to service two requests at a + time, each with a minimum latency of 10 ms. So, if two requests are + submitted every 10 ms, then the average latency will be 10 ms; but if + more than two requests are submitted every 10 ms, the average latency + will be more than 10 ms.

+

Also note that these delays are additive. So two invocations of '-D 10:1' are roughly equivalent to a single invocation of '-D 10:2'. This also means one can specify multiple lanes with differing target latencies. For example, an invocation of '-D 10:1' followed by '-D 25:2' will create 3 lanes on the device; one lane with a latency of 10 ms and two lanes with a 25 ms latency. A combined usage sketch follows the option descriptions below.

+

+
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-f + frequency] [-F] pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-C dvas] [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-C dvas] [-e device_error] + [-f frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-C dvas] [-e + device_error] [-f frequency] [-l + level] [-r range] [-amqu] + pool
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas (ex. '0,2'). + This option is not applicable to logical data errors such as + decompress and decrypt.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, decompress for a data + decompression error, decrypt for a data decryption error, + corrupt to flip a bit in the data after a read, dtl for an + ECHILD error, io for an EIO error where reopening the device will + succeed, or nxio for an ENXIO error where reopening the device will + fail. For EIO and ENXIO, the "failed" reads or writes still + occur. The probe simply sets the error value reported by the I/O pipeline + so it appears the read or write failed. Decryption errors only currently + work with file data.
+
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
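The following is a usage sketch only; the pool name, device name, and file path are placeholders. It injects an artificial 25 ms, two-lane delay on one vdev, injects checksum errors into the contents of a file 5% of the time, then lists and cancels all injection records:

# zinject -d sdb -D 25:2 tank
# zinject -t data -e checksum -f 5 /tank/test/file
# zinject
# zinject -c all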
+
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
2013 FEB 28ZFS on Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zpool.8.html b/man/v0.8/8/zpool.8.html new file mode 100644 index 000000000..d15bf9ef3 --- /dev/null +++ b/man/v0.8/8/zpool.8.html @@ -0,0 +1,2629 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's Manual (smm)ZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev...
+
+ + + + + +
zpoolattach [-f] + [-o + property=value] + pool device new_device
+
+ + + + + +
zpoolcheckpoint [-d, + --discard] pool
+
+ + + + + +
zpoolclear pool + [device]
+
+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]... + [-o + feature@feature=value] + [-O + file-system-property=value]... + [-R root] + pool vdev...
+
+ + + + + +
zpooldestroy [-f] + pool
+
+ + + + + +
zpooldetach pool device
+
+ + + + + +
zpoolevents [-vHf + [pool] | -c]
+
+ + + + + +
zpoolexport [-a] + [-f] pool...
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + [pool]...
+
+ + + + + +
zpoolhistory [-il] + [pool]...
+
+ + + + + +
zpoolimport [-D] + [-d dir|device]
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-n] [-T] + [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root]
+
+ + + + + +
zpoolimport [-Dflm] + [-F [-n] + [-T] [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool [-t]]
+
+ + + + + +
zpoolinitialize [-c | + -s] pool + [device...]
+
+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
+ + + + + +
zpoollabelclear [-f] + device
+
+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+ + + + + +
zpoolreguid pool
+
+ + + + + +
zpoolreopen [-n] + pool
+
+ + + + + +
zpoolremove [-np] + pool device...
+
+ + + + + +
zpoolremove -s + pool
+
+ + + + + +
zpoolreplace [-f] + [-o + property=value] + pool device + [new_device]
+
+ + + + + +
zpoolresilver pool...
+
+ + + + + +
zpoolscrub [-s | + -p] pool...
+
+ + + + + +
zpooltrim [-d] + [-r rate] + [-c | -s] + pool [device...]
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]... + [-R root] + pool newpool [device]...
+
+ + + + + +
zpoolstatus [-c + SCRIPT] [-DigLpPstvx] + [-T u|d] + [pool]... [interval + [count]]
+
+ + + + + +
zpoolsync [pool]...
+
+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool...
+
+ + + + + +
zpoolversion
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with N disks of size X + can hold X bytes and can withstand (N-1) devices failing before data + integrity is compromised.
+
, + raidz1, raidz2, + raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single-, double-, or triple-parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can + hold approximately (N-P)*X bytes and can withstand P device(s) failing + before data integrity is compromised. The minimum number of devices in a + raidz group is one more than the number of parity disks. The recommended + number is between 3 and 9 to help increase performance.

+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device dedicated solely for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords mirror and + raidz are used to distinguish where a group ends and + another begins. For example, the following creates two root vdevs, each a + mirror of two disks:

+
+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+
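Similarly (a sketch; the device names are placeholders), the raidz2 keyword groups the following disks into a single double-parity root vdev:

# zpool create mypool raidz2 sda sdb sdc sdd sde sdf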
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+
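For example (a sketch), the health of all imported pools can be summarized with zpool status; the -x option limits the output to pools that are not healthy:

# zpool status -x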

The health of the top-level vdev, such as mirror or raidz device, + is potentially impacted by the state of its associated vdevs, or component + devices. A top-level vdev or component device is in one of the following + states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

If a device is removed and later re-attached to the system, ZFS + attempts to put the device online automatically. Device attach detection is + hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+
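For example (a sketch; the device name is a placeholder), a hot spare can be added to and later removed from an existing pool:

# zpool add pool spare sde
# zpool remove pool sde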

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
+
# zpool create pool sda sdb log sdc
+
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+
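As a sketch (device names are placeholders), a mirrored log can also be added to an existing pool:

# zpool add pool log mirror sdd sde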

Log devices can be added, replaced, attached, detached and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.

+
+
+

+

Devices can be added to a storage pool as "cache devices". These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
+
# zpool create pool sda sdb cache sdc sdd
+
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is considered volatile, as is the + case with other system caches.

+
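Cache devices can likewise be added to or removed from an existing pool (a sketch; the device name is a placeholder):

# zpool add pool cache sde
# zpool remove pool sde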
+
+

+

Before starting critical procedures that include destructive actions (e.g. zfs destroy), an administrator can checkpoint the pool's state and, in the case of a mistake or failure, rewind the entire pool back to the checkpoint. Otherwise, the checkpoint can be discarded when the procedure has completed successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and should be used with care as it contains every part of the pool's state, from properties to vdev configuration. Thus, while a pool has a checkpoint, certain operations are not allowed: specifically, vdev removal/attach/detach, mirror splitting, and changing the pool's guid. Adding a new vdev is supported, but in the case of a rewind it will have to be added again. Finally, users of this feature should keep in mind that scrubs in a pool that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
+
# zpool checkpoint pool
+
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
+
# zpool export pool
+# zpool import --rewind-to-checkpoint pool
+
+

To discard the checkpoint from a pool:

+
+
# zpool checkpoint -d pool
+
+

Dataset reservations (controlled by the + reservation or + refreservation zfs properties) may be unenforceable + while a checkpoint exists, because the checkpoint is allowed to consume the + dataset's reservation. Finally, data that is part of the checkpoint but has + been freed in the current state of the pool won't be scanned during a + scrub.

+
+
+

+

The allocations in the special class are dedicated to specific + block types. By default this includes all metadata, the indirect blocks of + user data, and any deduplication tables. The class can also be provisioned + to accept small file blocks.

+

A pool must always have at least one normal (non-dedup/special) + vdev before other devices can be assigned to the special class. If the + special class becomes full, then allocations intended for it will spill back + into the normal class.

+

Deduplication tables can be excluded from the special class by setting the zfs_ddt_data_is_special zfs module parameter to false (0).

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks dataset property. It defaults to zero, so you must opt in by setting it to a non-zero value. See zfs(8) for more info on setting this property.

+
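A minimal sketch, assuming placeholder device and dataset names: create a pool with a mirrored special vdev, then opt a dataset into placing small file blocks (here, 32K or smaller) in the special class:

# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool/dataset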
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. Uninitialized space consists + of any space on an EFI labeled vdev which has not been brought online + (e.g, using zpool online + -e). This space occurs when a LUN is dynamically + expanded.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the zfs(8) available property describes how much new data can be written to ZFS filesystems/volumes. The zpool free property is not generally useful for this purpose, and can be substantially more than the zfs available space. This discrepancy is due to several factors, including raidz parity; zfs reservation, quota, refreservation, and refquota properties; and space set aside by spa_slop_shift (see zfs-module-parameters(5) for more information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, or UNAVAIL.
+
+
A unique identifier for the pool.
+
+
A unique identifier for the pool. Unlike the guid property, this identifier is generated every time we load the pool (i.e. it does not persist across imports/exports) and never changes while the pool is loaded (even if a reguid operation takes place).
+
+
Total size of the storage pool.
+
+
Information about unsupported features that are enabled on the pool. See + zpool-features(5) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpool command does not. For non-full pools of a + reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096); see the sketch following this property list. When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) man page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) man page for more details.
+
=|pool/dataset
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=number
+
This property is deprecated. In a future release, it will no longer have + any effect. +

Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0, which causes no ditto copies to be created for deduplicated blocks. The minimum legal nonzero setting is 100.

+
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
=on|off
+
When set to on, space which has been recently freed, and is no longer allocated by the pool, will be periodically trimmed. This allows block device vdevs which support BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system supports hole-punching, to reclaim unused blocks. The default setting for this property is off.

Automatic TRIM does not immediately reclaim blocks after a free. Instead, it will optimistically delay, allowing smaller ranges to be aggregated into a few larger ones. These can then be issued more efficiently to the storage.

+

Be aware that automatic trimming of recently freed data blocks can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands. For lower-end devices it is often possible to achieve most of the benefits of automatic trimming by running an on-demand (manual) TRIM periodically using the zpool trim command.

+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs-module-parameters(5) man page. In order to enable this property, each host must set a unique hostid. See zgenhostid(8) and spl-module-parameters(5) for additional details. The default value is off.

+
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
+
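As referenced from the ashift description above, the following sketch (pool and device names are placeholders) sets properties with -o at creation time, changes one later with zpool set, and reads them back with zpool get:

# zpool create -o ashift=12 -o autotrim=on tank mirror sda sdb
# zpool set autotrim=off tank
# zpool get ashift,autotrim tank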

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
An alias for the zpool + version subcommand.
+
zpool add + [-fgLnP] [-o + property=value] + pool vdev...
+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the + Virtual Devices section. The + behavior of the -f option, and the device checks + performed are described in the zpool + create subcommand. +
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all + symbolic links. This can be used to look up the current block device + name regardless of the /dev/disk/ path used to open it.
+
+
Displays the configuration that would be used without actually adding + the vdevs. The actual pool creation can still + fail due to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool attach + [-f] [-o + property=value] + pool device new_device
+
Attaches new_device to the existing + device. The existing device cannot be part of a + raidz configuration. If device is not currently part + of a mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part + of a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately. +
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool checkpoint + [-d, --discard] + pool
+
Checkpoints the current state of pool , which can be + later restored by zpool import + --rewind-to-checkpoint. The existence of a checkpoint in a pool + prohibits the following zpool commands: + remove, attach, + detach, split, and + reguid. In addition, it may break reservation + boundaries if the pool lacks free space. The zpool + status command indicates the existence of a + checkpoint or the progress of discarding a checkpoint from a pool. The + zpool list command reports + how much space the checkpoint takes from the pool. +
+
+ --discard
+
Discards an existing checkpoint from pool.
+
+
+
zpool clear + pool [device]
+
Clears device errors in a pool. If no arguments are specified, all device + errors within the pool are cleared. If one or more devices is specified, + only those errors associated with the specified device or devices are + cleared. If multihost is enabled, and the pool has been suspended, this + will not resume I/O. While the pool was suspended, it may have been + imported on another host, and resuming I/O could result in pool + damage.
+
zpool create + [-dfn] [-m + mountpoint] [-o + property=value]... + [-o + feature@feature=value]... + [-O + file-system-property=value]... + [-R root] + [-t tname] + pool vdev...
+
Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, spare and log are reserved, as are names beginning with mirror, raidz, spare, and the pattern c[0-9]. The vdev specification is described in the Virtual Devices section.

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However, this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost is enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or being specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with the -f option.

+

The command also checks that the replication strategy for the + pool is consistent. An attempt to combine redundant and non-redundant + storage in a single pool, or to mix disks and files, results in an error + unless -f is specified. The use of differently + sized devices within a single raidz or mirror group is also flagged as + an error unless -f is specified.

+

Unless the -R option is specified, the + default mount point is + /pool. The mount point + must not exist or must be empty, or else the root dataset cannot be + mounted. This can be overridden with the -m + option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with the -o option. + See zpool-features(5) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is /pool or altroot/pool if altroot is specified. The mount point must be an absolute path, legacy, or none. For more information on dataset mount points, see zfs(8).
+
+
Displays the configuration that would be used without actually + creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set.
+
+ feature@feature=value
+
Sets the given pool feature. See the + zpool-features(5) section for a list of valid + features that can be set. Value can be either disabled or + enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the + pool. See the Properties section + of zfs(8) for a list of valid properties that can be + set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname, while the on-disk name will be the name specified as the pool name. This will set the default cachefile property to none. This is intended to handle name space collisions when creating pools for other systems, such as virtual machines or physical machines whose pools live on network block devices.
+
+
+
zpool destroy + [-f] pool
+
Destroys the given pool, freeing up any devices for other use. This + command tries to unmount any active datasets before destroying the pool. +
+
+
Forces any active datasets contained within the pool to be + unmounted.
+
+
+
zpool detach + pool device
+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may be + re-added to the pool later on then consider the zpool + offline command instead.
+
zpool events + [-vHf [pool] | + -c]
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page. +
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
zpool export + [-a] [-f] + pool...
+
Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, + so that ZFS can label the disks with portable EFI labels. Otherwise, + disk drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, using the + unmount -f command. +

This command will forcefully export the pool even if it + has a shared spare that is currently being used. This may lead to + potential data corruption.

+
+
+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + [pool]...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the Properties + section for more information on the available pool properties.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool history + [-il] [pool]...
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified. +
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard + format includes, the user name, the hostname, and the zone in which + the operation was performed.
+
+
+
zpool import + [-D] [-d + dir|device]
+
Lists pools available to import. If the -d option + is not specified, this command searches for devices in + /dev. The -d option can be + specified multiple times, and all directories are searched. If the device + appears to be part of an exported pool, this command displays a summary of + the pool with the name of the pool, a numeric identifier, as well as the + vdev layout and current health of the device for each device or file. + Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the readonly mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflm] [-F + [-n] [-t] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set -o + cachefile=none when not explicitly specified.
+
+
+
zpool initialize + [-c | -s] + pool [device...]
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified. Only leaf data or log devices may be initialized. +
+
+ --cancel
+
Cancel initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + cancellation will occur on any device.
+
+ --suspend
+
Suspend initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + suspension will occur on any device. Initializing can then be resumed + by running zpool + initialize with no flags on the relevant + target devices.
+
+
+
zpool iostat + [[[-c SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until ^C is pressed. If the -n flag is specified the headers are displayed only once, otherwise they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the size units (e.g. K, M, G) printed in the report are in base 1024. To get the raw values, use the -p flag.
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + iostat output. Users can run any script found + in their ~/.zpool.d directory or from the + system /etc/zfs/zpool.d directory. Script + names containing the slash (/) character are not allowed. The default + search path can be overridden by setting the ZPOOL_SCRIPTS_PATH + environment variable. A privileged user can run + -c if they have the ZPOOL_SCRIPTS_AS_ROOT + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or + add the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script + name, it prints a list of all scripts. -c + also sets verbose mode + (-v).

+

Script output should be in the form of + "name=value". The column name is set to "name" + and the value is set to "value". Multiple lines can be + used to output multiple columns. The first line of output not in the + "name=value" format is displayed without a column title, + and no more output after that is displayed. This can be useful for + printing error messages. Blank or NULL values are printed as a '-' + to make output awk-able.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
+
+
Underlying path to the vdev (/dev/sd*). For use with device + mapper, multipath, or partitioned vdevs.
+
+
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
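As a minimal sketch of such a script (the script name devsize and the lsblk call are illustrative assumptions, not part of this page; the underlying-path variable is the one described above), a file placed in ~/.zpool.d could emit one name=value pair per column:
#!/bin/sh
# Hypothetical ~/.zpool.d/devsize script: print the size of the underlying
# block device as a new "devsize" column. VDEV_UPATH is set by zpool iostat -c.
echo "devsize=$(lsblk -dno SIZE "$VDEV_UPATH" 2>/dev/null || echo -)"
It could then be invoked with: # zpool iostat -c devsize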
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Print request size histograms for the leaf vdev's IO. This includes + histograms of individual IOs (ind) and aggregate IOs (agg). These + stats can be useful for observing how well IO aggregation is working. + Note that TRIM IOs may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
Omit statistics since boot. Normally the first line of output reports + the statistics since boot. This option suppresses that first line of + output.
+
+
Display latency histograms: +

total_wait: Total IO time (queuing + + disk IO time). disk_wait: Disk IO time (time + reading/writing the disk). syncq_wait: Amount + of time IO spent in synchronous priority queues. Does not include + disk time. asyncq_wait: Amount of time IO + spent in asynchronous priority queues. Does not include disk time. + scrub: Amount of time IO spent in scrub queue. + Does not include disk time.

+
+
+
Include average latency statistics: +

total_wait: Average total IO time + (queuing + disk IO time). disk_wait: Average + disk IO time (time reading/writing the disk). + syncq_wait: Average amount of time IO spent in + synchronous priority queues. Does not include disk time. + asyncq_wait: Average amount of time IO spent + in asynchronous priority queues. Does not include disk time. + scrub: Average queuing time in scrub queue. + Does not include disk time. trim: Average + queuing time in trim queue. Does not include disk time.

+
+
+
Include active queue statistics. Each priority queue has both pending + ( pend) and active ( + activ) IOs. Pending IOs are waiting to be issued + to the disk, and active IOs have been issued to disk and are waiting + for completion. These stats are broken out by priority queue: +

syncq_read/write: Current number of + entries in synchronous priority queues. + asyncq_read/write: Current number of entries + in asynchronous priority queues. scrubq_read: + Current number of entries in scrub queue. + trimq_write: Current number of entries in trim + queue.

+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
zpool labelclear + [-f] device
+
Removes ZFS label information from the specified + device. The device must not be + part of an active pool configuration. +
+
+
Treat exported or foreign devices as inactive.
+
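For example, stale label data could be cleared from a disk that previously belonged to an exported pool with a command of this form (the device path is a placeholder):
# zpool labelclear -f /dev/sdc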
+
+
zpool list + [-HgLpPv] [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
Lists the given pools along with a health status and space usage. If no + pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until ^C is pressed. + If count is specified, the command exits after + count reports are printed. +
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + Properties section for a list of + valid properties. The default list is name, + size, allocated, + free, checkpoint, + expandsize, fragmentation, + capacity, dedupratio, + health, altroot.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
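As an illustration (pool and device names are placeholders), a disk could be taken offline temporarily and later brought back online with expansion:
# zpool offline -t tank sdb
# zpool online -e tank sdb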
+
+
zpool reguid + pool
+
Generates a new unique identifier for the pool. You must ensure that all + devices in this pool are online and healthy before performing this + action.
+
zpool reopen + [-n] pool
+
Reopen all the vdevs associated with the pool. +
+
+
Do not restart an in-progress scrub operation. This is not recommended + and can result in partially resilvered devices unless a second scrub + is performed.
+
+
+
zpool + remove [-np] + pool device...
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. When the primary pool + storage includes a top-level raidz vdev only hot spare, cache, and log + devices can be removed. +

Removing a top-level vdev reduces the total amount of space in + the storage pool. The specified device will be evacuated by copying all + allocated space from it to the other devices in the pool. In this case, + the zpool remove command + initiates the removal and returns, while the evacuation continues in the + background. The removal progress can be monitored with + zpool status. If an IO + error is encountered during the removal process it will be cancelled. + The device_removal + feature flag must be enabled to remove a top-level vdev, see + zpool-features(5).

+

A mirrored top-level device (log or data) can be removed by + specifying the top-level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("no-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
zpool replace + [-f] [-o + property=value] + pool device + [new_device]
+
Replaces old_device with + new_device. This is equivalent to attaching + new_device, waiting for it to resilver, and then + detaching old_device. +

The size of new_device must be greater + than or equal to the minimum size of all the devices in a mirror or + raidz configuration.

+

new_device is required if the pool is + not redundant. If new_device is not specified, it + defaults to old_device. This form of replacement + is useful after an existing disk has failed and has been physically + replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is + actually a different disk. ZFS recognizes this.

+
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + Properties section for a list of + valid properties that can be set. The only property supported at the + moment is ashift.
+
+
+
zpool scrub + [-s | -p] + pool...
+
Begins a scrub or resumes a paused scrub. The scrub examines all data in + the specified pools to verify that it checksums correctly. For replicated + (mirror or raidz) devices, ZFS automatically repairs any damage discovered + during the scrub. The zpool + status command reports the progress of the scrub + and summarizes the results of the scrub upon completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be + out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is paused, the + zpool scrub resumes it. + If a resilver is in progress, ZFS does not allow a scrub to be started + until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During + this period, no completion time estimate will be provided.

+
+
+
Stop scrubbing.
+
+
+
+
Pause scrubbing. Scrub pause state and progress are periodically + synced to disk. If the system is restarted or pool is exported during + a paused scrub, even after import, scrub will remain paused until it + is resumed. Once resumed the scrub will pick up from the place where + it was last checkpointed to disk. To resume a paused scrub issue + zpool scrub + again.
+
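For example (the pool name is a placeholder), a scrub could be started, paused, and later resumed by issuing:
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank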
+
+
zpool + resilver pool...
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning. Any drives that were scheduled for a + deferred resilver will be added to the new one. This requires the + resilver_defer + feature.
+
zpool trim + [-d] [-c | + -s] pool + [device...]
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space. +

A manual on-demand TRIM operation can be initiated + irrespective of the autotrim pool property setting. + See the documentation for the autotrim property above + for the types of vdev devices which can be trimmed.

+
+
+ --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, + the device guarantees that data stored on the trimmed blocks has been + erased. This requires support from the device and is not supported by + all SSDs.
+
+ --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
+ --cancel
+
Cancel trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no cancellation + will occur on any device.
+
+ --suspend
+
Suspend trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no suspension + will occur on any device. Trimming can then be resumed by running + zpool trim with no + flags on the relevant target devices.
+
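As a sketch (pool and device names are placeholders), a pool-wide TRIM could be started and a per-device TRIM later cancelled with:
# zpool trim tank
# zpool trim -c tank sdb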
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + Properties section for more + information on what properties can be set and acceptable values.
+
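For example, the autotrim property mentioned under zpool trim could be enabled on a pool (the pool name is a placeholder) with:
# zpool set autotrim=on tank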
zpool split + [-gLlnP] [-o + property=value]... + [-R root] pool + newpool [device ...]
+
Splits devices off pool creating + newpool. All vdevs in pool + must be mirrors and the pool must not be in the process of resilvering. At + the time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool. +

The optional device specification causes the specified + device(s) to be included in the new pool and, + should any devices remain unspecified, the last device in each mirror is + used, as it would be by default.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the new pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Do dry run, do not actually perform the split. Print out the expected + configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the specified property for newpool. See the + Properties section for more + information on the available pool properties.
+
+ root
+
Set altroot for newpool to + root and automatically import it.
+
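For example (pool names are placeholders), the last device of each mirror in tank could be split off into a new pool and imported under an alternate root with:
# zpool split -R /mnt tank newtank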
+
+
zpool status + [-c + [SCRIPT1[,SCRIPT2]...]] + [-DigLpPstvx] [-T + u|d] [pool]... + [interval [count]]
+
Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in + the system is displayed. For more information on pool and device health, + see the Device Failure + and Recovery section. +

If a scrub or resilver is in progress, this command reports + the percentage done and the estimated time to completion. Both of these + are only approximate, because the amount of data in the pool and the + other workloads on the system can change.

+
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + status output. See the + -c option of zpool + iostat for complete details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in + the pool) block counts and sizes by reference count.
+
+
Display the number of leaf VDEV slow IOs. This is the number of IOs + that didn't complete in zio_slow_io_ms milliseconds (default 30 + seconds). This does not necessarily mean the IOs failed to complete, + just that they took an unreasonably long time. This may indicate a + problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Displays verbose data error information, printing out a complete list + of all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are + otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
+
+
+
zpool sync + [pool ...]
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all pools on the system. Otherwise, + it will sync only the specified pool(s).
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools.
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by the current software. See + zpool-features(5) for a description of the feature flags + supported by the current software.
+
zpool upgrade + [-V version] + -a|pool...
+
Enables all supported features on the given pool. Once this is done, the + pool will no longer be accessible on systems that do not support feature + flags. See zpool-features(5) for details on + compatibility with systems that support feature flags, but do not support + all features enabled on the pool. +
+
+
Enables all supported features on all pools.
+
+ version
+
Upgrade to the specified legacy version. If the + -V flag is specified, no features will be + enabled on the pool. This option can only be used to increase the + version number up to the last supported legacy version number.
+
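As an illustration (the pool name is a placeholder, and 28 is assumed to be a supported legacy version number), a pool could be upgraded to a specific legacy version with:
# zpool upgrade -V 28 tank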
+
+
zpool version
+
Displays the software version of the zpool + userland utility and the zfs kernel module.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+

+
+
Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks. +
+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
+
Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks. +
+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+
+
Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions. +
+
# zpool create tank sda1 sdb2
+
+
+
Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
+
# zpool create tank /path/to/file/a /path/to/file/b
+
+
+
Adding a Mirror to a ZFS Storage Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool. +
+
# zpool add tank mirror sda sdb
+
+
+
Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this + case, the pool zion is faulted due to a missing device. The results from + this command are similar to the following: +
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within. +
+
# zpool destroy -f tank
+
+
+
Exporting a ZFS Storage Pool
+
The following command exports the devices in pool tank + so that they can be relocated or later imported. +
+
# zpool export tank
+
+
+
Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software. +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
+
# zpool create tank mirror sda sdb spare sdc
+
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
+
# zpool replace tank sda sdd
+
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The + hot spare can be permanently removed from the pool using the following + command:

+
+
# zpool remove tank sdc
+
+
+
Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+  sde sdf
+
+
+
Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
+
# zpool add pool cache sdc sdd
+
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take + over an hour for them to fill. Capacity and reads can be monitored using + the iostat option as follows:

+
+
# zpool iostat -v pool 5
+
+
+
Removing a Mirrored top-level (Log or Data) + Device
+
The following commands remove the mirrored log device + mirror-2 and mirrored top-level data device + mirror-1. +

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
+
# zpool remove tank mirror-2
+
+

The command to remove the mirrored data + mirror-1 is:

+
+
# zpool remove tank mirror-1
+
+
+
Displaying expanded space on a + device
+
The following command displays the detailed information for the pool + data. This pool is comprised of a single raidz vdev where one of its devices + increased its capacity by 10GB. In this example, the pool will not be able + to utilize this extra capacity until all the devices under the raidz vdev + have been expanded. +
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
Adding output columns
+
Additional columns can be added to the zpool + status and zpool + iostat output with the -c + option. +
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc slaves
+   capacity operations bandwidth
+   pool       alloc free  read  write read  write slaves
+   ---------- ----- ----- ----- ----- ----- ----- ---------
+   tank       20.4G 7.23T 26    152   20.7M 21.6M
+   mirror     20.4G 7.23T 26    152   20.7M 21.6M
+   U1         -     -     0     31    1.46K 20.6M sdb sdff
+   U10        -     -     0     1     3.77K 13.3K sdas sdgw
+   U11        -     -     0     1     288K  13.3K sdat sdgx
+   U12        -     -     0     1     78.4K 13.3K sdau sdgy
+   U13        -     -     0     1     128K  13.3K sdav sdgz
+   U14        -     -     0     1     63.2K 13.3K sdfk sdg
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool status + -g command line option.
+
+
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the zpool + status -L command line option.
+
+
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the zpool + status -p command line option.
+
+
+
+
Older ZFS on Linux implementations had issues when attempting to display + pool config VDEV names if a devid NVP value is present + in the pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and zpool + status would fail when listing the config. This would also be + true for future Linux based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+
+
+
+
+
Allow a privileged user to run the zpool + status/iostat with the -c option. Normally, + only unprivileged users are allowed to run + -c.
+
+
+
+
The search path for scripts when running zpool + status/iostat with the -c option. This is a + colon-separated list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
+
+
Allow a user to run zpool status/iostat with the + -c option. If + ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the + user is allowed to run zpool status/iostat + -c.
+
+
+
+

+

+
+
+

+

zfs-events(5), + zfs-module-parameters(5), + zpool-features(5), zed(8), + zfs(8)

+
+
+ + + + + +
May 2, 2019Linux
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/8/zstreamdump.8.html b/man/v0.8/8/zstreamdump.8.html new file mode 100644 index 000000000..b15ed8b8e --- /dev/null +++ b/man/v0.8/8/zstreamdump.8.html @@ -0,0 +1,205 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
zstreamdump(8)System Administration Commandszstreamdump(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v] [-d]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs + send command, then displays headers and some statistics from that + output. See zfs(8).

+
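For example (dataset and snapshot names are placeholders), a send stream can be inspected without storing it by piping it through the utility:
# zfs send tank/fs@snap | zstreamdump -v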
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+

-d

+

+
Dump contents of blocks modified. Implies verbose.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
29 Aug 2012ZFS pool 28, filesystem 5
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v0.8/index.html b/man/v0.8/index.html new file mode 100644 index 000000000..f0272e0c9 --- /dev/null +++ b/man/v0.8/index.html @@ -0,0 +1,143 @@ + + + + + + + v0.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/arcstat.1.html b/man/v2.0/1/arcstat.1.html new file mode 100644 index 000000000..194cf4a69 --- /dev/null +++ b/man/v2.0/1/arcstat.1.html @@ -0,0 +1,364 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstat - report ZFS ARC and L2ARC statistics

+
+
+

+
arcstat [-havxp] [-f field[,field]...] [-o file] [-s string] [interval [count]]
+

+
+
+

+

The arcstat utility prints various ZFS ARC and L2ARC + statistics in a vmstat-like fashion.

+

+

+

+

The arcstat command reports the following information:

+

+

+

c

+
ARC target size
+

+

dh%

+
Demand data hit percentage
+

+

dm%

+
Demand data miss percentage
+

+

mfu

+
MFU list hits per second
+

+

mh%

+
Metadata hit percentage
+

+

mm%

+
Metadata miss percentage
+

+

mru

+
MRU list hits per second
+

+

ph%

+
Prefetch hits percentage
+

+

pm%

+
Prefetch miss percentage
+

+

dhit

+
Demand data hits per second
+

+

dmis

+
Demand data misses per second
+

+

hit%

+
ARC hit percentage
+

+

hits

+
ARC reads per second
+

+

mfug

+
MFU ghost list hits per second
+

+

mhit

+
Metadata hits per second
+

+

miss

+
ARC misses per second
+

+

mmis

+
Metadata misses per second
+

+

mrug

+
MRU ghost list hits per second
+

+

phit

+
Prefetch hits per second
+

+

pmis

+
Prefetch misses per second
+

+

read

+
Total ARC accesses per second
+

+

time

+
Time
+

+

size

+
ARC size
+

+

arcsz

+
Alias for size
+

+

dread

+
Demand data accesses per second
+

+

eskip

+
evict_skip per second
+

+

miss%

+
ARC miss percentage
+

+

mread

+
Metadata accesses per second
+

+

pread

+
Prefetch accesses per second
+

+

l2hit%

+
L2ARC access hit percentage
+

+

l2hits

+
L2ARC hits per second
+

+

l2miss

+
L2ARC misses per second
+

+

l2read

+
Total L2ARC accesses per second
+

+

l2size

+
Size of the L2ARC
+

+

mtxmis

+
mutex_miss per second
+

+

l2bytes

+
Bytes read per second from the L2ARC
+

+

l2miss%

+
L2ARC access miss percentage
+

+

l2asize

+
Actual (compressed) size of the L2ARC
+

+

grow

+
ARC grow disabled
+

+

need

+
ARC reclaim needed
+

+

free

+
The ARC's idea of how much free memory there is, which + includes evictable memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually more instructive to observe + than free.
+

+

avail

+
The ARC's idea of how much free memory is available to + it, which is a bit less than free. May temporarily be negative, in + which case the ARC will reduce the target size c.
+

+
+
+

+

The following options are supported:

+

+

-a

+
Print all possible stats.
+

+

-f

+
Display only specific fields. See DESCRIPTION for + supported statistics.
+

+

-h

+
Display help message.
+

+

-o

+
Report statistics to a file instead of the standard + output.
+

+

-p

+
Disable auto-scaling of numerical fields (for raw, + machine-parsable values).
+

+

-s

+
Display data with a specified separator (default: 2 + spaces).
+

+

-x

+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+

+

-v

+
Show field headers and definitions
+

+
+
+

+

The following operands are supported:

+

count

+
Display only count reports.
+

+

interval

+
Specify the sampling interval in seconds.
+

+
+
+

+

arcstat was originally written in Perl by Neelakanth Nadgir and + supported only ZFS ARC statistics. Mike Harsch updated it to support L2ARC + statistics. John Hixson ported it to Python for FreeNAS over some beer, + after which many individuals from the OpenZFS community continued to + maintain and improve it.

+
+
+ + + + + +
October 20, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/cstyle.1.html b/man/v2.0/1/cstyle.1.html new file mode 100644 index 000000000..3700e91fe --- /dev/null +++ b/man/v2.0/1/cstyle.1.html @@ -0,0 +1,286 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstyle - check for some common stylistic errors in C source + files

+
+
+

+

cstyle [-chpvCP] [-o constructs] [file...]

+
+
+

+

cstyle inspects C source files (*.c and *.h) for common + stylistic errors. It attempts to check for the cstyle documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that + there is much in that document that cannot be checked for; just + because your code is cstyle(1) clean does not mean that you've + followed Sun's C style. Caveat emptor.

+
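For example, a single source file could be checked with the stricter putback-style flags as follows (the file name is a placeholder):
cstyle -pP foo.c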
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented exactly four + spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI #else and #endif + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + "u_int" and "u_long" were used, but they are now + deprecated in favor of the POSIX types uint_t, ulong_t, etc. This detects + any use of the deprecated types. Used as part of the putback checks.
+
+
Allow a comma-separated list of additional constructs. Available + constructs include:
+
+
Allow doxygen-style block comments (/** and /*!)
+
+
Allow splint-style lint comments (/*@...@*/)
+
+
+
+

+

The cstyle rule for the OS/Net consolidation is that all new files + must be -pP clean. For existing files, the following invocations are + run against both the old and new files:

+
+
+
+
+
+
+
+
+

If the old file gave no errors for one of the invocations, the new + file must also give no errors. This way, files can only become more + clean.

+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
+
1.
+
Preprocessor macros which cause unmatched parenthesis will confuse the + checker for that line. To fix this, you'll need to make sure that each + branch of the #if statement has balanced parenthesis.
+
2.
+
Some cpp macros do not require ;s after them. Any such macros + *must* be ALL_CAPS; any lower case letters will cause bad output.
+
+

The bad output will generally be corrected after the next + ;, {, or }.

+

Some continuation error messages deserve some additional + explanation

+
+
+
A multi-line statement which is not broken at statement boundaries. For + example:
+
+
+

if (this_is_a_long_variable == another_variable) a = +
+ b + c;

+

Will trigger this error. Instead, do:

+

if (this_is_a_long_variable == another_variable) +
+ a = b + c;

+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example:
+
+
+

while (do_something(&x) == 0);

+

Will trigger this error. Instead, do:

+

while (do_something(&x) == 0) +
+ ;

+
+

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/index.html b/man/v2.0/1/index.html new file mode 100644 index 000000000..ea31d7096 --- /dev/null +++ b/man/v2.0/1/index.html @@ -0,0 +1,155 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/raidz_test.1.html b/man/v2.0/1/raidz_test.1.html new file mode 100644 index 000000000..27b4863d9 --- /dev/null +++ b/man/v2.0/1/raidz_test.1.html @@ -0,0 +1,261 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+

+
+

+

raidz_test - raidz implementation verification and + benchmarking tool

+
+
+

+

raidz_test <options>

+
+
+

+

This manual page documents briefly the raidz_test + command.

+

The purpose of this tool is to run all supported raidz implementations + and verify the results of all methods. The tool also contains a parameter sweep + option where all parameters affecting a RAIDZ block are verified (like ashift + size, data offset, data size, etc.). The tool also supports a benchmarking + mode using the -B option.

+
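For instance, the benchmarking mode and a time-bounded parameter sweep could be invoked as follows (the 60-second limit is only an example):
raidz_test -B
raidz_test -S -t 60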
+
+

+

-h

+
+
+
Print a help summary.
+
+

-a ashift (default: 9)

+
+
+
Ashift value.
+
+

-o zio_off_shift (default: 0)

+
+
+
Zio offset for raidz block. Offset value is 1 << + (zio_off_shift)
+
+

-d raidz_data_disks (default: 8)

+
+
+
Number of raidz data disks to use. Additional disks for parity will be + used during testing.
+
+

-s zio_size_shift (default: 19)

+
+
+
Size of data for raidz block. Size is 1 << (zio_size_shift).
+
+

-S(weep)

+
+
+
Sweep parameter space while verifying the raidz implementations. This + option will exhaust most of the valid values for the -a, -o, -d, and -s + options. Runtime using this option will be long.
+
+

-t(imeout)

+
+
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
+

-B(enchmark)

+
+
+
This options starts the benchmark mode. All implementations are + benchmarked using increasing per disk data size. Results are given as + throughput per disk, measured in MiB/s.
+
+

-v(erbose)

+
+
+
Increase verbosity.
+
+

-T(est the test)

+
+
+
Debugging option. When this option is specified, the tool is supposed to fail + all tests. This is to check if the tests would properly verify + bit-exactness.
+
+

-D(ebug)

+
+
+
Debugging option. Specify to attach gdb when SIGSEGV or SIGABRT are + received.
+
+

+

+
+
+

+

ztest (1)

+
+
+

+

vdev_raidz, created for OpenZFS by Gvozden Nešković + <neskovic@gmail.com>

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/zhack.1.html b/man/v2.0/1/zhack.1.html new file mode 100644 index 000000000..343f84624 --- /dev/null +++ b/man/v2.0/1/zhack.1.html @@ -0,0 +1,253 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+

+
+

+

zhack - libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+

zhack [-c cachefile] [-d dir] + <subcommand> [arguments]

+
+
+

+

-c cachefile

+
+
+
Read the pool configuration from the cachefile, which is + /etc/zfs/zpool.cache by default.
+
+

-d dir

+
+
+
Search for pool members in the dir path. Can be specified + more than once.
+
+
+
+

+

feature stat pool

+
+
+
List feature flags.
+
+

feature enable [-d description] [-r] pool + guid

+
+
+
Add a new feature to pool that is uniquely identified by + guid, which is specified in the same form as a zfs(8) user + property.
+
+
The description is a short human readable explanation of the new + feature.
+
+
The -r switch indicates that pool can be safely opened in + read-only mode by a system that does not have the guid + feature.
+
+

feature ref [-d|-m] pool guid

+
+
+
Increment the reference count of the guid feature in + pool.
+
+
The -d switch decrements the reference count of the guid + feature in pool.
+
+
The -m switch indicates that the guid feature is now + required to read the pool MOS.
+
+
+
+

+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
# zhack feature enable -d 'Predict future disk failures.' \
+
+ tank com.example:clairvoyance
+
# zhack feature ref tank com.example:clairvoyance
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

zfs(8), zpool-features(5), ztest(1)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/ztest.1.html b/man/v2.0/1/ztest.1.html new file mode 100644 index 000000000..fd49702bb --- /dev/null +++ b/man/v2.0/1/ztest.1.html @@ -0,0 +1,350 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+

+
+

+

ztest - was written by the ZFS Developers as a ZFS unit + test.

+
+
+

+

ztest <options>

+
+
+

+

This manual page documents briefly the ztest command.

+

ztest was written by the ZFS Developers as a ZFS unit test. + The tool was developed in tandem with the ZFS functionality and was executed + nightly as one of the many regression tests against the daily build. As + features were added to ZFS, unit tests were also added to ztest. In + addition, a separate test development team wrote and executed more + functional and stress tests.

+

By default ztest runs for five minutes and uses block files + (stored in /tmp) to create pools rather than using physical disks. Block + files afford ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you have a small tmp + directory.

+

By default ztest is non-verbose. This is why entering the command above + will result in ztest quietly executing for 5 minutes. The -V option + can be used to increase the verbosity of the tool. Adding multiple -V options + is allowed, and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should notice many + ztest.* files lying around. Once the run completes you can safely remove + these files. Note that you shouldn't remove these files during a run. You + can re-use these files in your next ztest run by using the -E + option.

+
+
+

+

-?

+
+
+
Print a help summary.
+
+

-v vdevs (default: 5)

+
+
+
Number of vdevs.
+
+

-s size_of_each_vdev (default: 64M)

+
+
+
Size of each vdev.
+
+

-a alignment_shift (default: 9) (use 0 for + random)

+
+
+
Used alignment in test.
+
+

-m mirror_copies (default: 2)

+
+
+
Number of mirror copies.
+
+

-r raidz_disks (default: 4)

+
+
+
Number of raidz disks.
+
+

-R raidz_parity (default: 1)

+
+
+
Raidz parity.
+
+

-d datasets (default: 7)

+
+
+
Number of datasets.
+
+

-t threads (default: 23)

+
+
+
Number of threads.
+
+

-g gang_block_threshold (default: 32K)

+
+
+
Gang block threshold.
+
+

-i initialize_pool_i_times (default: + 1)

+
+
+
Number of pool initialisations.
+
+

-k kill_percentage (default: 70%)

+
+
+
Kill percentage.
+
+

-p pool_name (default: ztest)

+
+
+
Pool name.
+
+

-V(erbose)

+
+
+
Verbose (use multiple times for ever more blather).
+
+

-E(xisting)

+
+
+
Use existing pool (use existing pool instead of creating new one).
+
+

-T time (default: 300 sec)

+
+
+
Total test run time.
+
+

-z zil_failure_rate (default: fail every 2^5 + allocs)

+
+
+
Injected failure rate.
+
+

-G

+
+
+
Dump zfs_dbgmsg buffer before exiting.
+
+
+
+

+

To override /tmp as your location for block files, you can use the + -f option:

+
+
+
ztest -f /
+
+

To get an idea of what ztest is actually testing try this:

+
+
+
ztest -f / -VVV
+
+

Maybe you'd like to run ztest for longer? To do so simply use the + -T option and specify the run length in seconds like so:

+
+
+
ztest -f / -V -T 120 +

+
+
+
+
+

+
+
+
Use id instead of the SPL hostid to identify this host. Intended + for use with ztest, but this environment variable will affect any utility + which uses libzpool, including zpool(8). Since the kernel is + unaware of this setting results with utilities other than ztest are + undefined.
+
+
Limit the default stack size to stacksize bytes for the purpose of + detecting and debugging kernel stack overflows. This value defaults to + 32K which is double the default 16K Linux kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+

+

spl-module-parameters (5), zpool (1), zfs + (1), zdb (1),

+
+
+

+

This manual page was transferred to asciidoc by Michael + Gebetsroither <gebi@grml.org> from + http://opensolaris.org/os/community/zfs/ztest/

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/1/zvol_wait.1.html b/man/v2.0/1/zvol_wait.1.html new file mode 100644 index 000000000..5413620b0 --- /dev/null +++ b/man/v2.0/1/zvol_wait.1.html @@ -0,0 +1,192 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands Manual (smm)ZVOL_WAIT(1)
+
+
+

+

zvol_wait - Wait + for ZFS volume links in /dev/zvol to be + created.

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, ZFS will register each ZFS volume + (zvol) as a disk device with the system. As the disks are registered, + udev(7) will asynchronously create + symlinks under /dev/zvol using the zvol's name. zvol_wait will wait for all + those symlinks to be created before returning.

+
+
+

+

udev(7)

+
+
+ + + + + +
July 5, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/index.html b/man/v2.0/5/index.html new file mode 100644 index 000000000..41dc9bbfc --- /dev/null +++ b/man/v2.0/5/index.html @@ -0,0 +1,153 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/spl-module-parameters.5.html b/man/v2.0/5/spl-module-parameters.5.html new file mode 100644 index 000000000..ec66c3f8a --- /dev/null +++ b/man/v2.0/5/spl-module-parameters.5.html @@ -0,0 +1,365 @@ + + + + + + + spl-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

spl-module-parameters.5

+
+ + + + + +
SPL-MODULE-PARAMETERS(5)File Formats ManualSPL-MODULE-PARAMETERS(5)
+
+
+

+

spl-module-parameters - SPL module parameters

+
+
+

+

Description of the different parameters to the SPL module.

+

+
+

+

+

spl_kmem_cache_expire (uint)

+
Cache expiration is part of the default Illumos cache + behavior. The idea is that objects in magazines which have not been recently + accessed should be returned to the slabs periodically. This is known as cache + aging and, when enabled, objects will typically be returned after 15 seconds.

On the other hand Linux slabs are designed to never move objects + back to the slabs unless there is memory pressure. This is possible because + under Linux the cache will be notified when memory is low and objects can be + released.

+

By default only the Linux method is enabled. It has been shown to + improve responsiveness on low memory systems and not negatively impact the + performance of systems with more memory. This policy may be changed by + setting the spl_kmem_cache_expire bit mask as follows, both policies + may be enabled concurrently.

+

0x01 - Aging (Illumos), 0x02 - Low memory (Linux)

+

Default value: 0x02

+
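As a sketch of how such a module parameter can be inspected and set (the value 0x03 is only an example; the sysfs and modprobe.d mechanisms shown are the standard Linux ones, not specific to this page):
cat /sys/module/spl/parameters/spl_kmem_cache_expire
options spl spl_kmem_cache_expire=0x03   # applied at module load, e.g. in /etc/modprobe.d/spl.conf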
+

+

spl_kmem_cache_kmem_threads (uint)

+
The number of threads created for the spl_kmem_cache task + queue. This task queue is responsible for allocating new slabs for use by the + kmem caches. For the majority of systems and workloads only a small number of + threads are required. +

Default value: 4

+
+

+

spl_kmem_cache_reclaim (uint)

+
When this is set it prevents Linux from being able to + rapidly reclaim all the memory held by the kmem caches. This may be useful in + circumstances where it's preferable that Linux reclaim memory from some other + subsystem first. Setting this will increase the likelihood of out-of-memory + events on a memory-constrained system.

Default value: 0

+
+

+

spl_kmem_cache_obj_per_slab (uint)

+
The preferred number of objects per slab in the cache. In + general, a larger value will increase the cache's memory footprint while + decreasing the time required to perform an allocation. Conversely, a smaller + value will minimize the footprint and improve cache reclaim time, but + individual allocations may take longer.

Default value: 8

+
+

+

spl_kmem_cache_obj_per_slab_min (uint)

+
The minimum number of objects allowed per slab. Normally + slabs will contain spl_kmem_cache_obj_per_slab objects but for caches + that contain very large objects it's desirable to only have a few, or even + just one, object per slab. +

Default value: 1

+
+

+

spl_kmem_cache_max_size (uint)

+
The maximum size of a kmem cache slab in MiB. This + effectively limits the maximum cache object size to + spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab. Caches + may not be created with objects sized larger than this limit.

Default value: 32 (64-bit) or 4 (32-bit)

+
+

+

spl_kmem_cache_slab_limit (uint)

+
For small objects the Linux slab allocator should be used + to make the most efficient use of the memory. However, large objects are not + supported by the Linux slab and therefore the SPL implementation is preferred. + This value is used to determine the cutoff between a small and large object. +

Objects of spl_kmem_cache_slab_limit or smaller will be + allocated using the Linux slab allocator; larger objects use the SPL + allocator. A cutoff of 16K was determined to be optimal for architectures + using 4K pages.

+

Default value: 16,384

+
+

+

spl_kmem_alloc_warn (uint)

+
As a general rule kmem_alloc() allocations should be + small, preferably just a few pages, since they must be physically contiguous. + Therefore, a rate-limited warning will be printed to the console for any + kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped at + 32K to accommodate systems using large pages. This value was selected to be + small enough to ensure the largest allocations are quickly noticed and + fixed, but large enough to avoid logging any warnings when an allocation size + is larger than optimal but not a serious concern. Since this value is + tunable, developers are encouraged to set it lower when testing so any new + largish allocations are quickly caught. These warnings may be disabled by + setting the threshold to zero.

+

Default value: 32,768

+
+

+

spl_kmem_alloc_max (uint)

+
Large kmem_alloc() allocations will fail if they exceed + KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may + succeed but should still be avoided due to the expense of locating a + contiguous range of free pages. Therefore, a maximum kmem size with a + reasonable safety margin of 4x is set. Kmem_alloc() allocations larger than this maximum + will quickly fail. Vmem_alloc() allocations less than or equal to this value + will use kmalloc(), but shift to vmalloc() when exceeding this value.

Default value: KMALLOC_MAX_SIZE/4

+
+

+

spl_kmem_cache_magazine_size (uint)

+
Cache magazines are an optimization designed to minimize + the cost of allocating memory. They do this by keeping a per-cpu cache of + recently freed objects, which can then be reallocated without taking a lock. + This can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being immediately + released this may not be ideal for low memory machines. +

For this reason spl_kmem_cache_magazine_size can be used to + set a maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+

Default value: 0

+
+

+

spl_hostid (ulong)

+
The system hostid. When set, this can be used to uniquely + identify a system. By default this value is set to zero, which indicates the + hostid is disabled. It can be explicitly enabled by placing a unique non-zero + value in /etc/hostid.

Default value: 0

+
+

+

spl_hostid_path (charp)

+
The expected path to locate the system hostid when + specified. This value may be overridden for non-standard configurations. +

Default value: /etc/hostid

+
+

+

spl_panic_halt (uint)

+
Cause a kernel panic on assertion failures. When not + enabled, the thread is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+

Default value: 0

+
+

+

spl_taskq_kick (uint)

+
Kick stuck taskq to spawn threads. When writing a + non-zero value to it, it will scan all the taskqs. If any of them have a + pending task more than 5 seconds old, it will kick it to spawn more threads. + This can be used if you find a rare deadlock occurs because one or more taskqs + didn't spawn a thread when it should. +

Default value: 0

+
+

+

spl_taskq_thread_bind (int)

+
Bind taskq threads to specific CPUs. When enabled all + taskq threads will be distributed evenly over the available CPUs. By default, + this behavior is disabled to allow the Linux scheduler the maximum flexibility + to determine where a thread should run. +

Default value: 0

+
+

+

spl_taskq_thread_dynamic (int)

+
Allow dynamic taskqs. When enabled taskqs which set the + TASKQ_DYNAMIC flag will by default create only a single thread. New threads + will be created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will be + promptly destroyed. By default this behavior is enabled but it can be disabled + to aid performance analysis or troubleshooting. +

Default value: 1

+
+

+

spl_taskq_thread_priority (int)

+
Allow newly created taskq threads to set a non-default + scheduler priority. When enabled the priority specified when a taskq is + created will be applied to all threads created by that taskq. When disabled + all threads will use the default Linux kernel thread priority. By default, + this behavior is enabled. +

Default value: 1

+
+

+

spl_taskq_thread_sequential (int)

+
The number of items a taskq worker thread must handle + without interruption before requesting a new worker thread be spawned. This is + used to control how quickly taskqs ramp up the number of threads processing + the queue. Because Linux thread creation and destruction are relatively + inexpensive a small default value has been selected. This means that normally + threads will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for some + configurations. +

Default value: 4

+
+

+

spl_max_show_tasks (uint)

+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/{taskq,taskq-all}. Write 0 to turn off the limit. The proc file walks the lists with the lock held, so reading it could cause a lock-up if a list grows too large without the output being limited. "(truncated)" will be shown if the list is larger than the limit.

Default value: 512

+
+
+
+
+ + + + + +
August 24, 2020                OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/vdev_id.conf.5.html b/man/v2.0/5/vdev_id.conf.5.html new file mode 100644 index 000000000..98eb7c35f --- /dev/null +++ b/man/v2.0/5/vdev_id.conf.5.html @@ -0,0 +1,372 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)                File Formats Manual                VDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + Configuration file for vdev_id

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the + default behavior of vdev_id(8) + while it is mapping a disk device name to an alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
alias ⟨name⟩ ⟨devlink⟩
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined + alias takes precedence over a topology-derived name, but the two naming + methods can otherwise coexist. For example, one might name drives in a + JBOD with the sas_direct topology while naming an + internal L2ARC device with an alias. +

name is the name of the link to the device that will be created under /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
channel [pci_slot] ⟨port⟩ ⟨name⟩
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
enclosure_symlinks yes|no
Additionally create /dev/by-enclosure symlinks to the disk enclosure devices using the naming scheme from vdev_id.conf. enclosure_symlinks is only allowed for sas_direct mode.
+ +
enclosure_symlinks_prefix ⟨prefix⟩
Specify the prefix for the enclosure symlinks in the form /dev/by-enclosure/⟨prefix⟩-⟨channel⟩⟨num⟩.

Defaults to “enc”.

+
+
slot ⟨old⟩ ⟨new⟩ [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
multipath yes|no
+
Specifies whether vdev_id(8) + will handle only dm-multipath devices. If set to yes + then vdev_id(8) will examine the + first running component disk of a dm-multipath device as provided by the + driver command to determine the physical path.
+
topology sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
sas_direct and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
sas_switch
channels are uniquely identified by a SAS switch port number
+
+
+
phys_per_port ⟨num⟩
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS switch port. vdev_id(8) internally uses this value to determine which HBA or switch port a device is connected to. The default is 4.
+
slot bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
bay
read the slot number from the bay identifier.
+
+
phy
read the slot number from the phy identifier.
+
+
port
use the SAS port as the slot number.
+
+
id
use the scsi id as the slot number.
+
+
lun
use the scsi lun as the slot number.
+
+
ses
use the SCSI Enclosure Services (SES) enclosure device slot number, as reported by sg_ses(8). Intended for use only on systems where bay is unsupported, noting that port and id may be unstable across disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for + vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note that the channel keyword takes only two arguments in this example.

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path.

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled.

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+

In addition to the disk symlinks, this configuration will create:

+
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases.

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021                Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/zfs-events.5.html b/man/v2.0/5/zfs-events.5.html new file mode 100644 index 000000000..cbe2c29e4 --- /dev/null +++ b/man/v2.0/5/zfs-events.5.html @@ -0,0 +1,848 @@ + + + + + + + zfs-events.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-events.5

+
+ + + + + +
ZFS-EVENTS(5)                File Formats Manual                ZFS-EVENTS(5)
+
+
+

+

zfs-events - Events created by the ZFS filesystem.

+
+
+

+

Description of the different events generated by the ZFS + stack.

+

Most of these don't have any description. The events generated by + ZFS have never been publicly documented. What is here is intended as a + starting point to provide documentation for all possible events.

+

To view all events created since the loading of the ZFS infrastructure (i.e., "the module"), run

+

+
zpool events
+

to get a short list, and

+

+
zpool events -v
+

to get full details of the events and what information is available about them.
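Two further invocations that can be handy while watching for the subclasses listed below (see zpool(8) for the full option list):

    zpool events -f    # follow mode: print new events as they are posted
    zpool events -c    # clear the current event history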

+

This man page lists the different subclasses that are issued in + the case of an event. The full event name would be + ereport.fs.zfs.SUBCLASS, but we only list the last part here.

+

+
+

+

+

checksum

+
Issued when a checksum error has been detected.
+

+

io

+
Issued when there is an I/O error in a vdev in the + pool.
+

+

data

+
Issued when there have been data errors in the + pool.
+

+

deadman

+
Issued when an I/O is determined to be "hung", + this can be caused by lost completion events due to flaky hardware or drivers. + See the zfs_deadman_failmode module option description for additional + information regarding "hung" I/O detection and configuration.
+

+

delay

+
Issued when a completed I/O exceeds the maximum allowed + time specified by the zio_slow_io_ms module option. This can be an + indicator of problems with the underlying storage device. The number of delay + events is ratelimited by the zfs_slow_io_events_per_second module + parameter.
+

+

config.sync

+
Issued every time a vdev change has been made to the pool.
+

+

zpool

+
Issued when a pool cannot be imported.
+

+

zpool.destroy

+
Issued when a pool is destroyed.
+

+

zpool.export

+
Issued when a pool is exported.
+

+

zpool.import

+
Issued when a pool is imported.
+

+

zpool.reguid

+
Issued when a REGUID (the regeneration of a new unique identifier for the pool) has been detected.
+

+

vdev.unknown

+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been removed from the system/pool and is no longer available.
+

+

vdev.open_failed

+
Issued when a vdev could not be opened (because it didn't + exist for example).
+

+

vdev.corrupt_data

+
Issued when corrupt data have been detected on a + vdev.
+

+

vdev.no_replicas

+
Issued when there are no more replicas to sustain the + pool. This would lead to the pool being DEGRADED.
+

+

vdev.bad_guid_sum

+
Issued when a missing device in the pool has been detected.
+

+

vdev.too_small

+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
+

+

vdev.bad_label

+
Issued when the label is OK but invalid.
+

+

vdev.bad_ashift

+
Issued when the ashift alignment requirement has + increased.
+

+

vdev.remove

+
Issued when a vdev is detached from a mirror (or a spare is detached from a vdev where it has been used to replace a failed drive - this only works if the original drive has been re-added).
+

+

vdev.clear

+
Issued when clearing device errors in a pool. Such as + running zpool clear on a device in the pool.
+

+

vdev.check

+
Issued when a check to see if a given vdev could be + opened is started.
+

+

vdev.spare

+
Issued when a spare has kicked in to replace a failed device.
+

+

vdev.autoexpand

+
Issued when a vdev can be automatically expanded.
+

+

io_failure

+
Issued when there is an I/O failure in a vdev in the + pool.
+

+

probe_failure

+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (for example, the kernel has removed the device).
+

+

log_replay

+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+

+

resilver.start

+
Issued when a resilver is started.
+

+

resilver.finish

+
Issued when the running resilver has finished.
+

+

scrub.start

+
Issued when a scrub is started on a pool.
+

+

scrub.finish

+
Issued when a pool has finished scrubbing.
+

+

scrub.abort

+
Issued when a scrub is aborted on a pool.
+

+

scrub.resume

+
Issued when a scrub is resumed on a pool.
+

+

scrub.paused

+
Issued when a scrub is paused on a pool.
+

+

bootfs.vdev.attach

+
+

+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with + ZEVENT_.
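A minimal, illustrative ZEDLET sketch follows; the file name is hypothetical, ZEVENT_SUBCLASS and ZEVENT_POOL are examples of the uppercased payload variables described here, and not every event carries every variable:

    #!/bin/sh
    # Hypothetical /etc/zfs/zed.d/example-log.sh: log the subclass and the
    # pool name carried in the event payload, if present.
    echo "zed event ${ZEVENT_SUBCLASS:-unknown} pool ${ZEVENT_POOL:-n/a}" | logger -t zed-example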

+

+

pool

+
Pool name.
+

+

pool_failmode

+
Failmode - wait, continue or panic. + See zpool(8) (failmode property) for more information.
+

+

pool_guid

+
The GUID of the pool.
+

+

pool_context

+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+

+

vdev_guid

+
The GUID of the vdev in question (the vdev failing or + operated upon with zpool clear etc).
+

+

vdev_type

+
Type of vdev - disk, file, mirror + etc. See zpool(8) under Virtual Devices for more information on + possible values.
+

+

vdev_path

+
Full path of the vdev, including any -partX.
+

+

vdev_devid

+
ID of vdev (if any).
+

+

vdev_fru

+
Physical FRU location.
+

+

vdev_state

+
State of vdev (0=uninitialized, 1=closed, 2=offline, + 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
+

+

vdev_ashift

+
The ashift value of the vdev.
+

+

vdev_complete_ts

+
The time the last I/O completed for the specified + vdev.
+

+

vdev_delta_ts

+
The time since the last I/O completed for the specified + vdev.
+

+

vdev_spare_paths

+
List of spares, including full path and any + -partX.
+

+

vdev_spare_guids

+
GUID(s) of spares.
+

+

vdev_read_errors

+
The number of read errors that have been detected on the vdev.
+

+

vdev_write_errors

+
The number of write errors that have been detected on the vdev.
+

+

vdev_cksum_errors

+
The number of checksum errors that have been detected on the vdev.
+

+

parent_guid

+
GUID of the vdev parent.
+

+

parent_type

+
Type of parent. See vdev_type.
+

+

parent_path

+
Path of the vdev parent (if any).
+

+

parent_devid

+
ID of the vdev parent (if any).
+

+

zio_objset

+
The object set number for a given I/O.
+

+

zio_object

+
The object number for a given I/O.
+

+

zio_level

+
The indirect level for the block. Level 0 is the lowest + level and includes data blocks. Values > 0 indicate metadata blocks at the + appropriate level.
+

+

zio_blkid

+
The block ID for a given I/O.
+

+

zio_err

+
The errno for a failure when handling a given I/O. The errno is compatible with errno(3), with the value for EBADE (0x34) used to indicate a ZFS checksum error.
+

+

zio_offset

+
The offset in bytes of where to write the I/O for the + specified vdev.
+

+

zio_size

+
The size in bytes of the I/O.
+

+

zio_flags

+
The current flags describing how the I/O should be + handled. See the I/O FLAGS section for the full list of I/O + flags.
+

+

zio_stage

+
The current stage of the I/O in the pipeline. See the + I/O STAGES section for a full list of all the I/O stages.
+

+

zio_pipeline

+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+

+

zio_delay

+
The time elapsed (in nanoseconds) waiting for the block + layer to complete the I/O. Unlike zio_delta this does not include any + vdev queuing time and is therefore solely a measure of the block layer + performance.
+

+

zio_timestamp

+
The time when a given I/O was submitted.
+

+

zio_delta

+
The time required to service a given I/O.
+

+

prev_state

+
The previous state of the vdev.
+

+

cksum_expected

+
The expected checksum value for the block.
+

+

cksum_actual

+
The actual checksum value for an errant block.
+

+

cksum_algorithm

+
Checksum algorithm used. See zfs(8) for more + information on checksum algorithms available.
+

+

cksum_byteswap

+
Whether or not the data is byteswapped.
+

+

bad_ranges

+
[start, end) pairs of corruption offsets. Offsets are + always aligned on a 64-bit boundary, and can include some gaps of + non-corruption. (See bad_ranges_min_gap)
+

+

bad_ranges_min_gap

+
In order to bound the size of the bad_ranges + array, gaps of non-corruption less than or equal to bad_ranges_min_gap + bytes have been merged with adjacent corruption. Always at least 8 bytes, + since corruption is detected on a 64-bit word basis.
+

+

bad_range_sets

+
This array has one element per range in + bad_ranges. Each element contains the count of bits in that range which + were clear in the good data and set in the bad data.
+

+

bad_range_clears

+
This array has one element per range in + bad_ranges. Each element contains the count of bits for that range + which were set in the good data and clear in the bad data.
+

+

bad_set_bits

+
If this field exists, it is an array of: (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+

+

bad_cleared_bits

+
Like bad_set_bits, but contains: (good data & + ~(bad data)); that is, the bits set in the good data which are cleared in the + bad data.
+

+

bad_set_histogram

+
If this field exists, it is an array of counters. Each entry counts bits set in a particular bit of a big-endian uint64 type. The first entry counts bits set in the high-order bit of the first byte, the 9th byte, etc., and the last entry counts bits set in the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+

+

bad_cleared_histogram

+
If this field exists, it is an array of counters. Each entry counts clears of a particular bit of a big-endian uint64 type. The first entry counts clears of the high-order bit of the first byte, the 9th byte, etc., and the last entry counts clears of the low-order bit of the 8th byte, the 16th byte, etc. This information is useful for observing a stuck bit in a parallel data path, such as IDE or parallel SCSI.
+

+
+
+

+

The ZFS I/O pipeline is composed of various stages which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on an event to describe the life cycle of a given I/O.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Stage                         Bit Mask      Operations
----------------------------  ------------  ----------
ZIO_STAGE_OPEN                0x00000001    RWFCI
ZIO_STAGE_READ_BP_INIT        0x00000002    R----
ZIO_STAGE_WRITE_BP_INIT       0x00000004    -W---
ZIO_STAGE_FREE_BP_INIT        0x00000008    --F--
ZIO_STAGE_ISSUE_ASYNC         0x00000010    RWF--
ZIO_STAGE_WRITE_COMPRESS      0x00000020    -W---
ZIO_STAGE_ENCRYPT             0x00000040    -W---
ZIO_STAGE_CHECKSUM_GENERATE   0x00000080    -W---
ZIO_STAGE_NOP_WRITE           0x00000100    -W---
ZIO_STAGE_DDT_READ_START      0x00000200    R----
ZIO_STAGE_DDT_READ_DONE       0x00000400    R----
ZIO_STAGE_DDT_WRITE           0x00000800    -W---
ZIO_STAGE_DDT_FREE            0x00001000    --F--
ZIO_STAGE_GANG_ASSEMBLE       0x00002000    RWFC-
ZIO_STAGE_GANG_ISSUE          0x00004000    RWFC-
ZIO_STAGE_DVA_THROTTLE        0x00008000    -W---
ZIO_STAGE_DVA_ALLOCATE        0x00010000    -W---
ZIO_STAGE_DVA_FREE            0x00020000    --F--
ZIO_STAGE_DVA_CLAIM           0x00040000    ---C-
ZIO_STAGE_READY               0x00080000    RWFCI
ZIO_STAGE_VDEV_IO_START       0x00100000    RW--I
ZIO_STAGE_VDEV_IO_DONE        0x00200000    RW--I
ZIO_STAGE_VDEV_IO_ASSESS      0x00400000    RW--I
ZIO_STAGE_CHECKSUM_VERIFY     0x00800000    R----
ZIO_STAGE_DONE                0x01000000    RWFCI
+

+
+
+

+

Every I/O in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry.
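As a worked example (the value 0x180 is made up for illustration), a reported zio_flags of 0x180 decodes, per the table below, to ZIO_FLAG_CANFAIL (0x00000080) combined with ZIO_FLAG_SPECULATIVE (0x00000100). An individual bit can be checked from the shell like this:

    flags=0x180
    [ $(( flags & 0x00000080 )) -ne 0 ] && echo "ZIO_FLAG_CANFAIL is set"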

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Flag                      Bit Mask
------------------------  ----------
ZIO_FLAG_DONT_AGGREGATE   0x00000001
ZIO_FLAG_IO_REPAIR        0x00000002
ZIO_FLAG_SELF_HEAL        0x00000004
ZIO_FLAG_RESILVER         0x00000008
ZIO_FLAG_SCRUB            0x00000010
ZIO_FLAG_SCAN_THREAD      0x00000020
ZIO_FLAG_PHYSICAL         0x00000040
ZIO_FLAG_CANFAIL          0x00000080
ZIO_FLAG_SPECULATIVE      0x00000100
ZIO_FLAG_CONFIG_WRITER    0x00000200
ZIO_FLAG_DONT_RETRY       0x00000400
ZIO_FLAG_DONT_CACHE       0x00000800
ZIO_FLAG_NODATA           0x00001000
ZIO_FLAG_INDUCE_DAMAGE    0x00002000
ZIO_FLAG_IO_ALLOCATING    0x00004000
ZIO_FLAG_IO_RETRY         0x00008000
ZIO_FLAG_PROBE            0x00010000
ZIO_FLAG_TRYHARD          0x00020000
ZIO_FLAG_OPTIONAL         0x00040000
ZIO_FLAG_DONT_QUEUE       0x00080000
ZIO_FLAG_DONT_PROPAGATE   0x00100000
ZIO_FLAG_IO_BYPASS        0x00200000
ZIO_FLAG_IO_REWRITE       0x00400000
ZIO_FLAG_RAW_COMPRESS     0x00800000
ZIO_FLAG_RAW_ENCRYPT      0x01000000
ZIO_FLAG_GANG_CHILD       0x02000000
ZIO_FLAG_DDT_CHILD        0x04000000
ZIO_FLAG_GODFATHER        0x08000000
ZIO_FLAG_NOPWRITE         0x10000000
ZIO_FLAG_REEXECUTED       0x20000000
ZIO_FLAG_DELEGATED        0x40000000
ZIO_FLAG_FASTWRITE        0x80000000
+
+
+
+ + + + + +
August 24, 2020                OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/zfs-module-parameters.5.html b/man/v2.0/5/zfs-module-parameters.5.html new file mode 100644 index 000000000..d36a0a08e --- /dev/null +++ b/man/v2.0/5/zfs-module-parameters.5.html @@ -0,0 +1,2797 @@ + + + + + + + zfs-module-parameters.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-module-parameters.5

+
+ + + + + +
ZFS-MODULE-PARAMETERS(5)                File Formats Manual                ZFS-MODULE-PARAMETERS(5)
+
+
+

+

zfs-module-parameters - ZFS module parameters

+
+
+

+

Description of the different parameters to the ZFS module.
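On Linux these parameters are typically read, and (where the parameter is runtime-tunable) written, through sysfs, and made persistent through modprobe configuration. A sketch, using zfs_arc_max purely as an example and an arbitrary 8 GiB value:

    # Inspect the current value
    cat /sys/module/zfs/parameters/zfs_arc_max
    # Change it at runtime (not every parameter can be changed on a loaded module)
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    # Persist across reboots
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf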

+

+
+

+

+

dbuf_cache_max_bytes (ulong)

+
Maximum size in bytes of the dbuf cache. The target size is the smaller of this value and 1/2^dbuf_cache_shift (1/32) of the target ARC size. The behavior of the dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.

Default value: ULONG_MAX.
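For observation, the kstat named above can be read directly; the field names matched here (such as cache_target_bytes) are taken from a recent Linux build and may differ between versions:

    grep -E 'cache_(size|target)' /proc/spl/kstat/zfs/dbufstats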

+
+

+

dbuf_metadata_cache_max_bytes (ulong)

+
Maximum size in bytes of the metadata dbuf cache. The target size is the smaller of this value and 1/2^dbuf_metadata_cache_shift (1/64) of the target ARC size. The behavior of the metadata dbuf cache and its associated settings can be observed via the /proc/spl/kstat/zfs/dbufstats kstat.

Default value: ULONG_MAX.

+
+

+

dbuf_cache_hiwater_pct (uint)

+
The percentage over dbuf_cache_max_bytes when + dbufs must be evicted directly. +

Default value: 10%.

+
+

+

dbuf_cache_lowater_pct (uint)

+
The percentage below dbuf_cache_max_bytes when the + evict thread stops evicting dbufs. +

Default value: 10%.

+
+

+

dbuf_cache_shift (int)

+
Set the size of the dbuf cache, + dbuf_cache_max_bytes, to a log2 fraction of the target ARC size. +

Default value: 5.

+
+

+

dbuf_metadata_cache_shift (int)

+
Set the size of the dbuf metadata cache, + dbuf_metadata_cache_max_bytes, to a log2 fraction of the target ARC + size. +

Default value: 6.

+
+

+

dmu_object_alloc_chunk_shift (int)

+
dnode slots allocated in a single operation as a power of + 2. The default value minimizes lock contention for the bulk operation + performed. +

Default value: 7 (128).

+
+

+

dmu_prefetch_max (int)

+
Limit the amount we can prefetch with one call to this + amount (in bytes). This helps to limit the amount of memory that can be used + by prefetching. +

Default value: 134,217,728 (128MB).

+
+

+

ignore_hole_birth (int)

+
This is an alias for + send_holes_without_birth_time.
+

+

l2arc_feed_again (int)

+
Turbo L2ARC warm-up. When the L2ARC is cold the fill + interval will be set as fast as possible. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_feed_min_ms (ulong)

+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only applicable in related situations. +

Default value: 200.

+
+

+

l2arc_feed_secs (ulong)

+
Seconds between L2ARC writes.

Default value: 1.

+
+

+

l2arc_headroom (ulong)

+
How far through the ARC lists to search for L2ARC + cacheable content, expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by setting + this parameter to 0 allowing the full length of ARC lists to be + searched for cacheable content. +

Default value: 2.

+
+

+

l2arc_headroom_boost (ulong)

+
Scales l2arc_headroom by this percentage when + L2ARC contents are being successfully compressed before writing. A value of + 100 disables this feature. +

Default value: 200%.

+
+

+

l2arc_mfuonly (int)

+
Controls whether only MFU metadata and data are cached + from ARC into L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. The default is 0, meaning both MRU and MFU data and + metadata are cached. When turning off (0) this feature some MRU buffers + will still be present in ARC and eventually cached on L2ARC. +

Use 0 for no (default) and 1 for yes.

+
+

+

l2arc_meta_percent (int)

+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers are not evicted on memory pressure, too large an amount of headers on a system with an irrationally large L2ARC can render it slow or unusable. This parameter limits L2ARC writes and rebuilds to achieve the target.

Default value: 33%.

+
+

+

l2arc_trim_ahead (ulong)

+
Trims ahead of the current write size (l2arc_write_max) on L2ARC devices by this percentage of write size if we have filled the device. If set to 100 we TRIM twice the space required to accommodate upcoming writes. A minimum of 64MB will be trimmed. It also enables TRIM of the whole L2ARC device upon creation or addition to an existing pool or if the header of the device is invalid upon importing a pool or onlining a cache device. A value of 0 disables TRIM on L2ARC altogether and is the default as it can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands.

Default value: 0%.

+
+

+

l2arc_noprefetch (int)

+
Do not write buffers to L2ARC if they were prefetched but + not used by applications. +

Use 1 for yes (default) and 0 to disable.

+
+

+

l2arc_norw (int)

+
No reads during writes. +

Use 1 for yes and 0 for no (default).

+
+

+

l2arc_write_boost (ulong)

+
Cold L2ARC devices will have l2arc_write_max + increased by this amount while they remain cold. +

Default value: 8,388,608.

+
+

+

l2arc_write_max (ulong)

+
Max write bytes per interval. +

Default value: 8,388,608.

+
+

+

l2arc_rebuild_enabled (int)

+
Rebuild the L2ARC when importing a pool (persistent + L2ARC). This can be disabled if there are problems importing a pool or + attaching an L2ARC device (e.g. the L2ARC device is slow in reading stored log + metadata, or the metadata has become somehow fragmented/unusable). +

Use 1 for yes (default) and 0 for no.

+
+

+

l2arc_rebuild_blocks_min_l2size (ulong)

+
Min size (in bytes) of an L2ARC device required in order + to write log blocks in it. The log blocks are used upon importing the pool to + rebuild the L2ARC (persistent L2ARC). Rationale: for L2ARC devices less than + 1GB, the amount of data l2arc_evict() evicts is significant compared to the + amount of restored L2ARC data. In this case do not write log blocks in L2ARC + in order not to waste space. +

Default value: 1,073,741,824 (1GB).

+
+

+

metaslab_aliquot (ulong)

+
Metaslab granularity, in bytes. This is roughly similar + to what would be referred to as the "stripe size" in traditional + RAID arrays. In normal operation, ZFS will try to write this amount of data to + a top-level vdev before moving on to the next one. +

Default value: 524,288.

+
+

+

metaslab_bias_enabled (int)

+
Enable metaslab group biasing based on its vdev's over- + or under-utilization relative to the pool. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_force_ganging (ulong)

+
Make some blocks above a certain size be gang blocks. + This option is used by the test suite to facilitate testing. +

Default value: 16,777,217.

+
+

+

zfs_history_output_max (int)

+
When attempting to log the output nvlist of an ioctl in the on-disk history, the output will not be stored if it is larger than this size (in bytes). This must be less than DMU_MAX_ACCESS (64MB). This applies primarily to zfs_ioc_channel_program().

Default value: 1MB.

+
+

+

zfs_keep_log_spacemaps_at_export (int)

+
Prevent log spacemaps from being destroyed during pool + exports and destroys. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_segment_weight_enabled (int)

+
Enable/disable segment-based metaslab selection. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_metaslab_switch_threshold (int)

+
When using segment-based metaslab selection, continue + allocating from the active metaslab until zfs_metaslab_switch_threshold + worth of buckets have been exhausted. +

Default value: 2.

+
+

+

metaslab_debug_load (int)

+
Load all metaslabs during pool import. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_debug_unload (int)

+
Prevent metaslabs from being unloaded. +

Use 1 for yes and 0 for no (default).

+
+

+

metaslab_fragmentation_factor_enabled (int)

+
Enable use of the fragmentation metric in computing + metaslab weights. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_df_max_search (int)

+
Maximum distance to search forward from the last offset. + Without this limit, fragmented pools can see >100,000 iterations and + metaslab_block_picker() becomes the performance limiting factor on + high-performance storage. +

With the default setting of 16MB, we typically see less than 500 + iterations, even with very fragmented, ashift=9 pools. The maximum number of + iterations possible is: metaslab_df_max_search / (2 * + (1<<ashift)). With the default setting of 16MB this is 16*1024 + (with ashift=9) or 2048 (with ashift=12).

+

Default value: 16,777,216 (16MB)

+
+

+

metaslab_df_use_largest_segment (int)

+
If we are not searching forward (due to + metaslab_df_max_search, metaslab_df_free_pct, or metaslab_df_alloc_threshold), + this tunable controls what segment is used. If it is set, we will use the + largest free segment. If it is not set, we will use a segment of exactly the + requested size (or larger). +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_metaslab_max_size_cache_sec (ulong)

+
When we unload a metaslab, we cache the size of the + largest free chunk. We use that cached size to determine whether or not to + load a metaslab for a given allocation. As more frees accumulate in that + metaslab while it's unloaded, the cached max size becomes less and less + accurate. After a number of seconds controlled by this tunable, we stop + considering the cached max size and start considering only the histogram + instead. +

Default value: 3600 seconds (one hour)

+
+

+

zfs_metaslab_mem_limit (int)

+
When we are loading a new metaslab, we check the amount + of memory being used to store metaslab range trees. If it is over a threshold, + we attempt to unload the least recently used metaslab to prevent the system + from clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold. +

Default value: 25 percent

+
+

+

zfs_vdev_default_ms_count (int)

+
When a vdev is added, target this number of metaslabs per top-level vdev.

Default value: 200.

+
+

+

zfs_vdev_default_ms_shift (int)

+
Default limit for metaslab size. +

Default value: 29 [meaning (1 << 29) = 512MB].

+
+

+

zfs_vdev_max_auto_ashift (ulong)

+
Maximum ashift used when optimizing for logical -> + physical sector size on new top-level vdevs. +

Default value: ASHIFT_MAX (16).

+
+

+

zfs_vdev_min_auto_ashift (ulong)

+
Minimum ashift used when creating new top-level vdevs. +

Default value: ASHIFT_MIN (9).

+
+

+

zfs_vdev_min_ms_count (int)

+
Minimum number of metaslabs to create in a top-level + vdev. +

Default value: 16.

+
+

+

vdev_validate_skip (int)

+
Skip label validation steps during pool import. Changing + is not recommended unless you know what you are doing and are recovering a + damaged label. +

Default value: 0.

+
+

+

zfs_vdev_ms_count_limit (int)

+
Practical upper limit of total metaslabs per top-level + vdev. +

Default value: 131,072.

+
+

+

metaslab_preload_enabled (int)

+
Enable metaslab group preloading. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_lba_weighting_enabled (int)

+
Give more weight to metaslabs with lower LBAs, assuming + they have greater bandwidth as is typically the case on a modern constant + angular velocity disk drive. +

Use 1 for yes (default) and 0 for no.

+
+

+

metaslab_unload_delay (int)

+
After a metaslab is used, we keep it loaded for this many + txgs, to attempt to reduce unnecessary reloading. Note that both this many + txgs and metaslab_unload_delay_ms milliseconds must pass before + unloading will occur. +

Default value: 32.

+
+

+

metaslab_unload_delay_ms (int)

+
After a metaslab is used, we keep it loaded for this many + milliseconds, to attempt to reduce unnecessary reloading. Note that both this + many milliseconds and metaslab_unload_delay txgs must pass before + unloading will occur. +

Default value: 600000 (ten minutes).

+
+

+

send_holes_without_birth_time (int)

+
When set, the hole_birth optimization will not be used, + and all holes will always be sent on zfs send. This is useful if you suspect + your datasets are affected by a bug in hole_birth. +

Use 1 for on (default) and 0 for off.

+
+

+

spa_config_path (charp)

+
SPA config file +

Default value: /etc/zfs/zpool.cache.

+
+

+

spa_asize_inflation (int)

+
Multiplication factor used to estimate actual disk + consumption from the size of data being written. The default value is a worst + case estimate, but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits. +

Default value: 24.

+
+

+

spa_load_print_vdev_tree (int)

+
Whether to print the vdev tree in the debugging message + buffer during pool import. Use 0 to disable and 1 to enable. +

Default value: 0.

+
+

+

spa_load_verify_data (int)

+
Whether to traverse data blocks during an "extreme + rewind" (-X) import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal skips non-metadata blocks. It can be toggled once the import has + started to stop or start the traversal of non-metadata blocks.

+

Default value: 1.

+
+

+

spa_load_verify_metadata (int)

+
Whether to traverse blocks during an "extreme + rewind" (-X) pool import. Use 0 to disable and 1 to enable. +

An extreme rewind import normally performs a full traversal of all + blocks in the pool for verification. If this parameter is set to 0, the + traversal is not performed. It can be toggled once the import has started to + stop or start the traversal.

+

Default value: 1.

+
+

+

spa_load_verify_shift (int)

+
Sets the maximum number of bytes to consume during pool + import to the log2 fraction of the target ARC size. +

Default value: 4.

+
+

+

spa_slop_shift (int)

+
Normally, we don't allow the last 3.2% + (1/(2^spa_slop_shift)) of space in the pool to be consumed. This ensures that + we don't run the pool completely out of space, due to unaccounted changes + (e.g. to the MOS). It also limits the worst-case time to allocate space. If we + have less than this amount of free space, most ZPL operations (e.g. write, + create) will return ENOSPC. +

Default value: 5.

+
+

+

vdev_removal_max_span (int)

+
During top-level vdev removal, chunks of data are copied + from the vdev which may include free space in order to trade bandwidth for + IOPS. This parameter determines the maximum span of free space (in bytes) + which will be included as "unnecessary" data in a chunk of copied + data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept when doing + regular reads (but there's no reason it has to be the same).

+

Default value: 32,768.

+
+

+

vdev_file_logical_ashift (ulong)

+
Logical ashift for file-based devices. +

Default value: 9.

+
+

+

vdev_file_physical_ashift (ulong)

+
Physical ashift for file-based devices. +

Default value: 9.

+
+

+

zap_iterate_prefetch (int)

+
If this is set, when we start iterating over a ZAP + object, zfs will prefetch the entire object (all leaf blocks). However, this + is limited by dmu_prefetch_max. +

Use 1 for on (default) and 0 for off.

+
+

+

zfetch_array_rd_sz (ulong)

+
If prefetching is enabled, disable prefetching for reads + larger than this size. +

Default value: 1,048,576.

+
+

+

zfetch_max_distance (uint)

+
Max bytes to prefetch per stream. +

Default value: 8,388,608 (8MB).

+
+

+

zfetch_max_idistance (uint)

+
Max bytes to prefetch indirects for per stream. +

Default value: 67,108,864 (64MB).

+
+

+

zfetch_max_streams (uint)

+
Max number of streams per zfetch (prefetch streams per + file). +

Default value: 8.

+
+

+

zfetch_min_sec_reap (uint)

+
Min time before an active prefetch stream can be + reclaimed +

Default value: 2.

+
+

+

zfs_abd_scatter_enabled (int)

+
Enables the use of scatter/gather lists for ARC data buffers. When disabled, all allocations are forced to be linear in kernel memory; disabling can improve performance in some code paths at the expense of fragmented kernel memory.

Default value: 1.

+
+

+

zfs_abd_scatter_max_order (uint)

+
Maximum number of consecutive memory pages allocated in a + single block for scatter/gather lists. Default value is specified by the + kernel itself. +

Default value: 10 at the time of this writing.

+
+

+

zfs_abd_scatter_min_size (uint)

+
This is the minimum allocation size that will use scatter + (page-based) ABD's. Smaller allocations will use linear ABD's. +

Default value: 1536 (512B and 1KB allocations will be + linear).

+
+

+

zfs_arc_dnode_limit (ulong)

+
When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes, try to unpin some of it in response to demand for non-metadata. This value acts as a ceiling to the amount of dnode metadata, and defaults to 0, which indicates that a percentage of the ARC meta buffers, based on zfs_arc_dnode_limit_percent, may be used for dnodes.

See also zfs_arc_meta_prune which serves a similar purpose + but is used when the amount of metadata in the ARC exceeds + zfs_arc_meta_limit rather than in response to overall demand for + non-metadata.

+

+

Default value: 0.

+
+

+

zfs_arc_dnode_limit_percent (ulong)

+
Percentage that can be consumed by dnodes of ARC meta + buffers. +

See also zfs_arc_dnode_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

Default value: 10%.

+
+

+

zfs_arc_dnode_reduce_percent (ulong)

+
Percentage of ARC dnodes to try to scan in response to + demand for non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit. +

+

Default value: 10% of the number of dnodes in the ARC.

+
+

+

zfs_arc_average_blocksize (int)

+
The ARC's buffer hash table is sized based on the + assumption of an average block size of zfs_arc_average_blocksize + (default 8K). This works out to roughly 1MB of hash table per 1GB of physical + memory with 8-byte pointers. For configurations with a known larger average + block size this value can be increased to reduce the memory footprint. +

+

Default value: 8192.

+
+

+

zfs_arc_eviction_pct (int)

+
When arc_is_overflowing(), + arc_get_data_impl() waits for this percent of the requested amount of + data to be evicted. For example, by default for every 2KB that's evicted, 1KB + of it may be "reused" by a new allocation. Since this is above 100%, + it ensures that progress is made towards getting arc_size under + arc_c. Since this is finite, it ensures that allocations can still + happen, even during the potentially long time that arc_size is more + than arc_c. +

Default value: 200.

+
+

+

zfs_arc_evict_batch_limit (int)

+
Number of ARC headers to evict per sub-list before proceeding to another sub-list. This batch-style operation prevents entire sub-lists from being evicted at once but comes at a cost of additional unlocking and locking.

Default value: 10.

+
+

+

zfs_arc_grow_retry (int)

+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The arc_grow_retry value (default 5) is + the number of seconds the ARC will wait before trying to resume growth after a + memory pressure event. +

Default value: 0.

+
+

+

zfs_arc_lotsfree_percent (int)

+
Throttle I/O when free system memory drops below this + percentage of total system memory. Setting this value to 0 will disable the + throttle. +

Default value: 10%.

+
+

+

zfs_arc_max (ulong)

+
Max size of ARC in bytes. If set to 0 then the max size of ARC is determined by the amount of system memory installed. For Linux, 1/2 of system memory will be used as the limit. For FreeBSD, the larger of (all system memory minus 1GB) and 5/8 of system memory will be used as the limit. This value must be at least 67108864 (64 megabytes).

This value can be changed dynamically with some caveats. It cannot + be set back to 0 while running and reducing it below the current ARC size + will not cause the ARC to shrink without memory pressure to induce + shrinking.

+

Default value: 0.

+
+

+

zfs_arc_meta_adjust_restarts (ulong)

+
The number of restart passes to make while scanning the ARC attempting to free buffers in order to stay below the zfs_arc_meta_limit. This value should not need to be tuned but is available to facilitate performance analysis.

Default value: 4096.

+
+

+

zfs_arc_meta_limit (ulong)

+
The maximum allowed size in bytes that meta data buffers are allowed to consume in the ARC. When this limit is reached meta data buffers will be reclaimed even if the overall arc_c_max has not been reached. This value defaults to 0, which indicates that a percentage of the ARC, based on zfs_arc_meta_limit_percent, may be used for meta data.

This value may be changed dynamically, except that it cannot be set back to 0 for a specific percent of the ARC; it must be set to an explicit value.

+

Default value: 0.

+
+

+

zfs_arc_meta_limit_percent (ulong)

+
Percentage of ARC buffers that can be used for meta data. +

See also zfs_arc_meta_limit which serves a similar purpose + but has a higher priority if set to nonzero value.

+

+

Default value: 75%.

+
+

+

zfs_arc_meta_min (ulong)

+
The minimum allowed size in bytes that meta data buffers may consume in the ARC. This value defaults to 0, which disables a floor on the amount of the ARC devoted to meta data.

Default value: 0.

+
+

+

zfs_arc_meta_prune (int)

+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.

Default value: 10,000.

+
+

+

zfs_arc_meta_strategy (int)

+
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy). A value of 0 (META_ONLY) will evict only the ARC meta data buffers. A value of 1 (BALANCED) indicates that additional data buffers may be evicted if that is required in order to evict the required number of meta data buffers.

Default value: 1.

+
+

+

zfs_arc_min (ulong)

+
Min size of ARC in bytes. If set to 0 then arc_c_min will + default to consuming the larger of 32M or 1/32 of total system memory. +

Default value: 0.

+
+

+

zfs_arc_min_prefetch_ms (int)

+
Minimum time prefetched blocks are locked in the ARC, + specified in ms. A value of 0 will default to 1000 ms. +

Default value: 0.

+
+

+

zfs_arc_min_prescient_prefetch_ms (int)

+
Minimum time "prescient prefetched" blocks are + locked in the ARC, specified in ms. These blocks are meant to be prefetched + fairly aggressively ahead of the code that may use them. A value of 0 + will default to 6000 ms. +

Default value: 0.

+
+

+

zfs_max_missing_tvds (int)

+
Number of missing top-level vdevs which will be allowed + during pool import (only in read-only mode). +

Default value: 0

+
+

+

zfs_max_nvlist_src_size (ulong)

+
Maximum size in bytes allowed to be passed as + zc_nvlist_src_size for ioctls on /dev/zfs. This prevents a user from causing + the kernel to allocate an excessive amount of memory. When the limit is + exceeded, the ioctl fails with EINVAL and a description of the error is sent + to the zfs-dbgmsg log. This parameter should not need to be touched under + normal circumstances. On FreeBSD, the default is based on the system limit on + user wired memory. On Linux, the default is 128MB. +

Default value: 0 (kernel decides)

+
+

+

zfs_multilist_num_sublists (int)

+
To allow more fine-grained locking, each ARC state contains a series of lists for both data and meta data objects. Locking is performed at the level of these "sub-lists". This parameter controls the number of sub-lists per ARC state, and also applies to other uses of the multilist data structure.

Default value: 4 or the number of online CPUs, whichever is + greater

+
+

+

zfs_arc_overflow_shift (int)

+
The ARC size is considered to be overflowing if it + exceeds the current ARC target size (arc_c) by a threshold determined by this + parameter. The threshold is calculated as a fraction of arc_c using the + formula "arc_c >> zfs_arc_overflow_shift". +

The default value of 8 causes the ARC to be considered to be + overflowing if it exceeds the target size by 1/256th (0.3%) of the target + size.

+

When the ARC is overflowing, new buffer allocations are stalled + until the reclaim thread catches up and the overflow condition no longer + exists.

+

Default value: 8.

+
+

+

+

zfs_arc_p_min_shift (int)

+
If set to a non zero value, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating both the minimum and maximum arc_p.

Default value: 0.

+
+

+

zfs_arc_p_dampener_disable (int)

+
Disable arc_p adapt dampener +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_arc_shrink_shift (int)

+
If set to a non zero value, this will update + arc_shrink_shift (default 7) with the new value. +

Default value: 0.

+
+

+

zfs_arc_pc_percent (uint)

+
Percent of pagecache to reclaim arc to +

This tunable allows ZFS arc to play more nicely with the kernel's + LRU pagecache. It can guarantee that the ARC size won't collapse under + scanning pressure on the pagecache, yet still allows arc to be reclaimed + down to zfs_arc_min if necessary. This value is specified as percent of + pagecache size (as measured by NR_FILE_PAGES) where that percent may exceed + 100. This only operates during memory pressure/reclaim.

+

Default value: 0% (disabled).

+
+

+

zfs_arc_shrinker_limit (int)

+
This is a limit on how many pages the ARC shrinker makes + available for eviction in response to one page allocation attempt. Note that + in practice, the kernel's shrinker can ask us to evict up to about 4x this for + one allocation attempt. +

The default limit of 10,000 (in practice, 160MB per allocation + attempt with 4K pages) limits the amount of time spent attempting to reclaim + ARC memory to less than 100ms per allocation attempt, even with a small + average compressed block size of ~8KB.

+

The parameter can be set to 0 (zero) to disable the limit.

+

This parameter only applies on Linux.

+

Default value: 10,000.

+
+

+

zfs_arc_sys_free (ulong)

+
The target number of bytes the ARC should leave as free + memory on the system. Defaults to the larger of 1/64 of physical memory or + 512K. Setting this option to a non-zero value will override the default. +

Default value: 0.

+
+

+

zfs_autoimport_disable (int)

+
Disable pool import at module load by ignoring the cache + file (typically /etc/zfs/zpool.cache). +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_checksum_events_per_second (uint)

+
Rate limit checksum events to this many per second. Note + that this should not be set below the zed thresholds (currently 10 checksums + over 10 sec) or else zed may not trigger any action. +

Default value: 20

+
+

+

zfs_commit_timeout_pct (int)

+
This controls the amount of time that a ZIL block (lwb) + will remain "open" when it isn't "full", and it has a + thread waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly impacting + the latency of each individual transaction record (itx). +

Default value: 5%.

+
+

+

zfs_condense_indirect_commit_entry_delay_ms (int)

+
Vdev indirection layer (used for device removal) sleeps + for this many milliseconds during mapping generation. Intended for use with + the test suite to throttle vdev removal speed. +

Default value: 0 (no throttle).

+
+

+

zfs_condense_indirect_obsolete_pct (int)

+
Minimum percent of obsolete bytes in vdev mapping + required to attempt to condense (see + zfs_condense_indirect_vdevs_enable). Intended for use with the test + suite to facilitate triggering condensing as needed. +

Default value: 25%.

+
+

+

zfs_condense_indirect_vdevs_enable (int)

+
Enable condensing indirect vdev mappings. When set to a + non-zero value, attempt to condense indirect vdev mappings if the mapping uses + more than zfs_condense_min_mapping_bytes bytes of memory and if the + obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The condensing process + is an attempt to save memory by removing obsolete mappings. +

Default value: 1.

+
+

+

zfs_condense_max_obsolete_bytes (ulong)

+
Only attempt to condense indirect vdev mappings if the on-disk size of the obsolete space map object is greater than this number of bytes (see zfs_condense_indirect_vdevs_enable).

Default value: 1,073,741,824.

+
+

+

zfs_condense_min_mapping_bytes (ulong)

+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable). +

Default value: 131,072.

+
+

+

zfs_dbgmsg_enable (int)

+
Internally ZFS keeps a small log to facilitate debugging. + By default the log is disabled, to enable it set this option to 1. The + contents of the log can be accessed by reading the /proc/spl/kstat/zfs/dbgmsg + file. Writing 0 to this proc file clears the log. +

Default value: 0.
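For example, assuming the usual Linux sysfs location for ZFS module parameters and the proc file named above:

    echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the log
    cat /proc/spl/kstat/zfs/dbgmsg                          # read it
    echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear it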

+
+

+

zfs_dbgmsg_maxsize (int)

+
The maximum size in bytes of the internal ZFS debug log. +

Default value: 4M.

+
+

+

zfs_dbuf_state_index (int)

+
This feature is currently unused. It is normally used for + controlling what reporting is available under /proc/spl/kstat/zfs. +

Default value: 0.

+
+

+

zfs_deadman_enabled (int)

+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms milliseconds, or when an individual I/O takes + longer than zfs_deadman_ziotime_ms milliseconds, then the operation is + considered to be "hung". If zfs_deadman_enabled is set then + the deadman behavior is invoked as described by the + zfs_deadman_failmode module option. By default the deadman is enabled + and configured to wait which results in "hung" I/Os only + being logged. The deadman is automatically disabled when a pool gets + suspended. +

Default value: 1.

+
+

+

zfs_deadman_failmode (charp)

+
Controls the failure behavior when the deadman detects a + "hung" I/O. Valid values are wait, continue, and + panic. +

wait - Wait for a "hung" I/O to complete. For + each "hung" I/O a "deadman" event will be posted + describing that I/O.

+

continue - Attempt to recover from a "hung" I/O + by re-dispatching it to the I/O pipeline if possible.

+

panic - Panic the system. This can be used to facilitate an + automatic fail-over to a properly configured fail-over partner.

+

Default value: wait.
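Being a string (charp) parameter, it can be switched at runtime, for example (assuming the usual Linux sysfs location for ZFS module parameters):

    echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode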

+
+

+

zfs_deadman_checktime_ms (int)

+
Check time in milliseconds. This defines the frequency at + which we check for hung I/O and potentially invoke the + zfs_deadman_failmode behavior. +

Default value: 60,000.

+
+

+

zfs_deadman_synctime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and also the interval after which a pool sync operation is + considered to be "hung". Once this limit is exceeded the deadman + will be invoked every zfs_deadman_checktime_ms milliseconds until the + pool sync completes. +

Default value: 600,000.

+
+

+

zfs_deadman_ziotime_ms (ulong)

+
Interval in milliseconds after which the deadman is + triggered and an individual I/O operation is considered to be + "hung". As long as the I/O remains "hung" the deadman will + be invoked every zfs_deadman_checktime_ms milliseconds until the I/O + completes. +

Default value: 300,000.

+
+

+

zfs_dedup_prefetch (int)

+
Enable prefetching of dedup-ed blocks.

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_delay_min_dirty_percent (int)

+
Start to delay each transaction once there is this amount + of dirty data, expressed as a percentage of zfs_dirty_data_max. This + value should be >= zfs_vdev_async_write_active_max_dirty_percent. See the + section "ZFS TRANSACTION DELAY". +

Default value: 60%.

+
+

+

zfs_delay_scale (int)

+
This controls how quickly the transaction delay + approaches infinity. Larger values cause longer delays for a given amount of + dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will smoothly + handle between 10x and 1/10th this number.

+

See the section "ZFS TRANSACTION DELAY".

+

Note: zfs_delay_scale * zfs_dirty_data_max must be + < 2^64.

+

Default value: 500,000.
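As a worked example of the rule of thumb above: a pool expected to sustain roughly 2,000 operations per second would suggest 1,000,000,000 / 2,000 = 500,000, which is exactly the default, and would then handle roughly 200 to 20,000 operations per second smoothly.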

+
+

+

zfs_disable_ivset_guid_check (int)

+
Disables requirement for IVset guids to be present and + match when doing a raw receive of encrypted datasets. Intended for users whose + pools were created with OpenZFS pre-release versions and now have + compatibility issues. +

Default value: 0.

+
+

+

zfs_key_max_salt_uses (ulong)

+
Maximum number of uses of a single salt value before + generating a new one for encrypted datasets. The default value is also the + maximum that will be accepted. +

Default value: 400,000,000.

+
+

+

zfs_object_mutex_size (uint)

+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist yet, + kernel mutexes are not created per-object and instead a hashtable is used + where collisions will result in objects waiting when there is not actually + contention on the same object.

+

Default value: 64.

+
+

+

zfs_slow_io_events_per_second (int)

+
Rate limit delay and deadman zevents (which report slow + I/Os) to this many per second. +

Default value: 20

+
+

+

zfs_unflushed_max_mem_amt (ulong)

+
Upper-bound limit for unflushed metadata changes to be + held by the log spacemap in memory (in bytes). +

Default value: 1,073,741,824 (1GB).

+
+

+

zfs_unflushed_max_mem_ppm (ulong)

+
Percentage of the overall system memory that ZFS allows + to be used for unflushed metadata changes by the log spacemap. (value is + calculated over 1000000 for finer granularity). +

Default value: 1000 (which is divided by 1000000, resulting in a limit of 0.1% of memory)

+
+

+

zfs_unflushed_log_block_max (ulong)

+
Describes the maximum number of log spacemap blocks + allowed for each pool. The default value of 262144 means that the space in all + the log spacemaps can add up to no more than 262144 blocks (which means 32GB + of logical space before compression and ditto blocks, assuming that blocksize + is 128k). +

This tunable is important because it involves a trade-off between import time after an unclean export and the frequency of flushing metaslabs. The higher this number is, the more log blocks we allow when the pool is active, which means that we flush metaslabs less often and thus decrease the number of I/Os for spacemap updates per TXG. At the same time though, that means that in the event of an unclean export, there will be more log spacemap blocks for us to read, inducing overhead in the import time of the pool. The lower the number, the more flushing occurs, destroying log blocks more quickly as they become obsolete, which leaves fewer blocks to be read during import after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+

Default value: 262144 (256K).

+
+

+

zfs_unflushed_log_block_min (ulong)

+
If the number of metaslabs is small and our incoming rate + is high, we could get into a situation that we are flushing all our metaslabs + every TXG. Thus we always allow at least this many log blocks. +

Default value: 1000.

+
+

+

zfs_unflushed_log_block_pct (ulong)

+
Tunable used to determine the number of blocks that can + be used for the spacemap log, expressed as a percentage of the total number of + metaslabs in the pool. +

Default value: 400 (read as 400% - meaning that the + number of log spacemap blocks are capped at 4 times the number of metaslabs + in the pool).

+
+

+

zfs_unlink_suspend_progress (uint)

+
When enabled, files will not be asynchronously removed + from the list of pending unlinks and the space they consume will be leaked. + Once this option has been disabled and the dataset is remounted, the pending + unlinks will be processed and the freed space returned to the pool. This + option is used by the test suite to facilitate testing. +

Use 0 (default) to allow progress and 1 to pause progress.

+
+

+

zfs_delete_blocks (ulong)

+
This is used to define a large file for the purposes of deletion. Files containing more than zfs_delete_blocks blocks will be deleted asynchronously, while smaller files are deleted synchronously. Decreasing this value will reduce the time spent in an unlink(2) system call at the expense of a longer delay before the freed space is available.

Default value: 20,480.

+
+

+

zfs_dirty_data_max (int)

+
Determines the dirty space limit in bytes. Once this + limit is exceeded, new writes are halted until space frees up. This parameter + takes precedence over zfs_dirty_data_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 10% of physical RAM, capped at + zfs_dirty_data_max_max.

+
+

+

zfs_dirty_data_max_max (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed in bytes. This limit is only enforced at module load time, and will + be ignored if zfs_dirty_data_max is later changed. This parameter takes + precedence over zfs_dirty_data_max_max_percent. See the section + "ZFS TRANSACTION DELAY". +

Default value: 25% of physical RAM.

+
+

+

zfs_dirty_data_max_max_percent (int)

+
Maximum allowable value of zfs_dirty_data_max, + expressed as a percentage of physical RAM. This limit is only enforced at + module load time, and will be ignored if zfs_dirty_data_max is later + changed. The parameter zfs_dirty_data_max_max takes precedence over + this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 25%.

+
+

+

zfs_dirty_data_max_percent (int)

+
Determines the dirty space limit, expressed as a + percentage of all memory. Once this limit is exceeded, new writes are halted + until space frees up. The parameter zfs_dirty_data_max takes precedence + over this one. See the section "ZFS TRANSACTION DELAY". +

Default value: 10%, subject to + zfs_dirty_data_max_max.
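The four dirty-data limits above combine as sketched below; the 16 GiB RAM figure is an assumed example, not a recommendation:

    # zfs_dirty_data_max defaults to 10% of physical RAM, capped at
    # zfs_dirty_data_max_max (itself 25% of physical RAM by default).
    physical_ram = 16 * 2**30                                   # example system
    zfs_dirty_data_max_max = physical_ram * 25 // 100           # default cap
    zfs_dirty_data_max = min(physical_ram * 10 // 100, zfs_dirty_data_max_max)
    print(zfs_dirty_data_max)                                   # 1717986918 bytes, ~1.6 GiB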

+
+

+

zfs_dirty_data_sync_percent (int)

+
Start syncing out a transaction group if there's at least + this much dirty data as a percentage of zfs_dirty_data_max. This should + be less than zfs_vdev_async_write_active_min_dirty_percent. +

Default value: 20% of zfs_dirty_data_max.

+
+

+

zfs_fallocate_reserve_percent (uint)

+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be preallocated for a file in order to guarantee that later writes will not run out of space. Instead, fallocate() space preallocation only checks that sufficient space is currently available in the pool or the user's project quota allocation, and then creates a sparse file of the requested size. The requested space is multiplied by zfs_fallocate_reserve_percent to allow additional space for indirect blocks and other internal metadata. Setting this value to 0 disables support for fallocate(2), causing space preallocation requests to return EOPNOTSUPP.

Default value: 110%
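A worked example of the reserve multiplier (the 10 GiB request is hypothetical):

    zfs_fallocate_reserve_percent = 110               # default
    requested = 10 * 2**30                            # fallocate() of 10 GiB (example)
    required_free = requested * zfs_fallocate_reserve_percent // 100
    print(required_free // 2**30, "GiB must currently be available")   # 11 GiB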

+
+

+

zfs_fletcher_4_impl (string)

+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, scalar, + sse2, ssse3, avx2, avx512f, avx512bw, and + aarch64_neon. All of the selectors except fastest and + scalar require instruction set extensions to be available and will + only appear if ZFS detects that they are present at runtime. If multiple + implementations of fletcher 4 are available, the fastest will be + chosen using a micro benchmark. Selecting scalar results in the + original, CPU based calculation, being used. Selecting any option other than + fastest and scalar results in vector instructions from the + respective CPU instruction set being used.

+

Default value: fastest.

+
+

+

zfs_free_bpobj_enabled (int)

+
Enable/disable the processing of the free_bpobj object. +

Default value: 1.

+
+

+

zfs_async_block_max_blocks (ulong)

+
Maximum number of blocks freed in a single txg. +

Default value: ULONG_MAX (unlimited).

+
+

+

zfs_max_async_dedup_frees (ulong)

+
Maximum number of dedup blocks freed in a single txg. +

Default value: 100,000.

+
+

+

zfs_override_estimate_recordsize (ulong)

+
Record size calculation override for zfs send estimates. +

Default value: 0.

+
+

+

zfs_vdev_async_read_max_active (int)

+
Maximum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_async_read_min_active (int)

+
Minimum asynchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_async_write_active_max_dirty_percent (int)

+
When the pool has more than + zfs_vdev_async_write_active_max_dirty_percent dirty data, use + zfs_vdev_async_write_max_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 60%.

+
+

+

zfs_vdev_async_write_active_min_dirty_percent (int)

+
When the pool has less than + zfs_vdev_async_write_active_min_dirty_percent dirty data, use + zfs_vdev_async_write_min_active to limit active async writes. If the + dirty data is between min and max, the active I/O limit is linearly + interpolated. See the section "ZFS I/O SCHEDULER". +

Default value: 30%.
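The interpolation between the two thresholds can be sketched as follows, using the documented defaults; this is an illustration, not the in-kernel implementation:

    def async_write_active_limit(dirty_pct,
                                 min_dirty=30, max_dirty=60,    # *_active_{min,max}_dirty_percent
                                 min_active=2, max_active=10):  # zfs_vdev_async_write_{min,max}_active
        # Below the low-water mark use the minimum, above the high-water
        # mark use the maximum, and interpolate linearly in between.
        if dirty_pct <= min_dirty:
            return min_active
        if dirty_pct >= max_dirty:
            return max_active
        frac = (dirty_pct - min_dirty) / (max_dirty - min_dirty)
        return round(min_active + frac * (max_active - min_active))

    for pct in (10, 30, 45, 60, 80):
        print(pct, async_write_active_limit(pct))     # 2, 2, 6, 10, 10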

+
+

+

zfs_vdev_async_write_max_active (int)

+
Maximum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_async_write_min_active (int)

+
Minimum asynchronous write I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of 2 was chosen as + a compromise. A value of 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+

Default value: 2.

+
+

+

zfs_vdev_initializing_max_active (int)

+
Maximum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_initializing_min_active (int)

+
Minimum initializing I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_max_active (int)

+
The maximum number of I/Os active to each device. + Ideally, this will be >= the sum of each queue's max_active. See the + section "ZFS I/O SCHEDULER". +

Default value: 1,000.

+
+

+

zfs_vdev_rebuild_max_active (int)

+
Maximum sequential resilver I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 3.

+
+

+

zfs_vdev_rebuild_min_active (int)

+
Minimum sequential resilver I/Os active to each device. + See the section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_removal_max_active (int)

+
Maximum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_removal_min_active (int)

+
Minimum removal I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_scrub_max_active (int)

+
Maximum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_scrub_min_active (int)

+
Minimum scrub I/Os active to each device. See the section + "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_sync_read_max_active (int)

+
Maximum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_read_min_active (int)

+
Minimum synchronous read I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_max_active (int)

+
Maximum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_sync_write_min_active (int)

+
Minimum synchronous write I/Os active to each device. See + the section "ZFS I/O SCHEDULER". +

Default value: 10.

+
+

+

zfs_vdev_trim_max_active (int)

+
Maximum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 2.

+
+

+

zfs_vdev_trim_min_active (int)

+
Minimum trim/discard I/Os active to each device. See the + section "ZFS I/O SCHEDULER". +

Default value: 1.

+
+

+

zfs_vdev_nia_delay (int)

+
For non-interactive I/O (scrub, resilver, removal, + initialize and rebuild), the number of concurrently-active I/O's is limited to + *_min_active, unless the vdev is "idle". When there are no + interactive I/Os active (sync or async), and zfs_vdev_nia_delay I/Os have + completed since the last interactive I/O, then the vdev is considered to be + "idle", and the number of concurrently-active non-interactive I/O's + is increased to *_max_active. See the section "ZFS I/O SCHEDULER". +

Default value: 5.

+
+

+

zfs_vdev_nia_credit (int)

+
Some HDDs prioritize sequential I/O so strongly that concurrent random I/O latency can reach several seconds. On some HDDs this happens even if sequential I/Os are submitted one at a time, so setting *_max_active to 1 does not help. To prevent non-interactive I/Os, like scrub, from monopolizing the device, no more than zfs_vdev_nia_credit I/Os can be sent while there are outstanding incomplete interactive I/Os. This enforced wait ensures the HDD services the interactive I/O within a reasonable amount of time. See the section "ZFS I/O SCHEDULER".

Default value: 5.

+
+

+

zfs_vdev_queue_depth_pct (int)

+
Maximum number of queued allocations per top-level vdev, expressed as a percentage of zfs_vdev_async_write_max_active. This allows the system to detect devices that are more capable of handling allocations and to allocate more blocks to those devices. It allows for dynamic allocation distribution when devices are imbalanced, as fuller devices will tend to be slower than empty devices.

See also zio_dva_throttle_enabled.

+

Default value: 1000%.
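With the defaults, the resulting per-vdev queue depth is simple arithmetic:

    zfs_vdev_async_write_max_active = 10
    zfs_vdev_queue_depth_pct = 1000                   # percent
    max_queued_allocs = zfs_vdev_async_write_max_active * zfs_vdev_queue_depth_pct // 100
    print(max_queued_allocs)                          # 100 queued allocations per top-level vdev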

+
+

+

zfs_expire_snapshot (int)

+
Seconds to expire .zfs/snapshot +

Default value: 300.

+
+

+

zfs_admin_snapshot (int)

+
Allow the creation, removal, or renaming of entries in + the .zfs/snapshot directory to cause the creation, destruction, or renaming of + snapshots. When enabled this functionality works both locally and over NFS + exports which have the 'no_root_squash' option set. This functionality is + disabled by default. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_flags (int)

+
Set additional debugging flags. The following flags may + be bitwise-or'd together. +

Value  Symbolic Name               Description
1      ZFS_DEBUG_DPRINTF           Enable dprintf entries in the debug log.
2      ZFS_DEBUG_DBUF_VERIFY *     Enable extra dbuf verifications.
4      ZFS_DEBUG_DNODE_VERIFY *    Enable extra dnode verifications.
8      ZFS_DEBUG_SNAPNAMES         Enable snapshot name verification.
16     ZFS_DEBUG_MODIFY            Check for illegally modified ARC buffers.
64     ZFS_DEBUG_ZIO_FREE          Enable verification of block frees.
128    ZFS_DEBUG_HISTOGRAM_VERIFY  Enable extra spacemap histogram verifications.
256    ZFS_DEBUG_METASLAB_VERIFY   Verify space accounting on disk matches in-core range_trees.
512    ZFS_DEBUG_SET_ERROR         Enable SET_ERROR and dprintf entries in the debug log.
1024   ZFS_DEBUG_INDIRECT_REMAP    Verify split blocks created by device removal.
2048   ZFS_DEBUG_TRIM              Verify TRIM ranges are always within the allocatable range tree.
4096   ZFS_DEBUG_LOG_SPACEMAP      Verify that the log summary is consistent with the spacemap log and enable zfs_dbgmsgs for metaslab loading and flushing.

* Requires debug build.

+

Default value: 0.
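The flags are combined with bitwise OR, for example (an arbitrary combination chosen for illustration):

    ZFS_DEBUG_MODIFY = 16
    ZFS_DEBUG_ZIO_FREE = 64
    zfs_flags = ZFS_DEBUG_MODIFY | ZFS_DEBUG_ZIO_FREE   # check ARC buffers and block frees
    print(zfs_flags)                                    # 80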

+
+

+

zfs_free_leak_on_eio (int)

+
If destroy encounters an EIO while reading metadata (e.g. + indirect blocks), space referenced by the missing metadata can not be freed. + Normally this causes the background destroy to become "stalled", as + it is unable to make forward progress. While in this stalled state, all + remaining space to free from the error-encountering filesystem is + "temporarily leaked". Set this flag to cause it to ignore the EIO, + permanently leak the space from indirect blocks that can not be read, and + continue to free everything else that it can. +

The default, "stalling" behavior is useful if the + storage partially fails (i.e. some but not all i/os fail), and then later + recovers. In this case, we will be able to continue pool operations while it + is partially failed, and when it recovers, we can continue to free the + space, with no leaks. However, note that this case is actually fairly + rare.

+

Typically pools either (a) fail completely (but perhaps + temporarily, e.g. a top-level vdev going offline), or (b) have localized, + permanent errors (e.g. disk returns the wrong data due to bit flip or + firmware bug). In case (a), this setting does not matter because the pool + will be suspended and the sync thread will not be able to make forward + progress regardless. In case (b), because the error is permanent, the best + we can do is leak the minimum amount of space, which is what setting this + flag will do. Therefore, it is reasonable for this flag to normally be set, + but we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.

+

Default value: 0.

+
+

+

zfs_free_min_time_ms (int)

+
During a zfs destroy operation using + feature@async_destroy a minimum of this much time will be spent working + on freeing blocks per txg. +

Default value: 1,000.

+
+

+

zfs_obsolete_min_time_ms (int)

+
Similar to zfs_free_min_time_ms but for cleanup of + old indirection records for removed vdevs. +

Default value: 500.

+
+

+

zfs_immediate_write_sz (long)

+
Largest data block to write to zil. Larger blocks will be + treated as if the dataset being written to had the property setting + logbias=throughput. +

Default value: 32,768.

+
+

+

zfs_initialize_value (ulong)

+
Pattern written to vdev free space by zpool + initialize. +

Default value: 16,045,690,984,833,335,022 + (0xdeadbeefdeadbeee).

+
+

+

zfs_initialize_chunk_size (ulong)

+
Size of writes used by zpool initialize. This + option is used by the test suite to facilitate testing. +

Default value: 1,048,576

+
+

+

zfs_livelist_max_entries (ulong)

+
The threshold size (in block pointers) at which we create + a new sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion. +

Default value: 500,000.

+
+

+

zfs_livelist_min_percent_shared (int)

+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because once a clone has been overwritten enough, livelists no longer give us a benefit.

Default value: 75.

+
+

+

zfs_livelist_condense_new_alloc (int)

+
Incremented each time an extra ALLOC blkptr is added to a + livelist entry while it is being condensed. This option is used by the test + suite to track race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_sync_cancel (int)

+
Incremented each time livelist condensing is canceled + while in spa_livelist_condense_sync. This option is used by the test suite to + track race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_sync_pause (int)

+
When set, the livelist condense process pauses + indefinitely before executing the synctask - spa_livelist_condense_sync. This + option is used by the test suite to trigger race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_zthr_cancel (int)

+
Incremented each time livelist condensing is canceled + while in spa_livelist_condense_cb. This option is used by the test suite to + track race conditions. +

Default value: 0.

+
+

+

zfs_livelist_condense_zthr_pause (int)

+
When set, the livelist condense process pauses + indefinitely before executing the open context condensing work in + spa_livelist_condense_cb. This option is used by the test suite to trigger + race conditions. +

Default value: 0.

+
+

+

zfs_lua_max_instrlimit (ulong)

+
The maximum execution time limit that can be set for a + ZFS channel program, specified as a number of Lua instructions. +

Default value: 100,000,000.

+
+

+

zfs_lua_max_memlimit (ulong)

+
The maximum memory limit that can be set for a ZFS + channel program, specified in bytes. +

Default value: 104,857,600.

+
+

+

zfs_max_dataset_nesting (int)

+
The maximum depth of nested datasets. This value can be + tuned temporarily to fix existing datasets that exceed the predefined limit. +

Default value: 50.

+
+

+

zfs_max_log_walking (ulong)

+
The number of past TXGs that the flushing algorithm of + the log spacemap feature uses to estimate incoming log blocks. +

Default value: 5.

+
+

+

zfs_max_logsm_summary_length (ulong)

+
Maximum number of rows allowed in the summary of the + spacemap log. +

Default value: 10.

+
+

+

zfs_max_recordsize (int)

+
We currently support block sizes from 512 bytes to 16MB. + The benefits of larger blocks, and thus larger I/O, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very large + blocks can have an impact on i/o latency, and also potentially on the memory + allocator. Therefore, we do not allow the recordsize to be set larger than + zfs_max_recordsize (default 1MB). Larger blocks can be created by changing + this tunable, and pools with larger blocks can always be imported and used, + regardless of this setting. +

Default value: 1,048,576.

+
+

+

zfs_allow_redacted_dataset_mount (int)

+
Allow datasets received with redacted send/receive to be + mounted. Normally disabled because these datasets may be missing key data. +

Default value: 0.

+
+

+

zfs_min_metaslabs_to_flush (ulong)

+
Minimum number of metaslabs to flush per dirty TXG +

Default value: 1.

+
+

+

zfs_metaslab_fragmentation_threshold (int)

+
Allow metaslabs to keep their active state as long as + their fragmentation percentage is less than or equal to this value. An active + metaslab that exceeds this threshold will no longer keep its active status + allowing better metaslabs to be selected. +

Default value: 70.

+
+

+

zfs_mg_fragmentation_threshold (int)

+
Metaslab groups are considered eligible for allocations + if their fragmentation metric (measured as a percentage) is less than or equal + to this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also crossed + this threshold. +

Default value: 95.

+
+

+

zfs_mg_noalloc_threshold (int)

+
Defines a threshold at which metaslab groups should be + eligible for allocations. The value is expressed as a percentage of free space + beyond which a metaslab group is always eligible for allocations. If a + metaslab group's free space is less than or equal to the threshold, the + allocator will avoid allocating to that group unless all groups in the pool + have reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of 0 disables the + feature and causes all metaslab groups to be eligible for allocations. +

This parameter allows one to deal with pools having heavily + imbalanced vdevs such as would be the case when a new vdev has been added. + Setting the threshold to a non-zero percentage will stop allocations from + being made to vdevs that aren't filled to the specified percentage and allow + lesser filled vdevs to acquire more allocations than they otherwise would + under the old zfs_mg_alloc_failures facility.

+

Default value: 0.

+
+

+

zfs_ddt_data_is_special (int)

+
If enabled, ZFS will place DDT data into the special + allocation class. +

Default value: 1.

+
+

+

zfs_user_indirect_is_special (int)

+
If enabled, ZFS will place user data (both file and zvol) + indirect blocks into the special allocation class. +

Default value: 1.

+
+

+

zfs_multihost_history (int)

+
Historical statistics for the last N multihost updates + will be available in /proc/spl/kstat/zfs/<pool>/multihost +

Default value: 0.

+
+

+

zfs_multihost_interval (ulong)

+
Used to control the frequency of multihost writes which + are performed when the multihost pool property is on. This is one + factor used to determine the length of the activity check during import. +

The multihost write period is zfs_multihost_interval / + leaf-vdevs milliseconds. On average a multihost write will be issued for + each leaf vdev every zfs_multihost_interval milliseconds. In + practice, the observed period can vary with the I/O load and this observed + value is the delay which is stored in the uberblock.

+

Default value: 1000.

+
+

+

zfs_multihost_import_intervals (uint)

+
Used to control the duration of the activity test on + import. Smaller values of zfs_multihost_import_intervals will reduce + the import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval * + zfs_multihost_import_intervals, or the same product computed on the host + which last had the pool imported (whichever is greater). The activity check + time may be further extended if the value of mmp delay found in the best + uberblock indicates actual multihost updates happened at longer intervals + than zfs_multihost_interval. A minimum value of 100ms is + enforced.

+

A value of 0 is ignored and treated as if it was set to 1.

+

Default value: 20.

+
+

+

zfs_multihost_fail_intervals (uint)

+
Controls the behavior of the pool when multihost write + failures or delays are detected. +

When zfs_multihost_fail_intervals = 0, multihost write + failures or delays are ignored. The failures will still be reported to the + ZED which depending on its configuration may take action such as suspending + the pool or offlining a device.

+

+

When zfs_multihost_fail_intervals > 0, the pool will be + suspended if zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds pass without a successful mmp write. This guarantees the + activity test will see mmp writes if the pool is imported. A value of 1 is + ignored and treated as if it was set to 2. This is necessary to prevent the + pool from being suspended due to normal, small I/O latency variations.

+

+

Default value: 10.
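The multihost timings described in the three entries above combine as in this sketch; the 8-leaf-vdev pool is an assumed example:

    zfs_multihost_interval = 1000              # ms (default)
    zfs_multihost_import_intervals = 20        # default
    zfs_multihost_fail_intervals = 10          # default
    leaf_vdevs = 8                             # example pool

    write_period_ms = zfs_multihost_interval / leaf_vdevs       # spacing of mmp writes per leaf
    activity_check_ms = zfs_multihost_interval * zfs_multihost_import_intervals
    suspend_after_ms = zfs_multihost_interval * zfs_multihost_fail_intervals
    print(write_period_ms, activity_check_ms, suspend_after_ms)
    # 125.0 ms between writes; import waits at least 20,000 ms; the pool
    # suspends after 10,000 ms without a successful mmp write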

+
+

+

zfs_no_scrub_io (int)

+
Set for no scrub I/O. This results in scrubs not actually + scrubbing data and simply doing a metadata crawl of the pool instead. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_no_scrub_prefetch (int)

+
Set to disable block prefetching for scrubs. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nocacheflush (int)

+
Disable cache flush operations on disks when writing. + Setting this will cause pool corruption on power loss if a volatile + out-of-order write cache is enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_nopwrite_enabled (int)

+
Enable NOP writes +

Use 1 for yes (default) and 0 to disable.

+
+

+

zfs_dmu_offset_next_sync (int)

+
Enable forcing txg sync to find holes. When enabled, this forces ZFS to act like prior versions when SEEK_HOLE or SEEK_DATA flags are used: if a dnode is dirty, txgs are synced so that the hole or data information can be found.

Use 1 for yes and 0 to disable (default).

+
+

+

zfs_pd_bytes_max (int)

+
The number of bytes which should be prefetched during a + pool traversal (eg: zfs send or other data crawling operations) +

Default value: 52,428,800.

+
+

+

zfs_per_txg_dirty_frees_percent (ulong)

+
Tunable to control percentage of dirtied indirect blocks + from frees allowed into one TXG. After this threshold is crossed, additional + frees will wait until the next TXG. A value of zero will disable this + throttle. +

Default value: 5, set to 0 to disable.

+
+

+

zfs_prefetch_disable (int)

+
This tunable disables predictive prefetch. Note that it + leaves "prescient" prefetch (e.g. prefetch for zfs send) intact. + Unlike predictive prefetch, prescient prefetch never issues i/os that end up + not being needed, so it can't hurt performance. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_checksum_disable (int)

+
This tunable disables qat hardware acceleration for + sha256 checksums. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_compress_disable (int)

+
This tunable disables qat hardware acceleration for gzip + compression. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_qat_encrypt_disable (int)

+
This tunable disables qat hardware acceleration for + AES-GCM encryption. It may be set after the zfs modules have been loaded to + initialize the qat hardware as long as support is compiled in and the qat + driver is present. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_read_chunk_size (long)

+
Bytes to read per chunk +

Default value: 1,048,576.

+
+

+

zfs_read_history (int)

+
Historical statistics for the last N reads will be + available in /proc/spl/kstat/zfs/<pool>/reads +

Default value: 0 (no data is kept).

+
+

+

zfs_read_history_hits (int)

+
Include cache hits in read history +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_rebuild_max_segment (ulong)

+
Maximum read segment size to issue when sequentially + resilvering a top-level vdev. +

Default value: 1,048,576.

+
+

+

zfs_reconstruct_indirect_combinations_max (int)

+
If an indirect split block contains more than this many + possible unique combinations when being reconstructed, consider it too + computationally expensive to check them all. Instead, try at most + zfs_reconstruct_indirect_combinations_max randomly-selected + combinations each time the block is accessed. This allows all segment copies + to participate fairly in the reconstruction when all combinations cannot be + checked and prevents repeated use of one bad copy. +

Default value: 4096.

+
+

+

zfs_recover (int)

+
Set to attempt to recover from fatal errors. This should + only be used as a last resort, as it typically results in leaked space, or + worse. +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_removal_ignore_errors (int)

+
+

Ignore hard IO errors during device removal. When set, if a device + encounters a hard IO error during the removal process the removal will not + be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is not recommended. This should only be used as a + last resort when the pool cannot be returned to a healthy state prior to + removing the device.

+

Default value: 0.

+
+

+

zfs_removal_suspend_progress (int)

+
+

This is used by the test suite so that it can ensure that certain + actions happen while in the middle of a removal.

+

Default value: 0.

+
+

+

zfs_remove_max_segment (int)

+
+

The largest contiguous segment that we will attempt to allocate + when removing a device. This can be no larger than 16MB. If there is a + performance problem with attempting to allocate large blocks, consider + decreasing this.

+

Default value: 16,777,216 (16MB).

+
+

+

zfs_resilver_disable_defer (int)

+
Disables the resilver_defer feature, causing an + operation that would start a resilver to restart one in progress immediately. +

Default value: 0 (feature enabled).

+
+

+

zfs_resilver_min_time_ms (int)

+
Resilvers are processed by the sync thread. While + resilvering it will spend at least this much time working on a resilver + between txg flushes. +

Default value: 3,000.

+
+

+

zfs_scan_ignore_errors (int)

+
If set to a nonzero value, remove the DTL (dirty time + list) upon completion of a pool scan (scrub) even if there were unrepairable + errors. It is intended to be used during pool repair or recovery to stop + resilvering when the pool is next imported. +

Default value: 0.

+
+

+

zfs_scrub_min_time_ms (int)

+
Scrubs are processed by the sync thread. While scrubbing + it will spend at least this much time working on a scrub between txg flushes. +

Default value: 1,000.

+
+

+

zfs_scan_checkpoint_intval (int)

+
To preserve progress across reboots, the sequential scan algorithm periodically needs to stop metadata scanning and issue all the verification I/Os to disk. The frequency of this flushing is determined by the zfs_scan_checkpoint_intval tunable.

Default value: 7200 seconds (every 2 hours).

+
+

+

zfs_scan_fill_weight (int)

+
This tunable affects how scrub and resilver I/O segments are ordered. A higher number indicates that we care more about how filled in a segment is, while a lower number indicates we care more about the size of the extent without considering the gaps within a segment. This value is only tunable upon module insertion. Changing the value afterwards will have no effect on scrub or resilver performance.

Default value: 3.

+
+

+

zfs_scan_issue_strategy (int)

+
Determines the order that data will be verified while + scrubbing or resilvering. If set to 1, data will be verified as + sequentially as possible, given the amount of memory reserved for scrubbing + (see zfs_scan_mem_lim_fact). This may improve scrub performance if the + pool's data is very fragmented. If set to 2, the largest + mostly-contiguous chunk of found data will be verified first. By deferring + scrubbing of small segments, we may later find adjacent data to coalesce and + increase the segment size. If set to 0, zfs will use strategy 1 + during normal verification and strategy 2 while taking a checkpoint. +

Default value: 0.

+
+

+

zfs_scan_legacy (int)

+
A value of 0 indicates that scrubs and resilvers will + gather metadata in memory before issuing sequential I/O. A value of 1 + indicates that the legacy algorithm will be used where I/O is initiated as + soon as it is discovered. Changing this value to 0 will not affect scrubs or + resilvers that are already in progress. +

Default value: 0.

+
+

+

zfs_scan_max_ext_gap (int)

+
Indicates the largest gap in bytes between scrub / + resilver I/Os that will still be considered sequential for sorting purposes. + Changing this value will not affect scrubs or resilvers that are already in + progress. +

Default value: 2097152 (2 MB).

+
+

+

zfs_scan_mem_lim_fact (int)

+
Maximum fraction of RAM used for I/O sorting by + sequential scan algorithm. This tunable determines the hard limit for I/O + sorting memory usage. When the hard limit is reached we stop scanning metadata + and start issuing data verification I/O. This is done until we get below the + soft limit. +

Default value: 20 which is 5% of RAM (1/20).

+
+

+

zfs_scan_mem_lim_soft_fact (int)

+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below no action is taken. When we cross this limit from above it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.

Default value: 20 which is 5% of the hard limit (1/20).
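A worked example of the hard and soft limits (the 32 GiB RAM figure is assumed):

    physical_ram = 32 * 2**30                  # example system
    zfs_scan_mem_lim_fact = 20                 # hard limit = RAM / 20, i.e. 5%
    zfs_scan_mem_lim_soft_fact = 20            # soft limit = hard limit / 20
    hard_limit = physical_ram // zfs_scan_mem_lim_fact
    soft_limit = hard_limit // zfs_scan_mem_lim_soft_fact
    print(hard_limit, soft_limit)              # ~1.6 GiB hard, ~82 MiB soft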

+
+

+

zfs_scan_strict_mem_lim (int)

+
Enforces tight memory limits on pool scans when a + sequential scan is in progress. When disabled the memory limit may be exceeded + by fast disks. +

Default value: 0.

+
+

+

zfs_scan_suspend_progress (int)

+
Freezes a scrub/resilver in progress without actually + pausing it. Intended for testing/debugging. +

Default value: 0.

+
+

+

+

zfs_scan_vdev_limit (int)

+
Maximum amount of data that can be concurrently issued for scrubs and resilvers per leaf device, given in bytes.

Default value: 41943040.

+
+

+

zfs_send_corrupt_data (int)

+
Allow sending of corrupt data (ignore read/checksum + errors when sending data) +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_send_unmodified_spill_blocks (int)

+
Include unmodified spill blocks in the send stream. Under + certain circumstances previous versions of ZFS could incorrectly remove the + spill block from an existing object. Including unmodified copies of the spill + blocks creates a backwards compatible stream which will recreate a spill block + if it was incorrectly removed. +

Use 1 for yes (default) and 0 for no.

+
+

+

zfs_send_no_prefetch_queue_ff (int)

+
The fill fraction of the zfs send internal queues. + The fill fraction controls the timing with which internal threads are woken + up. +

Default value: 20.

+
+

+

zfs_send_no_prefetch_queue_length (int)

+
The maximum number of bytes allowed in zfs send's + internal queues. +

Default value: 1,048,576.

+
+

+

zfs_send_queue_ff (int)

+
The fill fraction of the zfs send prefetch queue. + The fill fraction controls the timing with which internal threads are woken + up. +

Default value: 20.

+
+

+

zfs_send_queue_length (int)

+
The maximum number of bytes allowed that will be + prefetched by zfs send. This value must be at least twice the maximum + block size in use. +

Default value: 16,777,216.

+
+

+

zfs_recv_queue_ff (int)

+
The fill fraction of the zfs receive queue. The + fill fraction controls the timing with which internal threads are woken up. +

Default value: 20.

+
+

+

zfs_recv_queue_length (int)

+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice the maximum block size in + use. +

Default value: 16,777,216.

+
+

+

zfs_recv_write_batch_size (int)

+
The maximum amount of data (in bytes) that zfs receive will write in one DMU transaction. This is the uncompressed size, even when receiving a compressed send stream. This setting will not reduce the write size below a single block. Capped at a maximum of 32MB.

Default value: 1MB.

+
+

+

zfs_override_estimate_recordsize (ulong)

+
Setting this variable overrides the default logic for + estimating block sizes when doing a zfs send. The default heuristic is that + the average block size will be the current recordsize. Override this value if + most data in your dataset is not of that size and you require accurate zfs + send size estimates. +

Default value: 0.

+
+

+

zfs_sync_pass_deferred_free (int)

+
Flushing of data to disk is done in passes. Defer frees + starting in this pass +

Default value: 2.

+
+

+

zfs_spa_discard_memory_limit (int)

+
Maximum memory used for prefetching a checkpoint's space + map on each vdev while discarding the checkpoint. +

Default value: 16,777,216.

+
+

+

zfs_special_class_metadata_reserve_pct (int)

+
Only allow small data blocks to be allocated on the + special and dedup vdev types when the available free space percentage on these + vdevs exceeds this value. This ensures reserved space is available for pool + meta data as the special vdevs approach capacity. +

Default value: 25.

+
+

+

zfs_sync_pass_dont_compress (int)

+
Starting in this sync pass, we disable compression + (including of metadata). With the default setting, in practice, we don't have + this many sync passes, so this has no effect. +

The original intent was that disabling compression would help the sync passes to converge. However, in practice disabling compression increases the average number of sync passes, because when we turn compression off, the sizes of many blocks change and thus we have to re-allocate (not overwrite) them. It also increases the number of 128KB allocations (e.g. for indirect blocks and spacemaps) because these will not be compressed. The 128K allocations are especially detrimental to performance on highly fragmented systems, which may have very few free segments of this size, and may need to load new metaslabs to satisfy 128K allocations.

+

Default value: 8.

+
+

+

zfs_sync_pass_rewrite (int)

+
Rewrite new block pointers starting in this pass +

Default value: 2.

+
+

+

zfs_sync_taskq_batch_pct (int)

+
This controls the number of threads used by the + dp_sync_taskq. The default value of 75% will create a maximum of one thread + per cpu. +

Default value: 75%.

+
+

+

zfs_trim_extent_bytes_max (uint)

+
Maximum size of TRIM command. Ranges larger than this + will be split in to chunks no larger than zfs_trim_extent_bytes_max + bytes before being issued to the device. +

Default value: 134,217,728.

+
+

+

zfs_trim_extent_bytes_min (uint)

+
Minimum size of TRIM commands. TRIM ranges smaller than + this will be skipped unless they're part of a larger range which was broken in + to chunks. This is done because it's common for these small TRIMs to + negatively impact overall performance. This value can be set to 0 to TRIM all + unallocated space. +

Default value: 32,768.
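The interaction of the two TRIM extent bounds can be sketched as follows; this is an illustration of the splitting and skipping behaviour described above, not the in-kernel code:

    zfs_trim_extent_bytes_max = 134_217_728    # 128 MiB (default)
    zfs_trim_extent_bytes_min = 32_768         # 32 KiB (default)

    def trim_chunks(range_bytes):
        if range_bytes < zfs_trim_extent_bytes_min:
            return []                          # too small on its own: skipped
        chunks = []
        while range_bytes > 0:
            chunk = min(range_bytes, zfs_trim_extent_bytes_max)
            chunks.append(chunk)
            range_bytes -= chunk
        return chunks

    print(len(trim_chunks(1 * 2**30)))         # a 1 GiB range becomes 8 chunks of 128 MiB
    print(trim_chunks(16_384))                 # [] (below the minimum, skipped)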

+
+

+

zfs_trim_metaslab_skip (uint)

+
Skip uninitialized metaslabs during the TRIM process. This option is useful for pools constructed from large thinly-provisioned devices where TRIM operations are slow. As a pool ages, an increasing fraction of the pool's metaslabs will be initialized, progressively degrading the usefulness of this option. This setting is stored when starting a manual TRIM and will persist for the duration of the requested TRIM.

Default value: 0.

+
+

+

zfs_trim_queue_limit (uint)

+
Maximum number of queued TRIMs outstanding per leaf vdev. + The number of concurrent TRIM commands issued to the device is controlled by + the zfs_vdev_trim_min_active and zfs_vdev_trim_max_active module + options. +

Default value: 10.

+
+

+

zfs_trim_txg_batch (uint)

+
The number of transaction groups worth of frees which + should be aggregated before TRIM operations are issued to the device. This + setting represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available for + use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default value of 32 was determined to be a reasonable compromise.

+

Default value: 32.

+
+

+

zfs_txg_history (int)

+
Historical statistics for the last N txgs will be + available in /proc/spl/kstat/zfs/<pool>/txgs +

Default value: 0.

+
+

+

zfs_txg_timeout (int)

+
Flush dirty data to disk at least every N seconds + (maximum txg duration) +

Default value: 5.

+
+

+

zfs_vdev_aggregate_trim (int)

+
Allow TRIM I/Os to be aggregated. This is normally not helpful because the extents to be trimmed will have already been aggregated by the metaslab. This option is provided for debugging and performance analysis.

Default value: 0.

+
+

+

zfs_vdev_aggregation_limit (int)

+
Max vdev I/O aggregation size +

Default value: 1,048,576.

+
+

+

zfs_vdev_aggregation_limit_non_rotating (int)

+
Max vdev I/O aggregation size for non-rotating media +

Default value: 131,072.

+
+

+

zfs_vdev_cache_bshift (int)

+
Shift size to inflate reads to.

Default value: 16 (effectively 65536).

+
+

+

zfs_vdev_cache_max (int)

+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default 64k). +

Default value: 16384.

+
+

+

zfs_vdev_cache_size (int)

+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled as it has been found to not be + helpful for performance and in some cases harmful.

+

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member + when an I/O immediately follows its predecessor on rotational vdevs for the + purpose of making decisions based on load. +

Default value: 0.

+
+

+

zfs_vdev_mirror_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member + when an I/O lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. I/Os within this that are not + immediately following the previous I/O are incremented by half. +

Default value: 5.

+
+

+

zfs_vdev_mirror_rotating_seek_offset (int)

+
The maximum distance for the last queued I/O in which the + balancing algorithm considers an I/O to have locality. See the section + "ZFS I/O SCHEDULER". +

Default value: 1048576.

+
+

+

zfs_vdev_mirror_non_rotating_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/Os do not immediately follow one another. +

Default value: 0.

+
+

+

zfs_vdev_mirror_non_rotating_seek_inc (int)

+
A number by which the balancing algorithm increments the + load calculation for the purpose of selecting the least busy mirror member + when an I/O lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. I/Os within this that are not + immediately following the previous I/O are incremented by half. +

Default value: 1.

+
+

+

zfs_vdev_read_gap_limit (int)

+
Aggregate read I/O operations if the gap on-disk between + them is within this threshold. +

Default value: 32,768.

+
+

+

zfs_vdev_write_gap_limit (int)

+
Aggregate write I/O over gap +

Default value: 4,096.

+
+

+

zfs_vdev_raidz_impl (string)

+
Parameter for selecting raidz parity implementation to + use. +

Options marked (always) below may be selected on module load as + they are supported on all systems. The remaining options may only be set + after the module is loaded, as they are available only if the + implementations are compiled in and supported on the running system.

+

Once the module is loaded, the content of + /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options + with the currently selected one enclosed in []. Possible options are: +
+ fastest - (always) implementation selected using built-in benchmark +
+ original - (always) original raidz implementation +
+ scalar - (always) scalar raidz implementation +
+ sse2 - implementation using SSE2 instruction set (64bit x86 only) +
+ ssse3 - implementation using SSSE3 instruction set (64bit x86 only) +
+ avx2 - implementation using AVX2 instruction set (64bit x86 only) +
+ avx512f - implementation using AVX512F instruction set (64bit x86 only) +
+ avx512bw - implementation using AVX512F & AVX512BW instruction sets + (64bit x86 only) +
+ aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) +
+ aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 + bit ARMv8 only) +
+ powerpc_altivec - implementation using Altivec (PowerPC only)

+

Default value: fastest.
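A small sketch of reading this parameter; it assumes a Linux system with the zfs module loaded and simply parses the bracket notation described above:

    from pathlib import Path

    param = Path("/sys/module/zfs/parameters/zfs_vdev_raidz_impl")
    tokens = param.read_text().split()
    selected = next(t.strip("[]") for t in tokens if t.startswith("["))
    print("available:", [t.strip("[]") for t in tokens])
    print("selected: ", selected)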

+
+

+

zfs_vdev_scheduler (charp)

+
DEPRECATED: This option exists for compatibility + with older user configurations. It does nothing except print a warning to the + kernel log if set. +

+
+

+

zfs_zevent_cols (int)

+
When zevents are logged to the console use this as the + word wrap width. +

Default value: 80.

+
+

+

zfs_zevent_console (int)

+
Log events to the console +

Use 1 for yes and 0 for no (default).

+
+

+

zfs_zevent_len_max (int)

+
Max event queue length. Events in the queue can be viewed + with the zpool events command. +

Default value: 512.

+
+

+

zfs_zevent_retain_max (int)

+
Maximum recent zevent records to retain for duplicate + checking. Setting this value to zero disables duplicate detection. +

Default value: 2000.

+
+

+

zfs_zevent_retain_expire_secs (int)

+
Lifespan for a recent ereport that was retained for + duplicate checking. +

Default value: 900.

+
+

zfs_zil_clean_taskq_maxalloc (int)

+
The maximum number of taskq entries that are allowed to + be cached. When this limit is exceeded transaction records (itxs) will be + cleaned synchronously. +

Default value: 1048576.

+
+

+

zfs_zil_clean_taskq_minalloc (int)

+
The number of taskq entries that are pre-populated when + the taskq is first created and are immediately available for use. +

Default value: 1024.

+
+

+

zfs_zil_clean_taskq_nthr_pct (int)

+
This controls the number of threads used by the + dp_zil_clean_taskq. The default value of 100% will create a maximum of one + thread per cpu. +

Default value: 100%.

+
+

+

zil_maxblocksize (int)

+
This sets the maximum block size used by the ZIL. On very + fragmented pools, lowering this (typically to 36KB) can improve performance. +

Default value: 131072 (128KB).

+
+

+

zil_nocacheflush (int)

+
Disable the cache flush commands that are normally sent + to the disk(s) by the ZIL after an LWB write has completed. Setting this will + cause ZIL corruption on power loss if a volatile out-of-order write cache is + enabled. +

Use 1 for yes and 0 for no (default).

+
+

+

zil_replay_disable (int)

+
Disable intent logging replay. Can be disabled for + recovery from corrupted ZIL +

Use 1 for yes and 0 for no (default).

+
+

+

zil_slog_bulk (ulong)

+
Limit SLOG write size per commit executed with + synchronous priority. Any writes above that will be executed with lower + (asynchronous) priority to limit potential SLOG device abuse by single active + ZIL writer. +

Default value: 786,432.

+
+

+

zio_deadman_log_all (int)

+
If non-zero, the zio deadman will produce debugging + messages (see zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to gain + diagnostic information for hang conditions which don't involve a mutex or + other locking primitive; typically conditions in which a thread in the zio + pipeline is looping indefinitely. +

Default value: 0.

+
+

+

zio_decompress_fail_fraction (int)

+
If non-zero, this value represents the denominator of the + probability that zfs should induce a decompression failure. For instance, for + a 5% decompression failure rate, this value should be set to 20. +

Default value: 0.

+
+

+

zio_slow_io_ms (int)

+
When an I/O operation takes more than zio_slow_io_ms milliseconds to complete, it is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow I/O counters can be seen with "zpool status -s".

+

Default value: 30,000.

+
+

+

zio_dva_throttle_enabled (int)

+
Throttle block allocations in the I/O pipeline. This + allows for dynamic allocation distribution when devices are imbalanced. When + enabled, the maximum number of pending allocations per top-level vdev is + limited by zfs_vdev_queue_depth_pct. +

Default value: 1.

+
+

+

zio_requeue_io_start_cut_in_line (int)

+
Prioritize requeued I/O +

Default value: 0.

+
+

+

zio_taskq_batch_pct (uint)

+
Percentage of online CPUs (or CPU cores, etc) which will + run a worker thread for I/O. These workers are responsible for I/O work such + as compression and checksum calculations. Fractional number of CPUs will be + rounded down. +

The default value of 75 was chosen to avoid using all CPUs which + can result in latency issues and inconsistent application performance, + especially when high compression is enabled.

+

Default value: 75.
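A worked example of the rounding (the CPU count is hypothetical):

    online_cpus = 6                            # example
    zio_taskq_batch_pct = 75                   # default
    workers = online_cpus * zio_taskq_batch_pct // 100   # fractional CPUs rounded down
    print(workers)                             # 4 I/O worker threads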

+
+

+

zvol_inhibit_dev (uint)

+
Do not create zvol device nodes. This may slightly + improve startup time on systems with a very large number of zvols. +

Use 1 for yes and 0 for no (default).

+
+

+

zvol_major (uint)

+
Major number for zvol block devices +

Default value: 230.

+
+

+

zvol_max_discard_blocks (ulong)

+
Discard (aka TRIM) operations done on zvols will be done + in batches of this many blocks, where block size is determined by the + volblocksize property of a zvol. +

Default value: 16,384.

+
+

+

zvol_prefetch_bytes (uint)

+
When adding a zvol to the system prefetch + zvol_prefetch_bytes from the start and end of the volume. Prefetching + these regions of the volume is desirable because they are likely to be + accessed immediately by blkid(8) or by the kernel scanning for a + partition table. +

Default value: 131,072.

+
+

+

zvol_request_sync (uint)

+
When processing I/O requests for a zvol, submit them synchronously. This effectively limits the queue depth to 1 for each I/O submitter. When set to 0, requests are handled asynchronously by a thread pool. The number of requests which can be handled concurrently is controlled by zvol_threads.

Default value: 0.

+
+

+

zvol_threads (uint)

+
Max number of threads which can handle zvol I/O requests + concurrently. +

Default value: 32.

+
+

+

zvol_volmode (uint)

+
Defines zvol block device behaviour when volmode is set to default. Valid values are 1 (full), 2 (dev) and 3 (none).

Default value: 1.

+
+

+
+
+
+

+
ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/Os. The I/O scheduler determines when and in what order those operations + are issued. The I/O scheduler divides operations into five I/O classes + prioritized in the following order: sync read, sync write, async read, async + write, and scrub/resilver. Each queue defines the minimum and maximum number + of concurrent operations that may be issued to the device. In addition, the + device has an aggregate maximum, zfs_vdev_max_active. Note that the + sum of the per-queue minimums must not exceed the aggregate maximum. If the + sum of the per-queue maximums exceeds the aggregate maximum, then the number + of active I/Os may reach zfs_vdev_max_active, in which case no + further I/Os will be issued regardless of whether all per-queue minimums + have been met.
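A quick sanity check of the constraint above, using the per-queue minimum defaults documented earlier on this page (illustrative only):

    min_active = {
        "sync_read": 10, "sync_write": 10,
        "async_read": 1, "async_write": 2,
        "scrub": 1, "trim": 1,
        "rebuild": 1, "removal": 1, "initializing": 1,
    }
    zfs_vdev_max_active = 1000
    total_min = sum(min_active.values())
    assert total_min <= zfs_vdev_max_active    # the documented requirement
    print(total_min, "<=", zfs_vdev_max_active)   # 28 <= 1000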

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Further, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been hit + or if there are no operations queued for an I/O class that has not hit its + maximum. Every time an I/O is queued or an operation completes, the I/O + scheduler looks for new operations to issue.

+

In general, smaller max_active's will lead to lower latency of + synchronous operations. Larger max_active's may lead to higher overall + throughput, depending on underlying storage.

+

The ratio of the queues' max_actives determines the balance of + performance between reads, writes, and scrubs. E.g., increasing + zfs_vdev_scrub_max_active will cause the scrub or resilver to + complete more quickly, but reads and writes to have higher latency and lower + throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write I/Os according to + the amount of dirty data in the pool. Since both throughput and latency + typically increase with the number of concurrent operations issued to + physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other -- and + in particular synchronous -- queues. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there's + more dirty data in the pool.

+

Async Writes

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points.

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
+Until the amount of dirty data exceeds a minimum percentage of the dirty data + allowed in the pool, the I/O scheduler will limit the number of concurrent + operations to the minimum. As that threshold is crossed, the number of + concurrent operations issued increases linearly to the maximum at the + specified maximum percentage of the dirty data allowed in the pool. +

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the + maximum percentage, this indicates that the rate of incoming data is greater + than the rate that the backend storage can handle. In this case, we must + further throttle incoming writes, as described in the next section.

+

+
+
+

+
ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as:

+
+
+ min_time = zfs_delay_scale * (dirty - min) / (max - dirty) +
+ min_time is then capped at 100 milliseconds.
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be at or above + zfs_vdev_async_write_active_max_dirty_percent so that we only start + to delay after writing at full speed has failed to keep up with the incoming + write rate. The scale of the curve is defined by zfs_delay_scale. + Roughly speaking, this variable determines the amount of delay at the + midpoint of the curve.
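The formula above can be evaluated directly; this sketch assumes a 4 GiB zfs_dirty_data_max and a zfs_delay_min_dirty_percent of 60, and reproduces the 500us midpoint quoted below:

    zfs_delay_scale = 500_000                  # ns (default)
    zfs_delay_min_dirty_percent = 60           # assumed default
    zfs_dirty_data_max = 4 * 2**30             # example dirty-data limit

    def tx_delay_ns(dirty):
        lo = zfs_dirty_data_max * zfs_delay_min_dirty_percent // 100
        if dirty <= lo:
            return 0.0
        # min_time = zfs_delay_scale * (dirty - min) / (max - dirty), capped at 100 ms
        return min(zfs_delay_scale * (dirty - lo) / (zfs_dirty_data_max - dirty),
                   100_000_000)

    lo = zfs_dirty_data_max * zfs_delay_min_dirty_percent // 100
    midpoint = (lo + zfs_dirty_data_max) // 2
    print(round(tx_delay_ns(midpoint) / 1000, 1), "us")   # ~500 us, i.e. roughly 2000 IOPS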

+

+
delay
+
+ 10ms +-------------------------------------------------------------*+ +
+ | *| +
+ 9ms + *+ +
+ | *| +
+ 8ms + *+ +
+ | * | +
+ 7ms + * + +
+ | * | +
+ 6ms + * + +
+ | * | +
+ 5ms + * + +
+ | * | +
+ 4ms + * + +
+ | * | +
+ 3ms + * + +
+ | * | +
+ 2ms + (midpoint) * + +
+ | | ** | +
+ 1ms + v *** + +
+ | zfs_delay_scale ----------> ******** | +
+ 0 +-------------------------------------*********----------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note that since the delay is added to the outstanding time + remaining on the most recent transaction, the delay is effectively the + inverse of IOPS. Here the midpoint of 500us translates to 2000 IOPS. The + shape of the curve was chosen such that small changes in the amount of + accumulated dirty data in the first 3/4 of the curve yield relatively small + differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a log scale:

+

+
delay
+100ms +-------------------------------------------------------------++
+
+ + + +
+ | | +
+ + *+ +
+ 10ms + *+ +
+ + ** + +
+ | (midpoint) ** | +
+ + | ** + +
+ 1ms + v **** + +
+ + zfs_delay_scale ----------> ***** + +
+ | **** | +
+ + **** + +100us + ** + +
+ + * + +
+ | * | +
+ + * + +
+ 10us + * + +
+ + + +
+ | | +
+ + + +
+ +--------------------------------------------------------------+ +
+ 0% <- zfs_dirty_data_max -> 100%
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the backend storage, and then by changing the value of + zfs_delay_scale to increase the steepness of the curve.
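For example (a sketch; the value shown is arbitrary, the runtime path is the standard Linux module-parameter location, and the modprobe.d file is one common place to persist the setting across reboots):

      # echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale
      # echo 'options zfs zfs_delay_scale=1000000' >> /etc/modprobe.d/zfs.conf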

+
+
+ + + + + +
March 31, 2021OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/5/zpool-features.5.html b/man/v2.0/5/zpool-features.5.html new file mode 100644 index 000000000..6e70396eb --- /dev/null +++ b/man/v2.0/5/zpool-features.5.html @@ -0,0 +1,1181 @@ + + + + + + + zpool-features.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.5

+
+ + + + + +
ZPOOL-FEATURES(5)File Formats ManualZPOOL-FEATURES(5)
+
+
+

+

zpool-features - ZFS pool feature descriptions

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the upgrade subcommand of the zpool(8) command, or set the + feature@feature_name property to enabled.
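For example (tank is a placeholder pool name, and hole_birth is just one of the features listed below):

      # zpool upgrade tank                           # enable every feature supported by this system
      # zpool set feature@hole_birth=enabled tank    # or enable a single feature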

+

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

+

Since most features can be enabled independently of each other the + on-disk format of the pool is specified by the set of all features marked as + active on the pool. If the pool was created by another software + version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature_name. The reversed DNS name ensures that the + feature's GUID is unique across all ZFS implementations. When unsupported + features are encountered on a pool they will be identified by their GUIDs. + Refer to the documentation for the ZFS implementation that created the pool + for information about those features.

+

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the ':' (e.g. + com.example:feature_name would have the short name + feature_name), however a feature's short name may differ across ZFS + implementations if following the convention would result in name + conflicts.

+
+
+

+

Features can be in one of three states:

+

active

+
This feature's on-disk format changes are in effect on + the pool. Support for this feature is required to import the pool in + read-write mode. If this feature is not read-only compatible, support is also + required to import the pool in read-only mode (see "Read-only + compatibility").
+

+

enabled

+
An administrator has marked this feature as enabled on + the pool, but the feature's on-disk format changes have not been made yet. The + pool can still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support returning to the + enabled state after becoming active. See feature-specific + documentation for details.
+

+

disabled

+
This feature's on-disk format changes have not been made + and will not be made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they have been + enabled.
+

+

+

The state of supported features is exposed through pool properties + of the form feature@short_name.
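For example (tank is a placeholder pool name), the current state of a feature can be queried like any other pool property; the VALUE column reports disabled, enabled, or active:

      # zpool get feature@async_destroy tank
      # zpool get all tank | grep feature@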

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as "read-only compatible". If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly property during + import (see zpool(8) for details on importing pools).
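For example (the pool name is a placeholder):

      # zpool import -o readonly=on tank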

+
+
+

+

For each unsupported feature enabled on an imported pool a pool + property named unsupported@feature_name will indicate why the import + was allowed despite the unsupported feature. Possible values for this + property are:

+

+

inactive

+
The feature is in the enabled state and therefore + the pool's on-disk format is still compatible with software that does not + support this feature.
+

+

readonly

+
The feature is read-only compatible and the pool has been + imported in read-only mode.
+

+
+
+

+

Some features depend on other features being enabled in order to + function properly. Enabling a feature will automatically enable any features + it depends on.

+
+
+
+

+

The following features are supported on this system:

+

+

allocation_classes

+
+ + + + + + + + + + + + + +
GUID                    org.zfsonlinux:allocation_classes
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature enables support for separate allocation classes.

+

This feature becomes active when a dedicated allocation + class vdev (dedup or special) is created with the zpool create or + zpool add subcommands. With device removal, it can be returned to the + enabled state if all the dedicated allocation class vdevs are + removed.
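A sketch of how the feature becomes active (the pool and device names are placeholders):

      # zpool add tank special mirror sdb sdc    # dedicated metadata/small-block class
      # zpool add tank dedup mirror sdd sde      # dedicated dedup-table class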

+
+

+

async_destroy

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:async_destroy
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

Destroying a file system requires traversing all of its data in + order to return its used space to the pool. Without async_destroy the + file system is not fully removed until all space has been reclaimed. If the + destroy operation is interrupted by a reboot or power outage the next + attempt to open the pool will need to complete the destroy operation + synchronously.

+

When async_destroy is enabled the file system's data will + be reclaimed by a background process, allowing the destroy operation to + complete without traversing the entire file system. The background process + is able to resume interrupted destroys after the pool has been opened, + eliminating the need to finish interrupted destroys as part of the open + operation. The amount of space remaining to be reclaimed by the background + process is available through the freeing property.

+

This feature is only active while freeing is + non-zero.
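For example (the dataset and pool names are placeholders), the progress of a background destroy can be watched via the freeing property:

      # zfs destroy -r tank/old_data
      # zpool get freeing tank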

+
+

+

bookmarks

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:bookmarks
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature enables use of the zfs bookmark + subcommand.

+

This feature is active while any bookmarks exist in the + pool. All bookmarks in the pool can be listed by running zfs list -t + bookmark -r poolname.
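A sketch of typical bookmark use (all names are placeholders): a bookmark can stand in for the source snapshot of an incremental send, allowing that snapshot to be destroyed on the sending side:

      # zfs bookmark tank/fs@snap1 tank/fs#mark1
      # zfs destroy tank/fs@snap1
      # zfs send -i tank/fs#mark1 tank/fs@snap2 | zfs receive backup/fs
      # zfs list -t bookmark -r tank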

+
+

+

bookmark_v2

+
+ + + + + + + + + + + + + +
GUID                    com.datto:bookmark_v2
READ-ONLY COMPATIBLE    no
DEPENDENCIES            bookmark, extensible_dataset
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 bookmark is created + and will be returned to the enabled state when all v2 bookmarks are + destroyed.

+
+

+

bookmark_written

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:bookmark_written
READ-ONLY COMPATIBLE    no
DEPENDENCIES            bookmark, extensible_dataset, bookmark_v2
+

This feature enables additional bookmark accounting fields, + enabling the written#<bookmark> property (space written since a + bookmark) and estimates of send stream sizes for incrementals from + bookmarks.

+

This feature becomes active when a bookmark is created and + will be returned to the enabled state when all bookmarks with these + fields are destroyed.

+
+

+

device_rebuild

+
+ + + + + + + + + + + + + +
GUID                    org.openzfs:device_rebuild
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature enables the ability for the zpool attach and + zpool replace subcommands to perform sequential reconstruction + (instead of healing reconstruction) when resilvering.

+

Sequential reconstruction resilvers a device in LBA order without + immediately verifying the checksums. Once complete a scrub is started which + then verifies the checksums. This approach allows full redundancy to be + restored to the pool in the minimum amount of time. This two phase approach + will take longer than a healing resilver when the time to verify the + checksums is included. However, unless there is additional pool damage no + checksum errors should be reported by the scrub. This feature is + incompatible with raidz configurations.

+

This feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver completes.
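For example (the pool and device names are placeholders; the -s option requests a sequential rather than healing resilver):

      # zpool replace -s tank sdb sdc
      # zpool status tank    # shows the sequential resilver and, once it completes, the follow-up scrub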

+
+

+

device_removal

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:device_removal
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature enables the zpool remove subcommand to remove + top-level vdevs, evacuating them to reduce the total size of the pool.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.
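For example (the pool name is a placeholder and mirror-1 is a hypothetical top-level vdev name as reported by zpool status):

      # zpool remove tank mirror-1
      # zpool status tank    # reports evacuation progress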

+
+

+

edonr

+
+ + + + + + + + + + + + + +
GUID                    org.illumos:edonr
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

This feature enables the use of the Edon-R hash algorithm for + checksum, including for nopwrite (if compression is also enabled, an + overwrite of a block whose checksum matches the data being written will be + ignored). In an abundance of caution, Edon-R requires verification when used + with dedup: zfs set dedup=edonr,verify. See zfs(8).

+

Edon-R is a very high-performance hash algorithm that was part of + the NIST SHA-3 competition. It provides extremely high hash performance + (over 350% faster than SHA-256), but was not selected because of its + unsuitability as a general purpose secure hash algorithm. This + implementation utilizes the new salted checksumming functionality in ZFS, + which means that the checksum is pre-seeded with a secret 256-bit random key + (stored on the pool) before being fed the data block to be checksummed. Thus + the produced checksums are unique to a given pool.

+

When the edonr feature is set to enabled, the + administrator can turn on the edonr checksum on any dataset using the + zfs set checksum=edonr. See zfs(8). This feature becomes + active once a checksum property has been set to edonr, + and will return to being enabled once all filesystems that have ever + had their checksum set to edonr are destroyed.

+

FreeBSD does not support the edonr feature.

+
+

+

embedded_data

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:embedded_data
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 bytes + or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of highly-compressible + blocks are stored in the block "pointer" itself (a misnomer in + this case, as it contains the compressed data, rather than a pointer to its + location on disk). Thus the space of the block (one sector, typically 512 + bytes or 4KB) is saved, and no additional i/o is needed to read and write + the data block.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

empty_bpobj

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:empty_bpobj
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also reduces + the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobj's) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobj's are empty. This feature + allows us to create each bpobj on-demand, thus eliminating the empty + bpobjs.

+

This feature is active while there are any filesystems, + volumes, or snapshots which were created after enabling this feature.

+
+

+

enabled_txg

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:enabled_txg
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

Once this feature is enabled ZFS records the transaction group + number in which new features are enabled. This has no user-visible impact, + but other features may depend on this feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

encryption

+
+ + + + + + + + + + + + + +
GUID                    com.datto:encryption
READ-ONLY COMPATIBLE    no
DEPENDENCIES            bookmark_v2, extensible_dataset
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an encrypted dataset is + created and will be returned to the enabled state when all datasets + that use this feature are destroyed.
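For example (the names are placeholders), creating an encrypted dataset activates the feature:

      # zfs create -o encryption=on -o keyformat=passphrase tank/secure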

+
+

+

extensible_dataset

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:extensible_dataset
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first dependent + feature uses it, and will be returned to the enabled state when all + datasets that use this feature are destroyed.

+
+

+

filesystem_limits

+
+ + + + + + + + + + + + + +
GUID                    com.joyent:filesystem_limits
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature enables filesystem and snapshot limits. These limits + can be used to control how many filesystems and/or snapshots can be created + at the point in the tree on which the limits are set.

+

This feature is active once either of the limit properties + has been set on a dataset. Once activated the feature is never + deactivated.

+
+

+

hole_birth

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:hole_birth
READ-ONLY COMPATIBLE    no
DEPENDENCIES            enabled_txg
+

This feature has/had bugs, the result of which is that, if you do + a zfs send -i (or -R, since it uses -i) from an + affected dataset, the receiver will not see any checksum or other errors, + but the resulting destination snapshot will not match the source. Its use by + zfs send -i has been disabled by default. See the + send_holes_without_birth_time module parameter in + zfs-module-parameters(5).

+

This feature improves performance of incremental sends (zfs + send -i) and receives for objects with many holes. The most common case + of hole-filled objects is zvols.

+

An incremental send stream from snapshot A to snapshot + B contains information about every block that changed between + A and B. Blocks which did not change between those snapshots + can be identified and omitted from the stream using a piece of metadata + called the 'block birth time', but birth times are not recorded for holes + (blocks filled only with zeroes). Since holes created after A cannot + be distinguished from holes created before A, information about every + hole in the entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. However, + when incrementally replicating filesystems or zvols with many holes (for + example a zvol formatted with another filesystem) a lot of time will be + spent sending and receiving unnecessary information about holes that already + exist on the receiving side.

+

Once the hole_birth feature has been enabled the block + birth times of all new holes will be recorded. Incremental sends between + snapshots created after this feature is enabled will use this new metadata + to avoid sending information about holes that already exist on the receiving + side.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

large_blocks

+
+ + + + + + + + + + + + + +
GUID                    org.open-zfs:large_blocks
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

The large_block feature allows the record size on a dataset + to be set larger than 128KB.

+

This feature becomes active once a dataset contains a file + with a block size larger than 128KB, and will return to being enabled + once all filesystems that have ever had their recordsize larger than 128KB + are destroyed.
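For example (the names are placeholders), raising the record size above 128KB and then writing a file with such a block activates the feature:

      # zfs set recordsize=1M tank/media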

+
+

+

large_dnode

+
+ + + + + + + + + + + + + +
GUID                    org.zfsonlinux:large_dnode
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

The large_dnode feature allows the size of dnodes in a + dataset to be set larger than 512B.

+

This feature becomes active once a dataset contains an + object with a dnode larger than 512B, which occurs as a result of setting + the dnodesize dataset property to a value other than legacy. + The feature will return to being enabled once all filesystems that + have ever contained a dnode larger than 512B are destroyed. Large dnodes + allow more data to be stored in the bonus buffer, thus potentially improving + performance by avoiding the use of spill blocks.
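For example (the names are placeholders):

      # zfs set dnodesize=auto tank/fs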

+
+

+

livelist

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:livelist
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist, which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.
+

+

log_spacemap

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:log_spacemap
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            com.delphix:spacemap_v2
+

This feature improves performance for heavily-fragmented pools, + especially when workloads are heavy in random-writes. It does so by logging + all the metaslab changes on a single spacemap every TXG instead of + scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

lz4_compress

+
+ + + + + + + + + + + + + +
GUID                    org.illumos:lz4_compress
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

lz4 is a high-performance real-time compression algorithm + that features significantly faster compression and decompression as well as + a higher compression ratio than the older lzjb compression. + Typically, lz4 compression is approximately 50% faster on + compressible data and 200% faster on incompressible data than lzjb. + It is also approximately 80% faster on decompression, while giving + approximately 10% better compression ratio.

+

When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the lz4_compress feature on the underlying pool. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled.

+
+

+

multi_vdev_crash_dump

+
+ + + + + + + + + + + + + +
GUID                    com.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLE    no
DEPENDENCIES            none
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or + raidz configuration.

+

When the multi_vdev_crash_dump feature is set to + enabled, the administrator can use the dumpadm(1M) command to + configure a dump device on a pool comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature is registered for + compatibility but not used. New pools created under FreeBSD and Linux will + have the feature enabled but will never transition to + active. This functionality is not required in order to support + crash dumps under FreeBSD and Linux. Existing pools where this feature is + active can be imported.

+
+

+

obsolete_counts

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:obsolete_counts
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            device_removal
+

This feature is an enhancement of device_removal, which will over + time reduce the memory used to track removed devices. When indirect blocks + are freed or remapped, we note that their part of the indirect mapping is + "obsolete", i.e. no longer needed.

+

This feature becomes active when the zpool remove + subcommand is used on a top-level vdev, and will never return to being + enabled.

+
+

+

project_quota

+
+ + + + + + + + + + + + + +
GUID                    org.zfsonlinux:project_quota
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature allows administrators to account for space and object usage against a project identifier (ID).

+

The project ID is a new object-based attribute. When upgrading an existing filesystem, objects without a project ID attribute are assigned a project ID of zero. After this feature is enabled, newly created objects inherit their parent directory's project ID if the parent's inherit flag is set (via chattr +/-P or zfs project [-s|-C]). Otherwise, the new object's project ID is set to zero. An object's project ID can be changed at any time by the owner (or a privileged user) via chattr -p $prjid or zfs project -p $prjid.

+

This feature will become active as soon as it is enabled and will never return to being disabled. Each filesystem will be upgraded automatically when remounted or when a new file is created under that filesystem. The upgrade can also be triggered on filesystems via `zfs set version=current <pool/fs>`. The upgrade process runs in the background and may take a while to complete for filesystems containing a large number of files.
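A sketch of typical use (the names, project ID, and quota value are placeholders):

      # zfs project -s -p 100 /tank/fs/projects/alpha    # tag a directory tree with project ID 100
      # zfs set projectquota@100=10G tank/fs             # limit that project's space usage
      # zfs projectspace tank/fs                         # report per-project usage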

+
+

+

redaction_bookmarks

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:redaction_bookmarks
READ-ONLY COMPATIBLE    no
DEPENDENCIES            bookmarks, extensible_dataset
+

This feature enables the use of the redacted zfs send. Redacted + zfs send creates redaction bookmarks, which store the list of blocks + redacted by the send that created them. For more information about redacted + send, see zfs(8).

+

+
+

+

redacted_datasets

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:redacted_datasets
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

This feature enables the receiving of redacted zfs send streams. + Redacted zfs send streams create redacted datasets when received. These + datasets are missing some of their blocks, and so cannot be safely mounted, + and their contents cannot be safely read. For more information about + redacted receive, see zfs(8).

+
+

+

resilver_defer

+
+ + + + + + + + + + + + + +
GUID                    com.datto:resilver_defer
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature allows zfs to postpone new resilvers if an existing + one is already in progress. Without this feature, any new resilvers will + cause the currently running one to be immediately restarted from the + beginning.

+

This feature becomes active once a resilver has been + deferred, and returns to being enabled when the deferred resilver + begins.

+
+

+

sha512

+
+ + + + + + + + + + + + + +
GUID                    org.illumos:sha512
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic + of SHA-512 provides an approximate 50% performance boost over SHA-256 on + 64-bit hardware and is thus a good minimum-change replacement candidate for + systems where hash performance is important, but these systems cannot for + whatever reason utilize the faster skein and edonr + algorithms.

+

When the sha512 feature is set to enabled, the + administrator can turn on the sha512 checksum on any dataset using + zfs set checksum=sha512. See zfs(8). This feature becomes + active once a checksum property has been set to sha512, + and will return to being enabled once all filesystems that have ever + had their checksum set to sha512 are destroyed.

+
+

+

skein

+
+ + + + + + + + + + + + + +
GUID                    org.illumos:skein
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm that + was a finalist in the NIST SHA-3 competition. It provides a very high + security margin and high performance on 64-bit hardware (80% faster than + SHA-256). This implementation also utilizes the new salted checksumming + functionality in ZFS, which means that the checksum is pre-seeded with a + secret 256-bit random key (stored on the pool) before being fed the data + block to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the skein feature is set to enabled, the + administrator can turn on the skein checksum on any dataset using + zfs set checksum=skein. See zfs(8). This feature becomes + active once a checksum property has been set to skein, + and will return to being enabled once all filesystems that have ever + had their checksum set to skein are destroyed.

+
+

+

spacemap_histogram

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:spacemap_histogram
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.

+
+

+

spacemap_v2

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:spacemap_v2
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature enables the use of the new space map encoding which + consists of two words (instead of one) whenever it is advantageous. The new + encoding allows space maps to represent large regions of space more + efficiently on-disk while also increasing their maximum addressable + offset.

+

This feature becomes active once it is enabled, and never returns to being enabled.

+
+

+

userobj_accounting

+
+ + + + + + + + + + + + + +
GUID                    org.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            extensible_dataset
+

This feature allows administrators to account the object usage + information by user and group.

+

This feature becomes active as soon as it is enabled and + will never return to being enabled. Each filesystem will be upgraded + automatically when remounted, or when new files are created under that + filesystem. The upgrade can also be started manually on filesystems by + running `zfs set version=current <pool/fs>`. The upgrade process runs + in the background and may take a while to complete for filesystems + containing a large number of files.

+
+

+

zpool_checkpoint

+
+ + + + + + + + + + + + + +
GUID                    com.delphix:zpool_checkpoint
READ-ONLY COMPATIBLE    yes
DEPENDENCIES            none
+

This feature enables the zpool checkpoint subcommand that + can checkpoint the state of the pool at the time it was issued and later + rewind back to it or discard it.

+

This feature becomes active when the zpool + checkpoint subcommand is used to checkpoint the pool. The feature will + only return back to being enabled when the pool is rewound or the + checkpoint has been discarded.
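A sketch of typical use (the pool name is a placeholder): take a checkpoint before a risky operation, then either discard it or rewind to it:

      # zpool checkpoint tank
      # zpool checkpoint -d tank                      # discard the checkpoint when satisfied
      # zpool export tank
      # zpool import --rewind-to-checkpoint tank      # or roll the pool back to the checkpoint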

+
+

+

zstd_compress

+
+ + + + + + + + + + + + + +
GUID                    org.freebsd:zstd_compress
READ-ONLY COMPATIBLE    no
DEPENDENCIES            extensible_dataset
+

zstd is a high-performance compression algorithm that features a combination of high compression ratios and high speed. Compared to gzip, zstd offers slightly better compression at much higher speeds. Compared to lz4, zstd offers much better compression while being only modestly slower. Typically, zstd compression speed ranges from 250 to 500 MB/s per thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to enabled, the + administrator can turn on zstd compression of any dataset by running + `zfs set compress=zstd <pool/fs>`.

+

This feature becomes active once a compress property + has been set to zstd, and will return to being enabled once + all filesystems that have ever had their compress property set to + zstd are destroyed.

+
+

+
+
+

+

zpool(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/fsck.zfs.8.html b/man/v2.0/8/fsck.zfs.8.html new file mode 100644 index 000000000..a6567189a --- /dev/null +++ b/man/v2.0/8/fsck.zfs.8.html @@ -0,0 +1,290 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+

+
+

+

fsck.zfs - Dummy ZFS filesystem checker.

+

+
+
+

+

fsck.zfs [options] + <dataset>

+

+
+
+

+

fsck.zfs is a shell stub that does nothing and always + returns true. It is installed by ZoL because some Linux distributions expect + a fsck helper for all filesystems.

+

+
+
+

+

All options and the dataset are ignored.

+

+
+
+

+

ZFS datasets are checked by running zpool scrub on the + containing pool. An individual ZFS dataset is never checked independently of + its pool, which is unlike a regular filesystem.
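For example (the pool name is a placeholder), instead of a traditional fsck one would run:

      # zpool scrub tank
      # zpool status tank    # shows scrub progress and any repaired errors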

+

+
+
+

+

On some systems, if the dataset is in a degraded pool, then + it might be appropriate for fsck.zfs to return exit code 4 to + indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a + legacy /etc/fstab record, then fsck.zfs should return exit code 8 to + indicate a fatal operational error.

+

+
+
+

+

Darik Horn <dajhorn@vanadac.com>.

+

+
+
+

+

fsck(8), fstab(5), zpool-scrub(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/index.html b/man/v2.0/8/index.html new file mode 100644 index 000000000..8dd51c0f2 --- /dev/null +++ b/man/v2.0/8/index.html @@ -0,0 +1,311 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/mount.zfs.8.html b/man/v2.0/8/mount.zfs.8.html new file mode 100644 index 000000000..dfab77b82 --- /dev/null +++ b/man/v2.0/8/mount.zfs.8.html @@ -0,0 +1,339 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+

+
+

+

mount.zfs - mount a ZFS filesystem

+
+
+

+

mount.zfs [-sfnvh] [-o options] dataset + mountpoint

+

+
+
+

+

mount.zfs is part of the zfsutils package for Linux. It is + a helper program that is usually invoked by the mount(8) or + zfs(8) commands to mount a ZFS dataset.

+

All options are handled according to the FILESYSTEM + INDEPENDENT MOUNT OPTIONS section in the mount(8) manual, except for + those described below.

+

The dataset parameter is a ZFS filesystem name, as output + by the zfs list -H -o name command. This parameter never has a + leading slash character and is not a device name.

+

The mountpoint parameter is the path name of a + directory.
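For example (the dataset and mountpoint are placeholders), this is roughly what mount(8) invokes on the user's behalf:

      # mount.zfs tank/home /mnt/home
      # mount -t zfs tank/home /mnt/home    # the usual front end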

+

+

+
+
+

+
+
+
Ignore bad or sloppy mount options.
+
+
Do a fake mount; do not perform the mount operation.
+
+
Do not update the /etc/mtab file.
+
+
Increase verbosity.
+
+
Print the usage message.
+
+
This flag sets the SELinux context for all files in the filesystem under + that mountpoint.
+
+
This flag sets the SELinux context for the filesystem being mounted.
+
+
This flag sets the SELinux context for unlabeled files.
+
+
This flag sets the SELinux context for the root inode of the + filesystem.
+
+
This private flag indicates that the dataset has an entry in the + /etc/fstab file.
+
+
This private flag disables extended attributes.
+
+
This private flag enables directory-based extended attributes and, if + appropriate, adds a ZFS context to the selinux system policy.
+
+
This private flag enables system attributed-based extended attributes and, + if appropriate, adds a ZFS context to the selinux system policy.
+
+
Equivalent to xattr.
+
+
This private flag indicates that mount(8) is being called by the + zfs(8) command. +

+
+
+
+
+

+

ZFS conventionally requires that the mountpoint be an empty + directory, but the Linux implementation inconsistently enforces the + requirement.

+

The mount.zfs helper does not mount the contents of + zvols.

+

+
+
+

+
+
/etc/fstab
+
The static filesystem table.
+
/etc/mtab
+
The mounted filesystem table.
+
+
+
+

+

The primary author of mount.zfs is Brian Behlendorf + <behlendorf1@llnl.gov>.

+

This man page was written by Darik Horn + <dajhorn@vanadac.com>.

+
+
+

+

fstab(5), mount(8), zfs(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/vdev_id.8.html b/man/v2.0/8/vdev_id.8.html new file mode 100644 index 000000000..0cdd1e7d4 --- /dev/null +++ b/man/v2.0/8/vdev_id.8.html @@ -0,0 +1,322 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_id — generate user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id -d dev -c config_file -g sas_direct|sas_switch|scsi -m -p phys_per_port
+
+
+

+

vdev_id is a udev helper which parses vdev_id.conf(5) to map a physical path in a storage topology to a channel name. The channel name is combined with a disk enclosure slot number to create an alias that reflects the physical location of the drive. This is particularly helpful when it comes to tasks like replacing failed drives. Slot numbers may also be remapped in case the default numbering is unsatisfactory. The drive aliases will be created as symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating + aliases based on existing udev links in the /dev hierarchy using the + configuration + file keyword. See vdev_id.conf(5) for details.
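A minimal illustrative /etc/zfs/vdev_id.conf for a sas_direct topology (the PCI slot, port numbers, and channel names are placeholders; see vdev_id.conf(5) for the authoritative syntax):

      multipath     no
      topology      sas_direct
      phys_per_port 4
      #       PCI_SLOT  HBA PORT  CHANNEL NAME
      channel 85:00.0   1         A
      channel 85:00.0   0         B

After re-triggering udev (for example with udevadm trigger), the resulting aliases appear under /dev/disk/by-vdev and can be used directly when creating or importing pools.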

+
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ sas_direct and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+ sas_switch
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zdb.8.html b/man/v2.0/8/zdb.8.html new file mode 100644 index 000000000..c68d3ee8c --- /dev/null +++ b/man/v2.0/8/zdb.8.html @@ -0,0 +1,697 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's Manual (smm)ZDB(8)
+
+
+

+

zdb — display zpool debugging and consistency information

+
+
+

+ + + + + +
zdb [-AbcdDFGhikLMPsvXYy] [-e [-V] [-p path ...]] [-I inflight I/Os] [-o var=value]... [-t txg] [-U cache] [-x dumpdir] [poolname[/dataset | objset ID]] [object | range ...]
+
+ + + + + +
zdb [-AdiPv] [-e [-V] [-p path ...]] [-U cache] poolname[/dataset | objset ID] [object | range ...]
+
+ + + + + +
zdb -C [-A] [-U cache]
+
+ + + + + +
zdb -E [-A] word0:word1:...:word15
+
+ + + + + +
zdb -l [-Aqu] device
+
+ + + + + +
zdb -m [-AFLPXY] [-e [-V] [-p path ...]] [-t txg] [-U cache] poolname [vdev [metaslab ...]]
+
+ + + + + +
zdb -O dataset path
+
+ + + + + +
zdb -R [-A] [-e [-V] [-p path ...]] [-U cache] poolname vdev:offset:[<lsize>/]<psize>[:flags]
+
+ + + + + +
zdb -S [-AP] [-e [-V] [-p path ...]] [-U cache] poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. This is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as pool/ (pool name followed by a slash).

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:...:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
+ device
+
Read the vdev labels and L2ARC header from the specified device. zdb -l will return 0 if a valid label was found, 1 if an error occurred, and 2 if no valid labels were found. The presence of an L2ARC header is indicated by a specific sequence (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the number of L2ARC log blocks, zdb -l will return 1. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:[<lsize>/]<psize>[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path ...]
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value ...
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ ...
+
+
+
Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ ...
+
+
+
Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ ...
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
April 14, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zed.8.html b/man/v2.0/8/zed.8.html new file mode 100644 index 000000000..d468b9298 --- /dev/null +++ b/man/v2.0/8/zed.8.html @@ -0,0 +1,456 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+

+
+

+

ZED - ZFS Event Daemon

+

+
+
+

+

zed [-d zedletdir] [-f] [-F] + [-h] [-I] [-L] [-M] [-p pidfile] + [-P path] [-s statefile] [-v] [-V] + [-Z]

+

+
+
+

+

ZED (ZFS Event Daemon) monitors events generated by the ZFS + kernel module. When a zevent (ZFS Event) is posted, ZED will run any + ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) that have been + enabled for the corresponding zevent class.

+

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Run the daemon in the foreground.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+
Read the enabled ZEDLETs from the specified directory.
+
+
Write the daemon's process ID to the specified file.
+
+
Custom $PATH for zedlets to use. Normally zedlets run in a locked-down + environment, with hardcoded paths to the ZFS commands ($ZFS, $ZPOOL, $ZED, + ...), and a hardcoded $PATH. This is done for security reasons. However, + the ZFS test suite uses a custom PATH for its ZFS commands, and passes it + to zed with -P. In short, -P is only to be used by the ZFS test suite; + never use it in production!
+
+
Write the daemon's state to the specified file.
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the "zpool + events -v" command.

+

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory. These can be symlinked or copied from the + installed-zedlets directory; symlinks allow for automatic updates + from the installed ZEDLETs, whereas copies preserve local modifications. As + a security measure, ZEDLETs must be owned by root. They must have execute + permissions for the user, but they must not have write permissions for group + or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they should be + invoked. In particular, a ZEDLET will be invoked for a given zevent if + either its class or subclass string is a prefix of its filename (and is + followed by a non-alphabetic character). As a special case, the prefix + "all" matches all zevents. Multiple ZEDLETs may be invoked for a + given zevent.

+

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + "ZED_".

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner: 1) it is prefixed with "ZEVENT_", 2) it is converted to + uppercase, and 3) each non-alphanumeric character is converted to an + underscore. Some additional environment variables have been defined to + present certain nvpair values in a more convenient form. An incomplete list + of zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as + "seconds nanoseconds" since the Epoch.
+
+
The seconds component of ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The ZFS alias (name-version-release) string used to build the + daemon.
+
+
The ZFS version used to build the daemon.
+
+
The ZFS release used to build the daemon.
+
+

ZEDLETs may need to call other ZFS commands. The installation + paths of the following executables are defined: ZDB, ZED, + ZFS, ZINJECT, and ZPOOL. These variables can be + overridden in the rc file if needed.
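A minimal illustrative ZEDLET (the script name and log path are hypothetical; nvpairs such as the pool name are only present for zevents that carry them) could simply record each event:

      #!/bin/sh
      # all-log.sh - hypothetical ZEDLET; the "all" prefix makes it run for every zevent.
      echo "$(date) eid=${ZEVENT_EID} class=${ZEVENT_SUBCLASS} pool=${ZEVENT_POOL:-n/a}" \
          >> /var/tmp/zed-events.log
      exit 0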

+

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state. +

+
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
+
Terminate the daemon. +

+
+
+
+
+

+

ZED requires root privileges.

+

+
+
+

+

Events are processed synchronously by a single thread. This can + delay the processing of simultaneous zevents.

+

ZEDLETs are killed after a maximum of ten seconds. This can lead + to a violation of a ZEDLET's atomicity assumptions.

+

The ownership and permissions of the enabled-zedlets + directory (along with all parent directories) are not checked. If any of + these directories are improperly owned or permissioned, an unprivileged user + could insert a ZEDLET to be executed as root. The requirement that ZEDLETs + be owned by root mitigates this to some extent.

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Some zevent nvpair types are not handled. These are denoted by + zevent environment variables having a "_NOT_IMPLEMENTED_" + value.

+

Internationalization support via gettext has not been added.

+

The configuration file is not yet implemented.

+

The diagnosis engine is not yet implemented.

+

+
+
+

+

ZED (ZFS Event Daemon) is distributed under the terms of + the Common Development and Distribution License Version 1.0 (CDDL-1.0).

+

Developed at Lawrence Livermore National Laboratory + (LLNL-CODE-403049).

+

+
+
+

+

zfs(8), zpool(8), zpool-events(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-allow.8.html b/man/v2.0/8/zfs-allow.8.html new file mode 100644 index 000000000..97982903f --- /dev/null +++ b/man/v2.0/8/zfs-allow.8.html @@ -0,0 +1,540 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + Delegates ZFS administration permission for the file + systems to non-privileged users.

+
+
+

+ + + + + +
zfs allow [-dglu] user|group[,user|group]... perm|@setname[,perm|@setname]... filesystem|volume
+
+ + + + + +
zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]... filesystem|volume
+
+ + + + + +
zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
+
+ + + + + +
zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
+
+ + + + + +
zfs unallow [-dglru] user|group[,user|group]... [perm|@setname[,perm|@setname]...] filesystem|volume
+
+ + + + + +
zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...] filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of + , + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -gu options is specified, then the argument is interpreted preferentially as the keyword everyone, then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -g or -u options. To specify a group with the same name as a user, use the -g option.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability.
+                                Must also have the 'refreservation' ability to
+                                create a non-sparse volume.
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+hold             subcommand     Allows adding a user hold to a snapshot
+load-key         subcommand     Allows loading and unloading of encryption key
+                                (see 'zfs load-key' and 'zfs unload-key').
+change-key       subcommand     Allows changing an encryption key via
+                                'zfs change-key'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+release          subcommand     Allows releasing a user hold which might
+                                destroy the snapshot
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+projectobjquota  other          Allows accessing any projectobjquota@...
+                                property
+projectquota     other          Allows accessing any projectquota@... property
+projectobjused   other          Allows reading any projectobjused@... property
+projectused      other          Allows reading any projectused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions remain in effect (for example, a permission granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
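As an illustrative sketch of the forms above (the pool, dataset, user, and set names are hypothetical), a delegation might be defined, granted, inspected, and later revoked as follows:
# define a reusable permission set on the pool
zfs allow -s @backup send,snapshot,hold tank
# grant the set, plus mount, to a single user on one file system
zfs allow -u alice @backup,mount tank/home/alice
# review what has been delegated
zfs allow tank/home/alice
# revoke the delegation again
zfs unallow -u alice @backup tank/home/alice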
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-bookmark.8.html b/man/v2.0/8/zfs-bookmark.8.html new file mode 100644 index 000000000..215d9fe7a --- /dev/null +++ b/man/v2.0/8/zfs-bookmark.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's Manual (smm)ZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + Creates a bookmark of the given snapshot.

+
+
+

+
+
+

+
+
zfs bookmark + snapshot|bookmark + newbookmark
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs-send(8) command. +

When creating a bookmark from an existing redaction + bookmark, the resulting bookmark is + a redaction + bookmark.

+

This feature must be enabled to be used. See + zpool-features(5) for details on ZFS feature flags and + the + + feature.

+
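A minimal sketch of the workflow above (pool, dataset, and snapshot names are hypothetical); the bookmark preserves an incremental send source even after the snapshot is destroyed:
zfs snapshot tank/data@monday
zfs bookmark tank/data@monday tank/data#monday
zfs destroy tank/data@monday
# later, send only the changes since the bookmarked point in time
zfs send -i tank/data#monday tank/data@tuesday | zfs receive backup/data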
+
+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-change-key.8.html b/man/v2.0/8/zfs-change-key.8.html new file mode 100644 index 000000000..a2cbea4c5 --- /dev/null +++ b/man/v2.0/8/zfs-change-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + Load, unload, or change the encryption key used to access a + dataset.

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a | filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded, the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a | filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded, the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim --secure if + supported by your hardware, otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
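A hedged sketch of the change-key forms above (dataset names and the key file path are hypothetical):
# change the wrapping passphrase of an encryption root (prompts for the new one)
zfs change-key tank/secure
# switch the same dataset to a raw key stored in a file; raw keys are 32 bytes
zfs change-key -l -o keyformat=raw -o keylocation=file:///etc/zfs/keys/secure.key tank/secure
# make a child encryption root inherit its (encrypted) parent's key again
zfs change-key -i -l tank/secure/child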
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and + + / + + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires + specifying the encryption and keyformat + properties at creation time, along with an optional + keylocation and pbkdf2iters. After + entering an encryption key, the created dataset will become an encryption + root. Any descendant datasets will inherit their encryption key from the + encryption root by default, meaning that loading, unloading, or changing the + key for the encryption root will implicitly do the same for all inheriting + datasets. If this inheritance is not desired, simply supply a + keyformat when creating the child dataset or use + zfs change-key to break an + existing relationship, creating a new encryption root on the child. Note + that the child's keyformat may match that of the parent + while still creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + + property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

zfs-create(8), zfs-set(8), + zfsprops(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-clone.8.html b/man/v2.0/8/zfs-clone.8.html new file mode 100644 index 000000000..d53e261fb --- /dev/null +++ b/man/v2.0/8/zfs-clone.8.html @@ -0,0 +1,290 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-clone — + Creates a clone of the given snapshot.

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]... + snapshot + filesystem|volume
+
+
+

+
+
zfs clone + [-p] [-o + property=value]... + snapshot + filesystem|volume
+
See the + section of zfsconcepts(8) for details. The target + dataset can be located anywhere in the ZFS hierarchy, and is created as + the same type as the original. +
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent. If the target filesystem or volume already exists, the operation completes successfully.
+
+
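An illustrative sketch of the form above (snapshot and dataset names are hypothetical):
zfs snapshot tank/ws@base
zfs clone -p -o compression=lz4 tank/ws@base tank/test/ws-clone
# reverse the clone/origin dependency so the original can later be destroyed
zfs promote tank/test/ws-clone
See zfs-promote(8) for details on promotion.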
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-create.8.html b/man/v2.0/8/zfs-create.8.html new file mode 100644 index 000000000..978861411 --- /dev/null +++ b/man/v2.0/8/zfs-create.8.html @@ -0,0 +1,411 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + Creates a new ZFS file system.

+
+
+

+ + + + + +
zfscreate [-Pnpv] + [-o + property=value]... + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]... + -V size + volume
+
+
+

+
+
zfs create + [-Pnpv] [-o + property=value]... + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have filesystem as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to filesystem due to the use of the -o option.
+
+
Print verbose information about the created dataset.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]... + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically rounded up to the nearest multiple of the blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + + in the + section of zfsprops(8) for more + information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have volume as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to volume due to the use of the -b or -o options, as well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
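A brief sketch of the two creation forms above (pool and dataset names are hypothetical):
# preview a file system creation with machine-parsable output, then create it
zfs create -nP -o compression=lz4 -o quota=100G tank/home/alice
zfs create -o compression=lz4 -o quota=100G tank/home/alice
# create a sparse 10 GiB volume with an 8 KiB block size
zfs create -s -b 8K -V 10G tank/vol/scratch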
+
+

+

ZFS volumes may be used as swap devices. After creating the volume + with the zfs create + -V command set up and enable the swap area using the + mkswap(8) and swapon(8) commands. Do not + swap to a file on a ZFS file system. A ZFS swap file configuration is not + supported.

+
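For the note above, a minimal hedged sketch (volume name and size are hypothetical):
zfs create -V 4G tank/swap
mkswap /dev/zvol/tank/swap
swapon /dev/zvol/tank/swap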
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-destroy.8.html b/man/v2.0/8/zfs-destroy.8.html new file mode 100644 index 000000000..8ff670f53 --- /dev/null +++ b/man/v2.0/8/zfs-destroy.8.html @@ -0,0 +1,368 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + Destroys the given dataset(s), snapshot(s), or + bookmark.

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Force an unmount of any file systems using the + unmount -f command. + This option has no effect on non-file systems or unmounted file + systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]...
+
The given snapshots are destroyed immediately if and only if the ‘zfs destroy’ command without the -d option would have destroyed them. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + ) should be + specified when using a range or comma-separated list to identify + multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
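A cautious sketch of the forms above (dataset, snapshot, and bookmark names are hypothetical); previewing with -nv before destroying is a sensible habit:
# preview, then destroy an inclusive range of snapshots
zfs destroy -nv tank/data@snap1%snap4
zfs destroy tank/data@snap1%snap4
# defer destruction of a snapshot that may still be in use
zfs destroy -d tank/data@weekly
# destroy a bookmark
zfs destroy tank/data#before-upgrade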
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-diff.8.html b/man/v2.0/8/zfs-diff.8.html new file mode 100644 index 000000000..20781799f --- /dev/null +++ b/man/v2.0/8/zfs-diff.8.html @@ -0,0 +1,304 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diff — Display the difference between two snapshots of a given filesystem.

+
+
+

+ + + + + +
zfsdiff [-FHt] + snapshot + snapshot|filesystem
+
+
+

+
+
zfs diff + [-FHt] snapshot + snapshot|filesystem
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are: +
+
-       The path has been removed
++       The path has been created
+M       The path has been modified
+R       The path has been renamed
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
B       Block device
+C       Character device
+/       Directory
+>       Door
+|       Named pipe
+@       Symbolic link
+P       Event port
+=       Socket
+F       Regular file
+
+
+
+
Give more parsable tab-separated output, without header lines and + without arrows.
+
+
Display the path's inode change time as the first column of + output.
+
+
+
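An illustrative invocation of the form above (dataset and snapshot names are hypothetical):
zfs snapshot tank/home@before
# ... modify some files ...
zfs diff -FHt tank/home@before tank/home
# or compare two snapshots of the same file system
zfs diff tank/home@before tank/home@after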
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-get.8.html b/man/v2.0/8/zfs-get.8.html new file mode 100644 index 000000000..11e8d36e0 --- /dev/null +++ b/man/v2.0/8/zfs-get.8.html @@ -0,0 +1,406 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-set — Sets the property or list of properties to the given value(s) for each dataset.

+
+
+

+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+
+

+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Only some properties can be edited. See zfsprops(8) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(8).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this + can be controlled by using the -o option. This + command takes a comma-separated list of properties as described in the + and User Properties sections of + zfsprops(8).

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, and none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(8) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
+
+
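A short sketch combining the three subcommands above (dataset names are hypothetical):
zfs set quota=50G compression=on tank/home/alice
# show only locally-set values, tab-separated and in exact numbers
zfs get -H -p -o name,property,value -s local quota,compression tank/home/alice
# clear the local setting so the value is inherited again
zfs inherit -r compression tank/home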
+
+
+
+

+

zfs-list(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-groupspace.8.html b/man/v2.0/8/zfs-groupspace.8.html new file mode 100644 index 000000000..757ea7a3a --- /dev/null +++ b/man/v2.0/8/zfs-groupspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + Displays space consumed by, and quotas on, each user or + group in the specified filesystem or snapshot.

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping exists. Normal POSIX interfaces (for example, stat(2), ls -l) perform this translation, so the -i option allows the output from zfs userspace to be compared directly with those utilities. However, -i may lead to confusion if some files were created by an SMB user before an SMB-to-POSIX name mapping was established. In such a case, some files will be owned by the SMB entity and some by the POSIX entity. However, the -i option will report that the POSIX entity has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeric ID rather than a name, so the -i (SID to POSIX ID translation), -n (numeric ID), and -t (type) options do not apply.
+
+
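An illustrative sketch of the reporting subcommands above (file system names are hypothetical):
zfs userspace -H -p -o name,used,quota tank/home
zfs groupspace -t posixgroup tank/home
zfs projectspace tank/projects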
+
+

+

zfs-set(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-hold.8.html b/man/v2.0/8/zfs-hold.8.html new file mode 100644 index 000000000..104a537be --- /dev/null +++ b/man/v2.0/8/zfs-hold.8.html @@ -0,0 +1,323 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-hold — Hold a snapshot to prevent it from being removed with the zfs destroy command.

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-rH] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+
+

+
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
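A brief sketch of the hold workflow above (snapshot and tag names are hypothetical):
zfs snapshot tank/data@keep
zfs hold -r backup_job tank/data@keep
zfs holds tank/data@keep
# this destroy fails with EBUSY while the hold is in place
zfs destroy tank/data@keep
zfs release -r backup_job tank/data@keep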
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-inherit.8.html b/man/v2.0/8/zfs-inherit.8.html new file mode 100644 index 000000000..4f0d475e9 --- /dev/null +++ b/man/v2.0/8/zfs-inherit.8.html @@ -0,0 +1,406 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-set — Sets the property or list of properties to the given value(s) for each dataset.

+
+
+

+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+
+

+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Only some properties can be edited. See zfsprops(8) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(8).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this + can be controlled by using the -o option. This + command takes a comma-separated list of properties as described in the + and User Properties sections of + zfsprops(8).

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, and none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(8) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
+
+
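A minimal sketch of clearing properties as described above (dataset names are hypothetical):
zfs set atime=off tank/projects
# revert to the inherited or default value
zfs inherit -r atime tank/projects
# fall back to the received value, if one exists
zfs inherit -S refquota tank/projects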
+
+
+
+

+

zfs-list(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-jail.8.html b/man/v2.0/8/zfs-jail.8.html new file mode 100644 index 000000000..7324a4c57 --- /dev/null +++ b/man/v2.0/8/zfs-jail.8.html @@ -0,0 +1,312 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jail — Attaches and detaches ZFS filesystems from FreeBSD jails. A ZFS dataset can be attached to a jail by using the "zfs jail" subcommand. You cannot attach a dataset to one jail and the children of the same dataset to another jail. You also cannot attach the root file system of the jail or any dataset which needs to be mounted before the zfs rc script is run inside the jail, as it would be attached unmounted until it is mounted from the rc script inside the jail. To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail. See jail(8) for information on how to allow mounting ZFS datasets from within a jail.

+

A ZFS dataset can be detached from a jail + using the "zfs unjail" subcommand.

+

After a dataset is attached to a jail and the jailed property is + set, a jailed file system cannot be mounted outside the jail, since the jail + administrator might have set the mount point to an unacceptable value.

+
+
+

+ + + + + +
zfsjail + jailid|jailname + filesystem
+
+ + + + + +
zfsunjail + jailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid filesystem
+
+

Attaches the specified filesystem to the jail identified by JID jailid. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

+

See jail(8) for more information on managing + jails and configuring the parameters above.

+
+
zfs unjail + jailid filesystem
+
+

Detaches the specified filesystem from + the jail identified by JID jailid.

+
+
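An illustrative sketch (the jail name and dataset are hypothetical; the jail is assumed to have the allow.mount, allow.mount.zfs, and enforce_statfs parameters described above):
zfs set jailed=on tank/jails/www/data
zfs jail www tank/jails/www/data
# the dataset can now be mounted and managed from inside the jail
zfs unjail www tank/jails/www/data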
+
+
+

+

zfsprops(8)

+
+
+ + + + + +
December 9, 2019FreeBSD
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-list.8.html b/man/v2.0/8/zfs-list.8.html new file mode 100644 index 000000000..eeaf482b5 --- /dev/null +++ b/man/v2.0/8/zfs-list.8.html @@ -0,0 +1,370 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-list — Lists the property information for the given datasets in tabular form.

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
+
+

+
+
zfs + list + [-r|-d + depth] [-Hp] + [-o + property[,property]...] + [-s property]... + [-S property]... + [-t + type[,type]...] + [filesystem|volume|snapshot]...
+
If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnapshots pool property is on (the default is off), or if the -t snapshot or -t all options are specified. The following fields are displayed: name, used, available, referenced, mountpoint.
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ property
+
Same as the -s option, but sorts by property + in descending order.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command + line.
+
+ property
+
A property for sorting the output by column in ascending order based + on the value of the property. The property must be one of the + properties described in the + + section of zfsprops(8) or the value + name to sort by the dataset name. Multiple + properties can be specified at one time using multiple + -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior + of zfs list is + preserved.

+
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + , + or all. For example, specifying + -t snapshot displays only + snapshots.
+
+
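A short sketch of the listing forms above (pool and dataset names are hypothetical):
zfs list -r -t filesystem,volume tank
# scripting-friendly output, sorted by space used, one level deep
zfs list -H -p -o name,used,available -s used -d 1 tank
zfs list -r -t snapshot tank/home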
+
+
+
+

+

zfs-get(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-load-key.8.html b/man/v2.0/8/zfs-load-key.8.html new file mode 100644 index 000000000..5eebc99ff --- /dev/null +++ b/man/v2.0/8/zfs-load-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + Load, unload, or change the encryption key used to access a + dataset.

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a | filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded, the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a | filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded, the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim --secure if + supported by your hardware, otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
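A hedged sketch of the key loading forms above (dataset names and the key file path are hypothetical):
# verify a key without loading it, then load it and mount the dataset
zfs load-key -n -L file:///etc/zfs/keys/secure.key tank/secure
zfs load-key tank/secure
zfs mount tank/secure
# or load every key in every imported pool
zfs load-key -a
# unmount and unload when finished
zfs unmount tank/secure
zfs unload-key tank/secure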
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and + + / + + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires + specifying the encryption and keyformat + properties at creation time, along with an optional + keylocation and pbkdf2iters. After + entering an encryption key, the created dataset will become an encryption + root. Any descendant datasets will inherit their encryption key from the + encryption root by default, meaning that loading, unloading, or changing the + key for the encryption root will implicitly do the same for all inheriting + datasets. If this inheritance is not desired, simply supply a + keyformat when creating the child dataset or use + zfs change-key to break an + existing relationship, creating a new encryption root on the child. Note + that the child's keyformat may match that of the parent + while still creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + + property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

zfs-create(8), zfs-set(8), + zfsprops(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-mount-generator.8.html b/man/v2.0/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..1e8fc712c --- /dev/null +++ b/man/v2.0/8/zfs-mount-generator.8.html @@ -0,0 +1,395 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+

+

+
+

+

zfs-mount-generator - generates systemd mount units for ZFS

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+

+
+
+

+

zfs-mount-generator implements the Generators Specification + of systemd(1), and is called during early boot to generate + systemd.mount(5) units for automatically mounted datasets. Mount + ordering and dependencies are created for all tracked pools (see below).

+

+
+

+

If the dataset is an encryption root, a service that loads the + associated key (either from file or through a systemd-ask-password(1) + prompt) will be created. This service RequiresMountsFor the path of + the key (if file-based) and also copies the mount unit's After, + Before and Requires. All mount units of encrypted datasets add + the key-load service for their encryption root to their Wants and + After. The service will not be Wanted or Required by + local-fs.target directly, and so will only be started manually or as + a dependency of a started mount unit.

+

+
+
+

+

mount unit's Before -> key-load service (if any) -> + mount unit -> mount unit's After

+

It is worth noting that when a mount unit is activated, it activates all available mount units for parent paths to its mountpoint, i.e. activating the mount unit for /tmp/foo/1/2/3 automatically activates all available mount units for /tmp, /tmp/foo, /tmp/foo/1, and /tmp/foo/1/2. This is true for any combination of mount units from any sources, not just ZFS.

+

+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of the command

+

+
zfs list -H -o + name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore +

+
+

for datasets that should be mounted by systemd, should be kept + separate from the pool, at

+

+
@sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

The cache file, if writeable, will be kept synchronized with the + pool state by the ZEDLET

+

+
history_event-zfs-list-cacher.sh .
+
+
+

+

The behavior of the generator script can be influenced by the + following dataset properties:

+

+
+
+
If a dataset has mountpoint set and canmount is not + off, a mount unit will be generated. Additionally, if + canmount is on, local-fs.target will gain a + dependency on the mount unit. +

This behavior is equal to the auto and noauto + legacy mount options, see systemd.mount(5).

+

Encryption roots always generate a key-load service, even for + canmount=off.

+
+
+
Space-separated list of mountpoints to require to be mounted for this + mount unit
+
+
The mount unit and associated key-load service will be ordered before this + space-separated list of units.
+
+
The mount unit and associated key-load service will be ordered after this + space-separated list of units.
+
+
Space-separated list of units that will gain a Wants dependency on + this mount unit. Setting this property implies noauto.
+
+
Space-separated list of units that will gain a Requires dependency + on this mount unit. Setting this property implies noauto.
+
+
Toggles between a Wants and Requires type of dependency + between the mount unit and local-fs.target, if noauto isn't + set or implied. +

on: Mount will be WantedBy local-fs.target

+

off: Mount will be Before and RequiredBy + local-fs.target

+

unset: Mount will be Before and WantedBy + local-fs.target

+
+
+
If set to on, do not generate a mount unit for this dataset. +

+
+
+
+See also systemd.mount(5) +
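For instance (the dataset name is hypothetical), automatic mount unit generation can be suppressed for a single dataset by setting the corresponding property:
# zfs set org.openzfs.systemd:ignore=on tank/scratch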

+
+
+
+

+

To begin, enable tracking for the pool:

+

+
touch + @sysconfdir@/zfs/zfs-list.cache/POOLNAME
+

Then, enable the tracking ZEDLET:

+

+
ln -s + "@zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh" + "@sysconfdir@/zfs/zed.d" +

systemctl enable zfs-zed.service

+

systemctl restart zfs-zed.service

+
+

Force the running of the ZEDLET by setting a monitored property, + e.g. canmount, for at least one dataset in the pool:

+

+
zfs set canmount=on DATASET
+

This forces an update to the stale cache file.

+

To test the generator output, run

+

+
@systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator . .
+

This will generate units and dependencies in /tmp/zfs-mount-generator for you to inspect. The second and third arguments are ignored.

+

If you're satisfied with the generated units, instruct systemd to + re-run all generators:

+

+
systemctl daemon-reload
+

+

+
+
+

+

zfs(5) zfs-events(5) zed(8) zpool(5) + systemd(1) systemd.target(5) systemd.special(7) + systemd.mount(7)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-mount.8.html b/man/v2.0/8/zfs-mount.8.html new file mode 100644 index 000000000..73c14ca09 --- /dev/null +++ b/man/v2.0/8/zfs-mount.8.html @@ -0,0 +1,339 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountManage + mount state of ZFS file systems.

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-fu] + -a | + filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] -a | + filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to + , the + filesystem should be instead mounted using mount(8). +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(8) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has a + + of + + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
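A few illustrative invocations (the dataset names are hypothetical):
# zfs mount                    (list currently mounted ZFS file systems)
# zfs mount -l tank/secure     (load the encryption key, then mount)
# zfs mount -a                 (mount all available ZFS file systems)
# zfs unmount -u tank/secure   (unmount and unload its encryption key)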
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-program.8.html b/man/v2.0/8/zfs-program.8.html new file mode 100644 index 000000000..7e289e4f0 --- /dev/null +++ b/man/v2.0/8/zfs-program.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + executes ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at:

+ +

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.
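For example, assuming scripts named list_snaps.lua and snap_cleanup.lua exist (the script names and pool are hypothetical), a read-only program can be run with -n, and a modifying one with explicit resource limits:
# zfs program -n tank ./list_snaps.lua tank/data
# zfs program -t 20000000 -m 33554432 tank ./snap_cleanup.lua tank/data
Any extra arguments (here tank/data) are passed to the script via the argv array described below.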

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error; the details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program cannot change on-disk state by calling functions from the zfs.sync submodule. The program can be used to gather information such as properties and to determine whether changes would succeed (zfs.check.*). Without this flag, all pending changes must be synced to disk before a channel program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MB, and can be set to a maximum of 100 MB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
+
+

+

A channel program can be invoked either from the command line, or + via a library call to + ().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libZFS interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libZFS interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
+
return ret0, ret1, ret2, ...
+
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
+
error: "error string, including Lua stack trace"
+
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

ZFS API functions do not generate Fatal Errors when correctly invoked; they return an error code, and the channel program continues executing. See the ZFS API section below for function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libZFS interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+
+
string -> string
+number -> int64
+boolean -> boolean_value
+nil -> boolean (no value)
+table -> nvlist
+
+

Likewise, table keys are replaced by string equivalents as + follows:

+
+
string -> no change
+number -> signed decimal string ("%lld")
+boolean -> "true" | "false"
+
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+
+
assert                  rawlen
+collectgarbage          rawget
+error                   rawset
+getmetatable            select
+ipairs                  setmetatable
+next                    tonumber
+pairs                   tostring
+rawequal                type
+
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+
+
dofile
+loadfile
+load
+pcall
+print
+xpcall
+
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
+
zfs.sync.destroy("rpool@snap")
+
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
+
zfs.sync.destroy({[1]="rpool@snap", defer=true})
+
+

The Lua language allows curly braces to be used in place of parentheses as syntactic sugar for this calling convention:

+
+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
+
errno = zfs.sync.promote(dataset)
+
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+
+
EPERM     ECHILD      ENODEV      ENOSPC
+ENOENT    EAGAIN      ENOTDIR     ESPIPE
+ESRCH     ENOMEM      EISDIR      EROFS
+EINTR     EACCES      EINVAL      EMLINK
+EIO       EFAULT      ENFILE      EPIPE
+ENXIO     ENOTBLK     EMFILE      EDOM
+E2BIG     EBUSY       ENOTTY      ERANGE
+ENOEXEC   EEXIST      ETXTBSY     EDQUOT
+EBADF     EXDEV       EFBIG
+
+
+
+

+

For detailed descriptions of the exact behavior of any zfs + administrative operations, see the main zfs(8) manual + page.

+
+
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running: +
+
  dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
+
+

msg (string)

+
Debug message to be printed.
+
+
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns false, but + zfs.exists("somepool/fs_that_may_exist") will error. +

dataset (string)

+
Dataset to check for existence. Must be in the + target pool.
+
+
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like guid) may wrap around and appear negative. +

dataset (string)

+
Filesystem or snapshot path to retrieve properties + from.
+

property (string)

+
Name of property to retrieve. All filesystem, + snapshot and volume properties are supported except for 'mounted' and + 'iscsioptions.' Also supports the 'written@snap' and 'written#bookmark' + properties and the '<user|group><quota|used>@id' properties, + though the id must be in numeric form.
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

dataset (string)

+
Filesystem or snapshot to be destroyed.
+

[optional] defer (boolean)

+
Valid only for destroying snapshots. If set to + true, and the snapshot has holds or clones, allows the snapshot to be + marked for deferred deletion rather than failing.
+
+
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The ‘zfs inherit + -S’ option has not been implemented. Returns 0 on + success, or a nonzero error code if the property could not be cleared. +

dataset (string)

+
Filesystem or snapshot containing the property + to clear.
+

property (string)

+
The property to clear. Allowed properties are + the same as those for the zfs + inherit command.
+
+
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

dataset (string)

+
Clone to be promoted.
+
+
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

filesystem (string)

+
Filesystem to rollback.
+
+
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

dataset (string)

+
The dataset where the property will be + set.
+

property (string)

+
The property to set. Only user properties are + supported.
+

value (string)

+
The value of the property to be set.
+
+
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

dataset (string)

+
Name of snapshot to create.
+
+
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

source (string)

+
Full name of the existing snapshot or + bookmark.
+

newbookmark (string)

+
Full name of the new bookmark.
+
+
+
+
+
For each function in the zfs.sync submodule, there is a corresponding + zfs.check function which performs a "dry run" of the same + operation. Each takes the same arguments as its zfs.sync counterpart and + returns 0 if the operation would succeed, or a non-zero error code if it + would fail, along with any other error details. That is, each has the same + behavior as the corresponding sync function except for actually executing + the requested change. For example, + + returns 0 if + + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
+
 
+
+
 
+
+
 
+
+
 
+
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
+
Iterate through all clones of the given snapshot. +

snapshot (string)

+
Must be a valid snapshot path in the current + pool.
+
+
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

dataset (string)

+
Must be a valid filesystem or volume.
+
+
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

snapshot (string)

+
Must be a valid snapshot.
+
+
+
An alias for zfs.list.user_properties (see relevant entry). +

dataset (string)

+
Must be a valid filesystem, snapshot, or + volume.
+
+
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

dataset (string)

+
Must be a valid filesystem, snapshot, or + volume.
+
+
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

dataset (string)

+
Must be a valid filesystem, snapshot or + volume.
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
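Assuming the script above were saved as destroy_recursive.lua (the file name is hypothetical), it could be invoked with the dataset to destroy supplied as argv[1]:
# zfs program tank destroy_recursive.lua tank/somefs
On completion, the returned table of succeeded and failed datasets is printed as the channel program's return value.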
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
January 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-project.8.html b/man/v2.0/8/zfs-project.8.html new file mode 100644 index 000000000..0d0d77862 --- /dev/null +++ b/man/v2.0/8/zfs-project.8.html @@ -0,0 +1,366 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + List, set, or clear project ID and/or inherit flag on the + file(s) or directories.

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory...
+
+ + + + + +
zfsproject -C + [-kr] + file|directory...
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory...
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory...
+
+
+

+
+
zfs project + [-d|-r] + file|directory...
+
List project identifier (ID) and inherit flag of file(s) or directories. +
+
+
Show the project ID and inherit flag of the directory itself, not of its children. This overrides a previously specified -r option.
+
+
Show subdirectories recursively. This overrides a previously specified -d option.
+
+
+
zfs project + -C [-kr] + file|directory...
+
Clear project inherit flag and/or ID on the file(s) or directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will be reset to zero.
+
+
Clear on subdirectories recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory...
+
Check the project ID and inherit flag on the file(s) or directories, reporting entries that lack the project inherit flag or whose project IDs differ from the value specified via the -p option, or, if no value is given, from the target directory's project ID.
+
+
Print file names terminated by a NUL character instead of the default newline, like "find -print0".
+
+
Check the project ID and inherit flag of the directory itself, not of its children. This overrides a previously specified -r option.
+
+
Specify the reference ID to compare against the project IDs of the target file(s) or directories. If not specified, the project ID of the target (top) directory is used as the reference.
+
+
Check subdirectories recursively. This overrides a previously specified -d option.
+
+
+
zfs project + [-p id] + [-rs] + file|directory...
+
Set project ID and/or inherit flag on the file(s) or directories. +
+
+
Set the project ID of the file(s) or directories to the given value.
+
+
Set on subdirectories recursively.
+
+
Set the project inherit flag on the given file(s) or directories. It is usually used together with the -r option to set up a tree quota on a directory target. When setting up a tree quota, the directory's project ID is applied to all of its descendants by default, unless the project ID is specified explicitly via the -p option.
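As an illustration (the path and project ID are hypothetical), a directory tree can be tagged for a project and then checked:
# zfs project -p 100 -rs /tank/data/projectX
# zfs project -c -r /tank/data/projectX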
+
+
+
+
+
+

+

zfs-projectspace(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-projectspace.8.html b/man/v2.0/8/zfs-projectspace.8.html new file mode 100644 index 000000000..4ea2ea4e1 --- /dev/null +++ b/man/v2.0/8/zfs-projectspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + Displays space consumed by, and quotas on, each user or + group in the specified filesystem or snapshot.

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is numeric rather than a name. It therefore needs neither the -i option for SID to POSIX ID translation, nor -n for numeric IDs, nor -t for types.
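For example (the dataset names are hypothetical):
# zfs userspace -o name,used,quota tank/home
# zfs groupspace -t posixgroup tank/home
# zfs projectspace tank/projects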
+
+
+
+

+

zfs-set(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-promote.8.html b/man/v2.0/8/zfs-promote.8.html new file mode 100644 index 000000000..2d92e93c9 --- /dev/null +++ b/man/v2.0/8/zfs-promote.8.html @@ -0,0 +1,280 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + Promotes a clone file system to no longer be dependent on + its origin snapshot.

+
+
+

+ + + + + +
zfspromote + clone-filesystem
+
+
+

+
+
zfs promote + clone-filesystem
+
The promote command makes it possible to destroy + the file system that the clone was created from. The clone parent-child + dependency relationship is reversed, so that the origin file system + becomes a clone of the specified file system. +
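A minimal sketch (the dataset names are hypothetical) of swapping a clone with its origin:
# zfs clone tank/prod@snap1 tank/test
# zfs promote tank/test      (tank/prod is now a clone of tank/test)
# zfs destroy tank/prod      (the former origin can now be destroyed, if desired)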

The snapshot that was cloned, and any snapshots previous to + this snapshot, are now owned by the promoted clone. The space they use + moves from the origin file system to the promoted clone, so enough space + must be available to accommodate these snapshots. No new space is + consumed by this operation, but the space accounting is adjusted. The + promoted clone must not have any conflicting snapshot names of its own. + The zfs-rename(8) subcommand can be used to rename any + conflicting snapshots.

+
+
+
+
+

+

zfs-clone(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-receive.8.html b/man/v2.0/8/zfs-receive.8.html new file mode 100644 index 000000000..bc0a46b6d --- /dev/null +++ b/man/v2.0/8/zfs-receive.8.html @@ -0,0 +1,557 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + Creates a snapshot whose contents are as specified in the + stream provided on standard input.

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +
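For instance (the pool and dataset names are hypothetical), a full stream and a later incremental update might be received as:
# zfs send tank/data@snap1 | zfs receive backup/data
# zfs send -i @snap1 tank/data@snap2 | zfs receive backup/data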

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set ( + -o ) or inherited ( -x ) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w ) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + during + a receive. This is because the receive process itself is already using + stdin for the send stream. Instead, the property can be overridden after + the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as though zfs inherit property was run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with -c then overriding the compression property will have no effect on received data, but the compression property will be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+
+

Note that [-o + keylocation=prompt] may + not be specified here, since stdin is already being utilized for the + send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying [-x + encryption] to force the property to be + inherited. Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(5) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
+
+

+

zfs-send(8) zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-recv.8.html b/man/v2.0/8/zfs-recv.8.html new file mode 100644 index 000000000..1b17d7af1 --- /dev/null +++ b/man/v2.0/8/zfs-recv.8.html @@ -0,0 +1,557 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + Creates a snapshot whose contents are as specified in the + stream provided on standard input.

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set ( + -o ) or inherited ( -x ) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w ) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + during + a receive. This is because the receive process itself is already using + stdin for the send stream. Instead, the property can be overridden after + the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as though zfs inherit property was run on any descendant datasets that have this property set on the sending system.

If the send stream was sent with -c then overriding the compression property will have no effect on received data, but the compression property will be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+
+

Note that [-o + keylocation=prompt] may + not be specified here, since stdin is already being utilized for the + send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying [-x + encryption] to force the property to be + inherited. Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(5) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
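As a sketch (the dataset names are hypothetical, and both pools are assumed to be on the same host), an interrupted receive that was saved with -s can be resumed using the stored token, or abandoned:
# zfs send -t "$(zfs get -H -o value receive_resume_token backup/data)" | zfs receive -s backup/data
# zfs receive -A backup/data     (instead discards the partial state)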
+
+
+
+

+

zfs-send(8) zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-redact.8.html b/man/v2.0/8/zfs-redact.8.html new file mode 100644 index 000000000..f0ad8515f --- /dev/null +++ b/man/v2.0/8/zfs-redact.8.html @@ -0,0 +1,745 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + Generate a send stream, which may be of a filesystem, and + may be incremental from a bookmark.

+
+
+

+ + + + + +
zfssend [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPRcenpvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfssend [-Pnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark redaction_snapshot...
+
+
+

+
+
zfs send + [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compression=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold command), and indicating to + zfs receive that the holds be applied to the dataset + on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPRcenpvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from the snapshot being sent that aren't included in the redaction list contained in the bookmark specified by the --redact (or -d) flag. The resulting send stream is said to be redacted with respect to the snapshots the bookmark specified by the --redact flag was created with. The bookmark must have been created by running zfs redact on the snapshot being sent.

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+

1. To receive, as a clone, an incremental send from the + original snapshot to one of the snapshots it was redacted with respect + to. In this case, the stream will produce a valid dataset when received + because all blocks that were redacted in the parent are guaranteed to be + present in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.

+

2. To receive an incremental send from the original snapshot + to something redacted with respect to a subset of the set of snapshots + the initial snapshot was redacted with respect to. In this case, each + block that was redacted in the original is still redacted (redacting + with respect to additional snapshots causes less data to be redacted + (because the snapshots define what is permitted, and everything else is + redacted)). This use case will produce a new redacted snapshot.

+

3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

5. To receive a full send as a clone of the redacted snapshot. + Since the stream is a full send, it definitionally contains all the data + needed to create a new dataset. This use case will either produce a + normal snapshot or a redacted one, depending on whether the full send + stream was redacted.

+

These restrictions are detected and enforced by zfs + receive; a redacted send stream will contain the list of snapshots + that the stream is redacted with respect to. These are stored with the + redacted snapshot, and are used to detect and correctly handle the cases + above. Note that for technical reasons, raw sends and redacted sends + cannot be combined at this time.

+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
+
zfs send + [-Pnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
+ --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot...
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for more information on the purpose + of this operation. If a redact operation fails partway through (due to an + error or a system failure), the redaction can be resumed by rerunning the + same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs redact command + with a parent snapshot, a bookmark to be created, and a number of redaction + snapshots. These redaction snapshots must be descendants of the parent + snapshot, and they should modify data that is considered sensitive in some + way. Any blocks of data modified by all of the redaction snapshots will be + listed in the redaction bookmark, because it represents the truly sensitive + information. When it comes to the send step, the send process will not send + the blocks listed in the redaction bookmark, instead replacing them with + REDACT records. When received on the target system, this will create a + redacted dataset, missing the data that corresponds to the blocks in the + redaction bookmark on the sending system. The incremental send streams from + the original parent to the redaction snapshots can then also be received on + the target system, and this will produce a complete snapshot that can be + used normally. Incrementals from one snapshot on the parent filesystem and + another can also be done by sending from the redaction bookmark, rather than + the snapshots themselves.

+

In order to make the purpose of the feature clearer, an example is provided. Consider a zfs filesystem containing four files. These files represent information for an online shopping service. One file contains a list of usernames and passwords, another contains purchase histories, a third contains click tracking data, and a fourth contains user preferences. The owner of this data wants to make it available for their development teams to test against, and their market research teams to do analysis on. The development teams need information about user preferences and the click tracking data, while the market research teams need information about purchase histories and user preferences. Neither needs access to the usernames and passwords. However, because all of this data is stored in one ZFS filesystem, it must all be sent and received together. In addition, the owner of the data wants to take advantage of features like compression, checksumming, and snapshots, so they do want to continue to use ZFS to store and transmit their data. Redaction can help them do so. First, they would make two clones of a snapshot of the data on the source. In one clone, they create the setup they want their market research team to see; they delete the usernames and passwords file, and overwrite the click tracking data with dummy information. In another, they create the setup they want the development teams to see, by replacing the passwords with fake information and replacing the purchase histories with randomly generated ones. They would then create a redaction bookmark on the parent snapshot, using snapshots on the two clones as redaction snapshots. The parent can then be sent, redacted, to the target server where the research and development teams have access. Finally, incremental sends from the parent snapshot to each of the clones can be sent to and received on the target server; these snapshots are identical to the ones on the source, and are ready to be used, while the parent snapshot on the target contains none of the username and password data present on the source, because it was removed by the redacted send operation.

+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-release.8.html b/man/v2.0/8/zfs-release.8.html new file mode 100644 index 000000000..ed50a4281 --- /dev/null +++ b/man/v2.0/8/zfs-release.8.html @@ -0,0 +1,323 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdHold a + snapshot to prevent it being removed with the zfs destroy + command.

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot...
+
+ + + + + +
zfsholds [-rH] + snapshot...
+
+ + + + + +
zfsrelease [-r] + tag snapshot...
+
+
+

+
+
zfs hold + [-r] tag + snapshot...
+
Adds a single reference, named with the tag + argument, to the specified snapshot or snapshots. Each snapshot has its + own tag namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.
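For example, to place a recursive hold named keep on a hypothetical snapshot and then list it:
# zfs hold -r keep tank/home@snap
# zfs holds -r tank/home@snap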

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot...
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot...
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return + EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-rename.8.html b/man/v2.0/8/zfs-rename.8.html new file mode 100644 index 000000000..aa384891a --- /dev/null +++ b/man/v2.0/8/zfs-rename.8.html @@ -0,0 +1,333 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + Renames the given dataset (filesystem or + snapshot).

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's mountpoint property is set to legacy or none, the file system is not unmounted even if this option is not given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
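For example, to rename a snapshot across all descendent datasets (dataset and snapshot names are illustrative):
# zfs rename -r tank/home@yesterday @2daysago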
+
+
+
+ + + + + +
September 1, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-rollback.8.html b/man/v2.0/8/zfs-rollback.8.html new file mode 100644 index 000000000..1517d5d94 --- /dev/null +++ b/man/v2.0/8/zfs-rollback.8.html @@ -0,0 +1,290 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + Roll back the given dataset to a previous + snapshot.

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+
+
zfs rollback + [-Rfr] snapshot
+
When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option. +

The -rR options do not recursively + destroy the child snapshots of a recursive snapshot. Only direct + snapshots of the specified filesystem are destroyed by either of these + options. To completely roll back a recursive snapshot, you must rollback + the individual child snapshots.
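For example, to roll a hypothetical file system back past several newer snapshots, destroying them in the process:
# zfs rollback -r tank/home@monday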

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones + of those snapshots.
+
+
Used with the -R option to force an unmount of + any clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-send.8.html b/man/v2.0/8/zfs-send.8.html new file mode 100644 index 000000000..70a8f1f04 --- /dev/null +++ b/man/v2.0/8/zfs-send.8.html @@ -0,0 +1,745 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + Generate a send stream, which may be of a filesystem, and + may be incremental from a bookmark.

+
+
+

+ + + + + +
zfssend [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPRcenpvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-Penv] + -t receive_resume_token
+
+ + + + + +
zfssend [-Pnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark redaction_snapshot...
+
+
+

+
+
zfs send + [-DLPRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
+ --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.
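As an illustrative sketch (pool, dataset, and host names are hypothetical), an incremental replication stream could be sent to another machine with:
# zfs send -R -i tank/fs@snapA tank/fs@snapB | ssh backuphost zfs receive -F backup/fs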

+
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compression=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
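For example, a raw backup of a hypothetical encrypted dataset could be stored on an untrusted machine with:
# zfs send -w tank/secure@snap | ssh backuphost zfs receive backup/secure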
+
+ --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold command), and indicating to + zfs receive that the holds be applied to the dataset + on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPRcenpvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
+ --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(5) for details on ZFS feature + flags and the large_blocks feature.
+
+ --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
+ --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
+ --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
+ --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(5) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
+ --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
+ --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPcenpv] +
+ [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from the snapshot being sent that aren't included in the redaction list contained in the bookmark specified by the --redact (or -d) flag. The resulting send stream is said to be redacted with respect to the snapshots the bookmark specified by the --redact flag was created with. The bookmark must have been created by running zfs redact on the snapshot being sent.

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+

1. To receive, as a clone, an incremental send from the + original snapshot to one of the snapshots it was redacted with respect + to. In this case, the stream will produce a valid dataset when received + because all blocks that were redacted in the parent are guaranteed to be + present in the child's send stream. This use case will produce a normal + snapshot, which can be used just like other snapshots.

+

2. To receive an incremental send from the original snapshot + to something redacted with respect to a subset of the set of snapshots + the initial snapshot was redacted with respect to. In this case, each + block that was redacted in the original is still redacted (redacting + with respect to additional snapshots causes less data to be redacted + (because the snapshots define what is permitted, and everything else is + redacted)). This use case will produce a new redacted snapshot.

+

3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.

+

5. To receive a full send as a clone of the redacted snapshot. + Since the stream is a full send, it definitionally contains all the data + needed to create a new dataset. This use case will either produce a + normal snapshot or a redacted one, depending on whether the full send + stream was redacted.

+

These restrictions are detected and enforced by zfs + receive; a redacted send stream will contain the list of snapshots + that the stream is redacted with respect to. These are stored with the + redacted snapshot, and are used to detect and correctly handle the cases + above. Note that for technical reasons, raw sends and redacted sends + cannot be combined at this time.

+
+
zfs send + [-Penv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs receive -s for more details.
+
zfs send + [-Pnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
+ --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot...
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for more information on the purpose + of this operation. If a redact operation fails partway through (due to an + error or a system failure), the redaction can be resumed by rerunning the + same command.
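As an illustrative sketch (dataset, bookmark, and snapshot names are hypothetical), a redaction bookmark could be created from two clone snapshots and then used for a redacted send:
# zfs redact tank/fs@parent book1 tank/fs-dev@snap1 tank/fs-research@snap1
# zfs send --redact book1 tank/fs@parent | ssh target zfs receive pool/fs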
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs redact command + with a parent snapshot, a bookmark to be created, and a number of redaction + snapshots. These redaction snapshots must be descendants of the parent + snapshot, and they should modify data that is considered sensitive in some + way. Any blocks of data modified by all of the redaction snapshots will be + listed in the redaction bookmark, because it represents the truly sensitive + information. When it comes to the send step, the send process will not send + the blocks listed in the redaction bookmark, instead replacing them with + REDACT records. When received on the target system, this will create a + redacted dataset, missing the data that corresponds to the blocks in the + redaction bookmark on the sending system. The incremental send streams from + the original parent to the redaction snapshots can then also be received on + the target system, and this will produce a complete snapshot that can be + used normally. Incrementals from one snapshot on the parent filesystem and + another can also be done by sending from the redaction bookmark, rather than + the snapshots themselves.

+

In order to make the purpose of the feature clearer, an example is provided. Consider a zfs filesystem containing four files. These files represent information for an online shopping service. One file contains a list of usernames and passwords, another contains purchase histories, a third contains click tracking data, and a fourth contains user preferences. The owner of this data wants to make it available for their development teams to test against, and their market research teams to do analysis on. The development teams need information about user preferences and the click tracking data, while the market research teams need information about purchase histories and user preferences. Neither needs access to the usernames and passwords. However, because all of this data is stored in one ZFS filesystem, it must all be sent and received together. In addition, the owner of the data wants to take advantage of features like compression, checksumming, and snapshots, so they do want to continue to use ZFS to store and transmit their data. Redaction can help them do so. First, they would make two clones of a snapshot of the data on the source. In one clone, they create the setup they want their market research team to see; they delete the usernames and passwords file, and overwrite the click tracking data with dummy information. In another, they create the setup they want the development teams to see, by replacing the passwords with fake information and replacing the purchase histories with randomly generated ones. They would then create a redaction bookmark on the parent snapshot, using snapshots on the two clones as redaction snapshots. The parent can then be sent, redacted, to the target server where the research and development teams have access. Finally, incremental sends from the parent snapshot to each of the clones can be sent to and received on the target server; these snapshots are identical to the ones on the source, and are ready to be used, while the parent snapshot on the target contains none of the username and password data present on the source, because it was removed by the redacted send operation.

+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-set.8.html b/man/v2.0/8/zfs-set.8.html new file mode 100644 index 000000000..04bf0d6c1 --- /dev/null +++ b/man/v2.0/8/zfs-set.8.html @@ -0,0 +1,406 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setSets the + property or list of properties to the given value(s) for each + dataset.

+
+
+

+ + + + + +
zfsset + property=value + [property=value]... + filesystem|volume|snapshot...
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot...
+
+
+

+
+
zfs set + property=value + [property=value]... + filesystem|volume|snapshot...
+
Only some properties can be edited. See zfsprops(8) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(8).
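For example, setting two properties in one invocation on a hypothetical dataset, using a human-readable suffix for the quota:
# zfs set quota=50G compression=lz4 tank/home/user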
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]...] + [-s + source[,source]...] + [-t + type[,type]...] + all | + property[,property]... + [filesystem|volume|snapshot|bookmark]...
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
    name      Dataset name
+    property  Property name
+    value     Property value
+    source    Property source  local, default, inherited,
+              temporary, received or none (-).
+
+

All columns are displayed by default, though this can be controlled by using the -o option. This command takes a comma-separated list of properties as described in the Native Properties and User Properties sections of zfsprops(8).

+

The value all can be used to display all + properties that apply to the given dataset's type (filesystem, volume, + snapshot, or bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, and none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot...
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(8) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value if one exists; otherwise + operate as if the -S option was not + specified.
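For example, to clear a locally set compression value on a hypothetical dataset and its children so the value is inherited again:
# zfs inherit -r compression tank/home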
+
+
+
+
+
+

+

zfs-list(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-share.8.html b/man/v2.0/8/zfs-share.8.html new file mode 100644 index 000000000..0435616d2 --- /dev/null +++ b/man/v2.0/8/zfs-share.8.html @@ -0,0 +1,300 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareShares + and unshares available ZFS filesystems.

+
+
+

+ + + + + +
zfsshare -a | + filesystem
+
+ + + + + +
zfsunshare -a | + filesystem|mountpoint
+
+
+

+
+
zfs share + -a | filesystem
+
Shares available ZFS file systems. +
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
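For example, a hypothetical file system could be marked for NFS sharing and then shared explicitly:
# zfs set sharenfs=on tank/export
# zfs share tank/export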
+
+
+
zfs unshare + -a | + filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
+
+
+

+

exports(5), smb.conf(5), + zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-snapshot.8.html b/man/v2.0/8/zfs-snapshot.8.html new file mode 100644 index 000000000..afc45584f --- /dev/null +++ b/man/v2.0/8/zfs-snapshot.8.html @@ -0,0 +1,291 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + Creates snapshots with the given names.

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]... + filesystem@snapname|volume@snapname...
+
+
+

+
+
zfs + snapshot [-r] + [-o + property=value]... + filesystem@snapname|volume@snapname...
+
All previous modifications by successful system calls to the file system are part of the snapshots. Snapshots are taken atomically, so that all snapshots correspond to the same moment in time. zfs snap can be used as an alias for zfs snapshot. See the Snapshots section of zfsconcepts(8) for details.
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
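For example, to atomically snapshot a hypothetical file system and all of its descendents while tagging the snapshots with a user property:
# zfs snapshot -r -o com.example:purpose=backup tank/home@today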
+
+
+
+
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unallow.8.html b/man/v2.0/8/zfs-unallow.8.html new file mode 100644 index 000000000..0b30658bb --- /dev/null +++ b/man/v2.0/8/zfs-unallow.8.html @@ -0,0 +1,540 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + Delegates ZFS administration permission for the file + systems to non-privileged users.

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of + , + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]... + perm|@setname[,perm|@setname]... + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]... + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]...
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]...
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]...
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]...
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+
+
NAME             TYPE           NOTES
+allow            subcommand     Must also have the permission that is
+                                being allowed
+clone            subcommand     Must also have the 'create' ability and
+                                'mount' ability in the origin file system
+create           subcommand     Must also have the 'mount' ability.
+                                Must also have the 'refreservation' ability to
+                                create a non-sparse volume.
+destroy          subcommand     Must also have the 'mount' ability
+diff             subcommand     Allows lookup of paths within a dataset
+                                given an object number, and the ability
+                                to create snapshots necessary to
+                                'zfs diff'.
+hold             subcommand     Allows adding a user hold to a snapshot
+load-key         subcommand     Allows loading and unloading of encryption key
+                                (see 'zfs load-key' and 'zfs unload-key').
+change-key       subcommand     Allows changing an encryption key via
+                                'zfs change-key'.
+mount            subcommand     Allows mount/umount of ZFS datasets
+promote          subcommand     Must also have the 'mount' and 'promote'
+                                ability in the origin file system
+receive          subcommand     Must also have the 'mount' and 'create'
+                                ability
+release          subcommand     Allows releasing a user hold which might
+                                destroy the snapshot
+rename           subcommand     Must also have the 'mount' and 'create'
+                                ability in the new parent
+rollback         subcommand     Must also have the 'mount' ability
+send             subcommand
+share            subcommand     Allows sharing file systems over NFS
+                                or SMB protocols
+snapshot         subcommand     Must also have the 'mount' ability
+
+groupquota       other          Allows accessing any groupquota@...
+                                property
+groupused        other          Allows reading any groupused@... property
+userprop         other          Allows changing any user property
+userquota        other          Allows accessing any userquota@...
+                                property
+userused         other          Allows reading any userused@... property
+projectobjquota  other          Allows accessing any projectobjquota@...
+                                property
+projectquota     other          Allows accessing any projectquota@... property
+projectobjused   other          Allows reading any projectobjused@... property
+projectused      other          Allows reading any projectused@... property
+
+aclinherit       property
+acltype          property
+atime            property
+canmount         property
+casesensitivity  property
+checksum         property
+compression      property
+copies           property
+devices          property
+exec             property
+filesystem_limit property
+mountpoint       property
+nbmand           property
+normalization    property
+primarycache     property
+quota            property
+readonly         property
+recordsize       property
+refquota         property
+refreservation   property
+reservation      property
+secondarycache   property
+setuid           property
+sharenfs         property
+sharesmb         property
+snapdir          property
+snapshot_limit   property
+utf8only         property
+version          property
+volblocksize     property
+volsize          property
+vscan            property
+xattr            property
+zoned            property
+
+
+
zfs allow + -c + perm|@setname[,perm|@setname]... + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]... + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]... + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so any other permissions that have been granted remain in effect; for example, a permission may still be granted by an ancestor. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]...] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
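A short sketch of delegating and then revoking permissions; the user and dataset names are hypothetical:
# zfs allow alice send,snapshot tank/data
# zfs allow tank/data
# zfs unallow alice snapshot tank/data
# zfs unallow -r alice tank/data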
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unjail.8.html b/man/v2.0/8/zfs-unjail.8.html new file mode 100644 index 000000000..2c0076e2e --- /dev/null +++ b/man/v2.0/8/zfs-unjail.8.html @@ -0,0 +1,312 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jail — Attaches and detaches ZFS filesystems from FreeBSD jails. A ZFS dataset can be attached to a jail by using the "zfs jail" subcommand. You cannot attach a dataset to one jail and the children of the same dataset to another jail. Nor can you attach the root file system of the jail, or any dataset that needs to be mounted before the zfs rc script is run inside the jail, as it would be attached unmounted until it is mounted from the rc script inside the jail. To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The property cannot be changed from within a jail. See jail(8) for information on how to allow mounting ZFS datasets from within a jail.

+

A ZFS dataset can be detached from a jail + using the "zfs unjail" subcommand.

+

After a dataset is attached to a jail and the jailed property is + set, a jailed file system cannot be mounted outside the jail, since the jail + administrator might have set the mount point to an unacceptable value.

+
+
+

+ + + + + +
zfsjail + jailid|jailname + filesystem
+
+ + + + + +
zfsunjail + jailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid filesystem
+
+

Attaches the specified filesystem to the jail identified by JID jailid. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

+

See jail(8) for more information on managing + jails and configuring the parameters above.

+
+
zfs unjail + jailid filesystem
+
+

Detaches the specified filesystem from + the jail identified by JID jailid.

+
+
+
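A minimal sketch, assuming a running jail with JID 1 and a hypothetical dataset tank/jails/j1:
# zfs set jailed=on tank/jails/j1
# zfs jail 1 tank/jails/j1
# zfs unjail 1 tank/jails/j1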
+
+

+

zfsprops(8)

+
+
+ + + + + +
December 9, 2019FreeBSD
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unload-key.8.html b/man/v2.0/8/zfs-unload-key.8.html new file mode 100644 index 000000000..ac3f8e21c --- /dev/null +++ b/man/v2.0/8/zfs-unload-key.8.html @@ -0,0 +1,473 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + Load, unload, or change the encryption key used to access a + dataset.

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a | filesystem
+
+ + + + + +
zfsunload-key [-r] + -a | filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a | filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. This will cause zfs to + simply check that the provided key is correct. This command may be run + even if the key is already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a | filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded + into ZFS. This command may also be used to change the + keylocation, keyformat, and + pbkdf2iters properties as needed. If the dataset was not + previously an encryption root it will become one. Alternatively, the + -i flag may be provided to cause an encryption + root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim --secure if + supported by your hardware, otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to "zfs + load-key filesystem; + zfs change-key + filesystem"
+
+ property=value
+
Allows the user to set encryption key properties ( + keyformat, keylocation, and + pbkdf2iters ) while changing the key. This is the + only way to alter keyformat and + pbkdf2iters after the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
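An illustrative sketch of loading, checking, changing, and unloading a key; the dataset name and key location are hypothetical:
# zfs load-key tank/secure
# zfs load-key -n -L file:///media/usb/keyfile tank/secure
# zfs change-key -o keyformat=passphrase tank/secure
# zfs unload-key tank/secure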
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + zvol data, file attributes, ACLs, permission bits, directory listings, FUID + mappings, and + + / + + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the zfs + load-key subcommand for more info on key + loading).

+

Creating an encrypted dataset requires + specifying the encryption and keyformat + properties at creation time, along with an optional + keylocation and pbkdf2iters. After + entering an encryption key, the created dataset will become an encryption + root. Any descendant datasets will inherit their encryption key from the + encryption root by default, meaning that loading, unloading, or changing the + key for the encryption root will implicitly do the same for all inheriting + datasets. If this inheritance is not desired, simply supply a + keyformat when creating the child dataset or use + zfs change-key to break an + existing relationship, creating a new encryption root on the child. Note + that the child's keyformat may match that of the parent + while still creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, and + pbkdf2iters) do not inherit like other ZFS properties and + instead use the value determined by their encryption root. Encryption root + inheritance can be tracked via the read-only + + property.

+
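A minimal sketch of creating an encryption root and an inheriting child; the dataset names are hypothetical:
# zfs create -o encryption=on -o keyformat=passphrase tank/secure
# zfs create tank/secure/projects
# zfs get encryptionroot,keystatus tank/secure/projects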

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only dedup against themselves, their + snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost per block written.

+
+
+
+

+

zfs-create(8), zfs-set(8), + zfsprops(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-unmount.8.html b/man/v2.0/8/zfs-unmount.8.html new file mode 100644 index 000000000..cf17a2f9e --- /dev/null +++ b/man/v2.0/8/zfs-unmount.8.html @@ -0,0 +1,339 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountManage + mount state of ZFS file systems.

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a | filesystem
+
+ + + + + +
zfsunmount [-fu] + -a | + filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] -a | + filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to + , the + filesystem should be instead mounted using mount(8). +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(8) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has a + + of + + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] -a | + filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
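For illustration, with a hypothetical dataset tank/home mounted at /tank/home:
# zfs mount tank/home
# zfs mount -a
# zfs unmount /tank/home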
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-upgrade.8.html b/man/v2.0/8/zfs-upgrade.8.html new file mode 100644 index 000000000..3bb0b8b45 --- /dev/null +++ b/man/v2.0/8/zfs-upgrade.8.html @@ -0,0 +1,319 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + Manage upgrading the on-disk version of + filesystems.

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a | filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] -a | + filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of the software. zfs + send streams generated from new snapshots of these + file systems cannot be accessed on systems running older versions of the + software. +

In general, the file system version is independent of the pool + version. See zpool(8) for information on the + zpool upgrade + command.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to the specified version. If the + -V flag is not specified, this command + upgrades to the most recent version. This option can only be used to + increase the version number, and only up to the most recent version + supported by this software.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
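A brief sketch; the tank/home file system is hypothetical:
# zfs upgrade -v
# zfs upgrade -r tank/home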
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-userspace.8.html b/man/v2.0/8/zfs-userspace.8.html new file mode 100644 index 000000000..2e5a2fe4a --- /dev/null +++ b/man/v2.0/8/zfs-userspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + Displays space consumed by, and quotas on, each user or + group in the specified filesystem or snapshot.

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (for example, + stat(2), ls + -l) perform this translation, so the + -i option allows the output from + zfs userspace to be + compared directly with those utilities. However, + -i may lead to confusion if some files were + created by an SMB user before a SMB-to-POSIX name mapping was + established. In such a case, some files will be owned by the SMB + entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]...
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]...
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]...] + [-s field]... + [-S field]... + [-t + type[,type]...] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]...] + [-s field]... + [-S field]... + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is numeric rather than a name. Consequently, neither the -i option (SID to POSIX ID translation) nor the -n option (numeric IDs) nor the -t option (types) is needed.
+
+
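An illustrative sketch with hypothetical dataset and snapshot names:
# zfs userspace tank/home
# zfs userspace -o name,used,quota -s used tank/home
# zfs groupspace -H -p tank/home@yesterday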
+
+

+

zfs-set(8), zfsprops(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs-wait.8.html b/man/v2.0/8/zfs-wait.8.html new file mode 100644 index 000000000..2b9786a65 --- /dev/null +++ b/man/v2.0/8/zfs-wait.8.html @@ -0,0 +1,284 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitWait + for background activity to stop in a ZFS filesystem

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]...] + fs
+
+
+

+
+
zfs wait + [-t + activity[,activity]...] + fs
+
Waits until all background activity of the given types has ceased in the + given filesystem. The activity could cease because it has completed or + because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately. +

These are the possible values for + activity, along with what each one waits for:

+
+
        deleteq       The filesystem's internal delete queue to empty
+
+

Note that the internal delete queue does not finish draining + until all large files have had time to be fully destroyed and all open + file handles to unlinked files are closed.

+
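A minimal sketch, assuming a hypothetical filesystem tank/scratch from which a large file has just been removed:
# rm /tank/scratch/large-file
# zfs wait -t deleteq tank/scratch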
+
+
+
+

+

lsof(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs.8.html b/man/v2.0/8/zfs.8.html new file mode 100644 index 000000000..797d98782 --- /dev/null +++ b/man/v2.0/8/zfs.8.html @@ -0,0 +1,984 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigures + ZFS file systems

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfs<subcommand> + [<args>]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
+
pool/{filesystem,volume,snapshot}
+
+

where the maximum length of a dataset name is + MAXNAMELEN (256 bytes) and the maximum amount of + nesting allowed in a path is 50 levels deep.

+

A dataset can be one of the following:

+
+
+
A ZFS dataset of type + + can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+

For details see zfsconcepts(8).

+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + the zfsprops(8) man page.

+
+
+

+

Enabling the + + feature allows for the creation of encrypted filesystems and volumes. ZFS + will encrypt file and zvol data, file attributes, ACLs, permission bits, + directory listings, FUID mappings, and + + / + + data. For an overview of encryption see the + zfs-load-key(8) command manual.

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
An alias for the zfs + version subcommand.
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8) / zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If + a hold exists on a snapshot, attempts to destroy that snapshot by using + the zfs destroy command + return EBUSY.
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8) / zfs-groupspace(8) / + zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the file(s) or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its + + property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if + an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
+
# zfs create pool/home
+# zfs set mountpoint=/export/home pool/home
+# zfs create pool/home/bob
+
+
+
Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
+
# zfs snapshot pool/home/bob@yesterday
+
+
+
Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named yesterday + of pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
+
# zfs snapshot -r pool/home@yesterday
+# zfs destroy -r pool/home@yesterday
+
+
+
Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next command + explicitly enables compression for + pool/home/anne. +
+
# zfs set compression=off pool/home
+# zfs set compression=on pool/home/anne
+
+
+
Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the + system. Snapshots are displayed if the + + property is + . The + default is + . See + zpool(8) for more information on pool properties. +
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob. +
+
# zfs set quota=50G pool/home/bob
+
+
+
Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob. +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value.

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+ The following command lists all properties with local settings for + pool/home/bob. +
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots. +
+
# zfs rollback -r pool/home/anne@yesterday
+
+
+
Creating a ZFS Clone
+
The following command creates a writable file system whose initial + contents are the same as + . +
+
# zfs clone pool/home/bob@yesterday pool/clone
+
+
+
Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
Inheriting ZFS Properties
+
The following command causes pool/home/bob and + pool/home/anne to inherit the + + property from their parent. +
+
# zfs inherit checksum pool/home/bob pool/home/anne
+
+
+
Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream + to a remote machine, restoring them into + + and + , + respectively. poolB must contain the file system + poolB/received, and must not initially contain + . +
+
# zfs send pool/fs@a | \
+  ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b | \
+  ssh host zfs receive poolB/received/fs
+
+
+
Using the zfs receive -d Option
+
The following command sends a full stream of + + to a remote machine, receiving it into + . + The + + portion of the received snapshot's name is determined from the name of the + sent snapshot. poolB must contain the file system + poolB/received. If + + does not exist, it is created as an empty file system. +
+
# zfs send poolA/fsA/fsB@snap | \
+  ssh host zfs receive -d poolB/received
+
+
+
Setting User Properties
+
The following example sets the user-defined + + property for a dataset. +
+
# zfs set com.example:department=12345 tank/accounting
+
+
+
Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set + + property options to enable + access + for a set of + addresses + and to enable root access for system + on the + + file system. +
+
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
+
+

If you are using + for host name + resolution, specify the fully qualified hostname.

+
+
Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots on + tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point + access:

+
+
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
+
+
+
Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
Showing the differences between a snapshot and a + ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
Creating a bookmark
+
The following example creates a bookmark of a snapshot. The bookmark can then be used instead of a snapshot as the source of an incremental send stream.
+
# zfs bookmark rpool@snapshot rpool#bookmark
+
+
+
Setting sharesmb Property Options on a ZFS File + System
+
The following example shows how to share an SMB filesystem through ZFS. Note that a user and password must be given.
+
# smbmount //127.0.0.1/share_tmp /mnt/tmp \
+  -o user=workgroup/turbo,password=obrut,uid=1000
+
+

Minimal + + configuration required:

+

Samba will need to listen to 'localhost' (127.0.0.1) for the + ZFS utilities to communicate with Samba. This is the default behavior + for most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in + a number of ways, depending on if using the system password file, LDAP + or the Samba specific smbpasswd file. How to do this is outside the + scope of this manual. Please refer to the smb.conf(5) + man page for more information.

+

See the + of the smb.conf(5) man page for all + configuration options in case you need to modify any options to the + share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (such as at a reboot etc).

+
+
+
+
+

+
+
+
Cause zfs mount to use + + to mount zfs datasets. This option is provided for backwards compatibility + with older zfs versions.
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zfsconcepts(8), zfsprops(8), + zpool(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfs_ids_to_path.8.html b/man/v2.0/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..c4750964b --- /dev/null +++ b/man/v2.0/8/zfs_ids_to_path.8.html @@ -0,0 +1,279 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset id object id
+
+ + + + + +
zfs_ids_to_path
+
+
+

+

The + + utility converts a provided objset and object id into a path to the file + that those ids refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
+
+
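A brief illustration; the pool name and the objset and object ids (of the kind reported by zpool status -v for damaged files) are hypothetical:
# zfs_ids_to_path -v tank 54 12345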
+
+

+

zfs(8), zdb(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfsconcepts.8.html b/man/v2.0/8/zfsconcepts.8.html new file mode 100644 index 000000000..6afe9023a --- /dev/null +++ b/man/v2.0/8/zfsconcepts.8.html @@ -0,0 +1,376 @@ + + + + + + + zfsconcepts.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.8

+
+ + + + + +
ZFSCONCEPTS(8)System Manager's ManualZFSCONCEPTS(8)
+
+
+

+

zfsconceptsAn + overview of ZFS concepts.

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; their visibility is determined by the property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the + + property.

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the + filesystem in any way. From a storage standpoint a bookmark just provides a + way to reference when a snapshot was created as a distinct object. Bookmarks + are initially tied to a snapshot, not the filesystem or volume, and they + will survive if the snapshot itself is destroyed. Since they are very light + weight there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a + snapshot is cloned, it creates an implicit dependency between the parent and + child. Even though the clone is created somewhere else in the dataset + hierarchy, the original snapshot cannot be destroyed as long as a clone + exists. The + property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.

+
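A minimal sketch of the clone workflow; the dataset and snapshot names are hypothetical:
# zfs snapshot tank/ws@base
# zfs clone tank/ws@base tank/ws-experiment
# zfs promote tank/ws-experiment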
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in + the mountpoint property. This directory is created as + needed, and ZFS automatically mounts the file system when the + zfs mount + -a command is invoked (without editing + /etc/fstab). The mountpoint + property can be inherited, so if + has a + mount point of /export/stuff, then + + automatically inherits a mount point of + /export/stuff/user.

+

A file system mountpoint property of + prevents the + file system from being mounted.

+

If needed, ZFS file systems can also be managed with + traditional tools (mount, + umount, /etc/fstab). If a + file system's mount point is set to + , ZFS makes + no attempt to manage the file system, and the administrator is responsible + for mounting and unmounting the file system. Because pools must be imported + before a legacy mount can succeed, administrators should ensure that legacy + mounts are only attempted after the zpool import process finishes at boot + time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for details.

+
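A hedged sketch of managing a legacy mount, with hypothetical dataset and mount point names; the /etc/fstab line includes the systemd ordering option mentioned above:
# zfs set mountpoint=legacy tank/legacy
# mount -t zfs tank/legacy /mnt/legacy
An example /etc/fstab entry:
tank/legacy  /mnt/legacy  zfs  defaults,x-systemd.requires=zfs-import.target  0  0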
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the + + property enabled, duplicate data blocks are removed synchronously. The + result is that only unique data is stored and common components are shared + among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication + enabled, ensure that you have planned your hardware requirements + appropriately and implemented appropriate recovery practices, such as + regular backups. As an alternative to deduplication consider using + , + as a less resource-intensive alternative.

+
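For illustration only (the dataset and pool names are hypothetical); the pool-wide deduplication ratio can afterwards be checked with zpool get:
# zfs set dedup=on tank/vmimages
# zpool get dedupratio tank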
+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zfsprops.8.html b/man/v2.0/8/zfsprops.8.html new file mode 100644 index 000000000..43204a80d --- /dev/null +++ b/man/v2.0/8/zfsprops.8.html @@ -0,0 +1,1534 @@ + + + + + + + zfsprops.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfsprops.8

+
+ + + + + +
ZFSPROPS(8)System Manager's ManualZFSPROPS(8)
+
+
+

+

zfsprops — Native properties and user-defined properties of ZFS datasets.

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB.
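For instance, the following commands (dataset name hypothetical) all set the same quota:
# zfs set quota=1536M tank/home
# zfs set quota=1.5g tank/home
# zfs set quota=1.50GB tank/home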

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.
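Read-only statistics such as these can be inspected with zfs get; for example (pool name hypothetical):
# zfs get used,available,referenced,compressratio tank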

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its shortened column name, avail.

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the Encryption + section of zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable. See zfs load-key and zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lrefer.

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its shortened column name, lused.

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid , the objsetid of a dataset is + not transferred to other pools when the snapshot is copied with a + send/receive operation. The objsetid can be reused (for + a new dataset) after the dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive -s, this opaque token can be provided to + zfs send -t to resume and complete the zfs + receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its shortened column name, refer.

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, volume, snapshot, or bookmark.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of zfsconcepts(8)) + is space that is referenced exclusively by this snapshot. If this + snapshot is destroyed, the amount of used space will + be freed. Space that is shared by multiple snapshots isn't accounted for + in this metric. When a snapshot is destroyed, space that was previously + shared with this snapshot can become unique to snapshots adjacent to it, + thus changing the used space of those snapshots. The used space of the + latest snapshot can also be affected by changes in the file system. Note + that the used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not + take into account pending changes. Pending changes are generally + accounted for within a few seconds. Committing a change to a disk using + fsync(2) or O_SYNC does not + necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du and + ls -s. See the + zfs userspace subcommand + for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
@user
+
The userobjused property is similar to userused but instead it counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user; it may differ from the results of system tools such as df -i.

When the property xattr=on is set on a file + system additional objects will be created per-file to store extended + attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal objects are normally + required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. A project is identified via its project identifier (ID), a numeric attribute stored on each object. An object can inherit the project ID from its parent object when it is created, if the parent carries the inherit-project-ID flag (which can be set and changed via chattr -/+P or zfs project -s). A privileged user can set and change an object's project ID via chattr -p or zfs project -s at any time. Space is charged to the project of each file, as displayed by lsattr -p or zfs project. See the userused@user property for more information.

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property xattr=on is set + on a fileset, ZFS will create additional objects per-file to store + extended attributes. These additional objects are reflected in the + projectobjused value and are counted against the + project's projectobjquota. When a filesystem is + configured to use xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its shortened column name, volblock.

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which for + clones may be a snapshot in the origin's filesystem (or the origin of + the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.
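Settable properties are changed with zfs set and reverted to the inherited or default value with zfs inherit, e.g. (dataset name hypothetical):
# zfs set atime=off tank/home
# zfs inherit atime tank/home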

+
+
=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the write_acl and write_owner permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode. +
+
+
default, deletes all ACL entries except for those representing the mode of the file or directory requested by chmod(2).
+
+
reduces permissions granted in all ALLOW entries found in the ACL such that they are no greater than the group permissions specified by chmod(2).
+
+
indicates that no changes are made to the ACL other than creating or + updating the necessary ACL entries to represent the new mode of the + file or directory.
+
+
will cause the chmod(2) operation to return an error + when used on any file or directory which has a non-trivial ACL whose + entries can not be represented by a mode. chmod(2) + is required to change the set user ID, set group ID, or sticky bits on + a file or directory, as they do not have equivalent ACL entries. In + order to use chmod(2) on a file or directory with a + non-trivial ACL when aclmode is set to + restricted, you must first remove all ACL entries + which do not represent the current mode.
+
+
+
=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
default on Linux, when a file system has the acltype + property set to off then ACLs are disabled.
+
+
an alias for off
+
+
default on FreeBSD, indicates that NFSv4-style ZFS ACLs should be + used. These ACLs can be managed with the getfacl(1) + and setfacl(1) commands on FreeBSD. The + nfsv4 ZFS ACL type is not yet supported on + Linux.
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+
an alias for posix
+
+

To obtain the best performance when setting posix, users are strongly encouraged to also set the xattr=sa property. This will result in the POSIX ACL being stored more efficiently on disk. But as a consequence, all new extended attributes will only be accessible from OpenZFS implementations which support the xattr=sa property. See the xattr property for more details.

+
+
=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and + edonr checksum algorithms require enabling the + appropriate features on the pool. FreeBSD does not support the + edonr algorithm.

+

Please see zpool-features(5) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the + current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm + is a high-performance replacement for the lzjb + algorithm. It features significantly faster compression and + decompression, as well as a moderately higher compression ratio than + lzjb, but can only be used on pools with the + lz4_compress feature set to + . See + zpool-features(5) for details on ZFS feature flags and + the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-9,10,20,30,...,100,500,1000] which maps to a negative zstd level. The lower the level the faster the compression - 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its shortened column name, compress. Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example 8k + blocks on disks with 4k disk sectors must compress to 1/2 or less of + their original size.

+
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing top-level vdev. Do NOT create, for example, a two-disk striped pool and set copies=2 on some datasets thinking you have set up redundancy for them. When a disk fails you will not be able to import the pool and will have lost all of your data.

+

Encrypted datasets may not have + copies=3 since the implementation + stores some encryption metadata where the third copy would normally + be.

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The + default value is on. The values on and + off are equivalent to the dev and + + mount options.
+
=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify
+
Configures deduplication for a dataset. The default value is off. The default deduplication checksum is sha256 (this may change in the future). When dedup is enabled, the checksum defined here overrides the checksum property. Setting the value to verify has the same effect as the setting sha256,verify.

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should NOT be enabled on a system. See the Deduplication section of zfsconcepts(8).

+
+
=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy requires the + large_dnode pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the workload makes heavy + use of extended attributes. This may be applicable to SELinux-enabled + systems, Lustre servers, and Samba servers, for example. Literal values + are supported for cases where the optimal size is known in advance and + for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode feature, or if you + need to import this pool on a system that doesn't support the + large_dnode feature.

+

This property can also be referred to by its shortened column name, dnsize.

+
+
=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).

+
+
=raw|hex|passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
+
# dd if=/dev/urandom of=/path/to/output/key bs=32 count=1
+
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
=prompt|file://</absolute/file/path>
+
Controls where the user's encryption key will be loaded from by default for commands such as zfs load-key and zfs mount -l. This property is only set for encrypted datasets which are encryption roots. If unspecified, the default is prompt.

Even though the encryption suite cannot be changed after + dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via + STDIN, but users should be careful not to place keys which should be + kept secret on the command line. If a file URI is selected, the key will + be loaded from the specified absolute file path.

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. + The default value is on. The values on + and off are equivalent to the exec and + + mount options.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(5)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512B up to 1M. The default size is 0 which means no + small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(8) for more + details on the special allocation class.

+
+
=path|none|legacy
+
Controls the mount point used for this file system. See the Mount Points section of zfsconcepts(8) for more information on how this property is used.

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.

+
+
=on|off
+
Controls whether the file system should be mounted with + nbmand (Non Blocking mandatory locks). This is used for + SMB clients. Changes to this property only take effect when the file + system is umounted and remounted. See mount(8) for more + information on nbmand mounts. This property is not used + on Linux.
+
=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the property is + on by default. Set to off to disable + overlay mounts for consistency with OpenZFS on other platforms.
+
=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is + set to all, then both user data and metadata is cached. + If this property is set to none, then neither user data + nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.

+
+
=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(5)).
+
user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + user + property. +

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace subcommand + for more information.

+

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + userquota privilege with zfs + allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

+ +

Files created on Linux always have POSIX owners.

+
+
user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
project=size|none
+
The projectobjquota is similar to + projectquota but it limits number of objects a project + can consume. Please refer to userobjused for more + information about how objects are counted.
+
=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its shortened column name, rdonly.

+
+
=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or equal to 512 and less than or equal to 128 Kbytes. If the large_blocks feature is enabled on the pool, the size may be up to 1 Mbyte. See zpool-features(5) for details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.

+

This property can also be referred to by its shortened column name, recsize.

+
+
=all|most
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 100 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

The default value is all.

+
+
=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.

+

This property can also be referred to by its shortened column name, refreserv.

+
+
=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
+
=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its shortened column name, reserv.

+
+
=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property + is set to all, then both user data and metadata is + cached. If this property is set to none, then neither + user data nor metadata is cached. If this property is set to + metadata, then only metadata is cached. The default + value is all.
+
=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
=on|off|opts
+
Controls whether the file system is shared by using + and what options are to be used. Otherwise, the file + system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the net(8) command is invoked + to create a + . +

Because SMB shares requires a resource name, a unique resource + name is constructed from the dataset name. The constructed name is a + copy of the dataset name except that the characters in the dataset name, + which would be invalid in the resource name, are replaced with + underscore (_) characters. Linux does not currently support additional + options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) + "Everyone:F" ("F" stands for "full + permissions", ie. read and write permissions) and no guest access + (which means Samba must be able to authenticate a real user, system + passwd/shadow, LDAP or smbpasswd based) by default. This means that any + additional access control (disallow specific user specific access etc) + must be done on the underlying file system.

+
+
=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the + + file. Otherwise, the file system is automatically shared and unshared with + the zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+
+
=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/<pool> are hidden or visible. The default value is hidden.
+
=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of zfsconcepts(8). + The default value is hidden.
+
=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also known as "thin provisioned") can be created by specifying the -s option to the zfs create -V command, or by changing the value of the refreservation property (or reservation property on pool version 8 or earlier) after the volume has been created. A "sparse volume" is a volume where the value of refreservation is less than the size of the volume plus the space required to store its metadata. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the refreservation. A volume that is not sparse is said to be "thick provisioned". A sparse volume can become thick provisioned by setting refreservation to auto.

+
+
=default | full | geom | dev | none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can be snapshotted, cloned, replicated, etc, which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory based or + system attribute based. +

The default value of on enables directory + based extended attributes. This style of extended attribute imposes no + practical limit on either the size or number of attributes which can be + set on a file. Although under Linux the getxattr(2) + and setxattr(2) system calls limit the maximum size to + 64K. This is the most compatible style of extended attribute and is + supported by all ZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the xattr=sa feature. + OpenZFS supports xattr=sa on both FreeBSD and + Linux.

+

The use of system attribute based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and + off are equivalent to the xattr and + mount + options.

+
+
=off|on
+
Controls whether the dataset is managed from a jail. See the + "Jails" section in + zfs(8) for more information. Jails are a FreeBSD feature + and are not relevant on other platforms. The default value is + off.
+
=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on other platforms. The default + value is off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified, names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.
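Because these three properties cannot be changed later, they are typically passed at creation time; a hypothetical example for a dataset intended for SMB sharing:
# zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on tank/share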

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
    PROPERTY                MOUNT OPTION
+    atime                   atime/noatime
+    canmount                auto/noauto
+    devices                 dev/nodev
+    exec                    exec/noexec
+    readonly                ro/rw
+    relatime                relatime/norelatime
+    setuid                  suid/nosuid
+    xattr                   xattr/noxattr
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
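For example, a dataset could be mounted read-only for one session without changing its stored readonly property (dataset name hypothetical):
# zfs mount -o ro tank/data
zfs get readonly tank/data should then report the setting as temporary.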

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is + strongly suggested to use a reversed + domain name for + the module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
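A short sketch using a hypothetical reversed-DNS module name:
# zfs set com.example:backup-policy=weekly tank/data
# zfs get com.example:backup-policy tank/data
# zfs inherit com.example:backup-policy tank/data
The final command clears the property again, since it is not defined on any parent dataset.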

+
+
+
+ + + + + +
May 5, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zgenhostid.8.html b/man/v2.0/8/zgenhostid.8.html new file mode 100644 index 000000000..19896cc61 --- /dev/null +++ b/man/v2.0/8/zgenhostid.8.html @@ -0,0 +1,331 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's Manual (smm)ZGENHOSTID(8)
+
+
+

+

zgenhostid — generate and store a hostid in /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates /etc/hostid file and stores hostid + in it. If the user provides [hostid] on the command + line, validates and stores that value. Otherwise, randomly generates a value + to store.

+
+
+
Display a summary of the command-line options.
+
+
Force file overwrite.
+
+ filename
+
Write to filename instead of default + /etc/hostid
+
hostid
+
Specifies the value to be placed in /etc/hostid. It should be a number with a value between 1 and 2^32-1. If it is 0, zgenhostid will generate a random hostid. This value must be unique among your systems. It must be expressed in hexadecimal and be exactly 8 digits long, optionally prefixed by 0x.
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
+
# zgenhostid
+
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
+
# zgenhostid "$(hostid)"
+
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
+
# zgenhostid deadbeef
+
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid
+
and overwrite the file if it exists
+
# zgenhostid -f -o /tmp/hostid 0x01234567
+
+
+
+
+
+

+

genhostid(1), hostid(1), + sethostid(3), + spl-module-parameters(5)

+
+
+

+

zgenhostid emulates the genhostid(1) utility and is provided for use on systems which do not include the utility or do not provide the sethostid(3) call.

+
+
+ + + + + +
March 18, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zinject.8.html b/man/v2.0/8/zinject.8.html new file mode 100644 index 000000000..2435c226c --- /dev/null +++ b/man/v2.0/8/zinject.8.html @@ -0,0 +1,403 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+

+
+

+

zinject - ZFS Fault Injector

+
+
+

+

zinject creates artificial problems in a ZFS pool by + simulating data corruption or device failures. This program is + dangerous.
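For example (pool name hypothetical), injected faults can be listed and then cancelled with:
# zinject
# zinject -c all
as described under the corresponding subcommands below.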

+
+
+

+
+
+
List injection records.
+
zinject -b objset:object:level:blkid [-f frequency] [-amu] pool
+
Force an error into the pool at a bookmark.
+
zinject -c <id | all>
+
Cancel injection records.
+
zinject -d vdev -A <degrade|fault> + pool
+
Force a vdev into the DEGRADED or FAULTED state.
+
zinject -d vdev -D latency:lanes + pool
+
+

Add an artificial delay to IO requests on a particular device, + such that the requests take a minimum of 'latency' milliseconds to + complete. Each delay has an associated number of 'lanes' which defines + the number of concurrent IO requests that can be processed.

+

For example, with a single lane delay of 10 ms (-D 10:1), the + device will only be able to service a single IO request at a time with + each request taking 10 ms to complete. So, if only a single request is + submitted every 10 ms, the average latency will be 10 ms; but if more + than one request is submitted every 10 ms, the average latency will be + more than 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D 10:2), then the device will be able to service two requests at a + time, each with a minimum latency of 10 ms. So, if two requests are + submitted every 10 ms, then the average latency will be 10 ms; but if + more than two requests are submitted every 10 ms, the average latency + will be more than 10 ms.

+

Also note that these delays are additive, so two invocations of + '-D 10:1' are roughly equivalent to a single invocation of '-D 10:2'. + This also means one can specify multiple lanes with differing target + latencies. For example, an invocation of '-D 10:1' followed by '-D 25:2' + will create 3 lanes on the device: one lane with a latency of 10 ms and + two lanes with a 25 ms latency. A complete invocation is sketched after + this list of command forms.

+

+
+
zinject -d vdev [-e device_error] [-L + label_error] [-T failure] [-f + frequency] [-F] pool
+
Force a vdev error.
+
zinject -I [-s seconds | -g txgs] + pool
+
Simulate a hardware failure that fails to honor a cache flush.
+
zinject -p function pool
+
Panic inside the specified function.
+
zinject -t data [-C dvas] [-e device_error] [-f + frequency] [-l level] [-r range] + [-amq] path
+
Force an error into the contents of a file.
+
zinject -t dnode [-C dvas] [-e device_error] + [-f frequency] [-l level] [-amq] + path
+
Force an error into the metadnode for a file or directory.
+
zinject -t mos_type [-C dvas] [-e + device_error] [-f frequency] [-l + level] [-r range] [-amqu] + pool
+
Force an error into the MOS of a pool.
+
+
+
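For example (the pool and device names are illustrative), a 20 ms delay with two lanes can be injected on one vdev, the active handlers listed, and then all handlers cancelled:
# zinject -d sdb -D 20:2 tank
# zinject
# zinject -c all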
+

+
+
+
Flush the ARC before injection.
+
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas (ex. '0,2'). + This option is not applicable to logical data errors such as + decompress and decrypt.
+
+
A vdev specified by path or GUID.
+
+
Specify checksum for an ECKSUM error, decompress for a data + decompression error, decrypt for a data decryption error, + corrupt to flip a bit in the data after a read, dtl for an + ECHILD error, io for an EIO error where reopening the device will + succeed, or nxio for an ENXIO error where reopening the device will + fail. For EIO and ENXIO, the "failed" reads or writes still + occur. The probe simply sets the error value reported by the I/O pipeline + so it appears the read or write failed. Decryption errors only currently + work with file data.
+
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+
Inject an error at a particular block level. The default is 0.
+
+
Set the label error region to one of nvlist, pad1, + pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+
Run for this many seconds before reporting failure.
+
+
Set the failure type to one of all, claim, free, + read, or write.
+
+
Set this to mos for any data in the MOS, mosdir for an + object directory, config for the pool configuration, bpobj + for the block pointer list, spacemap for the space map, + metaslab for the metaslab, or errlog for the persistent + error log.
+
+
Unload the pool after injection. +

+
+
+
+
+

+
+
+
Run zinject in debug mode. +

+
+
+
+
+

+

This man page was written by Darik Horn + <dajhorn@vanadac.com> excerpting the zinject usage message and + source code.

+

+
+
+

+

zpool(8), zfs(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-add.8.html b/man/v2.0/8/zpool-add.8.html new file mode 100644 index 000000000..616899d4d --- /dev/null +++ b/man/v2.0/8/zpool-add.8.html @@ -0,0 +1,308 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addAdds + specified virtual devices to a ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev...
+
+
+

+
+
zpool add + [-fgLnP] [-o + property=value] + pool vdev...
+
Adds the specified virtual devices to the given pool. The + vdev specification is described in the Virtual + Devices section of zpoolconcepts(8). The behavior of the + -f option, and the device checks performed, are + described in the zpool + create subcommand.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+
Display vdev GUIDs instead of the normal device + names. These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all + symbolic links. This can be used to look up the current block device + name regardless of the /dev/disk/ path used to open it.
+
+
Displays the configuration that would be used without actually adding + the vdevs. The actual pool creation can still + fail due to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set. The only property supported at the moment + is ashift.
+
+
+
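For example (pool and disk names are illustrative), a dry run can preview the resulting layout before a mirrored pair is actually added:
# zpool add -n tank mirror sdc sdd
# zpool add -o ashift=12 tank mirror sdc sdd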
+
+
+

+

zpool-remove(8), + zpool-attach(8), zpool-import(8), + zpool-initialize(8), zpool-online(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-attach.8.html b/man/v2.0/8/zpool-attach.8.html new file mode 100644 index 000000000..0233dc367 --- /dev/null +++ b/man/v2.0/8/zpool-attach.8.html @@ -0,0 +1,305 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + Attach a new device to an existing ZFS virtual device + (vdev).

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+
+
zpool attach + [-fsw] [-o + property=value] + pool device new_device
+
Attaches new_device to the existing + device. The existing device cannot be part of a + raidz configuration. If device is not currently part + of a mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part + of a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled. +
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set. The only property supported at the moment + is ashift.
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the + resilver completes. Sequential reconstruction is not supported for + raidz configurations.
+
+
Waits until new_device has finished resilvering + before returning.
+
+
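For example (names are illustrative), a second disk can be attached to an existing device to form a mirror, waiting for the resilver to finish:
# zpool attach -w tank sda sdb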
+
+
+
+

+

zpool-detach(8), zpool-add(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-checkpoint.8.html b/man/v2.0/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..17117c02a --- /dev/null +++ b/man/v2.0/8/zpool-checkpoint.8.html @@ -0,0 +1,293 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + Checkpoints the current state of a ZFS storage + pool

+
+
+

+ + + + + +
zpoolcheckpoint [-d, + --discard [-w, + --wait]] pool
+
+
+

+
+
zpool checkpoint + [-d, --discard + [-w, --wait]] + pool
+
Checkpoints the current state of pool , which can be + later restored by zpool import + --rewind-to-checkpoint. The existence of a checkpoint in a pool + prohibits the following zpool commands: + remove, attach, + detach, split, and + reguid. In addition, it may break reservation + boundaries if the pool lacks free space. The zpool + status command indicates the existence of a + checkpoint or the progress of discarding a checkpoint from a pool. The + zpool list command reports + how much space the checkpoint takes from the pool. +
+
+ --discard
+
Discards an existing checkpoint from pool.
+
+ --wait
+
Waits until the checkpoint has finished being discarded before + returning.
+
+
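For example (the pool name is illustrative), a checkpoint can be taken before a risky change and later discarded while waiting for the discard to finish:
# zpool checkpoint tank
# zpool checkpoint -d -w tank
To roll back instead, export the pool and re-import it with zpool import --rewind-to-checkpoint.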
+
+
+
+

+

zpool-import(8), + zpool-status(8), zfs-snapshot(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-clear.8.html b/man/v2.0/8/zpool-clear.8.html new file mode 100644 index 000000000..8acbe0d28 --- /dev/null +++ b/man/v2.0/8/zpool-clear.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + Clears device errors in a ZFS storage pool.

+
+
+

+ + + + + +
zpoolclear pool + [device]
+
+
+

+
+
zpool clear + pool [device]
+
Clears device errors in a pool. If no arguments are specified, all device + errors within the pool are cleared. If one or more devices is specified, + only those errors associated with the specified device or devices are + cleared. If multihost is enabled, and the pool has been suspended, this + will not resume I/O. While the pool was suspended, it may have been + imported on another host, and resuming I/O could result in pool + damage.
+
+
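For example (names are illustrative), errors can be cleared for the whole pool or for a single device:
# zpool clear tank
# zpool clear tank sdb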
+
+

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-create.8.html b/man/v2.0/8/zpool-create.8.html new file mode 100644 index 000000000..0645be7a6 --- /dev/null +++ b/man/v2.0/8/zpool-create.8.html @@ -0,0 +1,392 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + Creates a new ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]... + [-o + feature@feature=value] + [-O + file-system-property=value]... + [-R root] + pool vdev...
+
+
+

+
+
zpool create + [-dfn] [-m + mountpoint] [-o + property=value]... + [-o + feature@feature=value]... + [-O + file-system-property=value]... + [-R root] + [-t tname] + pool vdev...
+
Creates a new storage pool containing the virtual devices specified on the + command line. The pool name must begin with a letter, and can only contain + alphanumeric characters as well as underscore + ("_"), dash + ("-"), + colon + (":"), + space (" "), and period + ("."). + The pool names mirror, raidz, + spare and log + are + reserved, as are names beginning with mirror, + raidz, spare, and the pattern + c[0-9]. + The vdev specification is described in the Virtual + Devices section of zpoolconcepts(8).

The command attempts to verify that each device + specified is accessible and not currently in use by another subsystem. + However this check is not robust enough to detect simultaneous attempts + to use a new device in different pools, even if + multihost is + enabled. + The administrator must ensure that simultaneous invocations of any + combination of zpool replace, zpool + create, zpool add, or zpool + labelclear, do not refer to the same device. Using the same device + in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or + specified as the dedicated dump device, that prevents a device from ever + being used by ZFS. Other uses, such as having a preexisting UFS file + system, can be overridden with the -f + option.

+

The command also checks that the replication strategy for the + pool is consistent. An attempt to combine redundant and non-redundant + storage in a single pool, or to mix disks and files, results in an error + unless -f is specified. The use of differently + sized devices within a single raidz or mirror group is also flagged as + an error unless -f is specified.

+

Unless the -R option is specified, the + default mount point is + /pool. The mount point + must not exist or must be empty, or else the root dataset cannot be + mounted. This can be overridden with the -m + option.

+

By default all supported features are enabled on the new pool + unless the -d option is specified.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled + with the -o option. See + zpool-features(5) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use + or specify a conflicting replication level. Not all devices can be + overridden in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool + if altroot is specified. The mount point must be + an absolute path, + legacy, + or none. For more information on dataset mount + points, see zfs(8).
+
+
Displays the configuration that would be used without actually + creating the pool. The actual pool creation can still fail due to + insufficient privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set.
+
+ feature@feature=value
+
Sets the given pool feature. See the + zpool-features(5) section for a list of valid + features that can be set. Value can be either disabled or + enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the + pool. See the zfsprops(8) manual page for a list of + valid properties that can be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to + tname + while the on-disk name will be the name specified as the pool name + pool. + This will set the default cachefile property to none. This is intended + to handle name space collisions when creating pools for other systems, + such as virtual machines or physical machines whose pools live on + network block devices.
+
+
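For example (the pool, disk, and property choices here are illustrative), a dry run and a mirrored pool with a forced sector size, a root-dataset property, and a custom mount point might look like:
# zpool create -n tank mirror sda sdb
# zpool create -o ashift=12 -O compression=lz4 -m /export/tank tank mirror sda sdb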
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-destroy.8.html b/man/v2.0/8/zpool-destroy.8.html new file mode 100644 index 000000000..121b444e4 --- /dev/null +++ b/man/v2.0/8/zpool-destroy.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + Destroys the given ZFS storage pool, freeing up any devices + for other use

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+
+
zpool destroy + [-f] pool
+
Destroys the given pool, freeing up any devices for other use. This + command tries to unmount any active datasets before destroying the pool. +
+
+
Forces any active datasets contained within the pool to be + unmounted.
+
+
+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-detach.8.html b/man/v2.0/8/zpool-detach.8.html new file mode 100644 index 000000000..82dbf5c13 --- /dev/null +++ b/man/v2.0/8/zpool-detach.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + Detaches a device from a ZFS mirror vdev (virtual + device)

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+
+
zpool detach + pool device
+
Detaches device from a mirror. The operation is + refused if there are no other valid replicas of the data. If device may be + re-added to the pool later on then consider the + zpool offline + command instead.
+
+
+
+

+

zpool-attach(8), + zpool-offline(8), zpool-labelclear(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-events.8.html b/man/v2.0/8/zpool-events.8.html new file mode 100644 index 000000000..ff2ebfd15 --- /dev/null +++ b/man/v2.0/8/zpool-events.8.html @@ -0,0 +1,287 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + Lists all recent events generated by the ZFS kernel + modules

+
+
+

+ + + + + +
zpoolevents [-vHf + [pool] | -c]
+
+
+

+
+
zpool events + [-vHf [pool] | + -c]
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page. +
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
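For example (the pool name is illustrative), events can be printed verbosely for one pool, or followed as they arrive:
# zpool events -v tank
# zpool events -f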
+
+
+
+

+

zed(8), zpool-wait(8), + zfs-events(5), + zfs-module-parameters(5)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-export.8.html b/man/v2.0/8/zpool-export.8.html new file mode 100644 index 000000000..4c35c3cfb --- /dev/null +++ b/man/v2.0/8/zpool-export.8.html @@ -0,0 +1,293 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + Exports the given ZFS storage pools from the + system

+
+
+

+ + + + + +
zpoolexport [-a] + [-f] pool...
+
+
+

+
+
zpool export + [-a] [-f] + pool...
+
Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present. +

Before exporting the pool, all datasets within the pool are + unmounted. A pool can not be exported if it has a shared spare that is + currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, + so that ZFS can label the disks with portable EFI labels. Otherwise, + disk drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, using the + unmount -f command. + This option is not supported on Linux. +

This command will forcefully export the pool even if it + has a shared spare that is currently being used. This may lead to + potential data corruption.

+
+
+
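For example (the pool name is illustrative), a single pool or every imported pool can be exported:
# zpool export tank
# zpool export -a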
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-get.8.html b/man/v2.0/8/zpool-get.8.html new file mode 100644 index 000000000..27025f64d --- /dev/null +++ b/man/v2.0/8/zpool-get.8.html @@ -0,0 +1,313 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + Retrieves properties for the specified ZFS storage + pool(s)

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + [pool]...
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + [pool]...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the zpoolprops(8) manual page for more + information on the available pool properties.

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. + name,property,value,source + is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(8) manual page for more information on what + properties can be set and acceptable values.
+
+
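For example (the pool and property choices are illustrative), selected properties can be read in parsable form and a property set:
# zpool get -p size,capacity,health tank
# zpool set autoexpand=on tank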
+
+

+

zpoolprops(8), zpool-list(8), + zpool-features(5)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-history.8.html b/man/v2.0/8/zpool-history.8.html new file mode 100644 index 000000000..e106a8212 --- /dev/null +++ b/man/v2.0/8/zpool-history.8.html @@ -0,0 +1,281 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + Displays the command history of the specified ZFS storage + pool(s)

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]...
+
+
+

+
+
zpool history + [-il] [pool]...
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified. +
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard + format includes, the user name, the hostname, and the zone in which + the operation was performed.
+
+
+
+
+
+

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-import.8.html b/man/v2.0/8/zpool-import.8.html new file mode 100644 index 000000000..cd7e976f2 --- /dev/null +++ b/man/v2.0/8/zpool-import.8.html @@ -0,0 +1,546 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + Lists ZFS storage pools available to import or import the + specified pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d dir|device]
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-n] [-T] + [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root]
+
+ + + + + +
zpoolimport [-Dflm] + [-F [-n] + [-T] [-X]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool [-t]]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]
+
Lists pools available to import. If the -d + or -c options are not + specified, this command searches for devices using libblkid on Linux and + geom on FreeBSD. The -d option can be specified + multiple times, and all directories are searched. If the device appears to + be part of an exported pool, this command displays a summary of the pool + with the name of the pool, a numeric identifier, as well as the vdev + layout and current health of the device for each device or file. Destroyed + pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified.

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-n] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(8) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with + this flag there is no way to undo the rewind. All changes and data + that were written after the checkpoint are lost! The only exception is + when the + readonly + mounting option is enabled. In this case, the checkpointed state of + the pool is opened and an administrator can see how the pool would + look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflm] [-F + [-n] [-t] + [-T] [-X]] + [-c + cachefile|-d + dir|device] [-o + mntopts] [-o + property=value]... + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(8) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set -o + cachefile=none when not explicitly specified.
+
+
+
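For example (pool name, search directory, and new name are illustrative), the available pools can be listed, then one imported from a specific device directory, optionally under a new name:
# zpool import
# zpool import -d /dev/disk/by-id tank
# zpool import -d /dev/disk/by-id tank newtank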
+
+
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-initialize.8.html b/man/v2.0/8/zpool-initialize.8.html new file mode 100644 index 000000000..98110da28 --- /dev/null +++ b/man/v2.0/8/zpool-initialize.8.html @@ -0,0 +1,297 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + Write to all unallocated regions of eligible devices in a + ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize [-c | + -s] [-w] + pool [device...]
+
+
+

+
+
zpool initialize + [-c | -s] + [-w] pool + [device...]
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified. Only leaf data or log devices may be initialized. +
+
+ --cancel
+
Cancel initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + cancellation will occur on any device.
+
+ --suspend
+
Suspend initializing on the specified devices, or all eligible devices + if none are specified. If one or more target devices are invalid or + are not currently being initialized, the command will fail and no + suspension will occur on any device. Initializing can then be resumed + by running zpool + initialize with no flags on the relevant + target devices.
+
+ --wait
+
Wait until the devices have finished initializing before + returning.
+
+
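For example (names are illustrative), initialization can be started and waited on, or cancelled for one device:
# zpool initialize -w tank
# zpool initialize -c tank sdb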
+
+
+
+

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-iostat.8.html b/man/v2.0/8/zpool-iostat.8.html new file mode 100644 index 000000000..80cbe6cc4 --- /dev/null +++ b/man/v2.0/8/zpool-iostat.8.html @@ -0,0 +1,427 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + Display logical I/O statistics for the given ZFS storage + pools/vdevs

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
+
+

+
+
zpool iostat + [[[-c SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [[pool...]|[pool + vdev...]|[vdev...]] + [interval [count]]
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1). If writes are located + nearby, they may be merged into a single larger operation. Additional I/O + may be generated depending on the level of vdev redundancy. To filter + output, you may pass in a list of pools, a pool and list of vdevs in that + pool, or a list of any vdevs from any pool. If no items are specified, + statistics for every pool in the system are shown. When given an + interval, the statistics are printed every + interval seconds until ^C is pressed. If the + -n flag is specified the headers are displayed + only once, otherwise they are displayed periodically. If count is + specified, the command exits after count reports are printed. The first + report printed is always the statistics since boot regardless of whether + interval and count are passed. + However, this behavior can be suppressed with the + -y flag. Also note that the units of + K, M, G, + and so on, that are + printed in the report are in base 1024. To get the raw values, use the + -p flag.
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + iostat output. Users can run any script found + in their ~/.zpool.d directory or from the + system /etc/zfs/zpool.d directory. Script + names containing the slash (/) character are not allowed. The default + search path can be overridden by setting the ZPOOL_SCRIPTS_PATH + environment variable. A privileged user can run + -c if they have the ZPOOL_SCRIPTS_AS_ROOT + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or + add the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script + name, it prints a list of all scripts. -c + also sets verbose mode + (-v).

+

Script output should be in the form of + "name=value". The column name is set to "name" + and the value is set to "value". Multiple lines can be + used to output multiple columns. The first line of output not in the + "name=value" format is displayed without a column title, + and no more output after that is displayed. This can be useful for + printing error messages. Blank or NULL values are printed as a '-' + to make output awk-able.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
+
+
Underlying path to the vdev (/dev/sd*). For use with device + mapper, multipath, or partitioned vdevs.
+
+
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Print request size histograms for the leaf vdev's IO. This includes + histograms of individual IOs (ind) and aggregate IOs (agg). These + stats can be useful for observing how well IO aggregation is working. + Note that TRIM IOs may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
Omit statistics since boot. Normally the first line of output reports + the statistics since boot. This option suppresses that first line of + output. interval
+
+
Display latency histograms: +

total_wait: Total IO time (queuing + + disk IO time). disk_wait: Disk IO time (time + reading/writing the disk). syncq_wait: Amount + of time IO spent in synchronous priority queues. Does not include + disk time. asyncq_wait: Amount of time IO + spent in asynchronous priority queues. Does not include disk time. + scrub: Amount of time IO spent in scrub queue. + Does not include disk time.

+
+
+
Include average latency statistics: +

total_wait: Average total IO time + (queuing + disk IO time). disk_wait: Average + disk IO time (time reading/writing the disk). + syncq_wait: Average amount of time IO spent in + synchronous priority queues. Does not include disk time. + asyncq_wait: Average amount of time IO spent + in asynchronous priority queues. Does not include disk time. + scrub: Average queuing time in scrub queue. + Does not include disk time. trim: Average + queuing time in trim queue. Does not include disk time.

+
+
+
Include active queue statistics. Each priority queue has both pending + ( pend) and active ( + activ) IOs. Pending IOs are waiting to be issued + to the disk, and active IOs have been issued to disk and are waiting + for completion. These stats are broken out by priority queue: +

syncq_read/write: Current number of + entries in synchronous priority queues. + asyncq_read/write: Current number of entries + in asynchronous priority queues. scrubq_read: + Current number of entries in scrub queue. + trimq_write: Current number of entries in trim + queue.

+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
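For example (the pool name is illustrative), per-vdev statistics can be sampled every five seconds, or latency and queue details printed every second:
# zpool iostat -v tank 5
# zpool iostat -lq tank 1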
+
+
+
+

+

zpool-list(8), + zpool-status(8), iostat(1), + smartctl(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-labelclear.8.html b/man/v2.0/8/zpool-labelclear.8.html new file mode 100644 index 000000000..fca94445c --- /dev/null +++ b/man/v2.0/8/zpool-labelclear.8.html @@ -0,0 +1,279 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + Removes ZFS label information from the specified physical + device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+
+
zpool labelclear + [-f] device
+
Removes ZFS label information from the specified + device. If the device is a + cache device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration. +
+
+
Treat exported or foreign devices as inactive.
+
+
+
+
+
+

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-list.8.html b/man/v2.0/8/zpool-list.8.html new file mode 100644 index 000000000..da0acd2c7 --- /dev/null +++ b/man/v2.0/8/zpool-list.8.html @@ -0,0 +1,320 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listLists + ZFS storage pools along with a health status and space usage

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
+
+

+
+
zpool list + [-HgLpPv] [-o + property[,property]...] + [-T u|d] + [pool]... [interval + [count]]
+
Lists the given pools along with a health status and space usage. If no + pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until ^C is pressed. + If count is specified, the command exits after + count reports are printed. +
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(8) manual page for a list of valid + properties. The default list is name, + size, allocated, + free, checkpoint, + expandsize, fragmentation, + capacity, dedupratio, + health, altroot.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs + within the pool, in addition to the pool-wide statistics.
+
+
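For example (the pool and column choices are illustrative), selected columns can be shown in parsable form, repeating every ten seconds:
# zpool list -p -o name,size,capacity,health tank 10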
+
+
+
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-offline.8.html b/man/v2.0/8/zpool-offline.8.html new file mode 100644 index 000000000..82d4b8ad9 --- /dev/null +++ b/man/v2.0/8/zpool-offline.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + Take a physical device in a ZFS storage pool + offline

+
+
+

+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+
+

+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
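For example (names are illustrative), a disk can be taken offline temporarily and later brought back online with expansion:
# zpool offline -t tank sdb
# zpool online -e tank sdb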
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-online.8.html b/man/v2.0/8/zpool-online.8.html new file mode 100644 index 000000000..2dafbc566 --- /dev/null +++ b/man/v2.0/8/zpool-online.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + Take a physical device in a ZFS storage pool + offline

+
+
+

+ + + + + +
zpooloffline [-f] + [-t] pool + device...
+
+ + + + + +
zpoolonline [-e] + pool device...
+
+
+

+
+
zpool offline + [-f] [-t] + pool device...
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [-e] pool + device...
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-reguid.8.html b/man/v2.0/8/zpool-reguid.8.html new file mode 100644 index 000000000..c4a1cacbb --- /dev/null +++ b/man/v2.0/8/zpool-reguid.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + Generate a new unique identifier for a ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+
+
zpool reguid + pool
+
Generates a new unique identifier for the pool. You must ensure that all + devices in this pool are online and healthy before performing this + action.
+
+
+
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-remove.8.html b/man/v2.0/8/zpool-remove.8.html new file mode 100644 index 000000000..2336588df --- /dev/null +++ b/man/v2.0/8/zpool-remove.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + Remove a device from a ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device...
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool + remove [-npw] + pool device...
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. When the primary pool + storage includes a top-level raidz vdev only hot spare, cache, and log + devices can be removed. Note that keys for all encrypted datasets must be + loaded for top-level vdevs to be removed. +

Removing a top-level vdev reduces the total amount of space in + the storage pool. The specified device will be evacuated by copying all + allocated space from it to the other devices in the pool. In this case, + the zpool remove command + initiates the removal and returns, while the evacuation continues in the + background. The removal progress can be monitored with + zpool status. If an IO + error is encountered during the removal process it will be cancelled. + The + device_removal + feature flag must be enabled to remove a top-level vdev, see + zpool-features(5).

+

A mirrored top-level device (log or data) can be removed by + specifying the top-level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("no-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
+
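For example (the pool and vdev names are illustrative), the memory cost of a removal can be estimated, a device evacuated, or an in-progress removal cancelled:
# zpool remove -np tank mirror-1
# zpool remove -w tank sdb
# zpool remove -s tank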
+
+

+

zpool-add(8), zpool-detach(8), + zpool-offline(8), zpool-labelclear(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-reopen.8.html b/man/v2.0/8/zpool-reopen.8.html new file mode 100644 index 000000000..96f75dcac --- /dev/null +++ b/man/v2.0/8/zpool-reopen.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + Reopen all virtual devices (vdevs) associated with a ZFS + storage pool

+
+
+

+ + + + + +
zpoolreopen [-n] + pool
+
+
+

+
+
zpool reopen + [-n] pool
+
Reopen all the vdevs associated with the pool. +
+
+
Do not restart an in-progress scrub operation. This is not recommended + and can result in partially resilvered devices unless a second scrub + is performed.
+
+
+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-replace.8.html b/man/v2.0/8/zpool-replace.8.html new file mode 100644 index 000000000..89532bd14 --- /dev/null +++ b/man/v2.0/8/zpool-replace.8.html @@ -0,0 +1,311 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + Replace one device with another in a ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new_device]
+
+
+

+
+
zpool replace + [-fsw] [-o + property=value] + pool device + [new_device]
+
Replaces old_device with + new_device. This is equivalent to attaching + new_device, waiting for it to resilver, and then + detaching old_device. Any in progress scrub will be + cancelled. +

The size of new_device must be greater + than or equal to the minimum size of all the devices in a mirror or + raidz configuration.

+

new_device is required if the pool is + not redundant. If new_device is not specified, it + defaults to old_device. This form of replacement + is useful after an existing disk has failed and has been physically + replaced. In this case, the new disk may have the same + /dev path as the old device, even though it is + actually a different disk. ZFS recognizes this.

+
+
+
Forces use of new_device, even if it appears to + be in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the + zpoolprops(8) manual page for a list of valid + properties that can be set. The only property supported at the moment + is + ashift.
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the + resilver completes. Sequential reconstruction is not supported for + raidz configurations.
+
+
Waits until the replacement has completed before returning.
+
+
+
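For example (names are illustrative), a failed disk can be replaced in place after a physical swap, or replaced by a different disk while waiting for the resilver:
# zpool replace tank sdb
# zpool replace -w tank sdb sdc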
+
+
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-resilver.8.html b/man/v2.0/8/zpool-resilver.8.html new file mode 100644 index 000000000..1b1f86570 --- /dev/null +++ b/man/v2.0/8/zpool-resilver.8.html @@ -0,0 +1,274 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + Start a resilver of a device in a ZFS storage + pool

+
+
+

+ + + + + +
zpoolresilver pool...
+
+
+

+
+
zpool + resilver pool...
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning. Any drives that were scheduled for a + deferred resilver will be added to the new one. This requires the + resilver_defer + feature.
+
+
+
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-scrub.8.html b/man/v2.0/8/zpool-scrub.8.html new file mode 100644 index 000000000..39f0c6ca6 --- /dev/null +++ b/man/v2.0/8/zpool-scrub.8.html @@ -0,0 +1,306 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + Begin a scrub or resume a paused scrub of a ZFS storage + pool

+
+
+

+ + + + + +
zpoolscrub [-s | + -p] [-w] + pool...
+
+
+

+
+
zpool scrub + [-s | -p] + [-w] pool...
+
Begins a scrub or resumes a paused scrub. The scrub examines all data in + the specified pools to verify that it checksums correctly. For replicated + (mirror or raidz) devices, ZFS automatically repairs any damage discovered + during the scrub. The zpool + status command reports the progress of the scrub + and summarizes the results of the scrub upon completion. +

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be + out of date (for example, when attaching a new device to a mirror or + replacing an existing device), whereas scrubbing examines all data to + discover silent errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive + operations, ZFS only allows one at a time. If a scrub is paused, the + zpool scrub resumes it. + If a resilver is in progress, ZFS does not allow a scrub to be started + until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During + this period, no completion time estimate will be provided.
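For example, a scrub of a hypothetical pool named tank could be started and its progress monitored as follows:
# zpool scrub tank
# zpool status tank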

+
+
+
Stop scrubbing.
+
+
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to disk. If the system is restarted or the pool is exported during a paused scrub, then even after import the scrub will remain paused until it is resumed. Once resumed, the scrub will pick up from the place where it was last checkpointed to disk. To resume a paused scrub, issue zpool scrub again.
+
+
Wait until scrub has completed before returning.
+
+
+
+
+
+

+

zpool-iostat(8), + zpool-resilver(8), zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-set.8.html b/man/v2.0/8/zpool-set.8.html new file mode 100644 index 000000000..d9057e52c --- /dev/null +++ b/man/v2.0/8/zpool-set.8.html @@ -0,0 +1,313 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + Retrieves properties for the specified ZFS storage + pool(s)

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]...] + all|property[,property]... + [pool]...
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]...] + all|property[,property]... + [pool]...
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
        name          Name of storage pool
+        property      Property name
+        value         Property value
+        source        Property source, either 'default' or 'local'.
+
+

See the zpoolprops(8) manual page for more + information on the available pool properties.
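For example, to print the health and capacity of a hypothetical pool named tank in scripted, parsable form:
# zpool get -Hp health,capacity tank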

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display. name,property,value,source is the default value.
+
+
Display numbers in parsable (exact) values.
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(8) manual page for more information on what + properties can be set and acceptable values.
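For example, to enable automatic TRIM on a hypothetical pool named tank:
# zpool set autotrim=on tank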
+
+
+
+

+

zpoolprops(8), zpool-list(8), + zpool-features(5)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-split.8.html b/man/v2.0/8/zpool-split.8.html new file mode 100644 index 000000000..3bf8dbdee --- /dev/null +++ b/man/v2.0/8/zpool-split.8.html @@ -0,0 +1,324 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + Split devices off a ZFS storage pool creating a new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]... + [-R root] + pool newpool [device]...
+
+
+

+
+
zpool split + [-gLlnP] [-o + property=value]... + [-R root] pool + newpool [device ...]
+
Splits devices off pool creating + newpool. All vdevs in pool + must be mirrors and the pool must not be in the process of resilvering. At + the time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool. +

The optional device specification causes the specified device(s) to be included in the new pool and, should any devices remain unspecified, the last device in each mirror is used as it would be by default.
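For example, a hypothetical mirrored pool tank could be split into a new pool tank2, taking the last device of each mirror:
# zpool split tank tank2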

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted datasets it attempts to mount as it is bringing the new pool online. Note that if any datasets have a keylocation of prompt, this command will block waiting for the keys to be entered. Without this flag, encrypted datasets will be left unavailable until the keys are loaded.
+
+
Do dry run, do not actually perform the split. Print out the expected + configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(8) manual page for more information on + the available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
+
+
+
+
+
+

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-status.8.html b/man/v2.0/8/zpool-status.8.html new file mode 100644 index 000000000..8c453dc47 --- /dev/null +++ b/man/v2.0/8/zpool-status.8.html @@ -0,0 +1,337 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + Display detailed health status for the given ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-c + SCRIPT] [-DigLpPstvx] + [-T u|d] + [pool]... [interval + [count]]
+
+
+

+
+
zpool status + [-c + [SCRIPT1[,SCRIPT2]...]] + [-DigLpPstvx] [-T + u|d] [pool]... + [interval [count]]
+
Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in + the system is displayed. For more information on pool and device health, + see the section of zpoolconcepts(8). +

If a scrub or resilver is in progress, this command reports + the percentage done and the estimated time to completion. Both of these + are only approximate, because the amount of data in the pool and the + other workloads on the system can change.
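For example, to display verbose status only for pools that are exhibiting errors or are otherwise unavailable (pool names in the output are system-specific):
# zpool status -xv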

+
+
+ [SCRIPT1[,SCRIPT2]...]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool + status output. See the + -c option of zpool + iostat for complete details.
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can + be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the + -L flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in + the pool) block counts and sizes by reference count.
+
+
Display the number of leaf VDEV slow IOs. This is the number of IOs that did not complete within zio_slow_io_ms milliseconds (default 30 seconds). This does not necessarily mean the IOs failed to complete, only that they took an unreasonably long time. This may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
Displays verbose data error information, printing out a complete list + of all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are + otherwise unavailable. Warnings about pools not using the latest + on-disk format will not be included.
+
+
+
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-sync.8.html b/man/v2.0/8/zpool-sync.8.html new file mode 100644 index 000000000..ca07c1d1b --- /dev/null +++ b/man/v2.0/8/zpool-sync.8.html @@ -0,0 +1,273 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncForce + data to be written to primary storage of a ZFS storage pool and update + reporting data

+
+
+

+ + + + + +
zpoolsync [pool]...
+
+
+

+
+
zpool sync + [pool ...]
+
This command forces all in-core dirty data to be written to the primary pool storage and not the ZIL. It will also update administrative information, including quota reporting. Without arguments, zpool sync will sync all pools on the system. Otherwise, it will sync only the specified pool(s).
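For example, to force a sync of a hypothetical pool named tank:
# zpool sync tank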
+
+
+
+

+

zpoolconcepts(8), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-trim.8.html b/man/v2.0/8/zpool-trim.8.html new file mode 100644 index 000000000..2917a11e7 --- /dev/null +++ b/man/v2.0/8/zpool-trim.8.html @@ -0,0 +1,312 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + Initiate immediate TRIM operations for all free space in a + ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c | -s] + pool [device...]
+
+
+

+
+
zpool trim + [-dw] [-c | + -s] pool + [device...]
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space. +

A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the documentation for the autotrim property in zpoolprops(8) for the types of vdev devices which can be trimmed.
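For example, to start a TRIM of all free space in a hypothetical pool named tank and wait for it to complete:
# zpool trim -w tank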

+
+
+ --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, + the device guarantees that data stored on the trimmed blocks has been + erased. This requires support from the device and is not supported by + all SSDs.
+
+ --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
+ --cancel
+
Cancel trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no cancellation + will occur on any device.
+
+ --suspend
+
Suspend trimming on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are + not currently being trimmed, the command will fail and no suspension + will occur on any device. Trimming can then be resumed by running + zpool trim with no + flags on the relevant target devices.
+
+ --wait
+
Wait until the devices are done being trimmed before returning.
+
+
+
+
+
+

+

zpool-initialize(8), + zpool-wait(8), zpoolprops(8)

+
+
+ + + + + +
February 25, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-upgrade.8.html b/man/v2.0/8/zpool-upgrade.8.html new file mode 100644 index 000000000..4a8284b65 --- /dev/null +++ b/man/v2.0/8/zpool-upgrade.8.html @@ -0,0 +1,312 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + Manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool...
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools.
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by the current software. See zpool-features(5) for a description of the feature flags supported by the current software.
+
zpool upgrade + [-V version] + -a|pool...
+
Enables all supported features on the given pool. Once this is done, the + pool will no longer be accessible on systems that do not support feature + flags. See zpool-features(5) for details on + compatibility with systems that support feature flags, but do not support + all features enabled on the pool. +
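For example, to enable all supported features on a single, hypothetically named pool tank:
# zpool upgrade tank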
+
+
Enables all supported features on all pools.
+
+ version
+
Upgrade to the specified legacy version. If the + -V flag is specified, no features will be + enabled on the pool. This option can only be used to increase the + version number up to the last supported legacy version number.
+
+
+
+
+
+

+

zpool-features(5), + zpoolconcepts(8), zpoolprops(8), + zpool-history(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool-wait.8.html b/man/v2.0/8/zpool-wait.8.html new file mode 100644 index 000000000..22537b709 --- /dev/null +++ b/man/v2.0/8/zpool-wait.8.html @@ -0,0 +1,312 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-waitWait + for background activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpoolwait [-Hp] + [-T u|d] + [-t + activity[,activity]...] + pool [interval]
+
+
+

+
+
zpool wait + [-Hp] [-T + u|d] [-t + activity[,activity]...] + pool [interval]
+
Waits until all background activity of the given types has ceased in the + given pool. The activity could cease because it has completed, or because + it has been paused or canceled by a user, or because the pool has been + exported or destroyed. If no activities are specified, the command waits + until background activity of every type listed below has ceased. If there + is no activity of the given types in progress, the command returns + immediately. +

These are the possible values for + activity, along with what each one waits for:

+
+
        discard       Checkpoint to be discarded
+        free          'freeing' property to become 0
+        initialize    All initializations to cease
+        replace       All device replacements to cease
+        remove        Device removal to cease
+        resilver      Resilver to cease
+        scrub         Scrub to cease
+        trim          Manual trim to cease
+
+

If an interval is provided, the amount + of work remaining, in bytes, for each activity is printed every + interval seconds.
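For example, to block until a resilver of a hypothetical pool named tank finishes, printing the remaining work every 10 seconds:
# zpool wait -t resilver tank 10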

+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard + date format. See date(1).
+
+
+
+
+
+

+

zpool-status(8), + zpool-checkpoint(8), + zpool-initialize(8), zpool-replace(8), + zpool-remove(8), zpool-resilver(8), + zpool-scrub(8), zpool-trim(8)

+
+
+ + + + + +
February 25, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpool.8.html b/man/v2.0/8/zpool.8.html new file mode 100644 index 000000000..2e2b6881f --- /dev/null +++ b/man/v2.0/8/zpool.8.html @@ -0,0 +1,798 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpoolversion
+
+ + + + + +
zpool<subcommand> + [<args>]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(8) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
An alias for the zpool + version subcommand.
+
zpool version
+
Displays the software version of the zpool + userland utility and the zfs kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8) / zpool-detach(8)
+
Increases or decreases redundancy by attach-ing or + detach-ing a device on an existing vdev (virtual + device).
+
zpool-add(8) / zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties are listed in the zpoolprops(8) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8) / + zpool-set(8)
+
Retrieves the given list of properties (or all properties if all is used) for the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + For more information about the subclasses and event payloads that can be + generated see the zfs-events(5) man page.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of the pool, which can be later restored by zpool import --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary pool storage and not the ZIL. It will also update administrative information, including quota reporting. Without arguments, zpool sync will sync all pools on the system. Otherwise, it will sync only the specified pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8) zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+

+
+
Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks. +
+
# zpool create tank raidz sda sdb sdc sdd sde sdf
+
+
+
Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks. +
+
# zpool create tank mirror sda sdb mirror sdc sdd
+
+
+
Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions. +
+
# zpool create tank sda1 sdb2
+
+
+
Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
+
# zpool create tank /path/to/file/a /path/to/file/b
+
+
+
Adding a Mirror to a ZFS Storage Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool. +
+
# zpool add tank mirror sda sdb
+
+
+
Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device. The results from this command are similar to the following:
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within. +
+
# zpool destroy -f tank
+
+
+
Exporting a ZFS Storage Pool
+
The following command exports the devices in pool tank + so that they can be relocated or later imported. +
+
# zpool export tank
+
+
+
Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software. +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
+
# zpool create tank mirror sda sdb spare sdc
+
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
+
# zpool replace tank sda sdd
+
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The + hot spare can be permanently removed from the pool using the following + command:

+
+
# zpool remove tank sdc
+
+
+
Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
+
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
+  sde sdf
+
+
+
Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
+
# zpool add pool cache sdc sdd
+
+

Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the zpool iostat subcommand as follows:

+
+
# zpool iostat -v pool 5
+
+
+
Removing a Mirrored top-level (Log or Data) + Device
+
The following commands remove the mirrored log device + mirror-2 and mirrored top-level data device + mirror-1. +

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
+
# zpool remove tank mirror-2
+
+

The command to remove the mirrored data + mirror-1 is:

+
+
# zpool remove tank mirror-1
+
+
+
Displaying expanded space on a + device
+
The following command displays the detailed information for the pool data. This pool is composed of a single raidz vdev where one of its devices increased its capacity by 10GB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
Adding output columns
+
Additional columns can be added to the zpool status and zpool iostat output with the -c option.
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
+
+
Use ANSI color in zpool status output.
+
+
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is not used, only the + presence or absence of the variable matters.
+
+
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool status + -g command line option.
+
+
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the zpool + status -L command line option.
+
+
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the zpool + status -P command line option.
+
+
+
+
Older OpenZFS implementations had issues when attempting to display pool + config VDEV names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and zpool + status would fail when listing the config. This would also be + true for future Linux based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+
+
+
+
+
Allow a privileged user to run the zpool + status/iostat with the -c option. Normally, + only unprivileged users are allowed to run + -c.
+
+
+
+
The search path for scripts when running zpool + status/iostat with the -c option. This is a + colon-separated list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
+
+
Allow a user to run zpool status/iostat with the + -c option. If + ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the + user is allowed to run zpool status/iostat + -c.
+
+
+
+

+

+
+
+

+

zfs-events(5), + zfs-module-parameters(5), + zpool-features(5), zed(8), + zfs(8), zpool-add(8), + zpool-attach(8), zpool-checkpoint(8), + zpool-clear(8), zpool-create(8), + zpool-destroy(8), zpool-detach(8), + zpool-events(8), zpool-export(8), + zpool-get(8), zpool-history(8), + zpool-import(8), zpool-initialize(8), + zpool-iostat(8), zpool-labelclear(8), + zpool-list(8), zpool-offline(8), + zpool-online(8), zpool-reguid(8), + zpool-remove(8), zpool-reopen(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-set(8), + zpool-split(8), zpool-status(8), + zpool-sync(8), zpool-trim(8), + zpool-upgrade(8), zpool-wait(8), + zpoolconcepts(8), zpoolprops(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpoolconcepts.8.html b/man/v2.0/8/zpoolconcepts.8.html new file mode 100644 index 000000000..4c42cbfec --- /dev/null +++ b/man/v2.0/8/zpoolconcepts.8.html @@ -0,0 +1,572 @@ + + + + + + + zpoolconcepts.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.8

+
+ + + + + +
ZPOOLCONCEPTS(8)System Manager's ManualZPOOLCONCEPTS(8)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system of which it + is a part. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical + fashion across all components of a mirror. A mirror with N disks of size X + can hold X bytes and can withstand (N-1) devices failing before data + integrity is compromised.
+
, + raidz1, raidz2, + raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single-, double-, or triple-parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can + hold approximately (N-P)*X bytes and can withstand P device(s) failing + before data integrity is compromised. The minimum number of devices in a + raidz group is one more than the number of parity disks. The recommended + number is between 3 and 9 to help increase performance.

+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device dedicated solely for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. The keywords mirror and + raidz are used to distinguish where a group ends and + another begins. For example, the following creates two root vdevs, each a + mirror of two disks:

+
+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: + online, degraded, or faulted. An online pool has all devices operating + normally. A degraded pool is one in which one or more devices have failed, + but the data is still available due to a redundant configuration. A faulted + pool has corrupted metadata, or one or more faulted devices, and + insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as mirror or raidz device, + is potentially impacted by the state of its associated vdevs, or component + devices. A top-level vdev or component device is in one of the following + states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

If a device is removed and later re-attached to the system, ZFS + attempts to put the device online automatically. Device attach detection is + hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
+
# zpool create pool mirror sda sdb spare sdc sdd
+
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+

If a pool has a shared spare that is currently being used, the + pool can not be exported since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
+
# zpool create pool sda sdb log sdc
+
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allow much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
+
# zpool create pool sda sdb cache sdc sdd
+
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is + persistent across reboots and restored asynchronously when importing the + pool in L2ARC (persistent L2ARC). This can be disabled by setting + . For cache devices smaller than 1GB we do not write the metadata + structures required for rebuilding the L2ARC in order not to waste space. + This can be changed with + . + The cache device header (512 bytes) is updated even if no metadata + structures are written. Setting + will result in scanning the full-length ARC lists for cacheable + content to be written in L2ARC (persistent ARC). If a cache device is added + with zpool add its label and + header will be overwritten and its contents are not going to be restored in + L2ARC, even if the device was previously part of the pool. If a cache device + is onlined with zpool online + its contents will be restored in L2ARC. This is useful in case of memory + pressure where the contents of the cache device are not fully restored in + L2ARC. The user can off/online the cache device when there is less memory + pressure in order to fully restore its contents to L2ARC.

+
+
+

+

Before starting critical procedures that include destructive actions (e.g. zfs destroy), an administrator can checkpoint the pool's state and, in the case of a mistake or failure, rewind the entire pool back to the checkpoint. Otherwise, the checkpoint can be discarded when the procedure has completed successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, while a pool has a checkpoint + certain operations are not allowed. Specifically, vdev + removal/attach/detach, mirror splitting, and changing the pool's guid. + Adding a new vdev is supported but in the case of a rewind it will have to + be added again. Finally, users of this feature should keep in mind that + scrubs in a pool that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
+
# zpool checkpoint pool
+
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
+
# zpool export pool
+# zpool import --rewind-to-checkpoint pool
+
+

To discard the checkpoint from a pool:

+
+
# zpool checkpoint -d pool
+
+

Dataset reservations (controlled by the + reservation or + refreservation zfs properties) may be unenforceable + while a checkpoint exists, because the checkpoint is allowed to consume the + dataset's reservation. Finally, data that is part of the checkpoint but has + been freed in the current state of the pool won't be scanned during a + scrub.

+
+
+

+

The allocations in the special class are dedicated to specific + block types. By default this includes all metadata, the indirect blocks of + user data, and any deduplication tables. The class can also be provisioned + to accept small file blocks.

+

A pool must always have at least one normal (non-dedup/special) + vdev before other devices can be assigned to the special class. If the + special class becomes full, then allocations intended for it will spill back + into the normal class.

+

Deduplication tables can be excluded + from the special class by setting the + + zfs module parameter to false (0).

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks dataset property. It defaults to zero, so you must opt-in by setting it to a non-zero value. See zfs(8) for more info on setting this property.
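For example (device names are illustrative), a pool with a mirrored special vdev might be created and small file blocks of up to 64K routed to it:
# zpool create pool mirror sda sdb special mirror sdc sdd
# zfs set special_small_blocks=64K pool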

+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zpoolprops.8.html b/man/v2.0/8/zpoolprops.8.html new file mode 100644 index 000000000..6828f61b3 --- /dev/null +++ b/man/v2.0/8/zpoolprops.8.html @@ -0,0 +1,513 @@ + + + + + + + zpoolprops.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.8

+
+ + + + + +
ZPOOLPROPS(8)System Manager's ManualZPOOLPROPS(8)
+
+
+

+

zpoolprops — + available properties for ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. On whole-disk vdevs, this is + the space beyond the end of the GPT – typically occurring when a + LUN is dynamically expanded or a disk replaced with a larger one. On + partition vdevs, this is the space appended to the partition after it was + added to the pool – most likely by resizing it in-place. The space + can be claimed for the pool by bringing it online with + + or using zpool online + -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + + (see zfs-module-parameters(5) for more + information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
The current health of the pool. Health can be one of + , + , + , + , + .
+
+
A unique identifier for the pool.
+
+
A unique identifier for the pool. Unlike the guid property, this identifier is generated every time we load the pool (i.e. it does not persist across imports/exports) and never changes while the pool is loaded (even if a reguid operation takes place).
+
+
Total size of the storage pool.
+
+
Information about unsupported features that are enabled on the pool. See + zpool-features(5) for details.
+
+

The space usage properties report actual physical space available + to the storage pool. The physical space can be different from the total + amount of space that any contained datasets can actually use. The amount of + space used in a raidz configuration depends on the characteristics of the + data being written. In addition, ZFS reserves some space for internal + accounting that the zfs(8) command takes into account, but + the zpoolprops command does not. For non-full pools + of a reasonable size, these effects should be invisible. For small pools, or + pools that are close to being completely full, these discrepancies may + become more noticeable.

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
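For example, a hypothetical pool named tank could be imported in read-only mode with:
# zpool import -o readonly=on tank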
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1&lt;&lt;12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, + device replacement must be initiated by the administrator by using the + zpool replace command. If + set to on, any new device, found in the same physical + location as a device that previously belonged to the pool, is + automatically formatted and replaced. The default behavior is + off. This property can also be referred to by its + shortened column name, + . + Autoreplace can also be used with virtual disks (like device mapper) + provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. + See the vdev_id(8) man page for more details. + Autoreplace and autoonline require the ZFS Event Daemon be configured and + running. See the zed(8) man page for more details.
+
=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default setting for this property is + off. +

Automatic TRIM does not immediately + reclaim blocks after a free. Instead, it will optimistically delay + allowing smaller ranges to be aggregated in to a few larger ones. These + can then be issued more efficiently to the storage. TRIM on L2ARC + devices is enabled by setting + .

+

Be aware that automatic trimming of recently freed data blocks + can put significant stress on the underlying storage devices. This will + vary depending of how well the specific device handles these commands. + For lower end devices it is often possible to achieve most of the + benefits of automatic trimming by running an on-demand (manual) TRIM + periodically using the zpool + trim command.

+
+
=|pool/dataset
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=number
+
This property is deprecated and no longer has any effect.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
wait
Blocks all I/O access until the device connectivity is recovered and the errors are cleared. This is the default behavior.
+
+
continue
Returns EIO to any new write I/O requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked.
+
+
panic
Prints out a message to the console and generates a system crash dump.
+
+
+
feature@feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(5) for details on feature states.
+
listsnapshots=on|off
+
Controls whether information about snapshots associated with this pool is output when zfs list is run without the -t option. The default value is off. This property can also be referred to by its shortened name, listsnaps.
+
multihost=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not protect against an individual device being used in multiple pools, regardless of the type of vdev. See the discussion under zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs-module-parameters(5) man page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) and spl-module-parameters(5) for additional details. The default value is off.
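A minimal sketch of enabling this on a host (the pool name tank is a placeholder; zgenhostid writes /etc/hostid as described in its man page):
# zgenhostid
# zpool set multihost=on tank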

+
+
version=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zstream.8.html b/man/v2.0/8/zstream.8.html new file mode 100644 index 000000000..cd85c0038 --- /dev/null +++ b/man/v2.0/8/zstream.8.html @@ -0,0 +1,325 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate zfs send streams

+
+
+

+ + + + + +
zstream dump [-Cvd] [file]
+
+ + + + + +
zstream redup [-v] file
+
+ + + + + +
zstream token resume_token
+
+
+

+

The zstream utility manipulates zfs send streams, which are the output of the zfs send command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+
+
zstream token + resume_token
+
Dumps zfs resume token information.
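For example, the token saved by an interrupted receive can be read from the dataset's receive_resume_token property and decoded (pool/dataset is a placeholder):
# zstream token $(zfs get -H -o value receive_resume_token pool/dataset)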
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
+
# zstream redup DEDUP_STREAM_FILE | zfs receive ...
+
+
+
+
Verbose. Print summary of converted records.
+
+
+
+
+
+

+

zfs(8), zfs-send(8), + zfs-receive(8)

+
+
+ + + + + +
March 25, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/8/zstreamdump.8.html b/man/v2.0/8/zstreamdump.8.html new file mode 100644 index 000000000..8a560c869 --- /dev/null +++ b/man/v2.0/8/zstreamdump.8.html @@ -0,0 +1,276 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAMDUMP(8)System Manager's ManualZSTREAMDUMP(8)
+
+
+

+

zstreamdump - filter data in zfs send stream

+
+
+

+
zstreamdump [-C] [-v] [-d]
+

+
+
+

+

The zstreamdump utility reads from the output of the zfs + send command, then displays headers and some statistics from that + output. See zfs(8).
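For example (pool/fs@snap is a placeholder snapshot name):
# zfs send pool/fs@snap | zstreamdump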

+
+
+

+

The following options are supported:

+

-C

+

+
Suppress the validation of checksums.
+

+

-v

+

+
Verbose. Dump all headers, not only begin and end + headers.
+

+

-d

+

+
Dump contents of blocks modified. Implies verbose.
+

+
+
+

+

zfs(8)

+
+
+ + + + + +
August 24, 2020OpenZFS
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.0/index.html b/man/v2.0/index.html new file mode 100644 index 000000000..5d84036dc --- /dev/null +++ b/man/v2.0/index.html @@ -0,0 +1,143 @@ + + + + + + + v2.0 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/arcstat.1.html b/man/v2.1/1/arcstat.1.html new file mode 100644 index 000000000..01f16eb9d --- /dev/null +++ b/man/v2.1/1/arcstat.1.html @@ -0,0 +1,336 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstatreport + ZFS ARC and L2ARC statistics

+
+
+

+ + + + + +
arcstat[-havxp] [-f + field[,field…]] + [-o file] + [-s string] + [interval] [count]
+
+
+

+

arcstat prints various ZFS ARC and L2ARC + statistics in vmstat-like fashion:

+
+
+
+
ARC target size
+
+
Demand data hit percentage
+
+
Demand data miss percentage
+
+
MFU list hits per second
+
+
Metadata hit percentage
+
+
Metadata miss percentage
+
+
MRU list hits per second
+
+
Prefetch hits percentage
+
+
Prefetch miss percentage
+
+
Demand data hits per second
+
+
Demand data misses per second
+
+
ARC hit percentage
+
+
ARC reads per second
+
+
MFU ghost list hits per second
+
+
Metadata hits per second
+
+
ARC misses per second
+
+
Metadata misses per second
+
+
MRU ghost list hits per second
+
+
Prefetch hits per second
+
+
Prefetch misses per second
+
+
Total ARC accesses per second
+
+
Current time
+
+
ARC size
+
+
Alias for size
+
+
Demand data accesses per second
+
+
evict_skip per second
+
+
ARC miss percentage
+
+
Metadata accesses per second
+
+
Prefetch accesses per second
+
+
L2ARC access hit percentage
+
+
L2ARC hits per second
+
+
L2ARC misses per second
+
+
Total L2ARC accesses per second
+
+
L2ARC prefetch allocated size per second
+
+
L2ARC prefetch allocated size percentage
+
+
L2ARC MFU allocated size per second
+
+
L2ARC MFU allocated size percentage
+
+
L2ARC MRU allocated size per second
+
+
L2ARC MRU allocated size percentage
+
+
L2ARC data (buf content) allocated size per second
+
+
L2ARC data (buf content) allocated size percentage
+
+
L2ARC metadata (buf content) allocated size per second
+
+
L2ARC metadata (buf content) allocated size percentage
+
+
Size of the L2ARC
+
+
mutex_miss per second
+
+
Bytes read per second from the L2ARC
+
+
L2ARC access miss percentage
+
+
Actual (compressed) size of the L2ARC
+
+
ARC grow disabled
+
+
ARC reclaim needed
+
+
The ARC's idea of how much free memory there is, which includes evictable + memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually + more instructive to observe than free.
+
+
The ARC's idea of how much free memory is available to it, which is a bit + less than free. May temporarily be negative, in which + case the ARC will reduce the target size c.
+
+
+
+
+

+
+
+
Print all possible stats.
+
+
Display only specific fields. See + DESCRIPTION for supported + statistics.
+
+
Display help message.
+
+
Report statistics to a file instead of the standard output.
+
+
Disable auto-scaling of numerical fields (for raw, machine-parsable + values).
+
+
Display data with a specified separator (default: 2 spaces).
+
+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+
+
Show field headers and definitions
+
+
+
+

+

The following operands are supported:

+
+
+
interval
+
Specify the sampling interval in seconds.
+
count
+
Display only count reports.
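As a usage sketch, the first command below samples every second for ten reports, and the second writes a report every 5 seconds to a file using -o (the output path is arbitrary):
# arcstat 1 10
# arcstat -o /var/tmp/arcstat.out 5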
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/cstyle.1.html b/man/v2.1/1/cstyle.1.html new file mode 100644 index 000000000..8677fc131 --- /dev/null +++ b/man/v2.1/1/cstyle.1.html @@ -0,0 +1,304 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstylecheck for + some common stylistic errors in C source files

+
+
+

+ + + + + +
cstyle[-chpvCP] [-o + construct[,construct…]] + [file]…
+
+
+

+

cstyle inspects C source files (*.c and *.h) for common stylistic errors. It attempts to check for the cstyle documented in http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. Note that there is much in that document that cannot be checked for; just because your code is cstyle-clean does not mean that you've followed Sun's C style. Caveat emptor.
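A typical invocation checks continuation indentation, the picky rules, and non-POSIX types across a set of sources (the file paths are placeholders):
# cstyle -cpP module/zfs/*.c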

+
+
+

+

The following options are supported:

+
+
+
Check continuation line indentation inside of functions. Sun's C style states that all statements must be indented to an appropriate tab stop, and any continuation lines after them must be indented exactly four spaces from the start line. This option enables a series of checks designed to find continuation line problems within functions only. The checks have some limitations; see CONTINUATION CHECKING, below.
+
+
Performs heuristic checks that are sometimes wrong. Not generally + used.
+
+
Performs some of the more picky checks. Includes ANSI + + and + + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current + continuation block.
+
+
Ignore errors in header comments (i.e. block comments starting in the + first column). Not generally used.
+
+
Check for use of non-POSIX types. Historically, types like + + and + + were used, but they are now deprecated in favor of the POSIX types + , + , + etc. This detects any use of the deprecated types. Used as part of the + putback checks.
+
+ construct[,construct…]
+
Available constructs include: +
+
+
Allow doxygen-style block comments + ( + and + ).
+
+
Allow splint-style lint comments + (...).
+
+
+
+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
    +
1. Preprocessor macros which cause unmatched parenthesis will confuse the checker for that line. To fix this, you'll need to make sure that each branch of the #if statement has balanced parenthesis.
2. Some cpp(1) macros do not require ;s after them. Any such macros must be ALL_CAPS; any lower case letters will cause bad output.
   The bad output will generally be corrected after the next ;, {, or }.
+Some continuation error messages deserve some additional explanation: +
+
+
A multi-line statement which is not broken at statement boundaries. For + example: +
+
if (this_is_a_long_variable == another_variable) a =
+    b + c;
+
+

Will trigger this error. Instead, do:

+
+
if (this_is_a_long_variable == another_variable)
+    a = b + c;
+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example: +
+
while (do_something(&x) == 0);
+
+

Will trigger this error. Instead, do:

+
+
while (do_something(&x) == 0)
+    ;
+
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/index.html b/man/v2.1/1/index.html new file mode 100644 index 000000000..d3ae435f7 --- /dev/null +++ b/man/v2.1/1/index.html @@ -0,0 +1,157 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/raidz_test.1.html b/man/v2.1/1/raidz_test.1.html new file mode 100644 index 000000000..c400c2f21 --- /dev/null +++ b/man/v2.1/1/raidz_test.1.html @@ -0,0 +1,253 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+
+

+

raidz_testraidz + implementation verification and benchmarking tool

+
+
+

+ + + + + +
raidz_test[-StBevTD] [-a + ashift] [-o + zio_off_shift] [-d + raidz_data_disks] [-s + zio_size_shift] [-r + reflow_offset]
+
+
+

+

The purpose of this tool is to run all supported raidz + implementation and verify the results of all methods. It also contains a + parameter sweep option where all parameters affecting a RAIDZ block are + verified (like ashift size, data offset, data size, etc.). The tool also + supports a benchmarking mode using the -B + option.
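For instance, one might benchmark all implementations, or run a bounded parameter sweep (flag meanings are documented below; the 60-second timeout is an arbitrary choice):
# raidz_test -B
# raidz_test -S -t 60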

+
+
+

+
+
+
Print a help summary.
+
+ ashift (default: + )
+
Ashift value.
+
+ zio_off_shift (default: + )
+
ZIO offset for each raidz block. The offset's value is + .
+
+ raidz_data_disks (default: + )
+
Number of raidz data disks to use. Additional disks will be used for + parity.
+
+ zio_size_shift (default: + )
+
Size of data for raidz block. The real size is + .
+
+ reflow_offset (default: + )
+
Set raidz expansion offset. The expanded raidz map allocation function + will produce different map configurations depending on this value.
+
-S (weep)
+
Sweep parameter space while verifying the raidz implementations. This option will exhaust most of the valid values for the -aods options. Runtime using this option will be long.
+
-t (imeout)
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
-B (enchmark)
+
All implementations are benchmarked using increasing per disk data size. + Results are given as throughput per disk, measured in MiB/s.
+
-e (xpansion)
+
Use expanded raidz map allocation function.
+
-v (erbose)
+
Increase verbosity.
+
-T (est the test)
+
Debugging option: fail all tests. This is to check if tests would properly + verify bit-exactness.
+
-D (ebug)
+
Debugging option: attach gdb(1) when + + or + + are received.
+
+
+
+

+

ztest(1)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/zhack.1.html b/man/v2.1/1/zhack.1.html new file mode 100644 index 000000000..283dabdd5 --- /dev/null +++ b/man/v2.1/1/zhack.1.html @@ -0,0 +1,267 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+
+

+

zhacklibzpool + debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+
+
+ + + + + +
zhackfeature stat pool
+
+
List feature flags.
+
+ + + + + +
zhackfeature enable [-d + description] [-r] + pool guid
+
+
Add a new feature to pool that is uniquely + identified by guid, which is specified in the same + form as a zfs(8) user property. +

The description is a short human + readable explanation of the new feature.

+

The -r flag indicates that + pool can be safely opened in read-only mode by a + system that does not understand the guid + feature.

+
+
+ + + + + +
zhackfeature ref + [-d|-m] + pool guid
+
+
Increment the reference count of the guid feature in + pool. +

The -d flag decrements the reference + count of the guid feature in + pool instead.

+

The -m flag indicates that the + guid feature is now required to read the pool + MOS.

+
+
+
+
+

+

The following can be passed to all zhack + invocations before any subcommand:

+
+
+ cachefile
+
Read pool configuration from the + cachefile, which is + /etc/zfs/zpool.cache by default.
+
+ dir
+
Search for pool members in + dir. Can be specified more than once.
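For example, combining the global options with a subcommand (the directory and pool name are placeholders):
# zhack -d /dev/disk/by-id feature stat tank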
+
+
+
+

+
+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
+# zhack feature enable -d 'Predict future disk failures.' tank com.example:clairvoyance
+# zhack feature ref tank com.example:clairvoyance
+
+
+
+

+

ztest(1), zpool-features(7), + zfs(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/ztest.1.html b/man/v2.1/1/ztest.1.html new file mode 100644 index 000000000..217791e79 --- /dev/null +++ b/man/v2.1/1/ztest.1.html @@ -0,0 +1,379 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+
+

+

ztestwas + written by the ZFS Developers as a ZFS unit test

+
+
+

+ + + + + +
ztest[-VEG] [-v + vdevs] [-s + size_of_each_vdev] [-a + alignment_shift] [-m + mirror_copies] [-r + raidz_disks/draid_disks] [-R + raid_parity] [-K + raid_kind] [-D + draid_data] [-S + draid_spares] [-C + vdev_class_state] [-d + datasets] [-t + threads] [-g + gang_block_threshold] [-i + initialize_pool_i_times] [-k + kill_percentage] [-p + pool_name] [-T + time] [-z + zil_failure_rate]
+
+
+

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for ten minutes and + uses block files (stored in /tmp) to create pools + rather than using physical disks. Block files afford + ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you + have a small tmp directory.

+

By default, ztest is non-verbose, so invoking it with no options results in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should + notice many ztest.* files lying around. Once the run + completes you can safely remove these files. Note that you shouldn't remove + these files during a run. You can re-use these files in your next + ztest run by using the -E + option.

+
+
+

+
+
, + -?, --help
+
Print a help summary.
+
, + --vdevs= (default: + )
+
Number of vdevs.
+
, + --vdev-size= (default: + )
+
Size of each vdev.
+
, + --alignment-shift= (default: + ) + (use + + for random)
+
Alignment shift used in test.
+
, + --mirror-copies= (default: + )
+
Number of mirror copies.
+
, + --raid-disks= (default: 4 + for + raidz/ + for draid)
+
Number of raidz/draid disks.
+
, + --raid-parity= (default: 1)
+
Raid parity (raidz & draid).
+
, + --raid-kind=||random + (default: random)
+
The kind of RAID config to use. With random the kind + alternates between raidz and draid.
+
, + --draid-data= (default: 4)
+
Number of data disks in a dRAID redundancy group.
+
, + --draid-spares= (default: 1)
+
Number of dRAID distributed spare disks.
+
, + --datasets= (default: + )
+
Number of datasets.
+
, + --threads= (default: + )
+
Number of threads.
+
, + --gang-block-threshold= (default: + 32K)
+
Gang block threshold.
+
, + --init-count= (default: 1)
+
Number of pool initializations.
+
, + --kill-percentage= (default: + )
+
Kill percentage.
+
, + --pool-name= (default: + )
+
Pool name.
+
, + --vdev-file-directory= (default: + /tmp)
+
File directory for vdev files.
+
, + --multi-host
+
Multi-host; simulate pool imported on remote host.
+
, + --use-existing-pool
+
Use existing pool (use existing pool instead of creating new one).
+
, + --run-time= (default: + s)
+
Total test run time.
+
, + --pass-time= (default: + s)
+
Time per pass.
+
, + --freeze-loops= (default: + )
+
Max loops in + ().
+
, + --alt-ztest=
+
Alternate ztest path.
+
, + --vdev-class-state=||random + (default: random)
+
The vdev allocation class state.
+
, + --option=variable=value
+
Set global variable to an unsigned 32-bit integer + value (little-endian only).
+
, + --dump-debug
+
Dump zfs_dbgmsg buffer before exiting due to an error.
+
, + --verbose
+
Verbose (use multiple times for ever more verbosity).
+
+
+
+

+

To override /tmp as your location for + block files, you can use the -f option:

+
# ztest -f /
+

To get an idea of what ztest is actually + testing try this:

+
# ztest -f / -VVV
+

Maybe you'd like to run ztest for longer? + To do so simply use the -T option and specify the + runlength in seconds like so:

+
# ztest -f / -V -T 120
+
+
+

+
+
=id
+
Use id instead of the SPL hostid to identify this host. + Intended for use with ztest, but this environment + variable will affect any utility which uses libzpool, including + zpool(8). Since the kernel is unaware of this setting, + results with utilities other than ztest are undefined.
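A brief sketch of overriding the hostid for a single run (the value 12345 is arbitrary, and the exact accepted format follows the SPL hostid convention):
# ZFS_HOSTID=12345 ztest -V -T 120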
+
=stacksize
+
Limit the default stack size to stacksize bytes for the + purpose of detecting and debugging kernel stack overflows. This value + defaults to 32K which is double the default + Linux + kernel stack size. +

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to + .

+
+
+
+
+

+

zdb(1), zfs(1), + zpool(1), spl(4)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/1/zvol_wait.1.html b/man/v2.1/1/zvol_wait.1.html new file mode 100644 index 000000000..884e3fa23 --- /dev/null +++ b/man/v2.1/1/zvol_wait.1.html @@ -0,0 +1,190 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands ManualZVOL_WAIT(1)
+
+
+

+

zvol_waitwait + for ZFS volume links to appear in /dev

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, the volumes within it will appear as + block devices. As they're registered, udev(7) + asynchronously creates symlinks under /dev/zvol + using the volumes' names. zvol_wait will wait for + all those symlinks to be created before exiting.
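A typical use is in an import script, so that later steps only run once all volume links exist (tank is a placeholder pool name):
# zpool import tank && zvol_wait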

+
+
+

+

udev(7)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/4/index.html b/man/v2.1/4/index.html new file mode 100644 index 000000000..04c2315f7 --- /dev/null +++ b/man/v2.1/4/index.html @@ -0,0 +1,149 @@ + + + + + + + Devices and Special Files (4) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Devices and Special Files (4)

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/4/spl.4.html b/man/v2.1/4/spl.4.html new file mode 100644 index 000000000..3d7228dd3 --- /dev/null +++ b/man/v2.1/4/spl.4.html @@ -0,0 +1,319 @@ + + + + + + + spl.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

spl.4

+
+ + + + + +
SPL(4)Device Drivers ManualSPL(4)
+
+
+

+

splparameters + of the SPL kernel module

+
+
+

+
+
=4 + (uint)
+
The number of threads created for the spl_kmem_cache task queue. This task + queue is responsible for allocating new slabs for use by the kmem caches. + For the majority of systems and workloads only a small number of threads + are required.
+
=0 + (uint)
+
When this is set, it prevents Linux from being able to rapidly reclaim all the memory held by the kmem caches. This may be useful in circumstances where it's preferable that Linux reclaim memory from some other subsystem first. Setting this will increase the likelihood of out-of-memory events on a memory-constrained system.
+
= + (uint)
+
The preferred number of objects per slab in the cache. In general, a + larger value will increase the caches memory footprint while decreasing + the time required to perform an allocation. Conversely, a smaller value + will minimize the footprint and improve cache reclaim time but individual + allocations may take longer.
+
= + (64-bit) or 4 (32-bit) (uint)
+
The maximum size of a kmem cache slab in MiB. This effectively limits the + maximum cache object size to + spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab. +

Caches may not be created with object sized larger than this + limit.

+
+
= + (uint)
+
For small objects the Linux slab allocator should be used to make the most + efficient use of the memory. However, large objects are not supported by + the Linux slab and therefore the SPL implementation is preferred. This + value is used to determine the cutoff between a small and large object. +

Objects of size spl_kmem_cache_slab_limit or + smaller will be allocated using the Linux slab allocator, large objects + use the SPL allocator. A cutoff of 16K was determined to be optimal for + architectures using 4K pages.

+
+
= + (uint)
+
As a general rule, kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped + at 32K to accommodate systems using large pages. This value was selected + to be small enough to ensure the largest allocations are quickly noticed + and fixed. But large enough to avoid logging any warnings when a + allocation size is larger than optimal but not a serious concern. Since + this value is tunable, developers are encouraged to set it lower when + testing so any new largish allocations are quickly caught. These + warnings may be disabled by setting the threshold to zero.

+
+
=KMALLOC_MAX_SIZE/4 + (uint)
+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.
+
=0 + (uint)
+
Cache magazines are an optimization designed to minimize the cost of + allocating memory. They do this by keeping a per-cpu cache of recently + freed objects, which can then be reallocated without taking a lock. This + can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being + immediately released this may not be ideal for low memory machines. +

For this reason, + spl_kmem_cache_magazine_size can be used to set a + maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+
+
=0 + (ulong)
+
The system hostid, when set this can be used to uniquely identify a + system. By default this value is set to zero which indicates the hostid is + disabled. It can be explicitly enabled by placing a unique non-zero value + in /etc/hostid.
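For example, a persistent hostid can be recorded in /etc/hostid with zgenhostid(8) (the value deadbeef is only an illustration):
# zgenhostid deadbeef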
+
=/etc/hostid + (charp)
+
The expected path to locate the system hostid when specified. This value + may be overridden for non-standard configurations.
+
=0 + (uint)
+
Cause a kernel panic on assertion failures. When not enabled, the thread + is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+
+
=0 + (uint)
+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it + will scan all the taskqs. If any of them have a pending task more than 5 + seconds old, it will kick it to spawn more threads. This can be used if + you find a rare deadlock occurs because one or more taskqs didn't spawn a + thread when it should.
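A minimal sketch, assuming the parameter is exposed as spl_taskq_kick under the usual Linux module parameter path:
# echo 1 > /sys/module/spl/parameters/spl_taskq_kick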
+
=0 + (int)
+
Bind taskq threads to specific CPUs. When enabled all taskq threads will + be distributed evenly across the available CPUs. By default, this behavior + is disabled to allow the Linux scheduler the maximum flexibility to + determine where a thread should run.
+
=1 + (int)
+
Allow dynamic taskqs. When enabled taskqs which set the + + flag will by default create only a single thread. New threads will be + created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will + be promptly destroyed. By default this behavior is enabled but it can be + disabled to aid performance analysis or troubleshooting.
+
=1 + (int)
+
Allow newly created taskq threads to set a non-default scheduler priority. + When enabled, the priority specified when a taskq is created will be + applied to all threads created by that taskq. When disabled all threads + will use the default Linux kernel thread priority. By default, this + behavior is enabled.
+
=4 + (int)
+
The number of items a taskq worker thread must handle without interruption + before requesting a new worker thread be spawned. This is used to control + how quickly taskqs ramp up the number of threads processing the queue. + Because Linux thread creation and destruction are relatively inexpensive a + small default value has been selected. This means that normally threads + will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for + some configurations.
+
= + (uint)
+
The maximum number of tasks per pending list in each taskq shown in + /proc/spl/taskq{,-all}. Write 0 + to turn off the limit. The proc file will walk the lists with lock held, + reading it could cause a lock-up if the list grow too large without + limiting the output. "(truncated)" will be shown if the list is + larger than the limit.
+
+
+
+ + + + + +
August 24, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/4/zfs.4.html b/man/v2.1/4/zfs.4.html new file mode 100644 index 000000000..30a3d4551 --- /dev/null +++ b/man/v2.1/4/zfs.4.html @@ -0,0 +1,2591 @@ + + + + + + + zfs.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.4

+
+ + + + + +
ZFS(4)Device Drivers ManualZFS(4)
+
+
+

+

zfstuning of + the ZFS kernel module

+
+
+

+

The ZFS module supports these parameters:

+
+
=ULONG_MAXB + (ulong)
+
Maximum size in bytes of the dbuf cache. The target size is determined by + the MIN versus + 1/2^dbuf_cache_shift (1/32nd) of + the target ARC size. The behavior of the dbuf cache and its associated + settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=ULONG_MAXB + (ulong)
+
Maximum size in bytes of the metadata dbuf cache. The target size is + determined by the MIN versus + 1/2^dbuf_metadata_cache_shift + (1/64th) of the target ARC size. The behavior of the metadata dbuf cache + and its associated settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=10% + (uint)
+
The percentage over dbuf_cache_max_bytes when dbufs must + be evicted directly.
+
=10% + (uint)
+
The percentage below dbuf_cache_max_bytes when the evict + thread stops evicting dbufs.
+
=5 + (int)
+
Set the size of the dbuf cache (dbuf_cache_max_bytes) to + a log2 fraction of the target ARC size.
+
= + (int)
+
Set the size of the dbuf metadata cache + (dbuf_metadata_cache_max_bytes) to a log2 fraction of + the target ARC size.
+
=7 + (128) (int)
+
dnode slots allocated in a single operation as a power of 2. The default + value minimizes lock contention for the bulk operation performed.
+
=134217728B + (128MB) (int)
+
Limit the amount we can prefetch with one call to this amount in bytes. + This helps to limit the amount of memory that can be used by + prefetching.
+
+ (int)
+
Alias for send_holes_without_birth_time.
+
=1|0 + (int)
+
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set + as fast as possible.
+
=200 + (ulong)
+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only + applicable in related situations.
+
=1 + (ulong)
+
Seconds between L2ARC writing.
+
=2 + (ulong)
+
How far through the ARC lists to search for L2ARC cacheable content, + expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by + setting this parameter to 0, allowing the full length of + ARC lists to be searched for cacheable content.
+
=200% + (ulong)
+
Scales l2arc_headroom by this percentage when L2ARC + contents are being successfully compressed before writing. A value of + 100 disables this feature.
+
=0|1 + (int)
+
Controls whether buffers present on special vdevs are eligible for caching into L2ARC. If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
+
=0|1 + (int)
+
Controls whether only MFU metadata and data are cached from ARC into + L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. +

The default is off, meaning both MRU and MFU data and metadata + are cached. When turning off this feature, some MRU buffers will still + be present in ARC and eventually cached on L2ARC. + If + l2arc_noprefetch=0, some prefetched + buffers will be cached to L2ARC, and those might later transition to + MRU, in which case the l2arc_mru_asize + arcstat will not be 0.

+

Regardless of l2arc_noprefetch, some MFU + buffers might be evicted from ARC, accessed later on as prefetches and + transition to MRU as prefetches. If accessed again they are counted as + MRU and the l2arc_mru_asize arcstat + will not be 0.

+

The ARC status of L2ARC buffers when they + were first cached in L2ARC can be seen in the + l2arc_mru_asize, + , + and + + arcstats when importing the pool or onlining a cache device if + persistent L2ARC is enabled.

+

The + + arcstat does not take into account if this option is enabled as the + information provided by the + + arcstats can be used to decide if toggling this option is appropriate + for the current workload.

+
+
=% + (int)
+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers + are not evicted on memory pressure, too many headers on a system with an + irrationally large L2ARC can render it slow or unusable. This parameter + limits L2ARC writes and rebuilds to achieve the target.
+
=0% + (ulong)
+
Trims ahead of the current write size (l2arc_write_max) + on L2ARC devices by this percentage of write size if we have filled the + device. If set to 100 we TRIM twice the space required + to accommodate upcoming writes. A minimum of + + will be trimmed. It also enables TRIM of the whole L2ARC device upon + creation or addition to an existing pool or if the header of the device is + invalid upon importing a pool or onlining a cache device. A value of + 0 disables TRIM on L2ARC altogether and is the default + as it can put significant stress on the underlying storage devices. This + will vary depending of how well the specific device handles these + commands.
+
=1|0 + (int)
+
Do not write buffers to L2ARC if they were prefetched but not used by + applications. In case there are prefetched buffers in L2ARC and this + option is later set, we do not read the prefetched buffers from L2ARC. + Unsetting this option is useful for caching sequential reads from the + disks to L2ARC and serve those reads from L2ARC later on. This may be + beneficial in case the L2ARC device is significantly faster in sequential + reads than the disks of the pool. +

Use 1 to disable and 0 to + enable caching/reading prefetches to/from L2ARC.

+
+
=0|1 + (int)
+
No reads during writes.
+
=8388608B + (8MB) (ulong)
+
Cold L2ARC devices will have l2arc_write_max increased + by this amount while they remain cold.
+
=8388608B + (8MB) (ulong)
+
Max write bytes per interval.
+
=1|0 + (int)
+
Rebuild the L2ARC when importing a pool (persistent L2ARC). This can be + disabled if there are problems importing a pool or attaching an L2ARC + device (e.g. the L2ARC device is slow in reading stored log metadata, or + the metadata has become somehow fragmented/unusable).
+
=1073741824B + (1GB) (ulong)
+
Minimum size of an L2ARC device required in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC.

For L2ARC devices less than 1GB, the amount + of data + () + evicts is significant compared to the amount of restored L2ARC data. In + this case, do not write log blocks in L2ARC in order not to waste + space.

+
+
=1048576B + (1MB) (ulong)
+
Metaslab granularity, in bytes. This is roughly similar to what would be + referred to as the "stripe size" in traditional RAID arrays. In + normal operation, ZFS will try to write this amount of data to each disk + before moving on to the next top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group biasing based on their vdevs' over- or + under-utilization relative to the pool.
+
=BB + (16MB + 1B) (ulong)
+
Make some blocks above a certain size be gang blocks. This option is used + by the test suite to facilitate testing.
+
=9 + (512 B) (int)
+
Default dnode block size as a power of 2.
+
= + (128 KiB) (int)
+
Default dnode indirect block size as a power of 2.
+
=1048576BB + (1MB) (int)
+
When attempting to log an output nvlist of an ioctl in the on-disk + history, the output will not be stored if it is larger than this size (in + bytes). This must be less than + + (64MB). This applies primarily to + () + (cf. zfs-program(8)).
+
=0|1 + (int)
+
Prevent log spacemaps from being destroyed during pool exports and + destroys.
+
=1|0 + (int)
+
Enable/disable segment-based metaslab selection.
+
=2 + (int)
+
When using segment-based metaslab selection, continue allocating from the + active metaslab until this option's worth of buckets have been + exhausted.
+
=0|1 + (int)
+
Load all metaslabs during pool import.
+
=0|1 + (int)
+
Prevent metaslabs from being unloaded.
+
=1|0 + (int)
+
Enable use of the fragmentation metric in computing metaslab weights.
+ +
Maximum distance to search forward from the last offset. Without this + limit, fragmented pools can see + + iterations and + () + becomes the performance limiting factor on high-performance storage. +

With the default setting of + 16MB, we typically see less than 500 + iterations, even with very fragmented + ashift=9 pools. The maximum number + of iterations possible is metaslab_df_max_search / + 2^(ashift+1). With the default setting of 16MB + this is + (with + ashift=9) or + + (with + ashift=).

+
+
=0|1 + (int)
+
If not searching forward (due to metaslab_df_max_search, + , + or + ), + this tunable controls which segment is used. If set, we will use the + largest free segment. If unset, we will use a segment of at least the + requested size.
+
=s + (1h) (ulong)
+
When we unload a metaslab, we cache the size of the largest free chunk. We + use that cached size to determine whether or not to load a metaslab for a + given allocation. As more frees accumulate in that metaslab while it's + unloaded, the cached max size becomes less and less accurate. After a + number of seconds controlled by this tunable, we stop considering the + cached max size and start considering only the histogram instead.
+
=25% + (int)
+
When we are loading a new metaslab, we check the amount of memory being + used to store metaslab range trees. If it is over a threshold, we attempt + to unload the least recently used metaslab to prevent the system from + clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold.
+
=0|1 + (int)
+
+
    +
• If unset, we will first try normal allocation.
• If that fails then we will do a gang allocation.
• If that fails then we will do a "try hard" gang allocation.
• If that fails then we will have a multi-layer gang block.

• If set, we will first try normal allocation.
• If that fails then we will do a "try hard" allocation.
• If that fails we will do a gang allocation.
• If that fails we will do a "try hard" gang allocation.
• If that fails then we will have a multi-layer gang block.
+
+
=100 + (int)
+
When not trying hard, we only consider this number of the best metaslabs. + This improves performance, especially when there are many metaslabs per + vdev and the allocation can't actually be satisfied (so we would otherwise + iterate all metaslabs).
+
=200 + (int)
+
When a vdev is added, target this number of metaslabs per top-level + vdev.
+
= + (512MB) (int)
+
Default limit for metaslab size.
+
= + (ulong)
+
Maximum ashift used when optimizing for logical -> physical sector size + on new top-level vdevs. May be increased up to + + (16), but this may negatively impact pool space efficiency.
+
= + (9) (ulong)
+
Minimum ashift used when creating new top-level vdevs.
+
=16 + (int)
+
Minimum number of metaslabs to create in a top-level vdev.
+
=0|1 + (int)
+
Skip label validation steps during pool import. Changing is not + recommended unless you know what you're doing and are recovering a damaged + label.
+
=131072 + (128k) (int)
+
Practical upper limit of total metaslabs per top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group preloading.
+
=1|0 + (int)
+
Give more weight to metaslabs with lower LBAs, assuming they have greater + bandwidth, as is typically the case on a modern constant angular velocity + disk drive.
+
=32 + (int)
+
After a metaslab is used, we keep it loaded for this many TXGs, to attempt + to reduce unnecessary reloading. Note that both this many TXGs and + metaslab_unload_delay_ms milliseconds must pass before + unloading will occur.
+
=600000ms + (10min) (int)
+
After a metaslab is used, we keep it loaded for this many milliseconds, to + attempt to reduce unnecessary reloading. Note, that both this many + milliseconds and metaslab_unload_delay TXGs must pass + before unloading will occur.
+
=3 + (int)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
=0|1 + (int)
+
Track reference holders to + + objects (debug builds only).
+
=1|0 + (int)
+
When set, the hole_birth optimization will not be used, + and all holes will always be sent during a zfs + send. This is useful if you suspect your datasets + are affected by a bug in hole_birth.
+
=/etc/zfs/zpool.cache + (charp)
+
SPA config file.
+
= + (int)
+
Multiplication factor used to estimate actual disk consumption from the + size of data being written. The default value is a worst case estimate, + but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits.
+
=0|1 + (int)
+
Whether to print the vdev tree in the debugging message buffer during pool + import.
+
=1|0 + (int)
+
Whether to traverse data blocks during an "extreme rewind" + (-X) import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal skips non-metadata blocks. It can be toggled once the import + has started to stop or start the traversal of non-metadata blocks.

+
+
=1|0 + (int)
+
Whether to traverse blocks during an "extreme rewind" + (-X) pool import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal is not performed. It can be toggled once the import has + started to stop or start the traversal.

+
+
=4 + (1/16th) (int)
+
Sets the maximum number of bytes to consume during pool import to the log2 + fraction of the target ARC size.
+
=5 + (1/32nd) (int)
+
Normally, we don't allow the last + + () + of space in the pool to be consumed. This ensures that we don't run the + pool completely out of space, due to unaccounted changes (e.g. to the + MOS). It also limits the worst-case time to allocate space. If we have + less than this amount of free space, most ZPL operations (e.g. write, + create) will return + .
+
=32768B + (32kB) (int)
+
During top-level vdev removal, chunks of data are copied from the vdev + which may include free space in order to trade bandwidth for IOPS. This + parameter determines the maximum span of free space, in bytes, which will + be included as "unnecessary" data in a chunk of copied data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept + when doing regular reads (but there's no reason it has to be the + same).

+
+
=9 + (512B) (ulong)
+
Logical ashift for file-based devices.
+
=9 + (512B) (ulong)
+
Physical ashift for file-based devices.
+
=1|0 + (int)
+
If set, when we start iterating over a ZAP object, prefetch the entire + object (all leaf blocks). However, this is limited by + dmu_prefetch_max.
+
=1048576B + (1MB) (ulong)
+
If prefetching is enabled, disable prefetching for reads larger than this + size.
+
=4194304B + (4 MiB) (uint)
+
Min bytes to prefetch per stream. Prefetch distance starts from the demand + access size and quickly grows to this value, doubling on each hit. After + that it may grow further by 1/8 per hit, but only if some prefetch since + last time haven't completed in time to satisfy demand request, i.e. + prefetch depth didn't cover the read latency or the pool got + saturated.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch per stream.
+
=67108864B + (64MB) (uint)
+
Max bytes to prefetch indirects for per stream.
+
=8 + (uint)
+
Max number of streams per zfetch (prefetch streams per file).
+
=1 + (uint)
+
Min time before inactive prefetch stream can be reclaimed
+
=2 + (uint)
+
Max time before inactive prefetch stream can be deleted
+
=1|0 + (int)
+
Controls whether the ARC may use scatter/gather lists; when disabled, all ARC allocations are forced to be linear in kernel memory. Disabling can improve performance in some code paths at the expense of fragmented kernel memory.
+
= + (uint)
+
Maximum number of consecutive memory pages allocated in a single block for + scatter/gather lists. +

The value of + + depends on kernel configuration.

+
+
=B + (1.5kB) (uint)
+
This is the minimum allocation size that will use scatter (page-based) + ABDs. Smaller allocations will use linear ABDs.
+
=0B + (ulong)
+
When the number of bytes consumed by dnodes in the ARC exceeds this number + of bytes, try to unpin some of it in response to demand for non-metadata. + This value acts as a ceiling to the amount of dnode metadata, and defaults + to 0, which indicates that a percent which is based on + zfs_arc_dnode_limit_percent of the ARC meta buffers that + may be used for dnodes. +

Also see zfs_arc_meta_prune which serves a + similar purpose but is used when the amount of metadata in the ARC + exceeds zfs_arc_meta_limit rather than in response to + overall demand for non-metadata.

+
+
=10% + (ulong)
+
Percentage that can be consumed by dnodes of ARC meta buffers. +

See also zfs_arc_dnode_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=10% + (ulong)
+
Percentage of ARC dnodes to try to scan in response to demand for + non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit.
+
=B + (8kB) (int)
+
The ARC's buffer hash table is sized based on the assumption of an average + block size of this value. This works out to roughly 1MB of hash table per + 1GB of physical memory with 8-byte pointers. For configurations with a + known larger average block size, this value can be increased to reduce the + memory footprint.
+
=200% + (int)
+
When + (), + () + waits for this percent of the requested amount of data to be evicted. For + example, by default, for every + that's + evicted, + of it + may be "reused" by a new allocation. Since this is above + 100%, it ensures that progress is made towards getting + arc_size under + arc_c. Since this is finite, it ensures that allocations + can still happen, even during the potentially long time that + arc_size is more than + arc_c.
+
=10 + (int)
+
Number ARC headers to evict per sub-list before proceeding to another + sub-list. This batch-style operation prevents entire sub-lists from being + evicted at once but comes at a cost of additional unlocking and + locking.
+
=0s + (int)
+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The + arc_grow_retry value (default + 5s) is the number of seconds the ARC will wait before + trying to resume growth after a memory pressure event.
+
=10% + (int)
+
Throttle I/O when free system memory drops below this percentage of total + system memory. Setting this value to 0 will disable the + throttle.
+
=0B + (ulong)
+
Max size of ARC in bytes. If 0, then the max size of ARC + is determined by the amount of system memory installed. Under Linux, half + of system memory will be used as the limit. Under + FreeBSD, the larger of + and + will be used as the limit. This value must be at + least 67108864B (64MB). +

This value can be changed dynamically, with some caveats. It + cannot be set back to 0 while running, and reducing it + below the current ARC size will not cause the ARC to shrink without + memory pressure to induce shrinking.
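On Linux, for example, the limit can be adjusted at runtime through the module parameter interface (a sketch, assuming the parameter is named zfs_arc_max; the 8 GiB value is arbitrary):
# echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max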

+
+
=4096 + (ulong)
+
The number of restart passes to make while scanning the ARC attempting the + free buffers in order to stay below the + . + This value should not need to be tuned but is available to facilitate + performance analysis.
+
=0B + (ulong)
+
The maximum allowed size in bytes that metadata buffers are allowed to + consume in the ARC. When this limit is reached, metadata buffers will be + reclaimed, even if the overall + + has not been reached. It defaults to 0, which indicates + that a percentage based on zfs_arc_meta_limit_percent of + the ARC may be used for metadata. +

This value may be changed dynamically, except that it must be set to an explicit value (it cannot be set back to 0).

+
+
=75% + (ulong)
+
Percentage of ARC buffers that can be used for metadata. +

See also zfs_arc_meta_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=0B + (ulong)
+
The minimum allowed size in bytes that metadata buffers may consume in the + ARC.
+
=10000 + (int)
+
The number of dentries and inodes to be scanned looking for entries which can be dropped. This may be required when the ARC reaches the zfs_arc_meta_limit because dentries and inodes can pin buffers in the ARC. Increasing this value will cause the dentry and inode caches to be pruned more aggressively. Setting this value to 0 will disable pruning the inode and dentry caches.
+
=1|0 + (int)
+
Define the strategy for ARC metadata buffer eviction (meta reclaim + strategy): +
+
+
+ (META_ONLY)
+
evict only the ARC metadata buffers
+
+ (BALANCED)
+
additional data buffers may be evicted if required to evict the + required number of metadata buffers.
+
+
+
+
=0B + (ulong)
+
Min size of ARC in bytes. If set to + 0, + + will default to consuming the larger of 32MB + or + .
+
=0ms(≡1s) + (int)
+
Minimum time prefetched blocks are locked in the ARC.
+
=0ms(≡6s) + (int)
+
Minimum time "prescient prefetched" blocks are locked in the + ARC. These blocks are meant to be prefetched fairly aggressively ahead of + the code that may use them.
+
=1 + (int)
+
Number of arc_prune threads. FreeBSD does not need + more than one. Linux may theoretically use one per mount point up to + number of CPUs, but that was not proven to be useful.
+
=0 + (int)
+
Number of missing top-level vdevs which will be allowed during pool import + (only in read-only mode).
+
= + 0 (ulong)
+
Maximum size in bytes allowed to be passed as + + for ioctls on /dev/zfs. This prevents a user from + causing the kernel to allocate an excessive amount of memory. When the + limit is exceeded, the ioctl fails with + + and a description of the error is sent to the + zfs-dbgmsg log. This parameter should not need to + be touched under normal circumstances. If 0, equivalent + to a quarter of the user-wired memory limit under + FreeBSD and to 134217728B + (128MB) under Linux.
+
=0 + (int)
+
To allow more fine-grained locking, each ARC state contains a series of + lists for both data and metadata objects. Locking is performed at the + level of these "sub-lists". This parameters controls the number + of sub-lists per ARC state, and also applies to other uses of the + multilist data structure. +

If 0, equivalent to the greater of the + number of online CPUs and 4.

+
+
=8 + (int)
+
The ARC size is considered to be overflowing if it exceeds the current ARC + target size (arc_c) by thresholds determined by this + parameter. Exceeding by (arc_c >> + zfs_arc_overflow_shift) * 0.5 starts ARC reclamation + process. If that appears insufficient, exceeding by (arc_c + >> zfs_arc_overflow_shift) * 1.5 blocks new + buffer allocation until the reclaim thread catches up. Started reclamation + process continues till ARC size returns below the target size. +

The default value of 8 causes the + ARC to start reclamation if it exceeds the target size by + of the + target size, and block allocations by + .

+
+
=0 + (int)
+
If nonzero, this will update arc_p_min_shift (default 4) with the new value. arc_p_min_shift is used as a shift of arc_c when calculating the minimum arc_p size.
+
=1|0 + (int)
+
Disable arc_p adapt dampener, which reduces the maximum + single adjustment to arc_p.
+
=0 + (int)
+
If nonzero, this will update + + (default 7) with the new value.
+
=0% + (off) (uint)
+
Percent of pagecache to reclaim ARC to. +

This tunable allows the ZFS ARC to play + more nicely with the kernel's LRU pagecache. It can guarantee that the + ARC size won't collapse under scanning pressure on the pagecache, yet + still allows the ARC to be reclaimed down to + zfs_arc_min if necessary. This value is specified as + percent of pagecache size (as measured by + ), + where that percent may exceed 100. This only operates + during memory pressure/reclaim.

+
+
=10000 + (int)
+
This is a limit on how many pages the ARC shrinker makes available for + eviction in response to one page allocation attempt. Note that in + practice, the kernel's shrinker can ask us to evict up to about four times + this for one allocation attempt. +

The default limit of 10000 (in + practice, + per allocation attempt with 4kB pages) limits + the amount of time spent attempting to reclaim ARC memory to less than + 100ms per allocation attempt, even with a small average compressed block + size of ~8kB.

+

The parameter can be set to 0 (zero) to disable the limit, and + only applies on Linux.

+
+
=0B + (ulong)
+
The target number of bytes the ARC should leave as free memory on the + system. If zero, equivalent to the bigger of + + and + .
+
=1|0 + (int)
+
Disable pool import at module load by ignoring the cache file + (spa_config_path).
+
=20/s + (uint)
+
Rate limit checksum events to this many per second. Note that this should + not be set below the ZED thresholds (currently 10 checksums over 10 + seconds) or else the daemon may not trigger any action.
+
=5% + (int)
+
This controls the amount of time that a ZIL block (lwb) will remain + "open" when it isn't "full", and it has a thread + waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly + impacting the latency of each individual transaction record (itx).
+
=0ms + (int)
+
Vdev indirection layer (used for device removal) sleeps for this many + milliseconds during mapping generation. Intended for use with the test + suite to throttle vdev removal speed.
+
=25% + (int)
+
Minimum percent of obsolete bytes in vdev mapping required to attempt to + condense (see zfs_condense_indirect_vdevs_enable). + Intended for use with the test suite to facilitate triggering condensing + as needed.
+
=1|0 + (int)
+
Enable condensing indirect vdev mappings. When set, attempt to condense + indirect vdev mappings if the mapping uses more than + zfs_condense_min_mapping_bytes bytes of memory and if + the obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The + condensing process is an attempt to save memory by removing obsolete + mappings.
+
=1073741824B + (1GB) (ulong)
+
Only attempt to condense indirect vdev mappings if the on-disk size of the + obsolete space map object is greater than this number of bytes (see + zfs_condense_indirect_vdevs_enable).
+
=131072B + (128kB) (ulong)
+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable).
+
=1|0 + (int)
+
Internally ZFS keeps a small log to facilitate debugging. The log is + enabled by default, and can be disabled by unsetting this option. The + contents of the log can be accessed by reading + /proc/spl/kstat/zfs/dbgmsg. Writing + 0 to the file clears the log. +

This setting does not influence debug prints due to + zfs_flags.
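  For example, on Linux the log can be inspected and cleared from the shell, using only the interface described in this entry:

  cat /proc/spl/kstat/zfs/dbgmsg
  echo 0 > /proc/spl/kstat/zfs/dbgmsg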

+
+
=4194304B + (4MB) (int)
+
Maximum size of the internal ZFS debug log.
+
=0 + (int)
+
Historically used for controlling what reporting was available under + /proc/spl/kstat/zfs. No effect.
+
=1|0 + (int)
+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms, or when an individual I/O + operation takes longer than zfs_deadman_ziotime_ms, then + the operation is considered to be "hung". If + zfs_deadman_enabled is set, then the deadman behavior is + invoked as described by zfs_deadman_failmode. By + default, the deadman is enabled and set to wait which + results in "hung" I/Os only being logged. The deadman is + automatically disabled when a pool gets suspended.
+
=wait + (charp)
+
Controls the failure behavior when the deadman detects a "hung" + I/O operation. Valid values are: +
+
+
+
Wait for a "hung" operation to complete. For each + "hung" operation a "deadman" event will be posted + describing that operation.
+
+
Attempt to recover from a "hung" operation by re-dispatching + it to the I/O pipeline if possible.
+
+
Panic the system. This can be used to facilitate automatic fail-over + to a properly configured fail-over partner.
+
+
+
+
=ms + (1min) (int)
+
Check time in milliseconds. This defines the frequency at which we check + for hung I/O requests and potentially invoke the + zfs_deadman_failmode behavior.
+
=600000ms + (10min) (ulong)
+
Interval in milliseconds after which the deadman is triggered and also the + interval after which a pool sync operation is considered to be + "hung". Once this limit is exceeded the deadman will be invoked + every zfs_deadman_checktime_ms milliseconds until the + pool sync completes.
+
=ms + (5min) (ulong)
+
Interval in milliseconds after which the deadman is triggered and an + individual I/O operation is considered to be "hung". As long as + the operation remains "hung", the deadman will be invoked every + zfs_deadman_checktime_ms milliseconds until the + operation completes.
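  A minimal sketch of inspecting and adjusting the deadman at runtime on Linux, assuming the usual module-parameter paths under /sys/module/zfs/parameters; "continue" here names the re-dispatch failmode described above:

  cat /sys/module/zfs/parameters/zfs_deadman_enabled
  cat /sys/module/zfs/parameters/zfs_deadman_failmode
  echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode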
+
=0|1 + (int)
+
Enable prefetching dedup-ed blocks which are going to be freed.
+
=60% + (int)
+
Start to delay each transaction once there is this amount of dirty data, + expressed as a percentage of zfs_dirty_data_max. This + value should be at least + zfs_vdev_async_write_active_max_dirty_percent. + See + ZFS TRANSACTION + DELAY.
+
=500000 + (int)
+
This controls how quickly the transaction delay approaches infinity. + Larger values cause longer delays for a given amount of dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will + smoothly handle between ten times and a tenth of this number. + See + ZFS TRANSACTION + DELAY.

+

zfs_delay_scale * + zfs_dirty_data_max + + .

+
+
=0|1 + (int)
+
Disables requirement for IVset GUIDs to be present and match when doing a + raw receive of encrypted datasets. Intended for users whose pools were + created with OpenZFS pre-release versions and now have compatibility + issues.
+
= + (4*10^8) (ulong)
+
Maximum number of uses of a single salt value before generating a new one + for encrypted datasets. The default value is also the maximum.
+
=64 + (uint)
+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist + yet, kernel mutexes are not created per-object and instead a hashtable + is used where collisions will result in objects waiting when there is + not actually contention on the same object.

+
+
=20/s + (int)
+
Rate limit delay and deadman zevents (which report slow I/Os) to this many + per second.
+
=1073741824B + (1GB) (ulong)
+
Upper-bound limit for unflushed metadata changes to be held by the log + spacemap in memory, in bytes.
+
=1000ppm + (0.1%) (ulong)
+
Part of overall system memory that ZFS allows to be used for unflushed + metadata changes by the log spacemap, in millionths.
+
=131072 + (128k) (ulong)
+
Describes the maximum number of log spacemap blocks allowed for each pool. + The default value means that the space in all the log spacemaps can add up + to no more than 131072 blocks (which means + of + logical space before compression and ditto blocks, assuming that blocksize + is 128kB). +

This tunable is important because it involves a trade-off
  between import time after an unclean export and the frequency of
  flushing metaslabs. The higher this number is, the more log blocks we
  allow when the pool is active, which means that we flush metaslabs less
  often and thus decrease the number of I/Os for spacemap updates per TXG.
  At the same time though, that means that in the event of an unclean
  export, there will be more log spacemap blocks for us to read, inducing
  overhead in the import time of the pool. The lower the number, the more
  aggressive the flushing becomes, destroying log blocks quicker as they
  become obsolete faster, which leaves fewer blocks to be read during
  import time after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+
+
=1000 + (ulong)
+
If the number of metaslabs is small and our incoming rate is high, we + could get into a situation that we are flushing all our metaslabs every + TXG. Thus we always allow at least this many log blocks.
+
=% + (ulong)
+
Tunable used to determine the number of blocks that can be used for the + spacemap log, expressed as a percentage of the total number of unflushed + metaslabs in the pool.
+
=1000 + (ulong)
+
Tunable limiting maximum time in TXGs any metaslab may remain unflushed. + It effectively limits maximum number of unflushed per-TXG spacemap logs + that need to be read after unclean pool export.
+ +
When enabled, files will not be asynchronously removed from the list of + pending unlinks and the space they consume will be leaked. Once this + option has been disabled and the dataset is remounted, the pending unlinks + will be processed and the freed space returned to the pool. This option is + used by the test suite.
+
= + (ulong)
+
This is used to define a large file for the purposes of deletion.
  Files containing more than zfs_delete_blocks blocks will be
  deleted asynchronously, while smaller files are deleted synchronously.
  Decreasing this value will reduce the time spent in an
  unlink(2) system call, at the expense of a longer delay
  before the freed space is available.
+
= + (int)
+
Determines the dirty space limit in bytes. Once this limit is exceeded, + new writes are halted until space frees up. This parameter takes + precedence over zfs_dirty_data_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + , + capped at zfs_dirty_data_max_max.

+
+
= + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + in bytes. This limit is only enforced at module load time, and will be + ignored if zfs_dirty_data_max is later changed. This + parameter takes precedence over + zfs_dirty_data_max_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + ,

+
+
=25% + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + as a percentage of physical RAM. This limit is only enforced at module + load time, and will be ignored if zfs_dirty_data_max is + later changed. The parameter zfs_dirty_data_max_max + takes precedence over this one. See + ZFS TRANSACTION + DELAY.
+
=10% + (int)
+
Determines the dirty space limit, expressed as a percentage of all memory. + Once this limit is exceeded, new writes are halted until space frees up. + The parameter zfs_dirty_data_max takes precedence over + this one. See + ZFS TRANSACTION DELAY. +

Subject to zfs_dirty_data_max_max.

+
+
=20% + (int)
+
Start syncing out a transaction group if there's at least this much dirty + data (as a percentage of zfs_dirty_data_max). This + should be less than + zfs_vdev_async_write_active_min_dirty_percent.
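  A hedged sketch of how these dirty-data limits are typically adjusted on Linux, either on the running module or persistently via a modprobe options file; the 2 GiB figure and the file name are illustrative, not recommendations:

  echo 2147483648 > /sys/module/zfs/parameters/zfs_dirty_data_max
  echo "options zfs zfs_dirty_data_max=2147483648" >> /etc/modprobe.d/zfs.conf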
+
= + (int)
+
The upper limit of write-transaction ZIL log data size in bytes. Write
  operations are throttled when approaching the limit until log data is
  cleared out after transaction group sync. Because of some overhead, it
  should be set to at least twice the size of
  zfs_dirty_data_max to prevent harming
  normal write throughput. It should also be smaller than the size of
  the slog device, if a slog is present.

Defaults to +

+
+
=% + (uint)
+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be + preallocated for a file in order to guarantee that later writes will not + run out of space. Instead, fallocate(2) space + preallocation only checks that sufficient space is currently available in + the pool or the user's project quota allocation, and then creates a sparse + file of the requested size. The requested space is multiplied by + zfs_fallocate_reserve_percent to allow additional space + for indirect blocks and other internal metadata. Setting this to + 0 disables support for fallocate(2) + and causes it to return + .
+
=fastest + (string)
+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, + scalar, + , + , + , + , + , + and + . + All except fastest and + scalar require instruction set extensions to be + available, and will only appear if ZFS detects that they are present at + runtime. If multiple implementations of fletcher 4 are available, the + fastest will be chosen using a micro benchmark. + Selecting scalar results in the original CPU-based + calculation being used. Selecting any option other than + fastest or + scalar results in vector instructions from the + respective CPU instruction set being used.
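  For instance, on Linux the active implementation can be inspected and changed at runtime; the parameter name zfs_fletcher_4_impl is assumed here, and the selectors offered (with the current one in brackets) depend on the CPU:

  cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
  echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl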

+
+
=1|0 + (int)
+
Enable/disable the processing of the free_bpobj object.
+
=ULONG_MAX + (unlimited) (ulong)
+
Maximum number of blocks freed in a single TXG.
+
= + (10^5) (ulong)
+
Maximum number of dedup blocks freed in a single TXG.
+
=0 + (ulong)
+
If nonzero, override record size calculation for
  zfs send estimates.
+
=3 + (int)
+
Maximum asynchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum asynchronous read I/O operation active to each device. + See ZFS + I/O SCHEDULER.
+
=60% + (int)
+
When the pool has more than this much dirty data, use + zfs_vdev_async_write_max_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30% + (int)
+
When the pool has less than this much dirty data, use + zfs_vdev_async_write_min_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30 + (int)
+
Maximum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Minimum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER. +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of + 2 was chosen as a compromise. A value of + 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+
+
=1 + (int)
+
Maximum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1000 + (int)
+
The maximum number of I/O operations active to each device. Ideally, this + will be at least the sum of each queue's max_active. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
Timeout value to wait before determining a device is missing during + import. This is helpful for transient missing paths due to links being + briefly removed and recreated in response to udev events.
+
=3 + (int)
+
Maximum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Maximum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Maximum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Maximum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Minimum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Maximum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (int)
+
Minimum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (int)
+
Maximum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (int)
+
Minimum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=5 + (int)
+
For non-interactive I/O (scrub, resilver, removal, initialize and
  rebuild), the number of concurrently-active I/O operations is limited to
  , unless the vdev is "idle". When there are no interactive I/O
  operations active (synchronous or otherwise), and
  zfs_vdev_nia_delay operations have completed since the
  last interactive operation, then the vdev is considered to be
  "idle", and the number of concurrently-active non-interactive
  operations is increased to zfs_*_max_active.
  See ZFS
  I/O SCHEDULER.
+
=5 + (int)
+
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent + random I/O latency reaches several seconds. On some HDDs this happens even + if sequential I/O operations are submitted one at a time, and so setting + zfs_*_max_active= 1 does not help. To + prevent non-interactive I/O, like scrub, from monopolizing the device, no + more than zfs_vdev_nia_credit operations can be sent + while there are outstanding incomplete interactive operations. This + enforced wait ensures the HDD services the interactive I/O within a + reasonable amount of time. See + ZFS I/O SCHEDULER.
+
=1000% + (int)
+
Maximum number of queued allocations per top-level vdev expressed as a + percentage of zfs_vdev_async_write_max_active, which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. This allows for + dynamic allocation distribution when devices are imbalanced, as fuller + devices will tend to be slower than empty devices. +

Also see zio_dva_throttle_enabled.

+
+
=s + (int)
+
Time before expiring .zfs/snapshot.
+
=0|1 + (int)
+
Allow the creation, removal, or renaming of entries in the + + directory to cause the creation, destruction, or renaming of snapshots. + When enabled, this functionality works both locally and over NFS exports + which have the + + option set.
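  When this is enabled, snapshot management maps onto ordinary directory operations; for example (tank/fs and mysnap are hypothetical names):

  mkdir /tank/fs/.zfs/snapshot/mysnap     # creates snapshot tank/fs@mysnap
  rmdir /tank/fs/.zfs/snapshot/mysnap     # destroys it again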
+
=0 + (int)
+
Set additional debugging flags. The following flags may be bitwise-ored
  together:

  Value  Symbolic Name               Description
  1      ZFS_DEBUG_DPRINTF           Enable dprintf entries in the debug log.
  *2     ZFS_DEBUG_DBUF_VERIFY       Enable extra dbuf verifications.
  *4     ZFS_DEBUG_DNODE_VERIFY      Enable extra dnode verifications.
  8      ZFS_DEBUG_SNAPNAMES         Enable snapshot name verification.
  16     ZFS_DEBUG_MODIFY            Check for illegally modified ARC buffers.
  64     ZFS_DEBUG_ZIO_FREE          Enable verification of block frees.
  128    ZFS_DEBUG_HISTOGRAM_VERIFY  Enable extra spacemap histogram verifications.
  256    ZFS_DEBUG_METASLAB_VERIFY   Verify space accounting on disk matches in-memory range_trees.
  512    ZFS_DEBUG_SET_ERROR         Enable SET_ERROR and dprintf entries in the debug log.
  1024   ZFS_DEBUG_INDIRECT_REMAP    Verify split blocks created by device removal.
  2048   ZFS_DEBUG_TRIM              Verify TRIM ranges are always within the allocatable range tree.
  4096   ZFS_DEBUG_LOG_SPACEMAP      Verify that the log summary is consistent with the spacemap log and enable zfs_dbgmsgs for metaslab loading and flushing.
  * Requires debug build.
+
=0 + (uint)
+
Enables btree verification. The following settings are cumulative:

  Value  Description
  1      Verify height.
  2      Verify pointers from children to parent.
  3      Verify element counts.
  4      Verify element order. (expensive)
  *5     Verify unused memory is poisoned. (expensive)
  * Requires debug build.
+
=0|1 + (int)
+
If destroy encounters an EIO while reading metadata + (e.g. indirect blocks), space referenced by the missing metadata can not + be freed. Normally this causes the background destroy to become + "stalled", as it is unable to make forward progress. While in + this stalled state, all remaining space to free from the + error-encountering filesystem is "temporarily leaked". Set this + flag to cause it to ignore the EIO, permanently leak the + space from indirect blocks that can not be read, and continue to free + everything else that it can. +

The default "stalling" behavior is useful if the + storage partially fails (i.e. some but not all I/O operations fail), and + then later recovers. In this case, we will be able to continue pool + operations while it is partially failed, and when it recovers, we can + continue to free the space, with no leaks. Note, however, that this case + is actually fairly rare.

+

Typically pools either

  1. fail completely (but perhaps temporarily, e.g. due to a top-level vdev
  going offline), or
  2. have localized, permanent errors (e.g. disk returns the wrong data due
  to bit flip or firmware bug).

  In the former case, this setting does not matter because the pool will be
  suspended and the sync thread will not be able to make forward progress
  regardless. In the latter, because the error is permanent, the best we can
  do is leak the minimum amount of space, which is what setting this flag
  will do. It is therefore reasonable for this flag to normally be set, but
  we chose the more conservative approach of not setting it, so that there
  is no possibility of leaking space in the "partial temporary"
  failure case.
+
=1000ms + (1s) (int)
+
During a zfs destroy + operation using the + + feature, a minimum of this much time will be spent working on freeing + blocks per TXG.
+
=500ms + (int)
+
Similar to zfs_free_min_time_ms, but for cleanup of old + indirection records for removed vdevs.
+
=32768B + (32kB) (long)
+
Largest data block to write to the ZIL. Larger blocks will be treated as + if the dataset being written to had the + = + property set.
+
= + (0xDEADBEEFDEADBEEE) (ulong)
+
Pattern written to vdev free space by + zpool-initialize(8).
+
=1048576B + (1MB) (ulong)
+
Size of writes used by zpool-initialize(8). This option + is used by the test suite.
+
=500000 + (5*10^5) (ulong)
+
The threshold size (in block pointers) at which we create a new + sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion.
+
=75% + (int)
+
If the amount of shared space between a snapshot and its clone drops below
  this threshold, the clone turns off the livelist and reverts to the old
  deletion method. This is in place because livelists no longer give us a
  benefit once a clone has been overwritten enough.
+
=0 + (int)
+
Incremented each time an extra ALLOC blkptr is added to a livelist entry + while it is being condensed. This option is used by the test suite to + track race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the synctask - + spa_livelist_condense_sync(). This option is used + by the test suite to trigger race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the open context condensing work in + spa_livelist_condense_cb(). This option is used by + the test suite to trigger race conditions.
+
= + (10^8) (ulong)
+
The maximum execution time limit that can be set for a ZFS channel + program, specified as a number of Lua instructions.
+
= + (100MB) (ulong)
+
The maximum memory limit that can be set for a ZFS channel program, + specified in bytes.
+
= + (int)
+
The maximum depth of nested datasets. This value can be tuned temporarily + to fix existing datasets that exceed the predefined limit.
+
=5 + (ulong)
+
The number of past TXGs that the flushing algorithm of the log spacemap + feature uses to estimate incoming log blocks.
+
=10 + (ulong)
+
Maximum number of rows allowed in the summary of the spacemap log.
+
=1048576 + (1MB) (int)
+
We currently support block sizes from + + to 16MB. The benefits of larger + blocks, and thus larger I/O, need to be weighed against the cost of COWing + a giant block to modify one byte. Additionally, very large blocks can have + an impact on I/O latency, and also potentially on the memory allocator. + Therefore, we do not allow the recordsize to be set larger than this + tunable. Larger blocks can be created by changing it, and pools with + larger blocks can always be imported and used, regardless of this + setting.
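  As an illustration (the dataset name is hypothetical, and the Linux parameter name zfs_max_recordsize is an assumption of this sketch): with the default cap of 1 MiB the first command succeeds, while record sizes above the cap require raising the tunable first:

  zfs set recordsize=1M tank/large_files
  echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize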
+
=0|1 + (int)
+
Allow datasets received with redacted send/receive to be mounted. Normally + disabled because these datasets may be missing key data.
+
=1 + (ulong)
+
Minimum number of metaslabs to flush per dirty TXG.
+
=% + (int)
+
Allow metaslabs to keep their active state as long as their fragmentation + percentage is no more than this value. An active metaslab that exceeds + this threshold will no longer keep its active status allowing better + metaslabs to be selected.
+
=% + (int)
+
Metaslab groups are considered eligible for allocations if their + fragmentation metric (measured as a percentage) is less than or equal to + this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also + crossed this threshold.
+
=0% + (int)
+
Defines a threshold at which metaslab groups should be eligible for + allocations. The value is expressed as a percentage of free space beyond + which a metaslab group is always eligible for allocations. If a metaslab + group's free space is less than or equal to the threshold, the allocator + will avoid allocating to that group unless all groups in the pool have + reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of + 0 disables the feature and causes all metaslab groups to + be eligible for allocations. +

This parameter allows one to deal + with pools having heavily imbalanced vdevs such as would be the case + when a new vdev has been added. Setting the threshold to a non-zero + percentage will stop allocations from being made to vdevs that aren't + filled to the specified percentage and allow lesser filled vdevs to + acquire more allocations than they otherwise would under the old + + facility.

+
+
=1|0 + (int)
+
If enabled, ZFS will place DDT data into the special allocation + class.
+
=1|0 + (int)
+
If enabled, ZFS will place user data indirect blocks into the special + allocation class.
+
=0 + (int)
+
Historical statistics for this many latest multihost updates will be + available in + /proc/spl/kstat/zfs/pool/multihost.
+
=1000ms + (1s) (ulong)
+
Used to control the frequency of multihost writes which are performed when + the + + pool property is on. This is one of the factors used to determine the + length of the activity check during import. +

The multihost write period is + zfs_multihost_interval / leaf-vdevs. On average a + multihost write will be issued for each leaf vdev every + zfs_multihost_interval milliseconds. In practice, the + observed period can vary with the I/O load and this observed value is + the delay which is stored in the uberblock.

+
+
=20 + (uint)
+
Used to control the duration of the activity test on import. Smaller + values of zfs_multihost_import_intervals will reduce the + import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount + of time determined by zfs_multihost_interval * + zfs_multihost_import_intervals, or the same product computed on the + host which last had the pool imported, whichever is greater. The + activity check time may be further extended if the value of MMP delay + found in the best uberblock indicates actual multihost updates happened + at longer intervals than zfs_multihost_interval. A + minimum of + is + enforced.
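  With the defaults shown here, that minimum works out to zfs_multihost_interval * zfs_multihost_import_intervals = 1000 ms * 20 = 20 seconds of activity checking before the import may proceed.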

+

0 is equivalent to + 1.

+
+
=10 + (uint)
+
Controls the behavior of the pool when multihost write failures or delays + are detected. +

When 0, multihost write failures or delays + are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the + pool or offlining a device.

+

Otherwise, the pool will be suspended if + zfs_multihost_fail_intervals * zfs_multihost_interval + milliseconds pass without a successful MMP write. This guarantees the + activity test will see MMP writes if the pool is imported. + 1 is equivalent to + 2; this is necessary to prevent the pool from being + suspended due to normal, small I/O latency variations.

+
+
=0|1 + (int)
+
Set to disable scrub I/O. This results in scrubs not actually scrubbing + data and simply doing a metadata crawl of the pool instead.
+
=0|1 + (int)
+
Set to disable block prefetching for scrubs.
+
=0|1 + (int)
+
Disable cache flush operations on disks when writing. Setting this will + cause pool corruption on power loss if a volatile out-of-order write cache + is enabled.
+
=1|0 + (int)
+
Allow no-operation writes. The occurrence of nopwrites will further depend + on other pool properties (i.a. the checksumming and compression + algorithms).
+
=1|0 + (int)
+
Enable forcing TXG sync to find holes. When enabled forces ZFS to sync + data when + + or + + flags are used allowing holes in a file to be accurately reported. When + disabled holes will not be reported in recently dirtied files.
+
=B + (50MB) (int)
+
The number of bytes which should be prefetched during a pool traversal, + like zfs send or other + data crawling operations.
+
=32 + (int)
+
The number of blocks pointed by indirect (non-L0) block which should be + prefetched during a pool traversal, like zfs + send or other data crawling operations.
+
=30% + (ulong)
+
Control percentage of dirtied indirect blocks from frees allowed into one + TXG. After this threshold is crossed, additional frees will wait until the + next TXG. 0 disables this + throttle.
+
=0|1 + (int)
+
Disable predictive prefetch. Note that it leaves "prescient"
  prefetch (e.g. for zfs
  send) intact. Unlike predictive prefetch,
  prescient prefetch never issues I/O that ends up not being needed, so it
  can't hurt performance.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for SHA256 checksums. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for gzip compression. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for AES-GCM encryption. May be unset + after the ZFS modules have been loaded to initialize the QAT hardware as + long as support is compiled in and the QAT driver is present.
+
=1048576B + (1MB) (long)
+
Bytes to read per chunk.
+
=0 + (int)
+
Historical statistics for this many latest reads will be available in + /proc/spl/kstat/zfs/pool/reads.
+
=0|1 + (int)
+
Include cache hits in read history.
+
=1048576B + (1MB) (ulong)
+
Maximum read segment size to issue when sequentially resilvering a + top-level vdev.
+
=1|0 + (int)
+
Automatically start a pool scrub when the last active sequential resilver + completes in order to verify the checksums of all blocks which have been + resilvered. This is enabled by default and strongly recommended.
+
=67108864B + (64 MiB) (ulong)
+
Maximum amount of I/O that can be concurrently issued for a sequential + resilver per leaf device, given in bytes.
+
=4096 + (int)
+
If an indirect split block contains more than this many possible unique + combinations when being reconstructed, consider it too computationally + expensive to check them all. Instead, try at most this many randomly + selected combinations each time the block is accessed. This allows all + segment copies to participate fairly in the reconstruction when all + combinations cannot be checked and prevents repeated use of one bad + copy.
+
=0|1 + (int)
+
Set to attempt to recover from fatal errors. This should only be used as a + last resort, as it typically results in leaked space, or worse.
+
=0|1 + (int)
+
Ignore hard I/O errors during device removal. When set, if a device
  encounters a hard I/O error during the removal process, the removal will not
  be cancelled. This can result in a normally recoverable block becoming
  permanently damaged and is hence not recommended. This should only be used
  as a last resort when the pool cannot be returned to a healthy state prior
  to removing the device.
+
=0|1 + (int)
+
This is used by the test suite so that it can ensure that certain actions + happen while in the middle of a removal.
+
=16777216B + (16MB) (int)
+
The largest contiguous segment that we will attempt to allocate when + removing a device. If there is a performance problem with attempting to + allocate large blocks, consider decreasing this. The default value is also + the maximum.
+
=0|1 + (int)
+
Ignore the + + feature, causing an operation that would start a resilver to immediately + restart the one in progress.
+
=ms + (3s) (int)
+
Resilvers are processed by the sync thread. While resilvering, it will + spend at least this much time working on a resilver between TXG + flushes.
+
=0|1 + (int)
+
If set, remove the DTL (dirty time list) upon completion of a pool scan + (scrub), even if there were unrepairable errors. Intended to be used + during pool repair or recovery to stop resilvering when the pool is next + imported.
+
=1000ms + (1s) (int)
+
Scrubs are processed by the sync thread. While scrubbing, it will spend at + least this much time working on a scrub between TXG flushes.
+
=s + (2h) (int)
+
To preserve progress across reboots, the sequential scan algorithm + periodically needs to stop metadata scanning and issue all the + verification I/O to disk. The frequency of this flushing is determined by + this tunable.
+
=3 + (int)
+
This tunable affects how scrub and resilver I/O segments are ordered. A
  higher number indicates that we care more about how filled in a segment
  is, while a lower number indicates we care more about the size of the
  extent without considering the gaps within a segment. This value is only
  tunable upon module insertion. Changing the value afterwards will have no
  effect on scrub or resilver performance.
+
=0 + (int)
+
Determines the order that data will be verified while scrubbing or + resilvering: +
+
+
+
Data will be verified as sequentially as possible, given the amount of + memory reserved for scrubbing (see + zfs_scan_mem_lim_fact). This may improve scrub + performance if the pool's data is very fragmented.
+
+
The largest mostly-contiguous chunk of found data will be verified + first. By deferring scrubbing of small segments, we may later find + adjacent data to coalesce and increase the segment size.
+
+
1 during normal + verification and strategy + 2 while taking a + checkpoint.
+
+
+
+
=0|1 + (int)
+
If unset, indicates that scrubs and resilvers will gather metadata in + memory before issuing sequential I/O. Otherwise indicates that the legacy + algorithm will be used, where I/O is initiated as soon as it is + discovered. Unsetting will not affect scrubs or resilvers that are already + in progress.
+
=B + (2MB) (int)
+
Sets the largest gap in bytes between scrub/resilver I/O operations that + will still be considered sequential for sorting purposes. Changing this + value will not affect scrubs or resilvers that are already in + progress.
+
=20^-1 + (int)
+
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. + This tunable determines the hard limit for I/O sorting memory usage. When + the hard limit is reached we stop scanning metadata and start issuing data + verification I/O. This is done until we get below the soft limit.
+
=20^-1 + (int)
+
The fraction of the hard limit used to determine the soft limit for I/O
  sorting by the sequential scan algorithm. When we cross this limit from
  below no action is taken. When we cross this limit from above it is
  because we are issuing verification I/O. In this case (unless the metadata
  scan is done) we stop issuing verification I/O and start scanning metadata
  again until we get to the hard limit.
+
=0|1 + (uint)
+
When reporting resilver throughput and estimated completion time use the + performance observed over roughly the last + zfs_scan_report_txgs TXGs. When set to zero performance + is calculated over the time between checkpoints.
+
=0|1 + (int)
+
Enforce tight memory limits on pool scans when a sequential scan is in + progress. When disabled, the memory limit may be exceeded by fast + disks.
+
=0|1 + (int)
+
Freezes a scrub/resilver in progress without actually pausing it. Intended + for testing/debugging.
+
=16777216B + (16 MiB) (int)
+
Maximum amount of data that can be concurrently issued at once for scrubs + and resilvers per leaf device, given in bytes.
+
=0|1 + (int)
+
Allow sending of corrupt data (ignore read/checksum errors when + sending).
+
=1|0 + (int)
+
Include unmodified spill blocks in the send stream. Under certain + circumstances, previous versions of ZFS could incorrectly remove the spill + block from an existing object. Including unmodified copies of the spill + blocks creates a backwards-compatible stream which will recreate a spill + block if it was incorrectly removed.
+
=20^-1 + (int)
+
The fill fraction of the zfs + send internal queues. The fill fraction controls + the timing with which internal threads are woken up.
+
=1048576B + (1MB) (int)
+
The maximum number of bytes allowed in zfs + send's internal queues.
+
=20^-1 + (int)
+
The fill fraction of the zfs + send prefetch queue. The fill fraction controls + the timing with which internal threads are woken up.
+
=16777216B + (16MB) (int)
+
The maximum number of bytes allowed that will be prefetched by + zfs send. This value must + be at least twice the maximum block size in use.
+
=20^-1 + (int)
+
The fill fraction of the zfs + receive queue. The fill fraction controls the + timing with which internal threads are woken up.
+
=16777216B + (16MB) (int)
+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice + the maximum block size in use.
+
=1048576B + (1MB) (int)
+
The maximum amount of data, in bytes, that zfs + receive will write in one DMU transaction. This is + the uncompressed size, even when receiving a compressed send stream. This + setting will not reduce the write size below a single block. Capped at a + maximum of 32MB.
+
=0|1 + (ulong)
+
Setting this variable overrides the default logic for estimating block + sizes when doing a zfs + send. The default heuristic is that the average + block size will be the current recordsize. Override this value if most + data in your dataset is not of that size and you require accurate zfs send + size estimates.
+
=2 + (int)
+
Flushing of data to disk is done in passes. Defer frees starting in this + pass.
+
=16777216B + (16MB) (int)
+
Maximum memory used for prefetching a checkpoint's space map on each vdev + while discarding the checkpoint.
+
=25% + (int)
+
Only allow small data blocks to be allocated on the special and dedup vdev + types when the available free space percentage on these vdevs exceeds this + value. This ensures reserved space is available for pool metadata as the + special vdevs approach capacity.
+
=8 + (int)
+
Starting in this sync pass, disable compression (including of metadata). + With the default setting, in practice, we don't have this many sync + passes, so this has no effect. +

The original intent was that disabling compression would help + the sync passes to converge. However, in practice, disabling compression + increases the average number of sync passes; because when we turn + compression off, many blocks' size will change, and thus we have to + re-allocate (not overwrite) them. It also increases the number of + 128kB allocations (e.g. for indirect blocks and + spacemaps) because these will not be compressed. The + 128kB allocations are especially detrimental to + performance on highly fragmented systems, which may have very few free + segments of this size, and may need to load new metaslabs to satisfy + these allocations.

+
+
=2 + (int)
+
Rewrite new block pointers starting in this pass.
+
=75% + (int)
+
This controls the number of threads used by + . + The default value of + will + create a maximum of one thread per CPU.
+
=134217728B + (128MB) (uint)
+
Maximum size of TRIM command. Larger ranges will be split into chunks no + larger than this value before issuing.
+
=32768B + (32kB) (uint)
+
Minimum size of TRIM commands. TRIM ranges smaller than this will be + skipped, unless they're part of a larger range which was chunked. This is + done because it's common for these small TRIMs to negatively impact + overall performance.
+
=0|1 + (uint)
+
Skip uninitialized metaslabs during the TRIM process. This option is + useful for pools constructed from large thinly-provisioned devices where + TRIM operations are slow. As a pool ages, an increasing fraction of the + pool's metaslabs will be initialized, progressively degrading the + usefulness of this option. This setting is stored when starting a manual + TRIM and will persist for the duration of the requested TRIM.
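  A manual TRIM, during which this setting is captured, is started with zpool-trim(8), for example (the pool name is hypothetical):

  zpool trim tank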
+
=10 + (uint)
+
Maximum number of queued TRIMs outstanding per leaf vdev. The number of + concurrent TRIM commands issued to the device is controlled by + zfs_vdev_trim_min_active and + zfs_vdev_trim_max_active.
+
=32 + (uint)
+
The number of transaction groups' worth of frees which should be + aggregated before TRIM operations are issued to the device. This setting + represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available + for use by the device. +

Increasing this value will allow frees to be aggregated for a
  longer time. This will result in larger TRIM operations and potentially
  increased memory usage. Decreasing this value will have the opposite
  effect. The default of 32 was determined to be a
  reasonable compromise.

+
+
=0 + (int)
+
Historical statistics for this many latest TXGs will be available in + /proc/spl/kstat/zfs/pool/TXGs.
+
=5s + (int)
+
Flush dirty data to disk at least every this many seconds (maximum TXG + duration).
+
=0|1 + (int)
+
Allow TRIM I/Os to be aggregated. This is normally not helpful because the
  extents to be trimmed will already have been aggregated by the
  metaslab. This option is provided for debugging and performance
  analysis.
+
=1048576B + (1MB) (int)
+
Max vdev I/O aggregation size.
+
=131072B + (128kB) (int)
+
Max vdev I/O aggregation size for non-rotating media.
+
=16 + (64kB) (int)
+
Shift size to inflate reads to.
+
=16384B + (16kB) (int)
+
Inflate reads smaller than this value to meet the + zfs_vdev_cache_bshift size (default + ).
+
=0 + (int)
+
Total size of the per-disk cache in bytes. +

Currently this feature is disabled, as it has been found to + not be helpful for performance and in some cases harmful.

+
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation immediately follows its predecessor on rotational vdevs for the + purpose of making decisions based on load.
+
=5 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=1048576B + (1MB) (int)
+
The maximum distance for the last queued I/O operation in which the + balancing algorithm considers an operation to have locality. + See ZFS + I/O SCHEDULER.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/O operations do not immediately follow one + another.
+
=1 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member when an I/O + operation lacks locality as defined by the + zfs_vdev_mirror_rotating_seek_offset. Operations within + this that are not immediately following the previous operation are + incremented by half.
+
=32768B + (32kB) (int)
+
Aggregate read I/O operations if the on-disk gap between them is within + this threshold.
+
=4096B + (4kB) (int)
+
Aggregate write I/O operations if the on-disk gap between them is within + this threshold.
+
=fastest + (string)
+
Select the raidz parity implementation to use. +

Variants that don't depend on CPU-specific features may be + selected on module load, as they are supported on all systems. The + remaining options may only be set after the module is loaded, as they + are available only if the implementations are compiled in and supported + on the running system.

+

Once the module is loaded, + /sys/module/zfs/parameters/zfs_vdev_raidz_impl + will show the available options, with the currently selected one + enclosed in square brackets.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
fastest          selected by built-in benchmark
  original         original implementation
  scalar           scalar implementation
  sse2             SSE2 instruction set                 64-bit x86
  ssse3            SSSE3 instruction set                64-bit x86
  avx2             AVX2 instruction set                 64-bit x86
  avx512f          AVX512F instruction set              64-bit x86
  avx512bw         AVX512F & AVX512BW instruction sets  64-bit x86
  aarch64_neon     NEON                                 Aarch64/64-bit ARMv8
  aarch64_neonx2   NEON with more unrolling             Aarch64/64-bit ARMv8
  powerpc_altivec  Altivec                              PowerPC
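  For example, the sysfs file mentioned above both reports and sets the selection on a running system; implementations the CPU does not support are simply not offered:

  cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
  echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl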
+
+
+ (charp)
+
. + Prints warning to kernel log for compatibility.
+
= + (int)
+
Max event queue length. Events in the queue can be viewed with + zpool-events(8).
+
=2000 + (int)
+
Maximum recent zevent records to retain for duplicate checking. Setting + this to 0 disables duplicate detection.
+
=s + (15min) (int)
+
Lifespan for a recent ereport that was retained for duplicate + checking.
+
=1048576 + (int)
+
The maximum number of taskq entries that are allowed to be cached. When + this limit is exceeded transaction records (itxs) will be cleaned + synchronously.
+
= + (int)
+
The number of taskq entries that are pre-populated when the taskq is first + created and are immediately available for use.
+
=100% + (int)
+
This controls the number of threads used by
  .
  The default value of
  
  will create a maximum of one thread per CPU.
+
=131072B + (128kB) (int)
+
This sets the maximum block size used by the ZIL. On very fragmented + pools, lowering this (typically to + ) + can improve performance.
+
= + (u64)
+
This sets the minimum delay in nanoseconds that the ZIL will use when
  delaying a block commit while waiting for more records. If ZIL writes are
  too fast, the kernel may not be able to sleep for such a short interval,
  increasing log latency above what zfs_commit_timeout_pct allows.
+
=0|1 + (int)
+
Disable the cache flush commands that are normally sent to disk by the ZIL + after an LWB write has completed. Setting this will cause ZIL corruption + on power loss if a volatile out-of-order write cache is enabled.
+
=0|1 + (int)
+
Disable intent logging replay. Can be disabled for recovery from corrupted + ZIL.
+
=B + (768kB) (ulong)
+
Limit SLOG write size per commit executed with synchronous priority. Any + writes above that will be executed with lower (asynchronous) priority to + limit potential SLOG device abuse by single active ZIL writer.
+
=64 + (int)
+
Usually, one metaslab from each normal-class vdev is dedicated for use by + the ZIL to log synchronous writes. However, if there are fewer than + zfs_embedded_slog_min_ms metaslabs in the vdev, this + functionality is disabled. This ensures that we don't set aside an + unreasonable amount of space for the ZIL.
+
=0|1 + (int)
+
If non-zero, the zio deadman will produce debugging messages (see + zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to + gain diagnostic information for hang conditions which don't involve a + mutex or other locking primitive: typically conditions in which a thread + in the zio pipeline is looping indefinitely.
+
=ms + (30s) (int)
+
When an I/O operation takes more than this much time to complete, it's + marked as slow. Each slow operation causes a delay zevent. Slow I/O + counters can be seen with zpool + status -s.
+
=1|0 + (int)
+
Throttle block allocations in the I/O pipeline. This allows for dynamic + allocation distribution when devices are imbalanced. When enabled, the + maximum number of pending allocations per top-level vdev is limited by + zfs_vdev_queue_depth_pct.
+
=0|1 + (int)
+
Prioritize requeued I/O.
+
=% + (uint)
+
Percentage of online CPUs which will run a worker thread for I/O. These + workers are responsible for I/O work such as compression and checksum + calculations. Fractional number of CPUs will be rounded down. +

The default value of + was chosen to + avoid using all CPUs which can result in latency issues and inconsistent + application performance, especially when slower compression and/or + checksumming is enabled.

+
+
=0 + (uint)
+
Number of worker threads per taskq. Lower values improve I/O ordering and
  CPU utilization, while higher values reduce lock contention.

If 0, generate a system-dependent value + close to 6 threads per taskq.

+
+
= (charp)
+
Set the queue and thread configuration for the I/O read queues. This is an
  advanced debugging parameter. Don't change this unless you understand what
  it does.
+
= (charp)
+
Set the queue and thread configuration for the I/O write queues. This is an
  advanced debugging parameter. Don't change this unless you understand what
  it does.
+
=0|1 + (uint)
+
Do not create zvol device nodes. This may slightly improve startup time on + systems with a very large number of zvols.
+
= + (uint)
+
Major number for zvol block devices.
+
=16384 + (ulong)
+
Discard (TRIM) operations done on zvols will be done in batches of this + many blocks, where block size is determined by the + + property of a zvol.
+
=131072B + (128kB) (uint)
+
When adding a zvol to the system, prefetch this many bytes from the start + and end of the volume. Prefetching these regions of the volume is + desirable, because they are likely to be accessed immediately by + blkid(8) or the kernel partitioner.
+
=0|1 + (uint)
+
When processing I/O requests for a zvol, submit them synchronously. This + effectively limits the queue depth to 1 for each I/O + submitter. When unset, requests are handled asynchronously by a thread + pool. The number of requests which can be handled concurrently is + controlled by zvol_threads.
+
=32 + (uint)
+
Max number of threads which can handle zvol I/O requests + concurrently.
+
=1 + (uint)
+
Defines zvol block devices behaviour when + =: + +
+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/O operations. The scheduler determines when and in what order those + operations are issued. The scheduler divides operations into five I/O + classes, prioritized in the following order: sync read, sync write, async + read, async write, and scrub/resilver. Each queue defines the minimum and + maximum number of concurrent operations that may be issued to the device. In + addition, the device has an aggregate maximum, + zfs_vdev_max_active. Note that the sum of the per-queue + minima must not exceed the aggregate maximum. If the sum of the per-queue + maxima exceeds the aggregate maximum, then the number of active operations + may reach zfs_vdev_max_active, in which case no further + operations will be issued, regardless of whether all per-queue minima have + been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Furthermore, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been + hit, or if there are no operations queued for an I/O class that has not hit + its maximum. Every time an I/O operation is queued or an operation + completes, the scheduler looks for new operations to issue.

+

In general, smaller max_actives will lead to + lower latency of synchronous operations. Larger + max_actives may lead to higher overall throughput, + depending on underlying storage.

+

The ratio of the queues' max_actives determines + the balance of performance between reads, writes, and scrubs. For example, + increasing zfs_vdev_scrub_max_active will cause the scrub + or resilver to complete more quickly, but reads and writes to have higher + latency and lower throughput.
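  As a small, hedged illustration of that trade-off on Linux (values are illustrative only; 2 is the default documented above):

  echo 3 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active   # favor scrub throughput
  echo 2 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active   # back to the default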

+

All I/O classes have a fixed maximum number of outstanding + operations, except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically, + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write operations + according to the amount of dirty data in the pool. Since both throughput and + latency typically increase with the number of concurrent operations issued + to physical devices, reducing the burstiness in the number of concurrent + operations also stabilizes the response time of operations from other + – and in particular synchronous – queues. In broad strokes, + the I/O scheduler will issue more concurrent operations from the async write + queue as there's more dirty data in the pool.

+
+

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points:

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+

Until the amount of dirty data exceeds a minimum percentage of the + dirty data allowed in the pool, the I/O scheduler will limit the number of + concurrent operations to the minimum. As that threshold is crossed, the + number of concurrent operations issued increases linearly to the maximum at + the specified maximum percentage of the dirty data allowed in the pool.

+

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it + exceeds the maximum percentage, this indicates that the rate of incoming + data is greater than the rate that the backend storage can handle. In this + case, we must further throttle incoming writes, as described in the next + section.

+
+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as

+
min_time = + min(zfs_delay_scale * (dirty - min) / (max + - dirty), 100ms)
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be + at or above zfs_vdev_async_write_active_max_dirty_percent, + so that we only start to delay after writing at full speed has failed to + keep up with the incoming write rate. The scale of the curve is defined by + zfs_delay_scale. Roughly speaking, this variable + determines the amount of delay at the midpoint of the curve.
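  A worked example using the default zfs_delay_scale of 500,000 ns given earlier: at the midpoint of the curve, (dirty - min) equals (max - dirty), so the delay is simply zfs_delay_scale = 500 µs, the 2000 IOPS midpoint shown below; if dirty data instead sits at 90% of zfs_dirty_data_max with a 60% delay threshold, the delay becomes 500,000 ns * (0.9 - 0.6) / (1 - 0.9) = 1.5 ms per transaction.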

+
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note that, since the delay is added to the outstanding time remaining on the most recent transaction, it is effectively the inverse of IOPS. Here, the midpoint of 500 us translates to 2000 IOPS. The shape of the curve was chosen such that small changes in the amount of accumulated dirty data in the first three quarters of the curve yield relatively small differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a logarithmic scale:

+
+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the back-end storage, and then by changing the value + of zfs_delay_scale to increase the steepness of the + curve.
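For example, a steeper curve could be requested by raising zfs_delay_scale; a minimal sketch on Linux (the value shown is illustrative, not a recommendation):
echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale               # change at runtime
echo "options zfs zfs_delay_scale=1000000" >> /etc/modprobe.d/zfs.conf  # persist across reboots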

+
+
+ + + + + +
January 10, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/5/index.html b/man/v2.1/5/index.html new file mode 100644 index 000000000..098040a40 --- /dev/null +++ b/man/v2.1/5/index.html @@ -0,0 +1,147 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/5/vdev_id.conf.5.html b/man/v2.1/5/vdev_id.conf.5.html new file mode 100644 index 000000000..446682c88 --- /dev/null +++ b/man/v2.1/5/vdev_id.conf.5.html @@ -0,0 +1,367 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)File Formats ManualVDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + configuration file for vdev_id(8)

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of + vdev_id(8) while it is mapping a disk device name to an + alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ alias name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined alias takes precedence + over a topology-derived name, but the two naming methods can otherwise + coexist. For example, one might name drives in a JBOD with the + sas_direct topology while naming an internal L2ARC + device with an alias. +

name is the name of the link to the + device that will by created under + /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ channel [pci_slot] port name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
enclosure_symlinks yes|no
Additionally create /dev/by-enclosure symlinks to the disk enclosure devices using the naming scheme from vdev_id.conf. enclosure_symlinks is only allowed for sas_direct mode.
+ +
enclosure_symlinks_prefix prefix
Specify the prefix for the enclosure symlinks in the form /dev/by-enclosure/⟨prefix⟩-⟨channel⟩⟨num⟩.
Defaults to “enc”.

+
+
+ slot prefix new [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ multipath yes|no
+
Specifies whether vdev_id(8) will handle only + dm-multipath devices. If set to yes then + vdev_id(8) will examine the first running component disk + of a dm-multipath device as provided by the driver command to determine + the physical path.
+
+ topology sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
sas_direct and scsi
+
channels are uniquely identified by a PCI slot and HBA port number
+
sas_switch
channels are uniquely identified by a SAS switch port number
+
+
+
+ phys_per_port num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS switch port. vdev_id(8) internally uses this value to determine which HBA or switch port a device is connected to. The default is 4.
+
+ slot bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
bay
read the slot number from the bay identifier.
+
phy
read the slot number from the phy identifier.
+
port
use the SAS port as the slot number.
+
id
use the scsi id as the slot number.
+
lun
use the scsi lun as the slot number.
+
ses
use the SCSI Enclosure Services (SES) enclosure device slot number, as reported by sg_ses(8). Intended for use only on systems where bay is unsupported, noting that port and id may be unstable across disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note, that the + channel keyword takes only two arguments in this + example:

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path:

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled:

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+In addition to the disks symlinks, this configuration will create: +
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases:

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
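After editing vdev_id.conf, the new names are typically picked up by re-running the udev rules; a sketch of the usual workflow (command availability may vary by distribution):
udevadm trigger
udevadm settle
ls -l /dev/disk/by-vdev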
+
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/dracut.zfs.7.html b/man/v2.1/7/dracut.zfs.7.html new file mode 100644 index 000000000..7f5c82b5e --- /dev/null +++ b/man/v2.1/7/dracut.zfs.7.html @@ -0,0 +1,402 @@ + + + + + + + dracut.zfs.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dracut.zfs.7

+
+ + + + + +
DRACUT.ZFS(7)Miscellaneous Information ManualDRACUT.ZFS(7)
+
+
+

+

dracut.zfs — + overview of ZFS dracut hooks

+
+
+

+
+
                      parse-zfs.sh → dracut-cmdline.service
+                          |                     ↓
+                          |                     …
+                          |                     ↓
+                          \————————→ dracut-initqueue.service
+                                                |                      zfs-import-opts.sh
+   zfs-load-module.service                      ↓                          |       |
+     |                  |                sysinit.target                    ↓       |
+     ↓                  |                       |        zfs-import-scan.service   ↓
+zfs-import-scan.service ↓                       ↓           | zfs-import-cache.service
+     |   zfs-import-cache.service         basic.target      |     |
+     \__________________|                       |           ↓     ↓
+                        ↓                       |     zfs-load-key.sh
+     zfs-env-bootfs.service                     |         |
+                        ↓                       ↓         ↓
+                 zfs-import.target → dracut-pre-mount.service
+                        |          ↑            |
+                        | dracut-zfs-generator  |
+                        | _____________________/|
+                        |/                      ↓
+                        |                   sysroot.mount ←——— dracut-zfs-generator
+                        |                       |
+                        |                       ↓
+                        |             initrd-root-fs.target ←— zfs-nonroot-necessities.service
+                        |                       |                                 |
+                        |                       ↓                                 |
+                        ↓             dracut-mount.service                        |
+       zfs-snapshot-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        ↓                       …                                 |
+       zfs-rollback-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        |          /sysroot/{usr,etc,lib,&c.} ←———————————————————/
+                        |                       |
+                        |                       ↓
+                        |                initrd-fs.target
+                        \______________________ |
+                                               \|
+                                                ↓
+        export-zfs.sh                      initrd.target
+              |                                 |
+              ↓                                 ↓
+   dracut-shutdown.service                      …
+                                                |
+                                                ↓
+                 zfs-needshutdown.sh → initrd-cleanup.service
+
+

Compare dracut.bootup(7) for the full + flowchart.

+
+
+

+

Under dracut, booting with + ZFS-on-/ is facilitated by a + number of hooks in the 90zfs module.

+

Booting into a ZFS dataset requires + mountpoint=/ to be set on the + dataset containing the root filesystem (henceforth "the boot + dataset") and at the very least either the bootfs + property to be set to that dataset, or the root= kernel + cmdline (or dracut drop-in) argument to specify it.
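As an illustrative sketch (pool and dataset names are hypothetical), either of the following satisfies that requirement:
zpool set bootfs=rpool/ROOT/default rpool      # via the bootfs pool property
root=zfs:rpool/ROOT/default                    # or on the kernel command line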

+

All children of the boot dataset with canmount=on and with mountpoints matching the /etc, /bin, /lib, /lib??, /libx32, and /usr globs are deemed essential and will be mounted as well.

+

zfs-mount-generator(8) is recommended for proper + functioning of the system afterward (correct mount properties, remounting, + &c.).

+
+
+

+
+

+
+
+ root=ZFS=dataset, root=zfs:dataset
+
Use dataset as the boot dataset. All pluses + (‘+’) are replaced with spaces + (‘ ’).
+
, + root=zfs:, + , + [root=]
+
After import, search for the first pool with the bootfs + property set, use its value as-if specified as the + dataset above.
+
rootfstype=zfs root=dataset
+
Equivalent to + root=zfs:dataset.
+
+ [root=]
+
Equivalent to root=zfs:AUTO.
+
+ rootflags=flags
+
Mount the boot dataset with -o + flags; cf. + Temporary Mount + Point Properties in zfsprops(7). These properties + will not last, since all filesystems will be re-mounted from the real + root.
+
+
If specified, dracut-zfs-generator logs to the + journal.
+
+

Be careful about setting neither rootfstype=zfs nor root=zfs:dataset — other automatic boot selection methods, like systemd-gpt-auto-generator and systemd-fstab-generator, might take precedence.

+
+
+

+
+
+ bootfs.snapshot[=snapshot-name]
+
Execute zfs snapshot + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
+ bootfs.rollback[=snapshot-name]
+
Execute zfs rollback -Rf boot-dataset@snapshot-name before pivoting to the real root. snapshot-name defaults to the current kernel release.
+
+ spl_hostid=host-id
+
Use zgenhostid(8) to set the host ID to + host-id; otherwise, + /etc/hostid inherited from the real root is + used.
+
, + zfs.force, zfsforce
+
Appends -f to all zpool + import invocations; primarily useful in + conjunction with spl_hostid=, or if no host ID was + inherited.
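A hypothetical kernel command line combining several of the options above might look like (names and host ID are placeholders):
root=zfs:rpool/ROOT/default bootfs.snapshot spl_hostid=0x00bab10c zfs_force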
+
+
+
+
+

+
+
parse-zfs.sh + ()
+
Processes spl_hostid=. If root= matches a known pattern, above, provides /dev/root and delays the initqueue until zfs(4) is loaded.
+
zfs-import-opts.sh + (systemd environment + generator)
+
Turns zfs_force, zfs.force, + or zfsforce into + ZPOOL_IMPORT_OPTS=-f for + zfs-import-scan.service or + zfs-import-cache.service.
+
zfs-load-key.sh + ()
+
Loads encryption keys for the boot dataset and its essential descendants. +
+
+
=
+
Is prompted for via systemd-ask-password + thrice.
+
=URL, + keylocation=URL
+
network-online.target is started before + loading.
+
=path
+
If path doesn't exist, + udevadm is + settled. If it still doesn't, it's waited for + for up to + s.
+
+
+
+
zfs-env-bootfs.service + (systemd service)
+
After pool import, sets BOOTFS= in the systemd + environment to the first non-null bootfs value in + iteration order.
+
dracut-zfs-generator + (systemd generator)
+
Generates sysroot.mount (using + rootflags=, if any). If an + explicit boot dataset was specified, also generates essential mountpoints + (sysroot-etc.mount, + sysroot-bin.mount, + &c.), otherwise generates + zfs-nonroot-necessities.service which mounts them + explicitly after /sysroot using + BOOTFS=.
+
zfs-snapshot-bootfs.service, + zfs-rollback-bootfs.service + (systemd services)
+
Consume bootfs.snapshot and + bootfs.rollback as described in + CMDLINE. Use + BOOTFS= if no explicit boot dataset was + specified.
+
zfs-needshutdown.sh + ()
+
If any pools were imported, signals that shutdown hooks are required.
+
export-zfs.sh + ()
+
Forcibly exports all pools.
+
/etc/hostid, + /etc/zfs/zpool.cache, + /etc/zfs/vdev_id.conf (regular files)
+
Included verbatim, hostonly.
+
mount-zfs.sh + ()
+
Does nothing on systemd systems (if + dracut-zfs-generator + succeeded). Otherwise, loads encryption key for + the boot dataset from the console or via plymouth. It may not work at + all!
+
+
+
+

+

zfsprops(7), + zpoolprops(7), + dracut-shutdown.service(8), + systemd-fstab-generator(8), + systemd-gpt-auto-generator(8), + zfs-mount-generator(8), + zgenhostid(8)

+
+
+ + + + + +
March 28, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/index.html b/man/v2.1/7/index.html new file mode 100644 index 000000000..5dafcde58 --- /dev/null +++ b/man/v2.1/7/index.html @@ -0,0 +1,157 @@ + + + + + + + Miscellaneous (7) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v2.1/7/zfsconcepts.7.html b/man/v2.1/7/zfsconcepts.7.html new file mode 100644 index 000000000..cc82bb4e6 --- /dev/null +++ b/man/v2.1/7/zfsconcepts.7.html @@ -0,0 +1,301 @@ + + + + + + + zfsconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.7

+
+ + + + + +
ZFSCONCEPTS(7)Miscellaneous Information ManualZFSCONCEPTS(7)
+
+
+

+

zfsconcepts — + overview of ZFS concepts

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. + Snapshots can be created extremely quickly, and initially consume no + additional space within the pool. As data within the active dataset changes, + the snapshot consumes more data than would otherwise be shared with the + active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; their visibility is determined by the snapdev property of the parent volume.

+

File system snapshots can be accessed under the .zfs/snapshot directory in the root of the file system. Snapshots are automatically mounted on demand and may be unmounted at regular intervals. The visibility of the .zfs directory can be controlled by the snapdir property.
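For example (dataset name and mountpoint are hypothetical), a snapshot and its .zfs/snapshot entry can be created and inspected with:
zfs snapshot pool/home@tuesday
ls /pool/home/.zfs/snapshot/tuesday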

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the + filesystem in any way. From a storage standpoint a bookmark just provides a + way to reference when a snapshot was created as a distinct object. Bookmarks + are initially tied to a snapshot, not the filesystem or volume, and they + will survive if the snapshot itself is destroyed. Since they are very light + weight there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.
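A minimal sketch of that workflow, using hypothetical dataset names:
zfs snapshot tank/project@base
zfs clone tank/project@base tank/project-copy
zfs promote tank/project-copy    # tank/project becomes a clone of tank/project-copy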

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in the mountpoint property. This directory is created as needed, and ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/fstab). The mountpoint property can be inherited, so if pool/home has a mount point of /export/stuff, then pool/home/user automatically inherits a mount point of /export/stuff/user.

+

A file system mountpoint property of none prevents the file system from being mounted.

+

If needed, ZFS file systems can also be managed with traditional tools (mount, umount, /etc/fstab). If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. Because pools must be imported before a legacy mount can succeed, administrators should ensure that legacy mounts are only attempted after the zpool import process finishes at boot time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for + details.
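A hypothetical /etc/fstab entry for a dataset whose mountpoint is set to legacy might therefore read:
tank/legacy  /mnt/legacy  zfs  defaults,x-systemd.requires=zfs-import.target  0 0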

+
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the + + property enabled, duplicate data blocks are removed synchronously. The + result is that only unique data is stored and common components are shared + among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.
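As a rough worked example under that guideline, a 40 TiB pool would call for about 40 × 1.25 GiB = 50 GiB of RAM for deduplication metadata; the actual requirement depends on the data, as noted above.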

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow IO and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk IO.

+

Before creating a pool with deduplication enabled, ensure that you have planned your hardware requirements appropriately and implemented appropriate recovery practices, such as regular backups. Consider using the compression property as a less resource-intensive alternative.

+
+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zfsprops.7.html b/man/v2.1/7/zfsprops.7.html new file mode 100644 index 000000000..a3d01074a --- /dev/null +++ b/man/v2.1/7/zfsprops.7.html @@ -0,0 +1,1494 @@ + + + + + + + zfsprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsprops.7

+
+ + + + + +
ZFSPROPS(7)Miscellaneous Information ManualZFSPROPS(7)
+
+
+

+

zfspropsnative + and user-defined properties of ZFS datasets

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using + human-readable suffixes (for example, + , + , + , + , and so + forth, up to + for zettabyte). The following are all valid (and equal) specifications: + 1536M, 1.5g, 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section of + zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable. See zfs load-key and zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the + objsetid of a dataset is not transferred to other pools + when the snapshot is copied with a send/receive operation. The + objsetid can be reused (for a new dataset) after the + dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive + -s, this opaque token can be provided to + zfs send + -t to resume and complete the + zfs receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: filesystem, snapshot, volume, or bookmark.
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of + zfsconcepts(7)) is space that is referenced + exclusively by this snapshot. If this snapshot is destroyed, the amount + of used space will be freed. Space that is shared by + multiple snapshots isn't accounted for in this metric. When a snapshot + is destroyed, space that was previously shared with this snapshot can + become unique to snapshots adjacent to it, thus changing the used space + of those snapshots. The used space of the latest snapshot can also be + affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
userused@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du + and ls + -s. See the zfs + userspace command for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@... + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
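As a sketch with hypothetical dataset and user names, per-user usage can be examined with:
zfs userspace pool/home
zfs get userused@joe pool/home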

+
+
userobjused@user
+
The userobjused property is similar to + userused but instead it counts the number of objects + consumed by a user. This property counts all objects allocated on behalf + of the user, it may differ from the results of system tools such as + df -i. +

When the property xattr=on + is set on a file system additional objects will be created per-file to + store extended attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal + objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. + Project is identified via the project identifier (ID) that is object-based + numeral attribute. An object can inherit the project ID from its parent + object (if the parent has the flag of inherit project ID that can be set + and changed via chattr + -/+P or zfs project + -s) when being created. The privileged user can + set and change object's project ID via chattr + -p or zfs project + -s anytime. Space is charged to the project of + each file, as displayed by lsattr + -p or zfs project. See the + userused@user property for more + information. +

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property + xattr=on is set on a fileset, ZFS will + create additional objects per-file to store extended attributes. These + additional objects are reflected in the projectobjused + value and are counted against the project's + projectobjquota. When a filesystem is configured to use + xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 8 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which + for clones may be a snapshot in the origin's filesystem (or the origin + of the origin's filesystem, etc.)

+
+
+

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
discard
does not inherit any ACEs.
+
noallow
only inherits inheritable ACEs that specify "deny" permissions.
+
restricted
default, removes the write_acl and write_owner permissions when the ACE is inherited.
+
passthrough
inherits all inheritable ACEs without any modifications.
+
passthrough-x
same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
+
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
aclmode=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode: +
+
+
discard
default, deletes all ACL entries except for those representing the mode of the file or directory requested by chmod(2).
+
groupmask
reduces permissions granted in all ALLOW entries found in the ACL such that they are no greater than the group permissions specified by chmod(2).
+
passthrough
indicates that no changes are made to the ACL other than creating or updating the necessary ACL entries to represent the new mode of the file or directory.
+
restricted
will cause the chmod(2) operation to return an error when used on any file or directory which has a non-trivial ACL whose entries can not be represented by a mode. chmod(2) is required to change the set user ID, set group ID, or sticky bits on a file or directory, as they do not have equivalent ACL entries. In order to use chmod(2) on a file or directory with a non-trivial ACL when aclmode is set to restricted, you must first remove all ACL entries which do not represent the current mode.
+
+
+
+
acltype=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
off
default on Linux, when a file system has the acltype property set to off then ACLs are disabled.
+
noacl
an alias for off
+
nfsv4
default on FreeBSD, indicates that NFSv4-style ZFS ACLs should be used. These ACLs can be managed with getfacl(1) and setfacl(1). The nfsv4 ZFS ACL type is not yet supported on Linux.
+
posix
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux and are not functional on other platforms. POSIX ACLs are stored as an extended attribute and therefore will not overwrite any existing NFSv4 ACLs which may be set.
+
posixacl
an alias for posix
+
+
+

To obtain the best performance when setting + posix users are strongly encouraged to set the + xattr=sa property. This will result + in the POSIX ACL being stored more efficiently on disk. But as a + consequence, all new extended attributes will only be accessible from + OpenZFS implementations which support the + xattr=sa property. See the + xattr property for more details.

+
+
atime=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
canmount=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, and + edonr checksum algorithms require enabling the + appropriate features on the pool. FreeBSD does + not support the edonr algorithm.

+

Please see zpool-features(7) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
compression=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

Setting compression to on indicates that the + current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm + is a high-performance replacement for the lzjb + algorithm. It features significantly faster compression and + decompression, as well as a moderately higher compression ratio than + lzjb, but can only be used on pools with the + lz4_compress feature set to + . See + zpool-features(7) for details on ZFS feature flags and + the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

+

The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3.

+

Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-9,10,20,30,...,100,500,1000] which maps to a negative zstd level. The lower the level, the faster the compression; 1000 provides the fastest compression and lowest compression ratio. zstd-fast is equivalent to zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example, 8kB + blocks on disks with 4kB disk sectors must compress to 1/2 or less of + their original size.

+
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
copies=1|2|3
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing + top-level vdev. Do NOT create, for example a two-disk + striped pool and set copies=2 on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the + implementation stores some encryption metadata where the third copy + would normally be.
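A minimal sketch of setting this at creation time, with a hypothetical dataset name:
zfs create -o copies=2 pool/important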

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The + default value is on. The values on and + off are equivalent to the dev and + + mount options.
+
dedup=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + sha256,verify. +

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should not be enabled on a system. See the Deduplication section of zfsconcepts(7).

+
+
dnodesize=legacy|auto|1k|2k|4k|8k|16k
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy + requires the large_dnode + pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the + workload makes heavy use of extended attributes. This may be applicable + to SELinux-enabled systems, Lustre servers, and Samba servers, for + example. Literal values are supported for cases where the optimal size + is known in advance and for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode + feature, or if you need to import this pool on a system that doesn't + support the large_dnode + feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
encryption=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).
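A minimal sketch of creating an encrypted dataset with the default suite and a passphrase key (dataset name hypothetical):
zfs create -o encryption=on -o keyformat=passphrase pool/secure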

+
+
keyformat=raw|hex|passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
# dd if=/dev/urandom bs=32 count=1 of=/path/to/output/key
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
keylocation=prompt|file://<absolute file path>|https://<address>|http://<address>
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is prompt. +

Even though the encryption suite cannot be changed after + dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via the + standard input stream, but users should be careful not to place keys + which should be kept secret on the command line. If a file URI is + selected, the key will be loaded from the specified absolute file path. + If an HTTPS or HTTP URL is selected, it will be GETted using + fetch(3), libcurl, or nothing, depending on + compile-time configuration and run-time availability. The + SSL_CA_CERT_FILE environment variable can be set + to set the location of the concatenated certificate store. The + SSL_CA_CERT_PATH environment variable can be set + to override the location of the directory containing the certificate + authority bundle. The SSL_CLIENT_CERT_FILE and + SSL_CLIENT_KEY_FILE environment variables can be + set to configure the path to the client certificate and its key.

+
+
pbkdf2iters=iterations
+
Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. + The default value is on. The values on + and off are equivalent to the exec and + + mount options.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(7)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512B up to 1M. The default size is 0 which means no + small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(7) for more + details on the special allocation class.

+
+
mountpoint=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section of + zfsconcepts(7) for more information on how this property + is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none, or if they were mounted before the property + was changed. In addition, any shared file systems are unshared and + shared in the new location.
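For example (hypothetical dataset), moving a file system and its inheriting children under a new directory:
zfs set mountpoint=/export/home pool/home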

+
+
=on|off
+
Controls whether the file system should be mounted with + nbmand (Non-blocking mandatory locks). This is used for + SMB clients. Changes to this property only take effect when the file + system is umounted and remounted. Support for these locks is scarce and + not described by POSIX.
+
=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the + property is on by default. Set to off + to disable overlay mounts for consistency with OpenZFS on other + platforms.
+
primarycache=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+
quota=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
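A hypothetical example (names and sizes illustrative only):
# zfs set quota=50G tank/home/alice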

+
+
snapshot_limit=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(7)).
+
userquota@user=size|none
+
Limits the amount of space consumed by the specified user. User space consumption is identified by the userspace@user property.

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace command + for more information.

+

Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@... properties are not + displayed by zfs get + all. The user's name must be appended after the + @ symbol, using one of the following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
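For instance (user and dataset names are hypothetical), a per-user quota could be set and inspected with:
# zfs set userquota@alice=20G tank/home
# zfs get userquota@alice tank/home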

+
+
userobjquota@user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
groupquota@group=size|none
+
Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property.

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
groupobjquota@group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
projectquota@project=size|none
+
Limits the amount of space consumed by the specified project. Project space consumption is identified by the projectused@project property. Please refer to projectused for more information about how the project ID is identified and set or changed.

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
projectobjquota@project=size|none
+
The projectobjquota is similar to projectquota but it limits the number of objects a project can consume. Please refer to userobjused for more information about how objects are counted.
+
readonly=on|off
+
Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options.

This property can also be referred to by its shortened column name, rdonly.

+
+
recordsize=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two greater than or equal to 512B and less than or equal to 128kB. If the large_blocks feature is enabled on the pool, the size may be up to 1MB. See zpool-features(7) for details on ZFS feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.
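As a sketch for a database workload that uses fixed 16K records (names are hypothetical):
# zfs set recordsize=16K tank/db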

+

This property can also be referred to by its shortened column name, recsize.

+
+
redundant_metadata=all|most|some|none
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 1000 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

When set to some, ZFS stores an extra copy + of only critical metadata. This can improve file create performance + since less metadata needs to be written. If a single on-disk block is + corrupt, at worst a single user file can be lost.

+

When set to none, ZFS does not store any + copies of metadata redundantly. If a single on-disk block is corrupt, an + entire dataset can be lost.

+

The default value is all.

+
+
refquota=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
refreservation=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.
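For example, a previously sparse volume (name hypothetical) could be made thick provisioned with:
# zfs set refreservation=auto tank/vol1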

+

This property can also be referred to by its shortened column name, refreserv.

+
+
relatime=on|off
+
Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
+
reservation=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its shortened column name, reserv.

+
+
secondarycache=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+
setuid=on|off
+
Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
+
sharesmb=on|off|opts
+
Controls whether the file system is shared by using Samba USERSHARES, and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE.

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name, which would be invalid in the resource name, are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user, via system passwd/shadow, LDAP, or smbpasswd) by default. This means that any additional access control (for example, disallowing access for specific users) must be done on the underlying file system.
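A minimal hypothetical example:
# zfs set sharesmb=on tank/share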

+
+
sharenfs=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the /etc/exports file. + Otherwise, the file system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +
sec=sys,rw,crossmnt,no_subtree_check
+

Please note that the options are comma-separated, unlike those + found in exports(5). This is done to negate the need + for quoting, as well as to make parsing with scripts easier.

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.
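For example (dataset names and export options are illustrative only):
# zfs set sharenfs=on tank/export
# zfs set sharenfs=rw,no_root_squash tank/export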

+
+
logbias=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
snapdev=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/⟨pool⟩ are hidden or visible. The default value is hidden.
+
snapdir=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of + zfsconcepts(7). The default value is + hidden.
+
sync=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX-specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
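As an illustration only (dataset name hypothetical), synchronous semantics could be relaxed on a scratch dataset whose contents are disposable:
# zfs set sync=disabled tank/scratch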
+
version=N|current
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
volsize=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioned") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the value of the refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. A volume that is not sparse is + said to be "thick provisioned". A sparse volume can become + thick provisioned by setting refreservation to + auto.
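A hypothetical sparse volume (names and sizes illustrative) could be created with:
# zfs create -s -V 100G tank/vol1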

+
+
volmode=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can still be snapshotted, cloned, replicated, and so on, which can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
vscan=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used on Linux.
+
xattr=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory based or + system attribute based. +

The default value of on enables directory based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file. Under Linux, however, the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all ZFS implementations.

+

System attribute based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk IO required. Up to + 64K of data may be stored per-file in the space reserved for system + attributes. If there is not enough space available for an extended + attribute then it will be automatically written as a directory based + xattr. System attribute based extended attributes are not accessible on + platforms which do not support the + xattr=sa feature.

+

The use of system attribute based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.

+

The values on and off are equivalent to the xattr and noxattr mount options.
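For example (dataset name hypothetical), systems using SELinux or POSIX ACLs often benefit from:
# zfs set xattr=sa tank/fs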

+
+
jailed=off|on
+
Controls whether the dataset is managed from a jail. See + zfs-jail(8) for more information. Jails are a + FreeBSD feature and are not relevant on other + platforms. The default value is off.
+
zoned=on|off
+
Controls whether the dataset is managed from a non-global zone. Zones are + a Solaris feature and are not relevant on other platforms. The default + value is off.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
casesensitivity=sensitive|insensitive|mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
normalization=none|formC|formD|formKC|formKD
+
Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
utf8only=on|off
+
Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
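Since these three properties can only be set at creation time, a hypothetical SMB-oriented file system (names and values illustrative only) might be created with:
# zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on tank/smbshare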
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
+
+
atime: atime/noatime
canmount: auto/noauto
devices: dev/nodev
exec: exec/noexec
readonly: ro/rw
relatime: relatime/norelatime
setuid: suid/nosuid
xattr: xattr/noxattr
nbmand: mand/nomand
context=: context=
fscontext=: fscontext=
defcontext=: defcontext=
rootcontext=: rootcontext=
+
+
+

In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as "temporary" by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
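A hypothetical module-namespaced property could be set, read, and cleared with:
# zfs set com.example:backup-policy=weekly tank/data
# zfs get com.example:backup-policy tank/data
# zfs inherit com.example:backup-policy tank/data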

+
+
+
+ + + + + +
July 21, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zpool-features.7.html b/man/v2.1/7/zpool-features.7.html new file mode 100644 index 000000000..ad04b55ac --- /dev/null +++ b/man/v2.1/7/zpool-features.7.html @@ -0,0 +1,1101 @@ + + + + + + + zpool-features.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.7

+
+ + + + + +
ZPOOL-FEATURES(7)Miscellaneous Information ManualZPOOL-FEATURES(7)
+
+
+

+

zpool-features — + description of ZFS pool features

+
+
+

+

ZFS pool on-disk format versions are specified via + "features" which replace the old on-disk format numbers (the last + supported on-disk format number is 28). To enable a feature on a pool use + the zpool upgrade, or set + the feature@feature-name property to + enabled. Please also see the + Compatibility feature + sets section for information on how sets of features may be enabled + together.

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

Since most features can be enabled independently of each other, + the on-disk format of the pool is specified by the set of all features + marked as active on the pool. If the pool was created by + another software version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature-name. The + reversed DNS name ensures that the feature's GUID is unique across all ZFS + implementations. When unsupported features are encountered on a pool they + will be identified by their GUIDs. Refer to the documentation for the ZFS + implementation that created the pool for information about those + features.

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the + ‘:’ (i.e. + com.example:feature-name would + have the short name feature-name), however a feature's + short name may differ across ZFS implementations if following the convention + would result in name conflicts.

+
+
+

+

Features can be in one of three states:

+
+
+
This feature's on-disk format changes are in effect on the pool. Support + for this feature is required to import the pool in read-write mode. If + this feature is not read-only compatible, support is also required to + import the pool in read-only mode (see + Read-only + compatibility).
+
+
An administrator has marked this feature as enabled on the pool, but the + feature's on-disk format changes have not been made yet. The pool can + still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support + returning to the enabled state after becoming + active. See feature-specific documentation for + details.
+
+
This feature's on-disk format changes have not been made and will not be + made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they + have been enabled.
+
+

The state of supported features is exposed through pool properties + of the form feature@short-name.

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as “read-only compatible”. If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly + property during import (see zpool-import(8) for details on + importing pools).

+
+
+

+

For each unsupported feature enabled on an imported pool, a pool + property named + @feature-name + will indicate why the import was allowed despite the unsupported feature. + Possible values for this property are:

+
+
+
The feature is in the enabled state and therefore the + pool's on-disk format is still compatible with software that does not + support this feature.
+
+
The feature is read-only compatible and the pool has been imported in + read-only mode.
+
+
+
+

+

Some features depend on other features being enabled in order to + function. Enabling a feature will automatically enable any features it + depends on.

+
+
+

+

It is sometimes necessary for a pool to maintain compatibility with a specific on-disk format, by enabling and disabling particular features. The compatibility feature facilitates this by allowing feature sets to be read from text files. When set to off (the default), compatibility feature sets are disabled (i.e. all features are enabled); when set to legacy, no features are enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d), the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files are enabled.

+

Simple sanity checks are applied to the files: they must be + between 1B and 16kB in size, and must end with a newline character.

+

The requested features are applied when a pool is created using + zpool create + -o + compatibility= and controls + which features are enabled when using zpool + upgrade. zpool + status will not show a warning about disabled + features which are not part of the requested feature set.

+

The special value legacy prevents any features + from being enabled, either via zpool + upgrade or zpool + set + feature@feature-name=enabled. + This setting also prevents pools from being upgraded to newer on-disk + versions. This is a safety measure to prevent new features from being + accidentally enabled, breaking compatibility.

+

By convention, compatibility files in + /usr/share/zfs/compatibility.d are provided by the + distribution, and include feature sets supported by important versions of + popular distributions, and feature sets commonly supported at the start of + each year. Compatibility files in + /etc/zfs/compatibility.d, if present, will take + precedence over files with the same name in + /usr/share/zfs/compatibility.d.

+

If an unrecognized feature is found in these files, an error + message will be shown. If the unrecognized feature is in a file in + /etc/zfs/compatibility.d, this is treated as an + error and processing will stop. If the unrecognized feature is under + /usr/share/zfs/compatibility.d, this is treated as a + warning and processing will continue. This difference is to allow + distributions to include features which might not be recognized by the + currently-installed binaries.

+

Compatibility files may include comments: any text from + ‘#’ to the end of the line is ignored.

+

Example:

+
+
example# cat /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2
+async_destroy
+bookmarks
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+lz4_compress
+spacemap_histogram
+
+example# zpool create -o compatibility=grub2 bootpool vdev
+
+

See zpool-create(8) and + zpool-upgrade(8) for more information on how these + commands are affected by feature sets.

+
+
+
+

+

The following features are supported on this system:

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables support for separate allocation + classes.

+

This feature becomes active when a dedicated + allocation class vdev (dedup or special) is created with the + zpool create + or zpool + add commands. With + device removal, it can be returned to the enabled + state if all the dedicated allocation class vdevs are removed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Destroying a file system requires traversing all of its data + in order to return its used space to the pool. Without + async_destroy, the file system is not fully removed + until all space has been reclaimed. If the destroy operation is + interrupted by a reboot or power outage, the next attempt to open the + pool will need to complete the destroy operation synchronously.

+

When async_destroy is enabled, the file + system's data will be reclaimed by a background process, allowing the + destroy operation to complete without traversing the entire file system. + The background process is able to resume interrupted destroys after the + pool has been opened, eliminating the need to finish interrupted + destroys as part of the open operation. The amount of space remaining to + be reclaimed by the background process is available through the + freeing property.

+

This feature is only active while + freeing is non-zero.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables use of the zfs + bookmark command.

+

This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be listed by running zfs list -t bookmark -r poolname.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 + bookmark is created and will be returned to the + enabled state when all v2 bookmarks are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset, bookmark_v2
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables additional bookmark + accounting fields, enabling the + #bookmark + property (space written since a bookmark) and estimates of send stream + sizes for incrementals from bookmarks.

+

This feature becomes active when a bookmark + is created and will be returned to the enabled state + when all bookmarks with these fields are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the ability for the + zpool attach and + zpool replace commands + to perform sequential reconstruction (instead of healing reconstruction) + when resilvering.

+

Sequential reconstruction resilvers a device in LBA order + without immediately verifying the checksums. Once complete, a scrub is + started, which then verifies the checksums. This approach allows full + redundancy to be restored to the pool in the minimum amount of time. + This two-phase approach will take longer than a healing resilver when + the time to verify the checksums is included. However, unless there is + additional pool damage, no checksum errors should be reported by the + scrub. This feature is incompatible with raidz configurations. This + feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver + completes.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + remove command to remove top-level vdevs, + evacuating them to reduce the total size of the pool.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables use of the draid vdev + type. dRAID is a variant of raidz which provides integrated distributed + hot spares that allow faster resilvering while retaining the benefits of + raidz. Data, parity, and spare space are organized in redundancy groups + and distributed evenly over all of the devices.

+

This feature becomes active when creating a + pool which uses the draid vdev type, or when adding a + new draid vdev to an existing pool.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Edon-R hash algorithm for checksum, including for nopwrite (if compression is also enabled, an overwrite of a block whose checksum matches the data being written will be ignored). In an abundance of caution, Edon-R requires verification when used with dedup: zfs set dedup=edonr,verify (see zfs-set(8)).

+

Edon-R is a very high-performance hash algorithm that was part + of the NIST SHA-3 competition. It provides extremely high hash + performance (over 350% faster than SHA-256), but was not selected + because of its unsuitability as a general purpose secure hash algorithm. + This implementation utilizes the new salted checksumming functionality + in ZFS, which means that the checksum is pre-seeded with a secret + 256-bit random key (stored on the pool) before being fed the data block + to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the edonr feature is set to + enabled, the administrator can turn on the + edonr checksum on any dataset using + zfs set + checksum=edonr + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + edonr, and will return to being + enabled once all filesystems that have ever had their + checksum set to edonr are destroyed.

+

FreeBSD does not support the + edonr feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 + bytes or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of + highly-compressible blocks are stored in the block "pointer" + itself (a misnomer in this case, as it contains the compressed data, + rather than a pointer to its location on disk). Thus the space of the + block (one sector, typically 512B or 4kB) is saved, and no additional + I/O is needed to read and write the data block. This + feature becomes active as soon + as it is enabled and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also + reduces the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobjs) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobjs are empty. This + feature allows us to create each bpobj on-demand, thus eliminating the + empty bpobjs.

+

This feature is active while there are any + filesystems, volumes, or snapshots which were created after enabling + this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Once this feature is enabled, ZFS records the transaction + group number in which new features are enabled. This has no user-visible + impact, but other features may depend on this feature.

+

This feature becomes active +
+ as soon as it is enabled and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark_v2, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an + encrypted dataset is created and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first + dependent feature uses it, and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables filesystem and snapshot limits. These + limits can be used to control how many filesystems and/or snapshots can + be created at the point in the tree on which the limits are set.

+

This feature is active once either of the + limit properties has been set on a dataset. Once activated the feature + is never deactivated.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
enabled_txg
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature has/had bugs, + the result of which is that, if you do a zfs + send -i (or + -R, since it uses + -i) from an affected dataset, the receiving + party will not see any checksum or other errors, but the resulting + destination snapshot will not match the source. Its use by + zfs send + -i has been disabled by default (see + + in zfs(4)).

+

This feature improves performance of incremental sends + (zfs send + -i) and receives for objects with many holes. + The most common case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A + to snapshot B contains + information about every block that changed between A + and B. Blocks which did not + change between those snapshots can be identified and omitted from the + stream using a piece of metadata called the "block birth + time", but birth times are not recorded for holes (blocks filled + only with zeroes). Since holes created after A + cannot be distinguished from holes created + before A, information about every hole in the + entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. + However, when incrementally replicating filesystems or zvols with many + holes (for example a zvol formatted with another filesystem) a lot of + time will be spent sending and receiving unnecessary information about + holes that already exist on the receiving side.

+

Once the hole_birth feature has been enabled + the block birth times of all new holes will be recorded. Incremental + sends between snapshots created after this feature is enabled will use + this new metadata to avoid sending information about holes that already + exist on the receiving side.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the record size on a dataset to be set + larger than 128kB.

+

This feature becomes active once a dataset + contains a file with a block size larger than 128kB, and will return to + being enabled once all filesystems that have ever had + their recordsize larger than 128kB are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the size of dnodes in a dataset to be set larger than 512B. This feature becomes active once a dataset contains an object with a dnode larger than 512B, which occurs as a result of setting the dnodesize dataset property to a value other than legacy. The feature will return to being enabled once all filesystems that have ever contained a dnode larger than 512B are destroyed. Large dnodes allow more data to be stored in the bonus buffer, thus potentially improving performance by avoiding the use of spill blocks.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist, which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
com.delphix:spacemap_v2
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature improves performance for heavily-fragmented + pools, especially when workloads are heavy in random-writes. It does so + by logging all the metaslab changes on a single spacemap every TXG + instead of scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

lz4 is a high-performance real-time + compression algorithm that features significantly faster compression and + decompression as well as a higher compression ratio than the older + lzjb compression. Typically, lz4 + compression is approximately 50% faster on compressible data and 200% + faster on incompressible data than lzjb. It is also + approximately 80% faster on decompression, while giving approximately a + 10% better compression ratio.

+

When the lz4_compress feature is set to + enabled, the administrator can turn on + lz4 compression on any dataset on the pool using the + zfs-set(8) command. All newly written metadata will be + compressed with the lz4 algorithm.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored + or raidz configuration.

+

When the multi_vdev_crash_dump feature is + set to enabled, the administrator can use + dumpadm(1M) to configure a dump device on a pool + comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature + is unused, but registered for compatibility. New pools created on these + systems will have the feature enabled but will never + transition to active, as this functionality is not + required for crash dump support. Existing pools where this feature is + active can be imported.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
device_removal
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature is an enhancement of + device_removal, which will over time reduce the memory + used to track removed devices. When indirect blocks are freed or + remapped, we note that their part of the indirect mapping is + "obsolete" – no longer needed.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account for space and object usage against the project identifier (ID).

+

The project ID is an object-based attribute. When + upgrading an existing filesystem, objects without a project ID will be + assigned a zero project ID. When this feature is enabled, newly created + objects inherit their parent directories' project ID if the parent's + inherit flag is set (via chattr + + or zfs + project + -s|-C). Otherwise, the + new object's project ID will be zero. An object's project ID can be + changed at any time by the owner (or privileged user) via + chattr -p + prjid or zfs + project -p + prjid.

+

This feature will become active as soon as + it is enabled and will never return to being disabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.
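As a sketch (project IDs, paths, and dataset names are hypothetical), a directory tree could be assigned project ID 42 and limited with:
# zfs project -p 42 -r -s /tank/fs/projA
# zfs set projectquota@42=100G tank/fs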

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmarks, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of redacted + zfs sends, which create + redaction bookmarks storing the list of blocks redacted by the send that + created them. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the receiving of redacted zfs send streams, which create redacted datasets when received. These datasets are missing some of their blocks, and so cannot be safely mounted, and their contents cannot be safely read. For more information about redacted receives, see zfs-send(8).

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to postpone new resilvers if an + existing one is already in progress. Without this feature, any new + resilvers will cause the currently running one to be immediately + restarted from the beginning.

+

This feature becomes active once a resilver + has been deferred, and returns to being enabled when + the deferred resilver begins.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit + arithmetic of SHA-512 provides an approximate 50% performance boost over + SHA-256 on 64-bit hardware and is thus a good minimum-change replacement + candidate for systems where hash performance is important, but these + systems cannot for whatever reason utilize the faster + skein and + edonr algorithms.

+

When the sha512 feature is set to + enabled, the administrator can turn on the + sha512 checksum on any dataset using + zfs set + checksum=sha512 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + sha512, and will return to being + enabled once all filesystems that have ever had their + checksum set to sha512 are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm + that was a finalist in the NIST SHA-3 competition. It provides a very + high security margin and high performance on 64-bit hardware (80% faster + than SHA-256). This implementation also utilizes the new salted + checksumming functionality in ZFS, which means that the checksum is + pre-seeded with a secret 256-bit random key (stored on the pool) before + being fed the data block to be checksummed. Thus the produced checksums + are unique to a given pool, preventing hash collision attacks on systems + with dedup.

+

When the skein feature is set to + enabled, the administrator can turn on the + skein checksum on any dataset using + zfs set + checksum=skein + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + skein, and will return to being + enabled once all filesystems that have ever had their + checksum set to skein are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, it will be activated when a new space map object is created, or an existing space map is upgraded to the new format, and never returns back to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the use of the new space map encoding + which consists of two words (instead of one) whenever it is + advantageous. The new encoding allows space maps to represent large + regions of space more efficiently on-disk while also increasing their + maximum addressable offset.

+

This feature becomes active once it is + enabled, and never returns back to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account the object usage + information by user and group.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the zpool + checkpoint command that can checkpoint the state + of the pool at the time it was issued and later rewind back to it or + discard it.

+

This feature becomes active when the + zpool checkpoint command + is used to checkpoint the pool. The feature will only return back to + being enabled when the pool is rewound or the + checkpoint has been discarded.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

zstd is a high-performance compression algorithm that features a combination of high compression ratios and high speed. Compared to gzip, zstd offers slightly better compression at much higher speeds. Compared to lz4, zstd offers much better compression while being only modestly slower. Typically, zstd compression speed ranges from 250 to 500 MB/s per thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to + enabled, the administrator can turn on + zstd compression of any dataset using + zfs set + compress=zstd + dset (see zfs-set(8)). This + feature becomes active once a + compress property has been set to + zstd, and will return to being + enabled once all filesystems that have ever had their + compress property set to zstd are + destroyed.

+
+
+
+
+

+

zpool(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zpoolconcepts.7.html b/man/v2.1/7/zpoolconcepts.7.html new file mode 100644 index 000000000..7faec45ef --- /dev/null +++ b/man/v2.1/7/zpoolconcepts.7.html @@ -0,0 +1,602 @@ + + + + + + + zpoolconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.7

+
+ + + + + +
ZPOOLCONCEPTS(7)Miscellaneous Information ManualZPOOLCONCEPTS(7)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system on which it + resides. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand N-1 devices failing without losing data.
+
, + raidz1, raidz2, + raidz3
+
A variation on RAID-5 that allows for better distribution of parity and + eliminates the RAID-5 "write hole" (in which data and parity + become inconsistent after a power loss). Data and parity is striped across + all disks within a raidz group. +

A raidz group can have single, double, or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P devices failing without losing data. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.

+
+
, + draid1, draid2, + draid3
+
A variant of raidz that provides integrated distributed hot spares which + allows for faster resilvering while retaining the benefits of raidz. A + dRAID vdev is constructed from multiple internal raidz groups, each with + D data devices and + P parity devices. These groups + are distributed over all of the children in order to fully utilize the + available disk performance. +

Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with zeros) to allow fully sequential resilvering. This fixed stripe width significantly affects both usable capacity and IOPS. For example, with the default D=8 and 4kB disk sectors the minimum allocation size is 32kB. If using compression, this relatively large allocation size can reduce the effective compression ratio. When using ZFS volumes and dRAID, the default of the volblocksize property is increased to account for the allocation size. If a dRAID pool will hold a significant amount of small blocks, it is recommended to also add a mirrored special vdev to store those blocks.

+

In regards to I/O, + performance is similar to raidz since for any read all + D data disks must be accessed. + Delivered random IOPS can be reasonably approximated as + .

+

Like raidz, a dRAID can have single-, double-, or triple-parity. The draid1, draid2, and draid3 types can be used to specify the parity level. The draid vdev type is an alias for draid1.

+

A dRAID with N disks + of size X, D + data disks per redundancy group, + P parity level, and + + distributed hot spares can hold approximately + + bytes and can withstand P + devices failing without losing data.

+
+
[parity][:data][:children][:spares]
+
A non-default dRAID configuration can be specified by appending one or + more of the following optional arguments to the draid + keyword: +
+
parity
+
The parity level (1-3).
+
data
+
The number of data devices per redundancy group. In general, a smaller + value of D will increase IOPS, + improve the compression ratio, and speed up resilvering at the + expense of total usable capacity. Defaults to 8, + unless + + is less than 8.
+
children
+
The expected number of children. Useful as a cross-check when listing + a large number of devices. An error is returned when the provided + number of children differs.
+
spares
+
The number of distributed hot spares. Defaults to zero.
+
+
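As a minimal sketch of this syntax (pool and device names hypothetical), a single-parity dRAID with 4 data disks per redundancy group, 6 children, and 1 distributed spare could be requested as:

# zpool create tank draid1:4d:6c:1s sda sdb sdc sdd sde sdf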
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device dedicated solely for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested, so a mirror or raidz virtual + device can only contain files or disks. Mirrors of mirrors (or other + combinations) are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. Keywords like mirror + and raidz are used to distinguish + where a group ends and another begins. For example, the following creates a + pool with two root vdevs, each a mirror of two disks:

+
# zpool + create mypool + mirror sda sdb + mirror sdc sdd
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as a mirror or raidz + device, is potentially impacted by the state of its associated vdevs, or + component devices. A top-level vdev or component device is in one of the + following states:

+
+
+
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device is degraded as an indication that something may be wrong. ZFS continues to use the device as necessary.
  • The number of I/O errors exceeds acceptable levels. The device could not be marked as faulted because there are insufficient replicas to continue functioning.
+
+
+
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected values.
  • The number of I/O errors exceeds acceptable levels and the device is faulted to prevent further use of the device.
+
+
+
The device was explicitly taken offline by the + zpool offline + command.
+
+
The device is online and functioning.
+
+
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
+
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

Checksum errors represent events where a disk returned data that + was expected to be correct, but was not. In other words, these are instances + of silent data corruption. The checksum errors are reported in + zpool status and + zpool events. When a block + is stored redundantly, a damaged block may be reconstructed (e.g. from raidz + parity or a mirrored copy). In this case, ZFS reports the checksum error + against the disks that contained damaged data. If a block is unable to be + reconstructed (e.g. due to 3 disks being damaged in a raidz2 group), it is + not possible to determine which disks were silently corrupted. In this case, + checksum errors are reported for all disks on which the block is stored.

+

If a device is removed and later re-attached to the system, ZFS attempts to online the device automatically. Device attachment detection is hardware-dependent and might not be supported on all platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool, but when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
# zpool + create pool + mirror sda sdb spare + sdc sdd
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again if another device + fails.

+
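For example (pool and device names hypothetical), a spare can also be added to an existing pool after creation:

# zpool add pool spare sde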

If a pool has a shared spare that is currently being used, the pool cannot be exported, since other pools may use this shared spare, which could lead to data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

The draid vdev type provides distributed hot + spares. These hot spares are named after the dRAID vdev they're a part of + (draid1-2-3 + specifies spare 3 + of vdev 2, + which is a single parity dRAID) and may only be used + by that dRAID vdev. Otherwise, they behave the same as normal hot + spares.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
# zpool + create pool sda sdb + log sdc
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+
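As a sketch of that layout (pool and device names hypothetical), a pool with a mirrored log could be created with:

# zpool create pool sda sdb log mirror sdc sdd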

Log devices can be added, replaced, attached, detached and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allows much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
# zpool + create pool sda sdb + cache sdc sdd
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is + persistent across reboots and restored asynchronously when importing the + pool in L2ARC (persistent L2ARC). This can be disabled by setting + =0. + For cache devices smaller than + , we do + not write the metadata structures required for rebuilding the L2ARC in order + not to waste space. This can be changed with + . + The cache device header + () is + updated even if no metadata structures are written. Setting + =0 + will result in scanning the full-length ARC lists for cacheable content to + be written in L2ARC (persistent ARC). If a cache device is added with + zpool add its label and + header will be overwritten and its contents are not going to be restored in + L2ARC, even if the device was previously part of the pool. If a cache device + is onlined with zpool online + its contents will be restored in L2ARC. This is useful in case of memory + pressure where the contents of the cache device are not fully restored in + L2ARC. The user can off- and online the cache device when there is less + memory pressure in order to fully restore its contents to L2ARC.

+
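For instance (pool and device names hypothetical), a cache device could be taken offline and brought back online once memory pressure subsides, so its contents are fully restored to L2ARC:

# zpool offline pool sdc
# zpool online pool sdc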
+
+

+

Before starting critical procedures that include destructive + actions (like zfs destroy), + an administrator can checkpoint the pool's state and in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, certain operations are not allowed + while a pool has a checkpoint. Specifically, vdev removal/attach/detach, + mirror splitting, and changing the pool's GUID. Adding a new vdev is + supported, but in the case of a rewind it will have to be added again. + Finally, users of this feature should keep in mind that scrubs in a pool + that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
# zpool + checkpoint pool
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
# zpool + export pool
+
# zpool + import --rewind-to-checkpoint + pool
+

To discard the checkpoint from a pool:

+
# zpool + checkpoint -d + pool
+

Dataset reservations (controlled by the + + and + + properties) may be unenforceable while a checkpoint exists, because the + checkpoint is allowed to consume the dataset's reservation. Finally, data + that is part of the checkpoint but has been freed in the current state of + the pool won't be scanned during a scrub.

+
+
+

+

Allocations in the special class are dedicated to specific block + types. By default this includes all metadata, the indirect blocks of user + data, and any deduplication tables. The class can also be provisioned to + accept small file blocks.

+

A pool must always have at least one normal + (non-dedup/-special) vdev before other + devices can be assigned to the special class. If the + special class becomes full, then allocations intended for + it will spill back into the normal class.

+

Deduplication tables can be excluded + from the special class by unsetting the + + ZFS module parameter.

+

Inclusion of small file blocks in the + special class is opt-in. Each dataset can control the size of small file + blocks allowed in the special class by setting the + + property to nonzero. See zfsprops(7) for more info on this + property.

+
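As an illustrative sketch (pool, device, and dataset names hypothetical), a mirrored special vdev can be added and small file blocks opted in per dataset:

# zpool add pool special mirror sdc sdd
# zfs set special_small_blocks=32K pool/dataset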
+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/7/zpoolprops.7.html b/man/v2.1/7/zpoolprops.7.html new file mode 100644 index 000000000..abdb14004 --- /dev/null +++ b/man/v2.1/7/zpoolprops.7.html @@ -0,0 +1,457 @@ + + + + + + + zpoolprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.7

+
+ + + + + +
ZPOOLPROPS(7)Miscellaneous Information ManualZPOOLPROPS(7)
+
+
+

+

zpoolprops — + properties of ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
Percentage of pool space used. This property can also be referred to by + its shortened column name, + .
+
+
Amount of uninitialized space within the pool or device that can be used + to increase the total capacity of the pool. On whole-disk vdevs, this is + the space beyond the end of the GPT – typically occurring when a + LUN is dynamically expanded or a disk replaced with a larger one. On + partition vdevs, this is the space appended to the partition after it was + added to the pool – most likely by resizing it in-place. The space + can be claimed for the pool by bringing it online with + + or using zpool online + -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the + zfs(8) available property describes + how much new data can be written to ZFS filesystems/volumes. The zpool + free property is not generally useful for this purpose, + and can be substantially more than the zfs available + space. This discrepancy is due to several factors, including raidz parity; + zfs reservation, quota, refreservation, and refquota properties; and space + set aside by + + (see zfs(4) for more information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
Space not released while freeing due to corruption, now + permanently leaked into the pool.
+
+
The current health of the pool. Health can be one of + , + , + , + , + .
+
+
A unique identifier for the pool.
+
+
A unique identifier for the pool. Unlike the guid + property, this identifier is generated every time we load the pool (i.e. + does not persist across imports/exports) and never changes while the pool + is loaded (even if a + + operation takes place).
+
+
Total size of the storage pool.
+
unsupported@feature_guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(7) for details.
+
+

The space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(8) command takes into account, but the zpool command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.

+
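For example, the space-related read-only properties can be inspected with zpool get (pool name hypothetical):

# zpool get size,allocated,free,capacity,fragmentation tank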

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only + mode. This property can also be referred to by its shortened column name, + .
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
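For example (pool name hypothetical), one of these properties can be changed on an existing pool with:

# zpool set autotrim=on tank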
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift= (which is =). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set + to on, the pool will be resized according to the size of + the expanded device. If the device is part of a mirror or raidz then all + devices within that mirror/raidz group must be expanded before the new + space is made available to the pool. The default behavior is + off. This property can also be referred to by its + shortened column name, + .
+
=on|off
+
Controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is off. This property can also be referred to by its shortened column name, . Autoreplace can also be used with virtual disks (like device mapper) provided that you use the /dev/disk/by-vdev paths set up by vdev_id.conf. See the vdev_id(8) manual page for more details. Autoreplace and autoonline require the ZFS Event Daemon to be configured and running. See the zed(8) manual page for more details.
+
=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default value for this property is + off. +

Automatic TRIM does not immediately + reclaim blocks after a free. Instead, it will optimistically delay + allowing smaller ranges to be aggregated into a few larger ones. These + can then be issued more efficiently to the storage. TRIM on L2ARC + devices is enabled by setting + .

+

Be aware that automatic trimming of recently freed data blocks can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands. For lower-end devices it is often possible to achieve most of the benefits of automatic trimming by running an on-demand (manual) TRIM periodically using the zpool trim command.

+
+
=|pool[/dataset]
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=off|legacy|file[,file]…
+
Specifies that the pool maintain compatibility with specific feature sets. When set to off (or unset) compatibility is disabled (all features may be enabled); when set to legacy, no features may be enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d) the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files may be enabled.

See zpool-features(7), + zpool-create(8) and zpool-upgrade(8) + for more information on the operation of compatibility feature sets.

+
+
=number
+
This property is deprecated and no longer has any effect.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
+
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared with zpool + clear. This is the default behavior.
+
+
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
+
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(7) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is + output when zfs list is + run without the -t option. The default value is + off. This property can also be referred to by its + shortened name, + .
+
=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See in the zfs(4) manual page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) and spl(4) for additional details. The default value is off.

+
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/fsck.zfs.8.html b/man/v2.1/8/fsck.zfs.8.html new file mode 100644 index 000000000..99a736238 --- /dev/null +++ b/man/v2.1/8/fsck.zfs.8.html @@ -0,0 +1,290 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+
+

+

fsck.zfsdummy + ZFS filesystem checker

+
+
+

+ + + + + +
fsck.zfs[options] + dataset
+
+
+

+

fsck.zfs is a thin shell wrapper that at + most checks the status of a dataset's container pool. It is installed by + OpenZFS because some Linux distributions expect a fsck helper for all + filesystems.

+

If more than one dataset is specified, each + is checked in turn and the results binary-ored.

+
+
+

+

Ignored.

+
+
+

+

ZFS datasets are checked by running zpool + scrub on the containing pool. An individual ZFS + dataset is never checked independently of its pool, which is unlike a + regular filesystem.

+
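As a sketch (pool and dataset names hypothetical), the wrapper invocation and the scrub that actually verifies the data would be:

# fsck.zfs tank/home
# zpool scrub tank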

However, the fsck(8) interface still + allows it to communicate some errors: if the dataset + is in a degraded pool, then fsck.zfs will return + exit code to indicate + an uncorrected filesystem error.

+

Similarly, if the dataset is in a + faulted pool and has a legacy /etc/fstab record, + then fsck.zfs will return exit code + to indicate a fatal + operational error.

+
+
+

+

fstab(5), fsck(8), + zpool-scrub(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/index.html b/man/v2.1/8/index.html new file mode 100644 index 000000000..4eee83da7 --- /dev/null +++ b/man/v2.1/8/index.html @@ -0,0 +1,309 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/mount.zfs.8.html b/man/v2.1/8/mount.zfs.8.html new file mode 100644 index 000000000..ddc476c3c --- /dev/null +++ b/man/v2.1/8/mount.zfs.8.html @@ -0,0 +1,297 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+
+

+

mount.zfsmount + ZFS filesystem

+
+
+

+ + + + + +
mount.zfs[-sfnvh] [-o + options] dataset + mountpoint
+
+
+

+

The mount.zfs helper is used by mount(8) to mount filesystem snapshots and legacy ZFS filesystems, as well as by zfs(8) when the environment variable is not set. Users should invoke either zfs(8) or mount(8) in most cases.

+

options are handled according + to the section in zfsprops(7), except + for those described below.

+

If /etc/mtab is a regular file and + -n was not specified, it will be updated via + libmount.

+
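A sketch of a direct invocation for a legacy-mountpoint filesystem (dataset and mountpoint hypothetical):

# mount.zfs tank/legacy /mnt/legacy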
+
+

+
+
+
Ignore unknown (sloppy) mount options.
+
+
Do everything except actually executing the system call.
+
+
Never update /etc/mtab.
+
+
Print resolved mount options and parser state.
+
+
Print the usage message.
+
+ zfsutil
+
This private flag indicates that mount(8) is being + called by the zfs(8) command.
+
+
+
+

+

fstab(5), mount(8), + zfs-mount(8)

+
+
+ + + + + +
May 24, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/vdev_id.8.html b/man/v2.1/8/vdev_id.8.html new file mode 100644 index 000000000..da0de5234 --- /dev/null +++ b/man/v2.1/8/vdev_id.8.html @@ -0,0 +1,322 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is a udev helper which parses vdev_id.conf(5) to map a physical path in a storage topology to a channel name. The channel name is combined with a disk enclosure slot number to create an alias that reflects the physical location of the drive. This is particularly helpful when it comes to tasks like replacing failed drives. Slot numbers may also be remapped in case the default numbering is unsatisfactory. The drive aliases will be created as symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.

+

vdev_id also supports creating + aliases based on existing udev links in the /dev hierarchy using the + configuration + file keyword. See vdev_id.conf(5) for details.

+
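As an illustrative sketch (device name hypothetical), udev typically invokes the helper once per device, roughly as:

# vdev_id -d sda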
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zdb.8.html b/man/v2.1/8/zdb.8.html new file mode 100644 index 000000000..96d5c65ef --- /dev/null +++ b/man/v2.1/8/zdb.8.html @@ -0,0 +1,724 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's ManualZDB(8)
+
+
+

+

zdbdisplay ZFS + storage pool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMNPsvXYy] + [-e [-V] + [-p path]…] + [-I inflight I/Os] + [-o + var=value]… + [-t txg] + [-U cache] + [-x dumpdir] + [poolname[/dataset | + objset ID]] + [object|range…]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path]…] [-U + cache] + poolname[/dataset + | objset ID] + [object|range…]
+
+ + + + + +
zdb-C [-A] + [-U cache]
+
+ + + + + +
zdb-E [-A] + word0:word1:…:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path]…] + [-t txg] + [-U cache] + poolname [vdev + [metaslab]…]
+
+ + + + + +
zdb-O dataset path
+
+ + + + + +
zdb-r dataset path + destination
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path]…] + [-U cache] + poolname + vdev:offset:[lsize/]psize[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path]…] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. This is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not + contain any + "" or + "" + characters, it is interpreted as a pool name. The root dataset can be + specified as "pool/".

+

When operating on an imported and active pool it is possible, + though unlikely, that zdb may interpret inconsistent pool data and behave + erratically.

+
+
+

+

Display options:

+
+
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. See + -N for determining if + [poolname[/dataset | + objset ID]] is to use the specified + [dataset | objset ID] as a + string (dataset name) or a number (objset ID) when datasets have numeric + names. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + * compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
+ word0:word1:…:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
+ device
+
Read the vdev labels and L2ARC header from the specified device. zdb -l will return 0 if a valid label was found, 1 if an error occurred, and 2 if no valid labels were found. The presence of an L2ARC header is indicated by a specific sequence (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the number of L2ARC log blocks zdb -l will return 1. Each unique configuration is displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Display the offset, spacemap, and free space of each metaslab.
+
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Same as -d but force zdb to interpret the + [dataset | objset ID] in + [poolname[/dataset | + objset ID]] as a numeric objset ID.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
+ dataset path destination
+
Copy the specified path inside of the + dataset to the specified destination. Specified + path must be relative to the root of + dataset. This option can be combined with + -v for increasing verbosity.
+
+ poolname + vdev:offset:[lsize/]psize[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
+
Display the current uberblock.
+
+

Other options:

+
+
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
+ [-p path]…
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
+ dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
+ inflight I/Os
+
Limit the number of outstanding checksum I/Os to the specified value. The + default value is 200. This option affects the performance of the + -c option.
+
+ var=value …
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
+
Print numbers in an unscaled form more amenable to parsing, e.g. + + rather than + .
+
+ transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
+ cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
+
Enable verbosity. Specify multiple times for increased verbosity.
+
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+
: Display the configuration of imported pool + rpool
+
+
+
# zdb -C rpool
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ …
+
+
+
: Display basic dataset information about + rpool
+
+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ …
+
+
+
: Display basic information about object 0 in + rpool/export/home
+
+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
: Display the predicted effect of enabling deduplication on + rpool
+
+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ …
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
October 7, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zed.8.html b/man/v2.1/8/zed.8.html new file mode 100644 index 000000000..cbc2cfb99 --- /dev/null +++ b/man/v2.1/8/zed.8.html @@ -0,0 +1,463 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+
+

+

ZEDZFS Event + Daemon

+
+
+

+ + + + + +
ZED[-fFhILMvVZ] [-d + zedletdir] [-p + pidfile] [-P + path] [-s + statefile] [-j + jobs]
+
+
+

+

The ZED (ZFS Event Daemon) monitors events + generated by the ZFS kernel module. When a zevent (ZFS Event) is posted, the + ZED will run any ZEDLETs (ZFS Event Daemon Linkage + for Executable Tasks) that have been enabled for the corresponding zevent + class.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Don't daemonise: remain attached to the controlling terminal, log to the + standard I/O streams.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+ zedletdir
+
Read the enabled ZEDLETs from the specified directory.
+
+ pidfile
+
Write the daemon's process ID to the specified file.
+
+ path
+
Custom $PATH for zedlets to use. Normally zedlets + run in a locked-down environment, with hardcoded paths to the ZFS commands + ($ZFS, $ZPOOL, + $ZED, ...), and a + hard-coded $PATH. This is done for security + reasons. However, the ZFS test suite uses a custom PATH for its ZFS + commands, and passes it to ZED with + -P. In short, -P is only + to be used by the ZFS test suite; never use it in production!
+
+ statefile
+
Write the daemon's state to the specified file.
+
+ jobs
+
Allow at most jobs ZEDLETs to run concurrently, + delaying execution of new ones until they finish. Defaults to + .
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the + zpool events + -v command.

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory + (zedletdir). These can be symlinked or copied from the + + directory; symlinks allow for automatic updates from the installed ZEDLETs, + whereas copies preserve local modifications. As a security measure, since + ownership change is a privileged operation, ZEDLETs must be owned by root. + They must have execute permissions for the user, but they must not have + write permissions for group or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they + should be invoked. In particular, a ZEDLET will be invoked for a given + zevent if either its class or subclass string is a prefix of its filename + (and is followed by a non-alphabetic character). As a special case, the + prefix matches + all zevents. Multiple ZEDLETs may be invoked for a given zevent.

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + .

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner:

+
    +
  1. it is prefixed with ZEVENT_,
  2. it is converted to uppercase, and
  3. each non-alphanumeric character is converted to an underscore.
+

Some additional environment variables have been defined to present + certain nvpair values in a more convenient form. An incomplete list of + zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as “seconds + nanoseconds” since the Epoch.
+
+
The seconds component of + ZEVENT_TIME.
+
+
The + + component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The alias + (“--”) + string of the ZFS distribution the daemon is part of.
+
+
The ZFS version the daemon is part of.
+
+
The ZFS release the daemon is part of.
+
+

ZEDLETs may need to call other ZFS commands. The + installation paths of the following executables are defined as environment + variables: , + , + , + , + and + . + These variables may be overridden in the rc file.

+
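As an illustrative sketch (not a ZEDLET shipped with OpenZFS; the filename and log path are hypothetical), a minimal all-event ZEDLET placed in the enabled-zedlets directory might look like:

#!/bin/sh
# Hypothetical minimal ZEDLET: append each event's ID and class to a log file.
echo "eid=${ZEVENT_EID} class=${ZEVENT_CLASS}" >> /tmp/zed-events.log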
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state.
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
, +
+
Terminate the daemon.
+
+
+
+

+

zfs(8), zpool(8), + zpool-events(8)

+
+
+

+

The ZED requires root privileges.

+

Do not taunt the ZED.

+
+
+

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Internationalization support via gettext has not been added.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-allow.8.html b/man/v2.1/8/zfs-allow.8.html new file mode 100644 index 000000000..832f04db1 --- /dev/null +++ b/man/v2.1/8/zfs-allow.8.html @@ -0,0 +1,849 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of + , + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@... property
groupobjquotaotherAllows accessing any groupobjquota@... property
groupusedotherAllows reading any groupused@... property
groupobjusedotherAllows reading any groupobjused@... property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@... property
userobjquotaotherAllows accessing any userobjquota@... property
userusedotherAllows reading any userused@... property
userobjusedotherAllows reading any userobjused@... property
projectobjquotaotherAllows accessing any projectobjquota@... property
projectquotaotherAllows accessing any projectquota@... property
projectobjusedotherAllows reading any projectobjused@... property
projectusedotherAllows reading any projectused@... property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
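As an illustrative sketch (set and user names hypothetical), a permission set can be defined once and then delegated to individual users:
# zfs allow -s @basicperms create,destroy,mount,snapshot tank/home
# zfs allow joe @basicperms tank/home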
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions remain in effect; for example, a permission granted by an ancestor is still honored. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
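For example (names hypothetical), a single permission can be revoked from a user, or an entire permission set removed:
# zfs unallow joe destroy tank/home
# zfs unallow -s @basicperms tank/home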
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-bookmark.8.html b/man/v2.1/8/zfs-bookmark.8.html new file mode 100644 index 000000000..0149fd12a --- /dev/null +++ b/man/v2.1/8/zfs-bookmark.8.html @@ -0,0 +1,276 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's ManualZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + create bookmark of ZFS snapshot

+
+
+

+ + + + + +
zfsbookmark + snapshot|bookmark + newbookmark
+
+
+

+

Creates a new bookmark of the given snapshot or bookmark. + Bookmarks mark the point in time when the snapshot was created, and can be + used as the incremental source for a zfs + send.

+

When creating a bookmark from an existing redaction + bookmark, the resulting bookmark is + a redaction + bookmark.

+

This feature must be enabled to be used. See zpool-features(7) for details on ZFS feature flags and the bookmarks feature.
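A typical workflow (dataset, snapshot, and bookmark names hypothetical) is to bookmark a snapshot so that the snapshot can be destroyed while the bookmark remains available as an incremental send source:
# zfs snapshot tank/home@monday
# zfs bookmark tank/home@monday tank/home#monday
# zfs destroy tank/home@monday
# zfs send -i tank/home#monday tank/home@tuesday | zfs receive backuppool/home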

+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-change-key.8.html b/man/v2.1/8/zfs-change-key.8.html new file mode 100644 index 000000000..15a46632e --- /dev/null +++ b/man/v2.1/8/zfs-change-key.8.html @@ -0,0 +1,474 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.
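As a minimal sketch (pool and dataset names hypothetical), an encryption root is created by supplying keyformat at creation time; a child inherits its key until zfs change-key makes the child an encryption root of its own:
# zfs create -o encryption=on -o keyformat=passphrase tank/secret
# zfs create tank/secret/project
# zfs change-key -o keyformat=passphrase tank/secret/project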

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption, + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-clone.8.html b/man/v2.1/8/zfs-clone.8.html new file mode 100644 index 000000000..868e6142e --- /dev/null +++ b/man/v2.1/8/zfs-clone.8.html @@ -0,0 +1,282 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-cloneclone + snapshot of ZFS dataset

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]… + snapshot + filesystem|volume
+
+
+

+

See the Clones section of + zfsconcepts(7) for details. The target dataset can be + located anywhere in the ZFS hierarchy, and is created as the same type as + the original.

+
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + + property inherited from their parent. If the target filesystem or volume + already exists, the operation completes successfully.
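For example (snapshot and dataset names hypothetical):
# zfs snapshot tank/ws@base
# zfs clone -p -o mountpoint=/export/ws1 tank/ws@base tank/clones/ws1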
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-create.8.html b/man/v2.1/8/zfs-create.8.html new file mode 100644 index 000000000..4c70798f0 --- /dev/null +++ b/man/v2.1/8/zfs-create.8.html @@ -0,0 +1,412 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + create ZFS dataset

+
+
+

+ + + + + +
zfscreate [-Pnpuv] + [-o + property=value]… + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]… + -V size + volume
+
+
+

+
+
zfs create + [-Pnpuv] [-o + property=value]… + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent, unless the -u option is used. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have filesystem as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to filesystem due to the use of the -o option.
+
+
Do not mount the newly created file system.
+
+
Print verbose information about the created dataset.
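A brief sketch (dataset names hypothetical): create a file system with properties set at creation time, then preview another creation with a dry run:
# zfs create -o compression=lz4 -o mountpoint=/export/projects tank/projects
# zfs create -nv tank/projects/scratch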
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]… + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically rounded up to the nearest multiple of the blocksize.

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + + in the + section of zfsprops(7) for more + information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have volume as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to volume due to the use of the -b or -o options, as well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

ZFS volumes may be used as swap devices. After creating the volume with the zfs create -V command, enable the swap area using the swapon(8) command. Swapping to files on ZFS filesystems is not supported.
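A sketch of that workflow, assuming a pool named tank and a 4 GiB swap volume; the mkswap(8) step, which writes the swap signature before swapon(8) is run, is a Linux-specific assumption:
# zfs create -V 4G -b $(getconf PAGESIZE) tank/swap
# mkswap /dev/zvol/tank/swap
# swapon /dev/zvol/tank/swap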

+
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
December 1, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-destroy.8.html b/man/v2.1/8/zfs-destroy.8.html new file mode 100644 index 000000000..5e29e15fc --- /dev/null +++ b/man/v2.1/8/zfs-destroy.8.html @@ -0,0 +1,365 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + destroy ZFS dataset, snapshots, or bookmark

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Forcibly unmount file systems. This option has no effect on non-file + systems or unmounted file systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
The given snapshots are destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed them. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero.

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same filesystem or volume may be specified in a comma-separated list of snapshots. Only the snapshot's short name (the part after the @) should be specified when using a range or comma-separated list to identify multiple snapshots.
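For example (snapshot names hypothetical), a dry run shows what an inclusive range would remove before it is destroyed for real:
# zfs destroy -nv tank/home@monday%friday
# zfs destroy tank/home@monday%friday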

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-diff.8.html b/man/v2.1/8/zfs-diff.8.html new file mode 100644 index 000000000..d0b9a43bc --- /dev/null +++ b/man/v2.1/8/zfs-diff.8.html @@ -0,0 +1,318 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffshow + difference between ZFS snapshots

+
+
+

+ + + + + +
zfsdiff [-FHth] + snapshot + snapshot|filesystem
+
+
+

+

Display the difference between a snapshot of a given filesystem + and another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are:

+
+
+
-
+
The path has been removed
+
+
The path has been created
+
+
The path has been modified
+
+
The path has been renamed
+
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
+
+
Block device
+
+
Character device
+
+
Directory
+
+
Door
+
+
Named pipe
+
+
Symbolic link
+
+
Event port
+
+
Socket
+
+
Regular file
+
+
+
+
+
Give more parsable tab-separated output, without header lines and without + arrows.
+
+
Display the path's inode change time as the first column of output.
+
+
Do not \0ooo-escape non-ASCII paths.
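As a usage sketch (names hypothetical), the following compares a snapshot against the live file system, showing file types and change times:
# zfs diff -Ft tank/home@yesterday tank/home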
+
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-get.8.html b/man/v2.1/8/zfs-get.8.html new file mode 100644 index 000000000..722a4aea7 --- /dev/null +++ b/man/v2.1/8/zfs-get.8.html @@ -0,0 +1,408 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for + more information on what properties can be set and acceptable values. + Numeric values can be specified as exact values, or in a human-readable + form with a suffix of + , + , + , + , + , + , + , + (for bytes, + kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or + zettabytes, respectively). User properties can be set on snapshots. For + more information, see the + section of zfsprops(7).
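For example (dataset names hypothetical), properties can be set with human-readable suffixes, user properties can be attached to snapshots, and zfs get can report a property across a subtree:
# zfs set quota=50G tank/home/user1
# zfs set com.example:note=pre-upgrade tank/home@backup
# zfs get -r -t filesystem -o name,property,value compression tank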
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: local, + default, inherited, + temporary, received, + or + . + The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-groupspace.8.html b/man/v2.1/8/zfs-groupspace.8.html new file mode 100644 index 000000000..8c33b7d33 --- /dev/null +++ b/man/v2.1/8/zfs-groupspace.8.html @@ -0,0 +1,388 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name. Consequently, it needs neither the -i option (SID to POSIX ID translation), nor -n (numeric IDs), nor -t (type selection).
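A usage sketch (dataset name hypothetical): sort users by space consumed in descending order, then show the equivalent per-group view:
# zfs userspace -o name,used,quota -S used tank/home
# zfs groupspace tank/home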
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-hold.8.html b/man/v2.1/8/zfs-hold.8.html new file mode 100644 index 000000000..ed0529649 --- /dev/null +++ b/man/v2.1/8/zfs-hold.8.html @@ -0,0 +1,321 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rH] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-inherit.8.html b/man/v2.1/8/zfs-inherit.8.html new file mode 100644 index 000000000..eed5b4724 --- /dev/null +++ b/man/v2.1/8/zfs-inherit.8.html @@ -0,0 +1,408 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for + more information on what properties can be set and acceptable values. + Numeric values can be specified as exact values, or in a human-readable + form with a suffix of + , + , + , + , + , + , + , + (for bytes, + kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or + zettabytes, respectively). User properties can be set on snapshots. For + more information, see the + section of zfsprops(7).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: local, + default, inherited, + temporary, received, + or + . + The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
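A brief sketch (names hypothetical): clear a locally set property across a subtree, and revert another property to its received value:
# zfs inherit -r compression tank/home
# zfs inherit -S quota tank/home/user1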
+
+
+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-jail.8.html b/man/v2.1/8/zfs-jail.8.html new file mode 100644 index 000000000..14ff6749f --- /dev/null +++ b/man/v2.1/8/zfs-jail.8.html @@ -0,0 +1,312 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. You also cannot attach the root file system of the jail, or any dataset which needs to be mounted before the zfs rc script is run inside the jail, as it would be attached unmounted until it is mounted from the rc script inside the jail.

+

To allow management of the dataset from within a + jail, the jailed property has to be set and the jail + needs access to the /dev/zfs device. The + property + cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.
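A minimal sketch, assuming a jail named myjail already configured with the parameters described above and a dataset tank/jails/data:
# zfs set jailed=on tank/jails/data
# zfs jail myjail tank/jails/data
# zfs unjail myjail tank/jails/data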

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-list.8.html b/man/v2.1/8/zfs-list.8.html new file mode 100644 index 000000000..d309783a6 --- /dev/null +++ b/man/v2.1/8/zfs-list.8.html @@ -0,0 +1,352 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listlist + properties of ZFS datasets

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]…] + [-s property]… + [-S property]… + [-t + type[,type]…] + [filesystem|volume|snapshot]…
+
+
+

+

If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnapshots pool property is on (the default is off), or if the -t snapshot or -t all options are specified. The following fields are displayed: name, used, available, referenced, mountpoint.

+
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ property
+
Same as the -s option, but sorts by property in + descending order.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command line.
+
+ property
+
A property for sorting the output by column in ascending order based on + the value of the property. The property must be one of the properties + described in the Properties section + of zfsprops(7) or the value name to + sort by the dataset name. Multiple properties can be specified at one time + using multiple -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • +
  • String types sort in alphabetical order.
  • +
  • Types inappropriate for a row sort that row to the literal bottom, + regardless of the specified ordering.
  • +
+

If no sorting options are specified the existing behavior of + zfs list is + preserved.

+
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + , + or all. For example, specifying + -t snapshot displays only + snapshots.
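For example (pool name hypothetical), listing all snapshots recursively, sorted by creation time:
# zfs list -r -t snapshot -o name,used,creation -s creation tank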
+
+
+
+

+

zfsprops(7), zfs-get(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-load-key.8.html b/man/v2.1/8/zfs-load-key.8.html new file mode 100644 index 000000000..2ca258688 --- /dev/null +++ b/man/v2.1/8/zfs-load-key.8.html @@ -0,0 +1,474 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all + children that inherit the keylocation property to be + accessed. The key will be expected in the format specified by the + keyformat and location specified by the + keylocation property. Note that if the + keylocation is set to prompt the + terminal will interactively wait for the key to be entered. Loading a key + will not automatically mount the dataset. If that functionality is + desired, zfs mount + -l will ask for the key and mount the dataset (see + zfs-mount(8)). Once the key is loaded the + keystatus property will become + . +
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
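As a usage sketch (names hypothetical): verify a key file without loading it, load the key and mount the dataset, and later unmount and unload:
# zfs load-key -n -L file:///root/tank-secret.key tank/secret
# zfs load-key tank/secret
# zfs mount tank/secret
# zfs unmount tank/secret
# zfs unload-key tank/secret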
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all + of its children that inherit the keylocation property. + This requires that the dataset is not currently open or mounted. Once the + key is unloaded the keystatus property will become + . +
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted + datasets. Encrypted data cannot be embedded via the + + feature. Encrypted datasets may not have + = + since the implementation stores some encryption metadata where the third + copy would normally be. Since compression is applied before encryption, + datasets may be vulnerable to a CRIME-like attack if applications accessing + the data allow for it. Deduplication with encryption will leak information + about which blocks are equivalent in a dataset and will incur an extra CPU + cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-mount-generator.8.html b/man/v2.1/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..c2a40c6c6 --- /dev/null +++ b/man/v2.1/8/zfs-mount-generator.8.html @@ -0,0 +1,437 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+
+

+

zfs-mount-generator — + generate systemd mount units for ZFS filesystems

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+
+
+

+

zfs-mount-generator is a + systemd.generator(7) that generates native + systemd.mount(5) units for configured ZFS datasets.

+
+

+
+
=
+
+ + or none.
+
=
+
off. Skipped if + only noauto datasets exist for a given mountpoint + and there's more than one. Datasets with + + take precedence over ones with + noauto for the same mountpoint. + Sets logical noauto + flag if noauto. Encryption roots + always generate + zfs-load-key@root.service, + even if off.
+
=, + relatime=, + =, + =, + =, + =, + =
+
Used to generate mount options equivalent to zfs + mount.
+
=, + keylocation=
+
If the dataset is an encryption root, its mount unit will bind to + zfs-load-key@root.service, + with additional dependencies as follows: +
+
+
=
+
None, uses systemd-ask-password(1)
+
=URL + (et al.)
+
=, + After=: + network-online.target
+
=<path>
+
=path
+
+
+ The service also uses the same Wants=, + After=, Requires=, + and RequiresMountsFor=, as the + mount unit.
+
=path[ + path]…
+
+ Requires= for the mount- and key-loading unit.
+
=path[ + path]…
+
+ RequiresMountsFor= for the mount- and key-loading + unit.
+
=unit[ + unit]…
+
+ Before= for the mount unit.
+
=unit[ + unit]…
+
+ After= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + WantedBy= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + RequiredBy= for the mount unit.
+
=(unset)|on|off
+
Waxes or wanes strength of default reverse dependencies of the mount unit, + see below.
+
=on|off
+
on. Defaults to + off.
+
+
+
+

+

Additionally, unless the pool the dataset resides on is imported + at generation time, both units gain + Wants=zfs-import.target and + After=zfs-import.target.

+

Additionally, unless the logical noauto flag is + set, the mount unit gains a reverse-dependency for + local-fs.target of strength

+
+
+
(unset)
+
= + + Before=
+
+
=
+
+
= + + Before=
+
+
+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of

+
zfs + list -Ho + name,⟨every property above in + order⟩
+for datasets that should be mounted by systemd should be kept at + @sysconfdir@/zfs/zfs-list.cache/poolname, + and, if writeable, will be kept synchronized for the entire pool by the + history_event-zfs-list-cacher.sh ZEDLET, if enabled + (see zed(8)). +
+
+
+

+

If the + + environment variable is nonzero (or unset and + /proc/cmdline contains + ""), + print summary accounting information at the end.

+
+
+

+

To begin, enable tracking for the pool:

+
# touch + @sysconfdir@/zfs/zfs-list.cache/poolname
+Then enable the tracking ZEDLET: +
# ln + -s + @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh + @sysconfdir@/zfs/zed.d
+
# systemctl + enable + zfs-zed.service
+
# systemctl + restart + zfs-zed.service
+

If no history event is in the queue, inject one to ensure the + ZEDLET runs to refresh the cache file by setting a monitored property + somewhere on the pool:

+
# zfs + set relatime=off + poolname/dset
+
# zfs + inherit relatime + poolname/dset
+

To test the generator output:

+
$ mkdir + /tmp/zfs-mount-generator
+
$ + @systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator
+If the generated units are satisfactory, instruct + systemd to re-run all generators: +
# systemctl + daemon-reload
+
+
+

+

systemd.mount(5), + zfs(5), + systemd.generator(7), + zed(8), + zpool-events(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-mount.8.html b/man/v2.1/8/zfs-mount.8.html new file mode 100644 index 000000000..22374bc0b --- /dev/null +++ b/man/v2.1/8/zfs-mount.8.html @@ -0,0 +1,336 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has + =, + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
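For illustration, a couple of hedged usage sketches (the dataset names are hypothetical and the commands assume root privileges):
# mount a single filesystem read-only, without atime updates, for this mount only
zfs mount -o ro,noatime tank/archive
# load encryption keys and mount every available filesystem, reporting progress
zfs mount -lva
The -o options apply only for the duration of the mount and do not change the persistent properties of the dataset.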
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
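Correspondingly, a hedged sketch of unmounting (names again hypothetical):
# unmount by dataset name or by mount point
zfs unmount tank/archive
zfs unmount /export/archive
# unmount an encrypted filesystem and unload its keys
zfs unmount -u tank/secure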
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-program.8.html b/man/v2.1/8/zfs-program.8.html new file mode 100644 index 000000000..970ccd69b --- /dev/null +++ b/man/v2.1/8/zfs-program.8.html @@ -0,0 +1,989 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + execute ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script + [script arguments]
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at + http://www.lua.org/manual/5.2/

+

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error; the details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program cannot change on-disk state by calling functions from the zfs.sync submodule. The program can be used to gather information such as properties and to determine whether changes would succeed (zfs.check.*). Without this flag, all pending changes must be synced to disk before a channel program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MB, and can be set to a maximum of 100 MB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
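As a hedged illustration of a command-line invocation (the script path, pool, and argument are hypothetical), the following runs a read-only program with raised instruction and memory limits and passes one argument through to the script's argv table:
# -n: read-only; -t: 20 million Lua instructions; -m: 32 MiB
zfs program -n -t 20000000 -m 33554432 rpool /root/report.lua rpool/home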
+
+

+

A channel program can be invoked either from the command line, or + via a library call to + ().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libZFS interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libZFS interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
return ret0, ret1, ret2, + ...
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
error: "error string, including + Lua stack trace"
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

: + ZFS API functions do not generate Fatal Errors when correctly invoked, they + return an error code and the channel program continues executing. See the + ZFS API section below for + function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libZFS interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
string->string
number->int64
boolean->boolean_value
nil->boolean (no value)
table->nvlist
+

Likewise, table keys are replaced by string equivalents as + follows:

+ + + + + + + + + + + + + + + + + + + +
string->no change
number->signed decimal string ("%lld")
boolean->"true" | "false"
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
assertrawlencollectgarbagerawget
errorrawsetgetmetatableselect
ipairssetmetatablenexttonumber
pairstostringrawequaltype
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+ + + + + + + + + + +
dofileloadfileloadpcallprintxpcall
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
zfs.sync.destroy("rpool@snap")
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
zfs.sync.destroy({[1]="rpool@snap", defer=true})
+

The Lua language allows curly braces to be used in place of parentheses as syntactic sugar for this calling convention:

+
zfs.sync.destroy{"rpool@snap", defer=true}
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
errno = + zfs.sync.promote(dataset)
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
EPERMECHILDENODEVENOSPCENOENTEAGAINENOTDIR
ESPIPEESRCHENOMEMEISDIREROFSEINTREACCES
EINVALEMLINKEIOEFAULTENFILEEPIPEENXIO
ENOTBLKEMFILEEDOME2BIGEBUSYENOTTYERANGE
ENOEXECEEXISTETXTBSYEDQUOTEBADFEXDEVEFBIG
+
+
+

+

For detailed descriptions of the exact behavior of any ZFS + administrative operations, see the main zfs(8) manual + page.

+
+
(msg)
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running +
dtrace -n + 'zfs-dbgmsg{trace(stringof(arg0))}'
+

+
+
msg (string)
+
Debug message to be printed.
+
+
+
(dataset)
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns + false, but + zfs.exists("somepool/fs_that_may_exist") will + error. +

+
+
dataset (string)
+
Dataset to check for existence. Must be in the target pool.
+
+
+
(dataset, + property)
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like GUIDs) may wrap around and appear negative. +

+
+
dataset (string)
+
Filesystem or snapshot path to retrieve properties from.
+
property (string)
+
Name of property to retrieve. All filesystem, snapshot and volume + properties are supported except for + and + . + Also supports the + snap + and + bookmark + properties and the + ⟨|⟩⟨|id + properties, though the id must be in numeric form.
+
+
+
+
+
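A minimal, hedged sketch tying zfs.get_prop and the argument handling together (the script path and dataset names are hypothetical; as noted above, very large uint64 values may appear negative):
cat > /tmp/report.lua <<'EOF'
-- return the used and quota values, and their sources, for the dataset in argv[1]
args = ...
ds = args["argv"][1]
used, used_src = zfs.get_prop(ds, "used")
quota, quota_src = zfs.get_prop(ds, "quota")
return {used=used, used_source=used_src, quota=quota, quota_source=quota_src}
EOF
zfs program -n rpool /tmp/report.lua rpool/home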
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
(dataset, + [defer=true|false])
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

+
+
dataset (string)
+
Filesystem or snapshot to be destroyed.
+
[defer (boolean)]
+
Valid only for destroying snapshots. If set to true, and the + snapshot has holds or clones, allows the snapshot to be marked for + deferred deletion rather than failing.
+
+
+
(dataset, + property)
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The zfs + inherit -S option has + not been implemented. Returns 0 on success, or a nonzero error code if + the property could not be cleared. +

+
+
dataset (string)
+
Filesystem or snapshot containing the property to clear.
+
property (string)
+
The property to clear. Allowed properties are the same as those + for the zfs + inherit command.
+
+
+
(dataset)
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

+
+
dataset (string)
+
Clone to be promoted.
+
+
+
(filesystem)
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

+
+
filesystem (string)
+
Filesystem to rollback.
+
+
+
(dataset, + property, value)
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

+
+
dataset (string)
+
The dataset where the property will be set.
+
property (string)
+
The property to set.
+
value (string)
+
The value of the property to be set.
+
+
+
(dataset)
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

+
+
dataset (string)
+
Name of snapshot to create.
+
+
+
(source, + newbookmark)
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

+
+
source (string)
+
Full name of the existing snapshot or bookmark.
+
newbookmark (string)
+
Full name of the new bookmark.
+
+
+
+
+
+
For each function in the zfs.sync submodule, there is a + corresponding zfs.check function which performs a + "dry run" of the same operation. Each takes the same arguments + as its zfs.sync counterpart and returns 0 if the + operation would succeed, or a non-zero error code if it would fail, along + with any other error details. That is, each has the same behavior as the + corresponding sync function except for actually executing the requested + change. For example, + ("fs") + returns 0 if + zfs.sync.destroy("fs") + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
(dataset, + [defer=true|false])
+
 
+
(dataset)
+
 
+
(filesystem)
+
 
+
(dataset, + property, value)
+
 
+
(dataset)
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
(snapshot)
+
Iterate through all clones of the given snapshot. +

+
+
snapshot (string)
+
Must be a valid snapshot path in the current pool.
+
+
+
(dataset)
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(snapshot)
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

+
+
snapshot (string)
+
Must be a valid snapshot.
+
+
+
(dataset)
+
An alias for zfs.list.user_properties (see relevant entry). +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot or volume.
+
+
+
+
+
+
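As a hedged sketch of combining these iterators (script path and dataset names are hypothetical), the following read-only program reports every hold under a dataset, keyed by snapshot:
cat > /tmp/holds.lua <<'EOF'
-- build a nested table: snapshot name -> { hold tag -> creation time (seconds since the epoch) }
args = ...
result = {}
for snap in zfs.list.snapshots(args["argv"][1]) do
    holds = {}
    for tag, time in zfs.list.holds(snap) do
        holds[tag] = time
    end
    result[snap] = holds
end
return result
EOF
zfs program -n rpool /tmp/holds.lua rpool/home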
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-project.8.html b/man/v2.1/8/zfs-project.8.html new file mode 100644 index 000000000..e423a3d10 --- /dev/null +++ b/man/v2.1/8/zfs-project.8.html @@ -0,0 +1,359 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + manage projects in ZFS filesystem

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory
+
+ + + + + +
zfsproject -C + [-kr] + file|directory
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory
+
+
+

+
+
zfs project + [-d|-r] + file|directory
+
List project identifier (ID) and inherit flag of files and directories. +
+
+
Show the directory project ID and inherit flag, not its children.
+
+
List subdirectories recursively.
+
+
+
zfs project + -C [-kr] + file|directory
+
Clear project inherit flag and/or ID on the files and directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear subdirectories' flags recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory
+
Check project ID and inherit flag on the files and directories: report + entries without the project inherit flag, or with project IDs different + from the target directory's project ID or the one specified with + -p. +
+
+
Delimit filenames with a NUL byte instead of newline.
+
+
Check the directory project ID and inherit flag, not its + children.
+
+ id
+
Compare to id instead of the target files and + directories' project IDs.
+
+
Check subdirectories recursively.
+
+
+
zfs project + -p id + [-rs] + file|directory
+
Set project ID and/or inherit flag on the files and directories. +
+
+ id
+
Set the project ID to the given value.
+
+
Set on subdirectories recursively.
+
+
Set project inherit flag on the given files and directories. This is + usually used for setting up tree quotas with + -r. In that case, the directory's project ID + will be set for all its descendants, unless specified explicitly with + -p.
+
+
+
+
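A hedged end-to-end sketch of setting up a project tree (the directory and project ID are hypothetical; run as root):
# tag the tree with project ID 42 and set the inherit flag recursively
zfs project -p 42 -rs /tank/projects/webapp
# verify: report any entry whose ID or inherit flag does not match the top directory
zfs project -c -r /tank/projects/webapp
# show the resulting ID and inherit flag of the directory itself
zfs project -d /tank/projects/webapp
A quota for the project can then be applied through the projectquota@id property described in zfsprops(7).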
+
+

+

zfs-projectspace(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-projectspace.8.html b/man/v2.1/8/zfs-projectspace.8.html new file mode 100644 index 000000000..8e76370d4 --- /dev/null +++ b/man/v2.1/8/zfs-projectspace.8.html @@ -0,0 +1,388 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + user, + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral rather than a name; consequently the -i (SID to POSIX ID), -n (numeric ID), and -t (type) options do not apply.
+
+
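For example (a hedged sketch; the dataset names are hypothetical):
# per-user space and quota, sorted so the largest consumers appear last
zfs userspace -o name,used,quota -s used tank/home
# the same accounting per numeric project identifier
zfs projectspace tank/projects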
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-promote.8.html b/man/v2.1/8/zfs-promote.8.html new file mode 100644 index 000000000..4a9e5dd31 --- /dev/null +++ b/man/v2.1/8/zfs-promote.8.html @@ -0,0 +1,275 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + promote clone dataset to no longer depend on origin + snapshot

+
+
+

+ + + + + +
zfspromote clone
+
+
+

+

The zfs promote + command makes it possible to destroy the dataset that the clone was created + from. The clone parent-child dependency relationship is reversed, so that + the origin dataset becomes a clone of the specified dataset.

+

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin dataset to the promoted clone, so enough space must be available + to accommodate these snapshots. No new space is consumed by this operation, + but the space accounting is adjusted. The promoted clone must not have any + conflicting snapshot names of its own. The zfs + rename subcommand can be used to rename any + conflicting snapshots.

+
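A hedged sketch (dataset and snapshot names are hypothetical):
# create a clone, then reverse the dependency so the clone owns the shared snapshots
zfs clone tank/prod@base tank/test
zfs promote tank/test
# if promote reports a snapshot name collision, rename the clone's snapshot and retry
zfs rename tank/test@daily tank/test@daily-old
# once promoted, the former origin is a clone of tank/test and may be destroyed
zfs destroy tank/prod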
+
+

+

zfs-clone(8), + zfs-rename(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-receive.8.html b/man/v2.1/8/zfs-receive.8.html new file mode 100644 index 000000000..a1397823c --- /dev/null +++ b/man/v2.1/8/zfs-receive.8.html @@ -0,0 +1,561 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

 If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin= is a special case: even though origin is a read-only property and cannot normally be set, the send stream may still be received as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
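A hedged sketch of how -d and -e affect the target name (pool and dataset names are hypothetical):
# -d drops only the pool name from the sent path: creates tank/backup/fs/child@snap
zfs send pool/fs/child@snap | zfs receive -d tank/backup
# -e keeps only the last path element: creates tank/backup/child@snap
zfs send pool/fs/child@snap | zfs receive -e tank/backup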
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

 If the send stream was sent with -c, then overriding the compression property will have no effect on received data, but the compression property will still be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
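A hedged sketch of resuming, or abandoning, an interrupted receive (dataset names are hypothetical; the saved state is exposed through the receive_resume_token property of the target):
# resume: feed the saved token back into zfs send, keeping -s on the receiving side
zfs send -t "$(zfs get -Ho value receive_resume_token otherpool/backup/fs)" |
    zfs receive -s otherpool/backup/fs
# or discard the partially received state instead
zfs receive -A otherpool/backup/fs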
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-recv.8.html b/man/v2.1/8/zfs-recv.8.html new file mode 100644 index 000000000..e502d28d3 --- /dev/null +++ b/man/v2.1/8/zfs-recv.8.html @@ -0,0 +1,561 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

 If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost dataset in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin= is a special case: even though origin is a read-only property and cannot normally be set, the send stream may still be received as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

 If the send stream was sent with -c, then overriding the compression property will have no effect on received data, but the compression property will still be set. To have the data recompressed on receive, remove the -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-redact.8.html b/man/v2.1/8/zfs-redact.8.html new file mode 100644 index 000000000..e01c36527 --- /dev/null +++ b/man/v2.1/8/zfs-redact.8.html @@ -0,0 +1,767 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVRbcehnpsvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from + the first snapshot to the second snapshot. For example, + -I @a fs@d + is similar to -i @a + ; + -i + + ; + -i + + fs@d. The incremental source may be specified as + with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o + = + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the + source may be the origin snapshot, which must be fully specified + (for example, + , + not just + ).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
-p, --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
-s, --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
-v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
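As an illustration of the -R, -I, -i and -n options described above, the following sketch uses hypothetical pool, dataset and host names (tank/home, pool/backup/home, backuphost); it is an example workflow, not a prescribed procedure:

  # Full replication of tank/home and its descendents up to @monday
  zfs send -R tank/home@monday | ssh backuphost zfs receive -F pool/backup/home

  # Incremental stream covering every snapshot between @monday and @friday
  zfs send -R -I @monday tank/home@friday | ssh backuphost zfs receive pool/backup/home

  # Dry run: report what would be sent, without generating the stream
  zfs send -nv -i @monday tank/home@friday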
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
-D, --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
-L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
-P, --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
-c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
-w, --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
-e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
-i snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
-n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
-v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted, because the snapshots define what is permitted, and everything else is redacted). This use case will produce a new redacted snapshot.
3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
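A minimal sketch of resuming an interrupted transfer, assuming hypothetical dataset and host names and that the receiving side used zfs receive -s so that a receive_resume_token was recorded:

  # On the receiving system, read the token left by the interrupted receive
  token=$(zfs get -H -o value receive_resume_token pool/backup/home)

  # On the sending system, restart the stream from where it stopped
  zfs send -t "$token" | ssh backuphost zfs receive -s pool/backup/home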
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
-S, --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the + form of redaction. Using the zfs + redact command, a + can be created that stores a list of blocks containing + sensitive information. When provided to zfs + send, this causes a redacted send + to occur. Redacted sends omit the blocks containing sensitive information, + replacing them with REDACT records. When these send streams are received, a + redacted dataset is created. A redacted dataset cannot be + mounted by default, since it is incomplete. It can be used to receive other + send streams. In this way datasets can be used for data backup and + replication, with all the benefits that zfs send and receive have to offer, + while protecting sensitive information from being stored on less-trusted + machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
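The shopping-service example above might translate into commands roughly like the following sketch; the pool, clone, bookmark and host names are hypothetical, and the research and dev datasets are assumed to be clones of tank/shop@base with the sensitive data removed:

  # Snapshots on the two sanitized clones act as redaction snapshots
  zfs redact tank/shop@base book1 tank/shop/research@snap tank/shop/dev@snap

  # Send the parent, redacted with respect to those snapshots
  zfs send --redact book1 tank/shop@base | ssh target zfs receive pool/shop

  # Incrementals from the parent to each clone fill in the sanitized data
  zfs send -i tank/shop@base tank/shop/research@snap | ssh target zfs receive pool/shop/research
  zfs send -i tank/shop@base tank/shop/dev@snap | ssh target zfs receive pool/shop/dev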
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
January 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-release.8.html b/man/v2.1/8/zfs-release.8.html new file mode 100644 index 000000000..7fd1d4696 --- /dev/null +++ b/man/v2.1/8/zfs-release.8.html @@ -0,0 +1,321 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rH] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rH] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
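A short example of the commands described above, using hypothetical dataset and tag names:

  # Place a recursive hold named "backup" on a snapshot tree
  zfs hold -r backup tank/home@monday

  # List holds, including those on descendent snapshots
  zfs holds -r tank/home@monday

  # Release the hold so the snapshot can be destroyed again
  zfs release -r backup tank/home@monday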
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-rename.8.html b/man/v2.1/8/zfs-rename.8.html new file mode 100644 index 000000000..d79e02ece --- /dev/null +++ b/man/v2.1/8/zfs-rename.8.html @@ -0,0 +1,332 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + rename ZFS dataset

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's mountpoint property is set to legacy or none, the file system is not unmounted even if this option is not given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
+
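For illustration, a few hypothetical invocations of the forms above:

  # Rename a file system, creating missing parent datasets
  zfs rename -p tank/home/old tank/archive/old

  # Rename without remounting (useful for datasets with legacy mountpoints)
  zfs rename -u tank/apps tank/services

  # Recursively rename a snapshot across all descendent datasets
  zfs rename -r tank/home@monday tank/home@week01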
+
+ + + + + +
September 1, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-rollback.8.html b/man/v2.1/8/zfs-rollback.8.html new file mode 100644 index 000000000..9ff84ea6c --- /dev/null +++ b/man/v2.1/8/zfs-rollback.8.html @@ -0,0 +1,284 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + roll ZFS dataset back to snapshot

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+

When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option.

+

The -rR options do not recursively destroy + the child snapshots of a recursive snapshot. Only direct snapshots of the + specified filesystem are destroyed by either of these options. To completely + roll back a recursive snapshot, you must roll back the individual child + snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones of + those snapshots.
+
+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
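For example, assuming a hypothetical dataset with snapshots @monday, @tuesday and @wednesday, rolling back past the most recent snapshot requires -r:

  # Refused: @tuesday and @wednesday are more recent than @monday
  zfs rollback tank/home@monday

  # Destroys @tuesday and @wednesday, then rolls back to @monday
  zfs rollback -r tank/home@monday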
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-send.8.html b/man/v2.1/8/zfs-send.8.html new file mode 100644 index 000000000..4aa9a1ffc --- /dev/null +++ b/man/v2.1/8/zfs-send.8.html @@ -0,0 +1,767 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVRbcehnpsvw] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVRbcehnpvw] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
-D, --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
-I snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
-L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
-P, --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
-R, --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
-V, --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
-e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
-b, --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
-c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes. Streams sent with -c will not + have their data recompressed on the receiver side using + -o + = + value. The data will stay compressed as it was + from the sender. The new compression property will be set for future + data.
+
-w, --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
-h, --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
-i snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
-n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
-p, --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
-s, --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
-v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
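As a sketch of the -w and -c options described above, using hypothetical dataset and host names and assuming the receiving pool supports the required features:

  # Raw send of an encrypted dataset; the key never has to be loaded
  zfs send -w tank/secure@snap1 | ssh untrusted zfs receive pool/secure

  # Raw incremental follow-up between two snapshots
  zfs send -w -i @snap1 tank/secure@snap2 | ssh untrusted zfs receive pool/secure

  # Compressed send of an unencrypted dataset, preserving large blocks
  zfs send -Lc tank/data@snap1 | ssh backuphost zfs receive pool/data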
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
-D, --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
-L, --large-block
+
Generate a stream which may contain blocks larger than 128KB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128KB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
-P, --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
-c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
-w, --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
-e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
-i snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
-n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
-v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent.
+
+
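A bookmark can stand in for a deleted incremental source, as in this hypothetical sequence:

  # Preserve the incremental source as a bookmark, then free the snapshot
  zfs bookmark tank/data@snap1 tank/data#snap1
  zfs destroy tank/data@snap1

  # Later, send an incremental from the bookmark to a newer snapshot
  zfs send -i tank/data#snap1 tank/data@snap2 | ssh backuphost zfs receive pool/data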
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted, because the snapshots define what is permitted, and everything else is redacted). This use case will produce a new redacted snapshot.
3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
-S, --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
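A sketch of the -S form, assuming a resumable receive into a hypothetical dataset pool/backup/data was interrupted on this system:

  # Forward the partially received state of the local dataset elsewhere
  zfs send -S pool/backup/data | ssh otherhost zfs receive -s pool/copy/data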
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the + form of redaction. Using the zfs + redact command, a + can be created that stores a list of blocks containing + sensitive information. When provided to zfs + send, this causes a redacted send + to occur. Redacted sends omit the blocks containing sensitive information, + replacing them with REDACT records. When these send streams are received, a + redacted dataset is created. A redacted dataset cannot be + mounted by default, since it is incomplete. It can be used to receive other + send streams. In this way datasets can be used for data backup and + replication, with all the benefits that zfs send and receive have to offer, + while protecting sensitive information from being stored on less-trusted + machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
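On the receiving side, the streams from a redaction workflow might be applied in an order like the following sketch (the dataset and stream file names are hypothetical):

  # 1. Receive the redacted parent; the resulting snapshot cannot be mounted
  zfs receive pool/shop < redacted_parent.zstream

  # 2. Receive the incrementals from the parent to each sanitized clone;
  #    these supply the data that was redacted and produce usable datasets
  zfs receive pool/shop/research < research_incr.zstream
  zfs receive pool/shop/dev < dev_incr.zstream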
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
January 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-set.8.html b/man/v2.1/8/zfs-set.8.html new file mode 100644 index 000000000..89a0a85ce --- /dev/null +++ b/man/v2.1/8/zfs-set.8.html @@ -0,0 +1,408 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
-d depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
-o field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
-s source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
-t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
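A few hypothetical invocations tying the three subcommands together (dataset names are examples only):

  # Set a quota and compression on a file system
  zfs set quota=50G compression=lz4 tank/home/user

  # Show selected properties, script-friendly, for all file systems under tank
  zfs get -rHp -o name,property,value -t filesystem quota,compression tank

  # Clear the local quota so it is inherited from tank/home again
  zfs inherit -r quota tank/home/user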
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-share.8.html b/man/v2.1/8/zfs-share.8.html new file mode 100644 index 000000000..d8a4a4bf6 --- /dev/null +++ b/man/v2.1/8/zfs-share.8.html @@ -0,0 +1,308 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareshare + and unshare ZFS filesystems

+
+
+

+ + + + + +
zfsshare [-l] + -a|filesystem
+
+ + + + + +
zfsunshare + -a|filesystem|mountpoint
+
+
+

+
+
zfs share + [-l] + -a|filesystem
+
Shares available ZFS file systems. +
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a|filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
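For example, with hypothetical dataset names:

  # Enable NFS sharing via the property, then share everything eligible
  zfs set sharenfs=on tank/export/home
  zfs share -a

  # Unshare a single file system, by dataset name or by mount point
  zfs unshare tank/export/home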
+
+
+

+

exports(5), smb.conf(5), + zfsprops(7)

+
+
+ + + + + +
May 17, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-snapshot.8.html b/man/v2.1/8/zfs-snapshot.8.html new file mode 100644 index 000000000..d9e5a3010 --- /dev/null +++ b/man/v2.1/8/zfs-snapshot.8.html @@ -0,0 +1,282 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + create snapshots of ZFS datasets

+
+
+

+ + + + + +
zfssnapshot [-r] + [-o + property=value]… + dataset@snapname
+
+
+

+

All previous modifications by successful system calls to the file + system are part of the snapshots. Snapshots are taken atomically, so that + all snapshots correspond to the same moment in time. + zfs snap can be used as an + alias for zfs snapshot. See + the Snapshots section of + zfsconcepts(7) for details.

+
+
-o property=value
+
Set the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets.
+
+
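For example, with hypothetical dataset and user-property names:

  # Atomically snapshot tank/home and all of its descendents,
  # tagging the snapshots with a user property
  zfs snapshot -r -o com.example:job=nightly tank/home@2023-01-12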
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unallow.8.html b/man/v2.1/8/zfs-unallow.8.html new file mode 100644 index 000000000..dca808f0b --- /dev/null +++ b/man/v2.1/8/zfs-unallow.8.html @@ -0,0 +1,849 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
-e|everyone
+
Specifies that the permissions be delegated to everyone.
+
-g group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
-u user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@... property
groupobjquotaotherAllows accessing any groupobjquota@... property
groupusedotherAllows reading any groupused@... property
groupobjusedotherAllows reading any groupobjused@... property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@... property
userobjquotaotherAllows accessing any userobjquota@... property
userusedotherAllows reading any userused@... property
userobjusedotherAllows reading any userobjused@... property
projectobjquotaotherAllows accessing any projectobjquota@... property
projectquotaotherAllows accessing any projectquota@... property
projectobjusedotherAllows reading any projectobjused@... property
projectusedotherAllows reading any projectused@... property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
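As an illustration (set, group, and dataset names are hypothetical), the following defines a @backup permission set and grants it to the staff group:
# zfs allow -s @backup send,snapshot,hold tank/data
# zfs allow staff @backup tank/data
Because sets are evaluated dynamically, adding a permission to @backup later immediately extends what staff may do on tank/data.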
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect; for example, a permission granted by an ancestor remains in effect. If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
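Continuing the hypothetical @backup example above, the first command removes only the hold permission from the set, while the second removes the set entirely:
# zfs unallow -s @backup hold tank/data
# zfs unallow -s @backup tank/data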
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unjail.8.html b/man/v2.1/8/zfs-unjail.8.html new file mode 100644 index 000000000..279727d67 --- /dev/null +++ b/man/v2.1/8/zfs-unjail.8.html @@ -0,0 +1,312 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. You also cannot attach the root file system of the jail or any dataset which needs to be mounted before the zfs rc script is run inside the jail, as it would be attached unmounted until it is mounted from the rc script inside the jail.

+

To allow management of the dataset from within a + jail, the jailed property has to be set and the jail + needs access to the /dev/zfs device. The + property + cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
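A minimal sketch (the jail ID and dataset name are hypothetical): mark the dataset as jailed, attach it to jail 1, and later detach it again:
# zfs set jailed=on tank/jails/www
# zfs jail 1 tank/jails/www
# zfs unjail 1 tank/jails/www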
+
+
+
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unload-key.8.html b/man/v2.1/8/zfs-unload-key.8.html new file mode 100644 index 000000000..11ae14d48 --- /dev/null +++ b/man/v2.1/8/zfs-unload-key.8.html @@ -0,0 +1,474 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available. A short example follows the option descriptions below.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.
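A minimal sketch of that in-place approximation, with hypothetical dataset names; a plain (non-raw) send is used so the received copy is re-encrypted under the new encryption root's key, and zpool trim --secure is only available when the hardware supports it (otherwise use zpool initialize):
# zfs create -o encryption=on -o keyformat=passphrase tank/home-new
# zfs snapshot tank/home@migrate
# zfs send tank/home@migrate | zfs receive tank/home-new/home
# zfs destroy -r tank/home
# zpool trim --secure tank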

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.
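As a brief illustration (dataset names hypothetical): the first command creates an encryption root, the second creates a child that inherits its key, and the third turns that child into its own encryption root:
# zfs create -o encryption=on -o keyformat=passphrase tank/secure
# zfs create tank/secure/projects
# zfs change-key -o keyformat=passphrase tank/secure/projects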

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-unmount.8.html b/man/v2.1/8/zfs-unmount.8.html new file mode 100644 index 000000000..26c56ca58 --- /dev/null +++ b/man/v2.1/8/zfs-unmount.8.html @@ -0,0 +1,336 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should be instead mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
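For example (the dataset name is hypothetical), the first command loads the key and then mounts an encrypted file system, and the second unmounts it and unloads the key again:
# zfs mount -l tank/secure
# zfs unmount -u tank/secure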
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-upgrade.8.html b/man/v2.1/8/zfs-upgrade.8.html new file mode 100644 index 000000000..9999a3e24 --- /dev/null +++ b/man/v2.1/8/zfs-upgrade.8.html @@ -0,0 +1,315 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + manage on-disk version of ZFS filesystems

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a|filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] + -a|filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of ZFS. zfs send + streams generated from new snapshots of these file systems cannot be + accessed on systems running older versions of ZFS. +

In general, the file system version is independent of the pool + version. See zpool-features(7) for information on + features of ZFS storage pools.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.

+
+
+ version
+
Upgrade to version. If not specified, upgrade to + the most recent version. This option can only be used to increase the + version number, and only up to the most recent version supported by + this version of ZFS.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-userspace.8.html b/man/v2.1/8/zfs-userspace.8.html new file mode 100644 index 000000000..719498067 --- /dev/null +++ b/man/v2.1/8/zfs-userspace.8.html @@ -0,0 +1,388 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name. Consequently it needs neither the -i option (SID to POSIX ID translation), nor -n (numeric IDs), nor -t (types).
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs-wait.8.html b/man/v2.1/8/zfs-wait.8.html new file mode 100644 index 000000000..dd05ae4a5 --- /dev/null +++ b/man/v2.1/8/zfs-wait.8.html @@ -0,0 +1,280 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitwait + for activity in ZFS filesystem to stop

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]…] + filesystem
+
+
+

+

Waits until all background activity of the given types has ceased + in the given filesystem. The activity could cease because it has completed + or because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
The filesystem's internal delete queue to empty
+
+
+

Note that the internal delete queue does not finish draining until + all large files have had time to be fully destroyed and all open file + handles to unlinked files are closed.
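For example (pool and dataset names hypothetical), the following blocks until the internal delete queue of tank/home has drained, using the deleteq activity type described above:
# zfs wait -t deleteq tank/home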

+
+
+

+

lsof(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs.8.html b/man/v2.1/8/zfs.8.html new file mode 100644 index 000000000..e64fe72fa --- /dev/null +++ b/man/v2.1/8/zfs.8.html @@ -0,0 +1,996 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigure + ZFS datasets

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfssubcommand + [arguments]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace. For + example:

+
pool/{filesystem,volume,snapshot}
+

where the maximum length of a dataset name is + + (256B) and the maximum amount of nesting allowed in a path is 50 levels + deep.

+

A dataset can be one of the following:

+
+
+
+
Can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX-compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

See zfsconcepts(7) for details.

+
+

+

Properties are divided into two types: native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + zfsprops(7).

+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused/projectused data. For an overview of encryption, see zfs-load-key(8).

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
 
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8)/zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If a hold exists on a snapshot, attempts to destroy that snapshot by using the zfs destroy command return EBUSY.
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8)/zfs-groupspace(8)/zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the file(s) or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its mountpoint property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 on success, 1 if an error occurs, and 2 if invalid command line options were specified.

+
+
+

+
+
: Creating a ZFS File System Hierarchy
+
The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, + and is automatically inherited by the child file system. +
# zfs + create + pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
: Creating a ZFS Snapshot
+
The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system. +
# zfs + snapshot + pool/home/bob@yesterday
+
+
: Creating and Destroying Multiple + Snapshots
+
The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. + Each snapshot is mounted on demand in the + .zfs/snapshot directory at the root of its file + system. The second command destroys the newly created snapshots. +
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
: Disabling and Enabling File System + Compression
+
The following command disables the compression property + for all file systems under pool/home. The next + command explicitly enables compression for + pool/home/anne. +
# zfs + set + compression=off + pool/home
+
# zfs + set + compression=on + pool/home/anne
+
+
: Listing ZFS Datasets
+
The following command lists all active file systems and volumes in the system. Snapshots are displayed if the pool property listsnapshots is set to on. The default is off. See zpoolprops(7) for more information on pool properties.
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
: Setting a Quota on a ZFS File System
+
The following command sets a quota of 50 Gbytes for + pool/home/bob: +
# zfs + set quota=50G + pool/home/bob
+
+
: Listing ZFS Properties
+
The following command lists all properties for + pool/home/bob: +
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings + for pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
: Rolling Back a ZFS File System
+
The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots: +
# zfs + rollback -r + pool/home/anne@yesterday
+
+
: Creating a ZFS Clone
+
The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday. +
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
: Promoting a ZFS Clone
+
The following commands illustrate how to test out changes to a file + system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming: +
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
: Inheriting ZFS Properties
+
The following command causes pool/home/bob + and pool/home/anne to + inherit the checksum property from their parent. +
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
: Remotely Replicating ZFS Data
+
The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
: Using the zfs + receive -d + Option
+
The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as + an empty file system. +
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
: Setting User Properties
+
The following example sets the user-defined + com.example:department + property for a dataset: +
# zfs + set + com.example:department=12345 + tank/accounting
+
+
: Performing a Rolling Snapshot
+
The following example shows how to maintain a history of snapshots with a + consistent naming scheme. To keep a week's worth of snapshots, the user + destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows: +
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
: Setting sharenfs Property Options on a ZFS File + System
+
The following commands show how to set sharenfs property + options to enable read-write access for a set of IP addresses and to + enable root access for system "neo" on the + tank/home file system: +
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
: Delegating ZFS Administration Permissions on a + ZFS Dataset
+
The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take + snapshots on tank/cindys. The permissions on + tank/cindys are also displayed. +
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys + will be unable to mount file systems under + tank/cindys. Add an ACE similar to the following + syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
: Delegating Create Time Permissions on a ZFS + Dataset
+
The following example shows how to grant anyone in the group + staff to create file systems in + tank/users. This syntax also allows staff members to + destroy their own file systems, but not destroy anyone else's file system. + The permissions on tank/users are also displayed. +
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
: Defining and Granting a Permission Set on a ZFS + Dataset
+
The following example shows how to define and grant a permission set on + the tank/users file system. The permissions on + tank/users are also displayed. +
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
: Delegating Property Permissions on a ZFS + Dataset
+
The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
: Removing ZFS Delegated Permissions on a ZFS + Dataset
+
The following example shows how to remove the snapshot permission from the + staff group on the tank/users file + system. The permissions on tank/users are also + displayed. +
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
: Showing the differences between a snapshot and + a ZFS Dataset
+
The following example shows how to see what has changed between a prior + snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected. +
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
: Creating a bookmark
+
The following example creates a bookmark of a snapshot. This bookmark can then be used instead of a snapshot in send streams.
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
: Setting + + Property Options on a ZFS File System
+
The following example shows how to share an SMB filesystem through ZFS. Note that a user and their password must be given.
# + smbmount //127.0.0.1/share_tmp + /mnt/tmp -o + user=workgroup/turbo,password=obrut,uid=1000
+

Minimal /etc/samba/smb.conf + configuration is required, as follows.

+

Samba will need to bind to the loopback interface for the ZFS + utilities to communicate with Samba. This is the default behavior for + most Linux distributions.

+

Samba must be able to authenticate a user. This can be done in + a number of ways (passwd(5), LDAP, + smbpasswd(5), &c.). How to do this is outside the + scope of this document – refer to smb.conf(5) + for more information.

+

See the USERSHARES + section for all configuration options, in case you need to modify any + options of the share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (like via a reboot).

+
+
+
+
+

+
+
+
Use ANSI color in zfs diff + and zfs list output.
+
+
+
+
Cause zfs mount to use + mount(8) to mount ZFS datasets. This option is provided + for backwards compatibility with older ZFS versions.
+
+
+
+
Tells zfs to set the maximum pipe size for sends/receives. Disabled by default on Linux due to an unfixed deadlock in Linux's pipe size handling code.
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + zfsconcepts(7), zfsprops(7), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zpool(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs_ids_to_path.8.html b/man/v2.1/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..d61e28c93 --- /dev/null +++ b/man/v2.1/8/zfs_ids_to_path.8.html @@ -0,0 +1,272 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset-id object-id
+
+
+

+

The zfs_ids_to_path utility converts a provided objset and object ids into a path to the file they refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
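For example (the pool name and the objset/object ids, typically taken from a zpool status -v error report, are hypothetical):
# zfs_ids_to_path -v tank 0x35 0x2a1c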
+
+
+
+

+

zdb(8), zfs(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zfs_prepare_disk.8.html b/man/v2.1/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..2b17566c4 --- /dev/null +++ b/man/v2.1/8/zfs_prepare_disk.8.html @@ -0,0 +1,300 @@ + + + + + + + zfs_prepare_disk.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_prepare_disk.8

+
+ + + + + +
ZFS_PREPARE_DISK(8)System Manager's ManualZFS_PREPARE_DISK(8)
+
+
+

+

zfs_prepare_disk — + special script that gets run before bringing a disk into a + pool

+
+
+

+

zfs_prepare_disk is an optional script + that gets called by libzfs before bringing a disk into a pool. It can be + modified by the user to run whatever commands are necessary to prepare a + disk for inclusion into the pool. For example, users can add lines to + zfs_prepare_disk to do things like update the + drive's firmware or check the drive's health. + zfs_prepare_disk is optional and can be removed if + not needed. libzfs will look for the script at + @zfsexecdir@/zfs_prepare_disk.

+
+

+

zfs_prepare_disk will be passed the + following environment variables:

+

+
+
POOL_NAME
+
+
VDEV_PATH
+
+
VDEV_PREPARE
+
The reason the disk is being prepared for inclusion in the pool ('create', 'add', 'replace', or 'autoreplace'). This can be useful if you only want the script to be run under certain actions.
+
VDEV_UPATH
+
The underlying path to the disk. For multipath this would return one of the /dev/sd* paths to the disk. If the device is not a device mapper device, then VDEV_UPATH just returns the same value as VDEV_PATH
+
VDEV_ENC_SYSFS_PATH
+
+
+

Note that some of these variables may have a blank value. + POOL_NAME is blank at pool creation time, for + example.
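Building on the variables above, the following is a minimal sketch of a site-local hook; the SMART health check and logging shown here are illustrative additions, not part of OpenZFS, and assume smartctl is installed:
#!/bin/sh
# Hypothetical zfs_prepare_disk hook: only act when a disk is being added or replaced.
case "$VDEV_PREPARE" in
    create|add|replace|autoreplace)
        logger "zfs_prepare_disk: preparing ${VDEV_UPATH} for pool ${POOL_NAME:-<new pool>}"
        # Refuse disks that fail a basic health check; a non-zero exit keeps the disk out of the pool.
        smartctl -H "$VDEV_UPATH" >/dev/null 2>&1 || exit 1
        ;;
esac
exit 0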

+
+
+
+

+

zfs_prepare_disk runs with a limited + $PATH.

+
+
+

+

zfs_prepare_disk should return 0 on + success, non-zero otherwise. If non-zero is returned, the disk will not be + included in the pool.

+
+
+ + + + + +
August 30, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zgenhostid.8.html b/man/v2.1/8/zgenhostid.8.html new file mode 100644 index 000000000..b4dddb8fb --- /dev/null +++ b/man/v2.1/8/zgenhostid.8.html @@ -0,0 +1,330 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's ManualZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate host ID into /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates the /etc/hostid file and stores the host ID in it. If hostid was provided, validates and stores that value. Otherwise, randomly generates an ID.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Allow output overwrite.
+
+ filename
+
Write to filename instead of the default + /etc/hostid.
+
hostid
+
Specifies the value to be placed in /etc/hostid. + It should be a number with a value between 1 and 2^32-1. If + , generate a random + ID. This value must be unique among your systems. It + must be an 8-digit-long hexadecimal number, optionally + prefixed by "0x".
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
# + zgenhostid
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
# + zgenhostid + "$(hostid)"
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
# + zgenhostid + deadbeef
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid and overwrite the file + if it exists
+
+
# + zgenhostid -f + -o /tmp/hostid + 0x01234567
+
+
+
+
+

+

genhostid(1), hostid(1), + spl(4)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide the + sethostid(3) function.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zinject.8.html b/man/v2.1/8/zinject.8.html new file mode 100644 index 000000000..f6efafde5 --- /dev/null +++ b/man/v2.1/8/zinject.8.html @@ -0,0 +1,548 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+
+

+

zinjectZFS + Fault Injector

+
+
+

+

zinject creates artificial problems in a + ZFS pool by simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+ + + + + +
zinject
+
+
List injection records.
+
+ + + + + +
zinject-b + objset:object:level:start:end + [-f frequency] + -amu [pool]
+
+
Force an error into the pool at a bookmark.
+
+ + + + + +
zinject-c + id|all
+
+
Cancel injection records.
+
+ + + + + +
zinject-d vdev + -A + | + pool
+
+
Force a vdev into the DEGRADED or FAULTED state.
+
+ + + + + +
zinject-d vdev + -D + latency:lanes + pool
+
+
Add an artificial delay to IO requests on a particular device, such that + the requests take a minimum of latency milliseconds + to complete. Each delay has an associated number of + lanes which defines the number of concurrent IO + requests that can be processed. +

For example, with a single lane delay of 10 ms + (-D + 10:1), the device will only + be able to service a single IO request at a time with each request + taking 10 ms to complete. So, if only a single request is submitted + every 10 ms, the average latency will be 10 ms; but if more than one + request is submitted every 10 ms, the average latency will be more than + 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D + 10:2), then the device will + be able to service two requests at a time, each with a minimum latency + of 10 ms. So, if two requests are submitted every 10 ms, then the + average latency will be 10 ms; but if more than two requests are + submitted every 10 ms, the average latency will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + -D + 10:1 are roughly equivalent + to a single invocation of -D + 10:2. This also means, that + one can specify multiple lanes with differing target latencies. For + example, an invocation of -D + 10:1 followed by + -D + 25:2 will create 3 lanes on + the device: one lane with a latency of 10 ms and two lanes with a 25 ms + latency.
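For example (device and pool names hypothetical), the following creates one 10 ms lane and two 25 ms lanes on a device, then lists and finally cancels the injection records:
# zinject -d sdb -D 10:1 tank
# zinject -d sdb -D 25:2 tank
# zinject
# zinject -c all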

+
+
+ + + + + +
zinject-d vdev + [-e device_error] + [-L label_error] + [-T failure] + [-f frequency] + [-F] pool
+
+
Force a vdev error.
+
+ + + + + +
zinject-I [-s + seconds|-g + txgs] pool
+
+
Simulate a hardware failure that fails to honor a cache flush.
+
+ + + + + +
zinject-p function + pool
+
+
Panic inside the specified function.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amq] path
+
+
Force an error into the contents of a file.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-amq] path
+
+
Force an error into the metadnode for a file or directory.
+
+ + + + + +
zinject-t mos_type + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amqu] pool
+
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+ objset:object:level:start:end
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+ dvas
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas + (ex. + 0,2). This option is not + applicable to logical data errors such as decompress and + decrypt.
+
+ vdev
+
A vdev specified by path or GUID.
+
+ device_error
+
Specify +
+
+
for an ECKSUM error,
+
+
for a data decompression error,
+
+
for a data decryption error,
+
+
to flip a bit in the data after a read,
+
+
for an ECHILD error,
+
+
for an EIO error where reopening the device will succeed, or
+
+
for an ENXIO error where reopening the device will fail.
+
+

For EIO and ENXIO, the "failed" reads or writes + still occur. The probe simply sets the error value reported by the I/O + pipeline so it appears the read or write failed. Decryption errors only + currently work with file data.

+
+
+ frequency
+
Only inject errors a fraction of the time. Expressed as a real number percentage between 0.0001 and 100.
+
+
Fail faster. Do fewer checks.
+
+ txgs
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+ level
+
Inject an error at a particular block level. The default is 0.
+
+ label_error
+
Set the label error region to one of nvlist, pad1, pad2, or uber.
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+ range
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+ seconds
+
Run for this many seconds before reporting failure.
+
+ failure
+
Set the failure type to one of all, claim, free, read, or write.
+
+ mos_type
+
Set this to +
+
+
for any data in the MOS,
+
+
for an object directory,
+
+
for the pool configuration,
+
+
for the block pointer list,
+
+
for the space map,
+
+
for the metaslab, or
+
+
for the persistent error log.
+
+
+
+
Unload the pool after injection.
+
+
+
+

+
+
+
Run zinject in debug mode.
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-add.8.html b/man/v2.1/8/zpool-add.8.html new file mode 100644 index 000000000..10abc41c0 --- /dev/null +++ b/man/v2.1/8/zpool-add.8.html @@ -0,0 +1,302 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addadd + vdevs to ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev
+
+
+

+

Adds the specified virtual devices to the given pool. The + vdev specification is described in the + section of zpoolconcepts(7). The behavior + of the -f option, and the device checks performed + are described in the zpool + create subcommand.

+
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name + regardless of the /dev/disk path used to open + it.
+
+
Displays the configuration that would be used without actually adding the + vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
+
+

+

zpool-attach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-remove(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-attach.8.html b/man/v2.1/8/zpool-attach.8.html new file mode 100644 index 000000000..078ea8b16 --- /dev/null +++ b/man/v2.1/8/zpool-attach.8.html @@ -0,0 +1,297 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + attach new device to existing ZFS vdev

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+

Attaches new_device to the existing + device. The existing device cannot be part of a raidz + configuration. If device is not currently part of a + mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part of + a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled.
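For example (pool and device names hypothetical), the following turns a single-disk vdev sda into a two-way mirror and waits for the resilver to finish:
# zpool attach -w tank sda sdb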

+
+
+
Forces use of new_device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) manual page for a list of valid properties that can be set. The only property supported at the moment is ashift.
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until new_device has finished resilvering + before returning.
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-checkpoint.8.html b/man/v2.1/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..baf70c639 --- /dev/null +++ b/man/v2.1/8/zpool-checkpoint.8.html @@ -0,0 +1,288 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + check-point current ZFS storage pool state

+
+
+

+ + + + + +
zpoolcheckpoint [-d + [-w]] pool
+
+
+

+

Checkpoints the current state of pool, which can later be restored by zpool import --rewind-to-checkpoint. The existence of a checkpoint in a pool prohibits the following zpool subcommands: remove, attach, detach, split, and reguid. In addition, it may break reservation boundaries if the pool lacks free space. The zpool status command indicates the existence of a checkpoint or the progress of discarding a checkpoint from a pool. zpool list can be used to check how much space the checkpoint takes from the pool.
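For example, assuming a hypothetical pool named tank:
zpool checkpoint tank          # take a checkpoint of the current pool state
zpool checkpoint -d -w tank    # later, discard it and wait for the discard to finish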

+
+
+

+
+
, + --discard
+
Discards an existing checkpoint from pool.
+
, + --wait
+
Waits until the checkpoint has finished being discarded before + returning.
+
+
+
+

+

zfs-snapshot(8), + zpool-import(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-clear.8.html b/man/v2.1/8/zpool-clear.8.html new file mode 100644 index 000000000..4f8440fb7 --- /dev/null +++ b/man/v2.1/8/zpool-clear.8.html @@ -0,0 +1,282 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + clear device errors in ZFS storage pool

+
+
+

+ + + + + +
zpoolclear [--power] + pool [device]…
+
+
+

+

Clears device errors in a pool. If no arguments are specified, all device errors within the pool are cleared. If one or more devices are specified, only those errors associated with the specified device or devices are cleared.

+

If the pool was suspended it will be brought back online provided the devices can be accessed. Pools with multihost enabled which have been suspended cannot be resumed. While the pool was suspended, it may have been imported on another host, and resuming I/O could result in pool damage.
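A brief sketch with hypothetical pool and device names:
zpool clear tank        # clear error counters for every device in the pool
zpool clear tank sdb    # clear errors for device sdb only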

+
+
+
Power on the device's slot in the storage enclosure and wait for the device to show up before attempting to clear errors. This is done on all the devices specified. Alternatively, you can set the ZPOOL_AUTO_POWER_ON_SLOT environment variable to always enable this behavior. Note: This flag currently works on Linux only.
+
+
+
+

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-create.8.html b/man/v2.1/8/zpool-create.8.html new file mode 100644 index 000000000..b4df12928 --- /dev/null +++ b/man/v2.1/8/zpool-create.8.html @@ -0,0 +1,383 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + create ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]… + [-o + feature@feature=value] + [-o + compatibility=off|legacy|file[,file]…] + [-O + file-system-property=value]… + [-R root] + [-t tname] + pool vdev
+
+
+

+

Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as the underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, draid, spare and log are reserved, as are names beginning with mirror, raidz, draid, and spare. The vdev specification is described in the Virtual Devices section of zpoolconcepts(7).

+

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However, this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost is enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with -f.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently-sized + devices within a single raidz or mirror group is also flagged as an error + unless -f is specified.

+

Unless the -R option is specified, the default mount point is /pool. The mount point must not exist or must be empty, or else the root dataset will not be able to be mounted. This can be overridden with the -m option.

+

By default all supported features are enabled on the new pool. The -d option and the -o compatibility property (e.g. -o compatibility=2020) can be used to restrict the features that are enabled, so that the pool can be imported on other releases of ZFS.
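As an illustrative sketch (pool, device, and property values below are placeholders, not recommendations):
zpool create -n tank mirror sda sdb                               # preview the layout
zpool create -o ashift=12 -O compression=lz4 tank mirror sda sdb  # create with pool and filesystem properties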

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with -o. See + zpool-features(7) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool if + altroot is specified. The mount point must be an + absolute path, legacy, or none. For + more information on dataset mount points, see + zfsprops(7).
+
+
Displays the configuration that would be used without actually creating + the pool. The actual pool creation can still fail due to insufficient + privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See zpoolprops(7) for a + list of valid properties that can be set.
+
+ compatibility=off|legacy|file[,file]…
+
Specifies compatibility feature sets. See + zpool-features(7) for more information about + compatibility feature sets.
+
+ feature@feature=value
+
Sets the given pool feature. See the zpool-features(7) + section for a list of valid features that can be set. Value can be either + disabled or enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the pool. + See zfsprops(7) for a list of valid properties that can + be set.
+
+ root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ tname
+
Sets the in-core pool name to tname while the + on-disk name will be the name specified as pool. + This will set the default of the cachefile property to + none. This is intended to handle name space collisions + when creating pools for other systems, such as virtual machines or + physical machines whose pools live on network block devices.
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-destroy.8.html b/man/v2.1/8/zpool-destroy.8.html new file mode 100644 index 000000000..47219d906 --- /dev/null +++ b/man/v2.1/8/zpool-destroy.8.html @@ -0,0 +1,264 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + destroy ZFS storage pool

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+

Destroys the given pool, freeing up any devices for other use. + This command tries to unmount any active datasets before destroying the + pool.
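For example (hypothetical pool name):
zpool destroy tank       # attempts a clean unmount of active datasets first
zpool destroy -f tank    # forcefully unmounts active datasets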

+
+
+
Forcefully unmount all active datasets.
+
+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-detach.8.html b/man/v2.1/8/zpool-detach.8.html new file mode 100644 index 000000000..d44c4b2e3 --- /dev/null +++ b/man/v2.1/8/zpool-detach.8.html @@ -0,0 +1,269 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + detach device from ZFS mirror

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+

Detaches device from a mirror. The operation + is refused if there are no other valid replicas of the data. If + device may be re-added to the pool later on then + consider the zpool offline + command instead.
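A short sketch with hypothetical names:
zpool detach tank sdb    # remove sdb from its mirror, leaving the remaining replica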

+
+
+

+

zpool-attach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-events.8.html b/man/v2.1/8/zpool-events.8.html new file mode 100644 index 000000000..7310295f9 --- /dev/null +++ b/man/v2.1/8/zpool-events.8.html @@ -0,0 +1,885 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + list recent events generated by kernel

+
+
+

+ + + + + +
zpoolevents [-vHf] + [pool]
+
+ + + + + +
zpoolevents -c
+
+
+

+

Lists all recent events generated by the ZFS kernel modules. These + events are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. For + more information about the subclasses and event payloads that can be + generated see EVENTS and the following + sections.
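For example (hypothetical pool name; output omitted):
zpool events tank       # list recent events for pool tank
zpool events -vf tank   # follow new events and print full payloads
zpool events -c         # clear the event buffer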

+
+
+

+
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
+

+

These are the different event subclasses. The full event name is prefixed with ereport.fs.zfs., but only the last part (the subclass) is listed here.

+

+
+
+
Issued when a checksum error has been detected.
+
+
Issued when there is an I/O error in a vdev in the pool.
+
+
Issued when there have been data errors in the pool.
+
+
Issued when an I/O request is determined to be "hung"; this can be caused by lost completion events due to flaky hardware or drivers. See zfs(4) for additional information regarding "hung" I/O detection and configuration.
+
+
Issued when a completed I/O request exceeds the maximum allowed time + specified by the + + module parameter. This can be an indicator of problems with the underlying + storage device. The number of delay events is ratelimited by the + + module parameter.
+
+
Issued every time a vdev change has been made to the pool.
+
+
Issued when a pool cannot be imported.
+
+
Issued when a pool is destroyed.
+
+
Issued when a pool is exported.
+
+
Issued when a pool is imported.
+
+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+
+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been removed from the system/pool and is no longer available.
+
+
Issued when a vdev could not be opened (for example, because it did not exist).
+
+
Issued when corrupt data has been detected on a vdev.
+
+
Issued when there are no more replicas to sustain the pool. This would + lead to the pool being + .
+
+
Issued when a missing device in the pool has been detected.
+
+
Issued when the system (kernel) has removed a device, and ZFS notices that the device is no longer there. This is usually followed by a probe_failure event.
+
+
Issued when the label is OK but invalid.
+
+
Issued when the ashift alignment requirement has increased.
+
+
Issued when a vdev is detached from a mirror (or a spare is detached from a vdev where it has been used to replace a failed drive; this only works if the original drive has been re-added).
+
+
Issued when clearing device errors in a pool. Such as running + zpool clear on a device in + the pool.
+
+
Issued when a check to see if a given vdev could be opened is + started.
+
+
Issued when a spare has kicked in to replace a failed device.
+
+
Issued when a vdev can be automatically expanded.
+
+
Issued when there is an I/O failure in a vdev in the pool.
+
+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (for example, if the kernel has removed the device).
+
+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+
+
Issued when a resilver is started.
+
+
Issued when the running resilver has finished.
+
+
Issued when a scrub is started on a pool.
+
+
Issued when a pool has finished scrubbing.
+
+
Issued when a scrub is aborted on a pool.
+
+
Issued when a scrub is resumed on a pool.
+
+
Issued when a scrub is paused on a pool.
+
+
 
+
+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with ZEVENT_.

+

+
+
+
Pool name.
+
+
Failmode - wait, continue, or panic. See the failmode property in zpoolprops(7) for more information.
+
+
The GUID of the pool.
+
+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+
+
The GUID of the vdev in question (the vdev failing or operated upon with + zpool clear, etc.).
+
+
Type of vdev - disk, file, mirror, etc. See the Virtual Devices section of zpoolconcepts(7) for more information on possible values.
+
+
Full path of the vdev, including any -partX.
+
+
ID of vdev (if any).
+
+
Physical FRU location.
+
+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed + to open, 5=faulted, 6=degraded, 7=healthy).
+
+
The ashift value of the vdev.
+
+
The time the last I/O request completed for the specified vdev.
+
+
The time since the last I/O request completed for the specified vdev.
+
+
List of spares, including full path and any -partX.
+
+
GUID(s) of spares.
+
+
How many read errors that have been detected on the vdev.
+
+
How many write errors that have been detected on the vdev.
+
+
How many checksum errors that have been detected on the vdev.
+
+
GUID of the vdev parent.
+
+
Type of parent. See vdev_type.
+
+
Path of the vdev parent (if any).
+
+
ID of the vdev parent (if any).
+
+
The object set number for a given I/O request.
+
+
The object number for a given I/O request.
+
+
The indirect level for the block. Level 0 is the lowest level and includes + data blocks. Values > 0 indicate metadata blocks at the appropriate + level.
+
+
The block ID for a given I/O request.
+
+
The error number for a failure when handling a given I/O request, compatible with errno(3), with the value of ECKSUM used to indicate a ZFS checksum error.
+
+
The offset in bytes of where to write the I/O request for the specified + vdev.
+
+
The size in bytes of the I/O request.
+
+
The current flags describing how the I/O request should be handled. See + the I/O FLAGS section for the full list of I/O + flags.
+
+
The current stage of the I/O in the pipeline. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The time elapsed (in nanoseconds) waiting for the block layer to complete + the I/O request. Unlike zio_delta, this does not include + any vdev queuing time and is therefore solely a measure of the block layer + performance.
+
+
The time when a given I/O request was submitted.
+
+
The time required to service a given I/O request.
+
+
The previous state of the vdev.
+
+
The expected checksum value for the block.
+
+
The actual checksum value for an errant block.
+
+
Checksum algorithm used. See zfsprops(7) for more + information on the available checksum algorithms.
+
+
Whether or not the data is byteswapped.
+
+
[start, end) pairs of corruption offsets. Offsets are always aligned on a 64-bit boundary, and can include some gaps of non-corruption. (See bad_ranges_min_gap)
+
+
In order to bound the size of the bad_ranges array, gaps + of non-corruption less than or equal to + bad_ranges_min_gap bytes have been merged with adjacent + corruption. Always at least 8 bytes, since corruption is detected on a + 64-bit word basis.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits in that range which were clear in the + good data and set in the bad data.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits for that range which were set in the + good data and clear in the bad data.
+
+
If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+
+
Like bad_set_bits, but contains (good + data & ~(bad + data)); that is, the bits set in the good data which are cleared in + the bad data.
+
+
If this field exists, it is an array of counters. Each entry counts bits + set in a particular bit of a big-endian uint64 type. The first entry + counts bits set in the high-order bit of the first byte, the 9th byte, + etc, and the last entry counts bits set of the low-order bit of the 8th + byte, the 16th byte, etc. This information is useful for observing a stuck + bit in a parallel data path, such as IDE or parallel SCSI.
+
+
If this field exists, it is an array of counters. Each entry counts bit + clears in a particular bit of a big-endian uint64 type. The first entry + counts bits clears of the high-order bit of the first byte, the 9th byte, + etc, and the last entry counts clears of the low-order bit of the 8th + byte, the 16th byte, etc. This information is useful for observing a stuck + bit in a parallel data path, such as IDE or parallel SCSI.
+
+
+
+

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O request.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_DDT_READ_START0x00000200R----
ZIO_STAGE_DDT_READ_DONE0x00000400R----
ZIO_STAGE_DDT_WRITE0x00000800-W---
ZIO_STAGE_DDT_FREE0x00001000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00002000RWFC-
ZIO_STAGE_GANG_ISSUE0x00004000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00008000-W---
ZIO_STAGE_DVA_ALLOCATE0x00010000-W---
ZIO_STAGE_DVA_FREE0x00020000--F--
ZIO_STAGE_DVA_CLAIM0x00040000---C-
ZIO_STAGE_READY0x00080000RWFCI
ZIO_STAGE_VDEV_IO_START0x00100000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00200000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00400000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x00800000R----
ZIO_STAGE_DONE0x01000000RWFCI
+
+
+

+

Every I/O request in the pipeline contains a set of flags which + describe its function and are used to govern its behavior. These flags will + be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_DONT_CACHE0x00000800
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+

+

zfs(4), zed(8), + zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-export.8.html b/man/v2.1/8/zpool-export.8.html new file mode 100644 index 000000000..88be0316e --- /dev/null +++ b/man/v2.1/8/zpool-export.8.html @@ -0,0 +1,285 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + export ZFS storage pools

+
+
+

+ + + + + +
zpoolexport [-f] + -a|pool
+
+
+

+

Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present.

+

Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, so + that ZFS can label the disks with portable EFI labels. Otherwise, disk + drivers on platforms of different endianness will not recognize the + disks.
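A minimal sketch (hypothetical pool name):
zpool export tank    # unmount datasets and export the pool
zpool export -a      # export every imported pool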

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, and allow export of pools with active + shared spares. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential + data corruption.

+
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
February 16, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-get.8.html b/man/v2.1/8/zpool-get.8.html new file mode 100644 index 000000000..d1a322198 --- /dev/null +++ b/man/v2.1/8/zpool-get.8.html @@ -0,0 +1,322 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
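For example, with hypothetical pool and property values:
zpool get all tank                      # every property of pool tank
zpool get -Hp -o name,value size tank   # script-friendly output of a single property
zpool set comment="lab machine" tank    # set a writable pool property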
+
+
+
+

+

zpool-features(7), + zpoolprops(7), zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-history.8.html b/man/v2.1/8/zpool-history.8.html new file mode 100644 index 000000000..0a4f46f06 --- /dev/null +++ b/man/v2.1/8/zpool-history.8.html @@ -0,0 +1,275 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + inspect command history of ZFS storage pools

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]…
+
+
+

+

Displays the command history of the specified pool(s) or all pools + if no pool is specified.
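For example (hypothetical pool name):
zpool history tank       # user-initiated commands only
zpool history -il tank   # include internal events and long-format records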

+
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which in addition to standard format includes the user name, the hostname, and the zone in which the operation was performed.
+
+
+
+

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-import.8.html b/man/v2.1/8/zpool-import.8.html new file mode 100644 index 000000000..aeb4bfbb5 --- /dev/null +++ b/man/v2.1/8/zpool-import.8.html @@ -0,0 +1,548 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + import ZFS storage pools or list available pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d + dir|device]…
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root]
+
+ + + + + +
zpoolimport [-Dflmt] + [-F [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]…
+
Lists pools available to import. If the -d or + -c options are not specified, this command + searches for devices using libblkid on Linux and geom on + FreeBSD. The -d option can + be specified multiple times, and all directories are searched. If the + device appears to be part of an exported pool, this command displays a + summary of the pool with the name of the pool, a numeric identifier, as + well as the vdev layout and current health of the device for each device + or file. Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-nTX]] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the readonly mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflmt] [-F + [-nTX]] [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. + : + This option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set + -o + cachefile=none when not explicitly + specified.
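Putting the forms together, an illustrative sketch (pool names are hypothetical):
zpool import                       # list pools available for import
zpool import tank                  # import pool tank by name
zpool import -o readonly=on tank   # import read-only for inspection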
+
+
+
+
+
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-initialize.8.html b/man/v2.1/8/zpool-initialize.8.html new file mode 100644 index 000000000..40c1f0f1a --- /dev/null +++ b/man/v2.1/8/zpool-initialize.8.html @@ -0,0 +1,296 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + write to unallocated regions of ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize + [-c|-s + |-u] [-w] + pool [device]…
+
+
+

+

Begins initializing by writing to all unallocated regions on the + specified devices, or all eligible devices in the pool if no individual + devices are specified. Only leaf data or log devices may be initialized.
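For example (hypothetical names):
zpool initialize tank           # start initializing all eligible devices
zpool initialize -w tank sdb    # initialize only sdb and wait for completion
zpool initialize -s tank        # suspend a running initialization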

+
+
, + --cancel
+
Cancel initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no cancellation + will occur on any device.
+
, + --suspend
+
Suspend initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no suspension will + occur on any device. Initializing can then be resumed by running + zpool initialize with no + flags on the relevant target devices.
+
, + --uninit
+
Clears the initialization state on the specified devices, or all eligible devices if none are specified. If the devices are being actively initialized the command will fail. After being cleared, zpool initialize with no flags can be used to re-initialize all unallocated regions on the relevant target devices.
+
, + --wait
+
Wait until the devices have finished initializing before returning.
+
+
+
+

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-iostat.8.html b/man/v2.1/8/zpool-iostat.8.html new file mode 100644 index 000000000..5b9561c9e --- /dev/null +++ b/man/v2.1/8/zpool-iostat.8.html @@ -0,0 +1,431 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + display logical I/O statistics for ZFS storage + pools

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [pool…|[pool + vdev…]|vdev…] + [interval [count]]
+
+
+

+

Displays logical I/O statistics for the given pools/vdevs. Physical I/O statistics may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until killed. If the -n flag is specified the headers are displayed only once, otherwise they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of K, M, G, … that are printed in the report are in base 1024. To get the raw values, use the -p flag.
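A small sketch (pool name and interval are placeholders):
zpool iostat tank 5         # pool-wide statistics every 5 seconds
zpool iostat -vly tank 5    # per-vdev and latency statistics, skipping the since-boot line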

+
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new column in the zpool iostat output. Users can run any script found in their ~/.zpool.d directory or from the system /etc/zfs/zpool.d directory. Script names containing the slash (/) character are not allowed. The default search path can be overridden by setting the ZPOOL_SCRIPTS_PATH environment variable. A privileged user can only run -c if they have the ZPOOL_SCRIPTS_AS_ROOT environment variable set. If a script requires the use of a privileged command, like smartctl(8), then it's recommended you allow the user access to it in /etc/sudoers or add the user to the /etc/sudoers.d/zfs file.

If -c is passed without a script name, + it prints a list of all scripts. -c also sets + verbose mode + (-v).

+

Script output should be in the form of "name=value". + The column name is set to "name" and the value is set to + "value". Multiple lines can be used to output multiple + columns. The first line of output not in the "name=value" + format is displayed without a column title, and no more output after + that is displayed. This can be useful for printing error messages. Blank + or NULL values are printed as a '-' to make output AWKable.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
Underlying path to the vdev (/dev/sd*). For + use with device mapper, multipath, or partitioned vdevs.
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Print request size histograms for the leaf vdev's I/O. This includes + histograms of individual I/O (ind) and aggregate I/O (agg). These stats + can be useful for observing how well I/O aggregation is working. Note that + TRIM I/O may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Normally the first line of output reports the statistics since boot: + suppress it.
+
+
Display latency histograms: +
+
+
Total I/O time (queuing + disk I/O time).
+
+
Disk I/O time (time reading/writing the disk).
+
+
Amount of time I/O spent in synchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in asynchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in scrub queue. Does not include disk + time.
+
+
+
+
Include average latency statistics: +
+
+
Average total I/O time (queuing + disk I/O time).
+
+
Average disk I/O time (time reading/writing the disk).
+
+
Average amount of time I/O spent in synchronous priority queues. Does + not include disk time.
+
+
Average amount of time I/O spent in asynchronous priority queues. Does + not include disk time.
+
+
Average queuing time in scrub queue. Does not include disk time.
+
+
Average queuing time in trim queue. Does not include disk time.
+
+
+
+
Include active queue statistics. Each priority queue has both pending (pend) and active (activ) I/O requests. Pending requests are waiting to be issued to the disk, and active requests have been issued to disk and are waiting for completion. These stats are broken out by priority queue:
+
+
Current number of entries in synchronous priority queues.
+
+
Current number of entries in asynchronous priority queues.
+
+
Current number of entries in scrub queue.
+
+
Current number of entries in trim queue.
+
+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
+
+
+
+

+

iostat(1), smartctl(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-labelclear.8.html b/man/v2.1/8/zpool-labelclear.8.html new file mode 100644 index 000000000..6c36112fb --- /dev/null +++ b/man/v2.1/8/zpool-labelclear.8.html @@ -0,0 +1,273 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + remove ZFS label information from device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+

Removes ZFS label information from the specified + device. If the device is a cache + device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration.
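For example (the device path is a placeholder):
zpool labelclear /dev/sdc       # refuses if the label appears active
zpool labelclear -f /dev/sdc    # treat exported or foreign labels as inactive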

+
+
+
Treat exported or foreign devices as inactive.
+
+
+
+

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-list.8.html b/man/v2.1/8/zpool-list.8.html new file mode 100644 index 000000000..783ab50fa --- /dev/null +++ b/man/v2.1/8/zpool-list.8.html @@ -0,0 +1,317 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listlist + information about ZFS storage pools

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]…] + [-T u|d] + [pool]… [interval + [count]]
+
+
+

+

Lists the given pools along with a health status and space usage. + If no pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until killed. If + count is specified, the command exits after + count reports are printed.
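For example (hypothetical pool name):
zpool list                              # all pools with the default columns
zpool list -Hp -o name,size,free tank   # parsable, script-friendly output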

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+ property
+
Comma-separated list of properties to display. See the + zpoolprops(7) manual page for a list of valid + properties. The default list is + , + , + , + , + , + , + , + , + , + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within + the pool, in addition to the pool-wide statistics.
+
+
+
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-offline.8.html b/man/v2.1/8/zpool-offline.8.html new file mode 100644 index 000000000..54f939123 --- /dev/null +++ b/man/v2.1/8/zpool-offline.8.html @@ -0,0 +1,316 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
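For example, with hypothetical pool and device names, the two subcommands above might be combined as:
zpool offline -t tank sdb    # take sdb offline until the next reboot
zpool online -e tank sdb     # bring it back online and expand to the full device size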
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-online.8.html b/man/v2.1/8/zpool-online.8.html new file mode 100644 index 000000000..814e25455 --- /dev/null +++ b/man/v2.1/8/zpool-online.8.html @@ -0,0 +1,316 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-reguid.8.html b/man/v2.1/8/zpool-reguid.8.html new file mode 100644 index 000000000..a6cf70cc4 --- /dev/null +++ b/man/v2.1/8/zpool-reguid.8.html @@ -0,0 +1,266 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + generate new unique identifier for ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+

Generates a new unique identifier for the pool. You must ensure + that all devices in this pool are online and healthy before performing this + action.
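For example (hypothetical pool name):
zpool reguid tank    # assign a new unique identifier to the pool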

+
+
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-remove.8.html b/man/v2.1/8/zpool-remove.8.html new file mode 100644 index 000000000..8d5a7829f --- /dev/null +++ b/man/v2.1/8/zpool-remove.8.html @@ -0,0 +1,319 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + remove devices from ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool remove + [-npw] pool + device
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. +

Top-level vdevs can only be removed if the primary pool + storage does not contain a top-level raidz vdev, all top-level vdevs + have the same sector size, and the keys for all encrypted datasets are + loaded.

+

Removing a top-level vdev reduces the total amount of space in the storage pool. The specified device will be evacuated by copying all allocated space from it to the other devices in the pool. In this case, the zpool remove command initiates the removal and returns, while the evacuation continues in the background. The removal progress can be monitored with zpool status. If an I/O error is encountered during the removal process it will be cancelled. The device_removal feature flag must be enabled to remove a top-level vdev; see zpool-features(7).

+

A mirrored top-level device (log or data) can be removed by + specifying the top-level mirror for the same. Non-log devices or data + devices that are part of a mirrored configuration can be removed using + the zpool detach + command.

+
+
+
Do not actually perform the removal ("No-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
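An illustrative sketch (hypothetical names):
zpool remove -n tank sdd    # estimate the mapping-table memory without removing anything
zpool remove -w tank sdd    # remove sdd and wait for the evacuation to finish
zpool remove -s tank        # cancel an in-progress top-level vdev removal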
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-reopen.8.html b/man/v2.1/8/zpool-reopen.8.html new file mode 100644 index 000000000..502485cbe --- /dev/null +++ b/man/v2.1/8/zpool-reopen.8.html @@ -0,0 +1,268 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + reopen vdevs associated with ZFS storage pools

+
+
+

+ + + + + +
zpoolreopen [-n] + [pool]…
+
+
+

+

Reopen all vdevs associated with the specified pools, or all pools + if none specified.
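For example (hypothetical pool name):
zpool reopen tank    # reopen all vdevs of pool tank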

+
+
+

+
+
+
Do not restart an in-progress scrub operation. This is not recommended and + can result in partially resilvered devices unless a second scrub is + performed.
+
+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-replace.8.html b/man/v2.1/8/zpool-replace.8.html new file mode 100644 index 000000000..235de2398 --- /dev/null +++ b/man/v2.1/8/zpool-replace.8.html @@ -0,0 +1,302 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + replace one device with another in ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new-device]
+
+
+

+

Replaces device with + new-device. This is equivalent to attaching + new-device, waiting for it to resilver, and then + detaching device. Any in progress scrub will be + cancelled.

+

The size of new-device must be greater than + or equal to the minimum size of all the devices in a mirror or raidz + configuration.

+

new-device is required if the pool is not + redundant. If new-device is not specified, it defaults + to device. This form of replacement is useful after an + existing disk has failed and has been physically replaced. In this case, the + new disk may have the same /dev path as the old + device, even though it is actually a different disk. ZFS recognizes + this.
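For example (hypothetical pool and device names), a disk that has been physically swapped in place can be replaced with itself, or replaced by a different disk while waiting for the resilver to finish, with:
# zpool replace tank sda
# zpool replace -w tank sda sdd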

+
+
+
Forces use of new-device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new-device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until the replacement has completed before returning.
+
+
+
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-resilver.8.html b/man/v2.1/8/zpool-resilver.8.html new file mode 100644 index 000000000..9dee56812 --- /dev/null +++ b/man/v2.1/8/zpool-resilver.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + resilver devices in ZFS storage pools

+
+
+

+ + + + + +
zpoolresilver pool
+
+
+

+

Starts a resilver of the specified pools. If an existing resilver + is already running it will be restarted from the beginning. Any drives that + were scheduled for a deferred resilver will be added to the new one. This + requires the + + pool feature.
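For example, deferred resilvers on a hypothetical pool tank can be started, or a running resilver restarted from the beginning, with:
# zpool resilver tank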

+
+
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-scrub.8.html b/man/v2.1/8/zpool-scrub.8.html new file mode 100644 index 000000000..26acfbec3 --- /dev/null +++ b/man/v2.1/8/zpool-scrub.8.html @@ -0,0 +1,348 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + begin or resume scrub of ZFS storage pools

+
+
+

+ + + + + +
zpoolscrub + [-s|-p] + [-w] pool
+
+
+

+

Begins a scrub or resumes a paused scrub. The scrub examines all + data in the specified pools to verify that it checksums correctly. For + replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any + damage discovered during the scrub. The zpool + status command reports the progress of the scrub and + summarizes the results of the scrub upon completion.

+

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be out + of date (for example, when attaching a new device to a mirror or replacing + an existing device), whereas scrubbing examines all data to discover silent + errors due to hardware faults or disk failure.

+

Because scrubbing and resilvering are I/O-intensive operations, + ZFS only allows one at a time.

+

A scrub is split into two parts: metadata scanning and block + scrubbing. The metadata scanning sorts blocks into large sequential ranges + which can then be read much more efficiently from disk when issuing the + scrub I/O.

+

If a scrub is paused, the zpool + scrub resumes it. If a resilver is in progress, ZFS + does not allow a scrub to be started until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During this + period, no completion time estimate will be provided.
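For example (hypothetical pool name), a scrub can be started, paused, and later resumed with:
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
Use zpool scrub -w tank to start a scrub and block until it completes.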

+
+
+

+
+
+
Stop scrubbing.
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to + disk. If the system is restarted or the pool is exported during a paused + scrub, then even after import the scrub will remain paused until it is resumed. + Once resumed, the scrub will pick up from the place where it was last + checkpointed to disk. To resume a paused scrub, issue + zpool scrub again.
+
+
Wait until scrub has completed before returning.
+
+
+
+

+
+
: +
+
Output: +
+
# zpool status
+  ...
+  scan: scrub in progress since Sun Jul 25 16:07:49 2021
+        403M scanned at 100M/s, 68.4M issued at 10.0M/s, 405M total
+        0B repaired, 16.91% done, 00:00:04 to go
+  ...
+
+ Where: +
    +
  • Metadata which references 403M of file data has been scanned at + 100M/s, and 68.4M of that file data has been scrubbed sequentially at + 10.0M/s.
  • +
+
+
+
+
+

+

On machines using systemd, scrub timers can be enabled on a per-pool + basis. weekly and monthly + timer units are provided.

+
+
+
systemctl enable + zfs-scrub-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-scrub-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpool-iostat(8), + zpool-resilver(8), + zpool-status(8)

+
+
+ + + + + +
July 25, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-set.8.html b/man/v2.1/8/zpool-set.8.html new file mode 100644 index 000000000..1b3b7ac3d --- /dev/null +++ b/man/v2.1/8/zpool-set.8.html @@ -0,0 +1,322 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolset + property=value + pool
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either + + or + .
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
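For example (hypothetical pool name), all properties can be listed and the autotrim property enabled with:
# zpool get all tank
# zpool set autotrim=on tank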
+
+
+
+

+

zpool-features(7), + zpoolprops(7), zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-split.8.html b/man/v2.1/8/zpool-split.8.html new file mode 100644 index 000000000..4cc0aca17 --- /dev/null +++ b/man/v2.1/8/zpool-split.8.html @@ -0,0 +1,315 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + split devices off ZFS storage pool, creating new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]… + [-R root] + pool newpool + [device]…
+
+
+

+

Splits devices off pool creating + newpool. All vdevs in pool must + be mirrors and the pool must not be in the process of resilvering. At the + time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool.

+

The optional device specification causes the specified device(s) + to be included in the new pool and, should any devices + remain unspecified, the last device in each mirror is used as would be by + default.
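For example, a pool of mirrors named tank (hypothetical) can be split into a new pool tank2 and the new pool then imported with:
# zpool split tank tank2
# zpool import tank2
Using -R with an altroot instead imports the new pool automatically.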

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted + datasets it attempts to mount as it is bringing the new pool online. Note + that if any datasets have + =, + this command will block waiting for the keys to be entered. Without this + flag, encrypted datasets will be left unavailable until the keys are + loaded.
+
+
Do a dry-run ("No-op") split: do not actually perform it. Print + out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(7) manual page for more information on the + available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
+
+
+
+

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-status.8.html b/man/v2.1/8/zpool-status.8.html new file mode 100644 index 000000000..9bf6803d9 --- /dev/null +++ b/man/v2.1/8/zpool-status.8.html @@ -0,0 +1,334 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + show detailed health status for ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-DeigLpPstvx] + [-T u|d] + [-c + [SCRIPT1[,SCRIPT2]…]] + [pool]… [interval + [count]]
+
+
+

+

Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in the + system is displayed. For more information on pool and device health, see the + Device Failure and + Recovery section of zpoolconcepts(7).

+

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.
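For example, a brief check for unhealthy pools, or verbose error detail for a hypothetical pool tank, can be obtained with:
# zpool status -x
# zpool status -v tank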

+
+
+
Display vdev enclosure slot power status (on or off).
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool status + output. See the -c option of + zpool iostat for complete + details.
+
+
Only show unhealthy vdevs (not-ONLINE or with errors).
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the number of leaf VDEV slow IOs. This is the number of IOs that + didn't complete in + + milliseconds (default 30 seconds). This does not necessarily mean the IOs + failed to complete, just took an unreasonably long amount of time. This + may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
Displays verbose data error information, printing out a complete list of + all data errors since the last complete pool scrub.
+
+
Only display status for pools that are exhibiting errors or are otherwise + unavailable. Warnings about pools not using the latest on-disk format will + not be included.
+
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-sync.8.html b/man/v2.1/8/zpool-sync.8.html new file mode 100644 index 000000000..21f8e0ff1 --- /dev/null +++ b/man/v2.1/8/zpool-sync.8.html @@ -0,0 +1,267 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncflush + data to primary storage of ZFS storage pools

+
+
+

+ + + + + +
zpoolsync [pool]…
+
+
+

+

This command forces all in-core dirty data to be written to the + primary pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified pools.
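For example, every imported pool, or only a hypothetical pool tank, can be synced with:
# zpool sync
# zpool sync tank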

+
+
+

+

zpoolconcepts(7), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-trim.8.html b/man/v2.1/8/zpool-trim.8.html new file mode 100644 index 000000000..30c482bc7 --- /dev/null +++ b/man/v2.1/8/zpool-trim.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + initiate TRIM of free space in ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c|-s] + pool [device]…
+
+
+

+

Initiates an immediate on-demand TRIM operation for all of the + free space in a pool. This operation informs the underlying storage devices + of all blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.

+

A manual on-demand TRIM operation can be initiated irrespective of + the autotrim pool property setting. See the documentation + for the autotrim property above for the types of vdev + devices which can be trimmed.
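For example (hypothetical pool name), a TRIM of all free space can be started and waited on, or a running TRIM cancelled, with:
# zpool trim -w tank
# zpool trim -c tank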

+
+
, + --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, the + device guarantees that data stored on the trimmed blocks has been erased. + This requires support from the device and is not supported by all + SSDs.
+
, + --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
, + --cancel
+
Cancel trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no cancellation will + occur on any device.
+
, + --suspend
+
Suspend trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no suspension will + occur on any device. Trimming can then be resumed by running + zpool trim with no flags + on the relevant target devices.
+
, + --wait
+
Wait until the devices are done being trimmed before returning.
+
+
+
+

+

zpoolprops(7), + zpool-initialize(8), zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-upgrade.8.html b/man/v2.1/8/zpool-upgrade.8.html new file mode 100644 index 000000000..26c6ba3e8 --- /dev/null +++ b/man/v2.1/8/zpool-upgrade.8.html @@ -0,0 +1,321 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools (subject to + the -o compatibility + property).
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by this version of ZFS. See + zpool-features(7) for a description of the feature flags + supported by this version of ZFS.
+
zpool upgrade + [-V version] + -a|pool
+
Enables all supported features on the given pool. +

If the pool has specified compatibility feature sets using the + -o compatibility property, + only the features present in all requested compatibility sets will be + enabled. If this property is set to legacy then no + upgrade will take place.

+

Once this is done, the pool will no longer be accessible on + systems that do not support feature flags. See + zpool-features(7) for details on compatibility with + systems that support feature flags, but do not support all features + enabled on the pool.

+
+
+
Enables all supported features (from specified compatibility sets, if + any) on all pools.
+
+ version
+
Upgrade to the specified legacy version. If specified, no features + will be enabled on the pool. This option can only be used to increase + the version number up to the last supported legacy version + number.
+
+
+
+
+
+

+

zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zpool-history(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool-wait.8.html b/man/v2.1/8/zpool-wait.8.html new file mode 100644 index 000000000..f814e9ab9 --- /dev/null +++ b/man/v2.1/8/zpool-wait.8.html @@ -0,0 +1,316 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-waitwait + for activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpoolwait [-Hp] + [-T u|d] + [-t + activity[,activity]…] + pool [interval]
+
+
+

+

Waits until all background activity of the given types has ceased + in the given pool. The activity could cease because it has completed, or + because it has been paused or canceled by a user, or because the pool has + been exported or destroyed. If no activities are specified, the command + waits until background activity of every type listed below has ceased. If + there is no activity of the given types in progress, the command returns + immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
Checkpoint to be discarded
+
+
+ property to become +
+
+
All initializations to cease
+
+
All device replacements to cease
+
+
Device removal to cease
+
+
Resilver to cease
+
+
Scrub to cease
+
+
Manual trim to cease
+
+
+

If an interval is provided, the amount of + work remaining, in bytes, for each activity is printed every + interval seconds.
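For example, to block until an in-progress scrub of a hypothetical pool tank finishes, printing the remaining work every 10 seconds:
# zpool wait -t scrub tank 10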

+
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(2). Specify d for standard date + format. See date(1).
+
+
+
+

+

zpool-checkpoint(8), + zpool-initialize(8), zpool-remove(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-status(8), + zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool.8.html b/man/v2.1/8/zpool.8.html new file mode 100644 index 000000000..deacb5d4c --- /dev/null +++ b/man/v2.1/8/zpool.8.html @@ -0,0 +1,800 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpoolversion
+
+ + + + + +
zpoolsubcommand + [arguments]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(7) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
 
+
zpool version
+
Displays the software version of the zpool + userland utility and the ZFS kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8)/zpool-detach(8)
+
Increases or decreases redundancy by attaching or + detaching a device on an existing vdev (virtual + device).
+
zpool-add(8)/zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties listed in the + zpoolprops(7) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8)/zpool-set(8)
+
Retrieves the given list of properties (or all properties if + is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os + may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + That manual page also describes the subclasses and event payloads that can + be generated.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool, which can be + later restored by zpool + import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8)/zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+
+

+
+
: Creating a RAID-Z Storage Pool
+
The following command creates a pool with a single raidz root vdev that + consists of six disks: +
# zpool + create tank + + sda sdb sdc sdd sde sdf
+
+
: Creating a Mirrored Storage Pool
+
The following command creates a pool with two mirrors, where each mirror + contains two disks: +
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
: Creating a ZFS Storage Pool by Using + Partitions
+
The following command creates an unmirrored pool using two disk + partitions: +
# zpool + create tank sda1 + sdb2
+
+
: Creating a ZFS Storage Pool by Using + Files
+
The following command creates an unmirrored pool using files. While not + recommended, a pool based on files can be useful for experimental + purposes. +
# zpool + create tank /path/to/file/a + /path/to/file/b
+
+
: Adding a Mirror to a ZFS Storage + Pool
+
The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of + two-way mirrors. The additional space is immediately available to any + datasets within the pool. +
# zpool + add tank + mirror sda sdb
+
+
: Listing Available ZFS Storage Pools
+
The following command lists all available pools on the system. In this + case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following: +
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
: Destroying a ZFS Storage Pool
+
The following command destroys the pool tank and any + datasets contained within: +
# zpool + destroy -f + tank
+
+
: Exporting a ZFS Storage Pool
+
The following command exports the devices in pool + tank so that they can be relocated or later + imported: +
# zpool + export tank
+
+
: Importing a ZFS Storage Pool
+
The following command displays available pools, and then imports the pool + tank for use on the system. The results from this + command are similar to the following: +
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
: Upgrading All ZFS Storage Pools to the Current + Version
+
The following command upgrades all ZFS Storage pools to the current + version of the software: +
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
: Managing Hot Spares
+
The following command creates a new pool with an available hot spare: +
# zpool + create tank + mirror sda sdb + + sdc
+

If one of the disks were to fail, the pool would be reduced to + the degraded state. The failed device can be replaced using the + following command:

+
# zpool + replace tank sda + sdd
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The + hot spare can be permanently removed from the pool using the following + command:

+
# zpool + remove tank sdc
+
+
: Creating a ZFS Pool with Mirrored Separate + Intent Logs
+
The following command creates a ZFS storage pool consisting of two, + two-way mirrors and mirrored log devices: +
# zpool + create pool + mirror sda sdb + mirror sdc sdd + + sde sdf
+
+
: Adding Cache Devices to a ZFS Pool
+
The following command adds two disks for use as cache devices to a ZFS + storage pool: +
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take + over an hour for them to fill. Capacity and reads can be monitored using + the iostat subcommand as follows:

+
# zpool + iostat -v + pool 5
+
+
: Removing a Mirrored top-level (Log or Data) + Device
+
The following commands remove the mirrored log device + + and mirrored top-level data device + . +

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
: Displaying expanded space on a + device
+
The following command displays the detailed information for the pool + data. This pool is composed of a single raidz vdev + where one of its devices increased its capacity by 10 GB. In this example, + the pool will not be able to utilize this extra capacity until all the + devices under the raidz vdev have been expanded.
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
: Adding output columns
+
Additional columns can be added to the zpool + status and + zpool iostat + output with -c. +
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
Use ANSI color in zpool status and + zpool iostat output.
+
+
Automatically attempt to turn on the drives enclosure slot power to a + drive when running the zpool + online or zpool + clear commands. This has the same effect as + passing the --power option to those commands.
+
+
The maximum time in milliseconds to wait for a slot power sysfs value to + return the correct value after writing it. For example, after writing + "on" to the sysfs enclosure slot power_control file, it can take + some time for the enclosure to power down the slot and return + "on" if you read back the 'power_control' value. Defaults to 30 + seconds (30000ms) if not set.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is not used, only the + presence or absence of the variable matters.
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool + status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the + zpool status + -L command line option.
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the + zpool status + -P command line option.
+
+
Older OpenZFS implementations had issues when attempting to display pool + config VDEV names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and + zpool status would fail when listing the config. + This would also be true for future Linux-based pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool + add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+

+
+
+
Allow a privileged user to run zpool status/iostat + -c. Normally, only unprivileged users are allowed + to run -c.
+
+
The search path for scripts when running zpool + status/iostat -c. This is a colon-separated + list of directories and overrides the default + ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
Allow a user to run zpool status/iostat + -c. If ZPOOL_SCRIPTS_ENABLED is + not set, it is assumed that the user is allowed to run + zpool + status/iostat + -c.
+
+
+
+

+

+
+
+

+

zfs(4), zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zed(8), zfs(8), + zpool-add(8), zpool-attach(8), + zpool-checkpoint(8), zpool-clear(8), + zpool-create(8), zpool-destroy(8), + zpool-detach(8), zpool-events(8), + zpool-export(8), zpool-get(8), + zpool-history(8), zpool-import(8), + zpool-initialize(8), zpool-iostat(8), + zpool-labelclear(8), zpool-list(8), + zpool-offline(8), zpool-online(8), + zpool-reguid(8), zpool-remove(8), + zpool-reopen(8), zpool-replace(8), + zpool-resilver(8), zpool-scrub(8), + zpool-set(8), zpool-split(8), + zpool-status(8), zpool-sync(8), + zpool-trim(8), zpool-upgrade(8), + zpool-wait(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zpool_influxdb.8.html b/man/v2.1/8/zpool_influxdb.8.html new file mode 100644 index 000000000..40f298f3c --- /dev/null +++ b/man/v2.1/8/zpool_influxdb.8.html @@ -0,0 +1,317 @@ + + + + + + + zpool_influxdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool_influxdb.8

+
+ + + + + +
ZPOOL_INFLUXDB(8)System Manager's ManualZPOOL_INFLUXDB(8)
+
+
+

+

zpool_influxdb — + collect ZFS pool statistics in InfluxDB line protocol + format

+
+
+

+ + + + + +
zpool_influxdb[-e|--execd] + [-n|--no-histogram] + [-s|--sum-histogram-buckets] + [-t|--tags + key=value[,key=value]…] + [pool]
+
+
+

+

zpool_influxdb produces + InfluxDB-line-protocol-compatible metrics from zpools. Like the + zpool command, + zpool_influxdb reads the current pool status and + statistics. Unlike the zpool command which is + intended for humans, zpool_influxdb formats the + output in the InfluxDB line protocol. The expected use is as a plugin to a + metrics collector or aggregator, such as Telegraf.

+

By default, zpool_influxdb prints pool + metrics and status in the InfluxDB line protocol format. All pools are + printed, similar to the zpool + status command. Providing a pool name restricts the + output to the named pool.
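For example (hypothetical pool name), metrics for all pools, or for a single pool, can be printed once with:
# zpool_influxdb
# zpool_influxdb tank
When used with Telegraf's execd input plugin, run zpool_influxdb --execd so that a new sample is emitted each time Telegraf writes a newline to its standard input.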

+
+
+

+
+
, + --execd
+
Run in daemon mode compatible with Telegraf's + execd plugin. In this mode, the pools are sampled + every time a newline appears on the standard input.
+
, + --no-histogram
+
Do not print latency and I/O size histograms. This can reduce the total + amount of data, but one should consider the value brought by the insights + that latency and I/O size distributions provide. The resulting values are + suitable for graphing with Grafana's heatmap plugin.
+
, + --sum-histogram-buckets
+
Accumulates bucket values. By default, the values are not accumulated and + the raw data appears as shown by zpool + iostat. This works well for Grafana's heatmap + plugin. Summing the buckets produces output similar to Prometheus + histograms.
+
, + --tags + key=value[,key=value]…
+
Adds specified tags to the tag set. No sanity checking is performed. See + the InfluxDB Line Protocol format documentation for details on escaping + special characters used in tags.
+
, + --help
+
Print a usage summary.
+
+
+
+

+

zpool-iostat(8), + zpool-status(8), + InfluxDB, + Telegraf, + Grafana, + Prometheus

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zstream.8.html b/man/v2.1/8/zstream.8.html new file mode 100644 index 000000000..bace2c4c4 --- /dev/null +++ b/man/v2.1/8/zstream.8.html @@ -0,0 +1,329 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8)

+
+
+ + + + + +
May 8, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/8/zstreamdump.8.html b/man/v2.1/8/zstreamdump.8.html new file mode 100644 index 000000000..3835d5d6a --- /dev/null +++ b/man/v2.1/8/zstreamdump.8.html @@ -0,0 +1,329 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8)

+
+
+ + + + + +
May 8, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.1/index.html b/man/v2.1/index.html new file mode 100644 index 000000000..f190ccafc --- /dev/null +++ b/man/v2.1/index.html @@ -0,0 +1,147 @@ + + + + + + + v2.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v2.2/1/arcstat.1.html b/man/v2.2/1/arcstat.1.html new file mode 100644 index 000000000..06d2ebb30 --- /dev/null +++ b/man/v2.2/1/arcstat.1.html @@ -0,0 +1,411 @@ + + + + + + + arcstat.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

arcstat.1

+
+ + + + + +
ARCSTAT(1)General Commands ManualARCSTAT(1)
+
+
+

+

arcstatreport + ZFS ARC and L2ARC statistics

+
+
+

+ + + + + +
arcstat[-havxp] [-f + field[,field…]] + [-o file] + [-s string] + [interval] [count]
+
+
+

+

arcstat prints various ZFS ARC and L2ARC + statistics in vmstat-like fashion:

+
+
+
+
ARC target size
+
+
Demand hit percentage
+
+
Demand I/O hit percentage
+
+
Demand miss percentage
+
+
Demand data hit percentage
+
+
Demand data I/O hit percentage
+
+
Demand data miss percentage
+
+
Demand metadata hit percentage
+
+
Demand metadata I/O hit percentage
+
+
Demand metadata miss percentage
+
+
MFU list hits per second
+
+
Metadata hit percentage
+
+
Metadata I/O hit percentage
+
+
Metadata miss percentage
+
+
MRU list hits per second
+
+
Prefetch hits percentage
+
+
Prefetch I/O hits percentage
+
+
Prefetch miss percentage
+
+
Prefetch data hits percentage
+
+
Prefetch data I/O hits percentage
+
+
Prefetch data miss percentage
+
+
Prefetch metadata hits percentage
+
+
Prefetch metadata I/O hits percentage
+
+
Prefetch metadata miss percentage
+
+
Demand hits per second
+
+
Demand I/O hits per second
+
+
Demand misses per second
+
+
Demand data hits per second
+
+
Demand data I/O hits per second
+
+
Demand data misses per second
+
+
Demand metadata hits per second
+
+
Demand metadata I/O hits per second
+
+
Demand metadata misses per second
+
+
ARC hit percentage
+
+
ARC hits per second
+
+
ARC I/O hits percentage
+
+
ARC I/O hits per second
+
+
MFU ghost list hits per second
+
+
Metadata hits per second
+
+
Metadata I/O hits per second
+
+
ARC misses per second
+
+
Metadata misses per second
+
+
MRU ghost list hits per second
+
+
Prefetch hits per second
+
+
Prefetch I/O hits per second
+
+
Prefetch misses per second
+
+
Prefetch data hits per second
+
+
Prefetch data I/O hits per second
+
+
Prefetch data misses per second
+
+
Prefetch metadata hits per second
+
+
Prefetch metadata I/O hits per second
+
+
Prefetch metadata misses per second
+
+
Total ARC accesses per second
+
+
Current time
+
+
ARC size
+
+
Alias for size
+
+
Uncached list hits per second
+
+
Demand accesses per second
+
+
Demand data accesses per second
+
+
Demand metadata accesses per second
+
+
evict_skip per second
+
+
ARC miss percentage
+
+
Metadata accesses per second
+
+
Prefetch accesses per second
+
+
Prefetch data accesses per second
+
+
Prefetch metadata accesses per second
+
+
L2ARC access hit percentage
+
+
L2ARC hits per second
+
+
L2ARC misses per second
+
+
Total L2ARC accesses per second
+
+
L2ARC prefetch allocated size per second
+
+
L2ARC prefetch allocated size percentage
+
+
L2ARC MFU allocated size per second
+
+
L2ARC MFU allocated size percentage
+
+
L2ARC MRU allocated size per second
+
+
L2ARC MRU allocated size percentage
+
+
L2ARC data (buf content) allocated size per second
+
+
L2ARC data (buf content) allocated size percentage
+
+
L2ARC metadata (buf content) allocated size per second
+
+
L2ARC metadata (buf content) allocated size percentage
+
+
Size of the L2ARC
+
+
mutex_miss per second
+
+
Bytes read per second from the L2ARC
+
+
L2ARC access miss percentage
+
+
Actual (compressed) size of the L2ARC
+
+
ARC grow disabled
+
+
ARC reclaim needed
+
+
The ARC's idea of how much free memory there is, which includes evictable + memory in the page cache. Since the ARC tries to keep + avail above zero, avail is usually + more instructive to observe than free.
+
+
The ARC's idea of how much free memory is available to it, which is a bit + less than free. May temporarily be negative, in which + case the ARC will reduce the target size c.
+
+
+
+
+

+
+
+
Print all possible stats.
+
+
Display only specific fields. See + DESCRIPTION for supported + statistics.
+
+
Display help message.
+
+
Report statistics to a file instead of the standard output.
+
+
Disable auto-scaling of numerical fields (for raw, machine-parsable + values).
+
+
Display data with a specified separator (default: 2 spaces).
+
+
Print extended stats (same as -f + time,mfu,mru,mfug,mrug,eskip,mtxmis,dread,pread,read).
+
+
Show field headers and definitions
+
+
+
+

+

The following operands are supported:

+
+
+
interval
+
Specify the sampling interval in seconds.
+
count
+
Display only count reports.
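For example (the field selection is illustrative), overall ARC accesses, hit and miss percentages, and ARC size can be sampled once per second for ten samples with:
# arcstat -f time,read,hit%,miss%,arcsz 1 10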
+
+
+
+
+ + + + + +
December 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/cstyle.1.html b/man/v2.2/1/cstyle.1.html new file mode 100644 index 000000000..043f0d50f --- /dev/null +++ b/man/v2.2/1/cstyle.1.html @@ -0,0 +1,293 @@ + + + + + + + cstyle.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cstyle.1

+
+ + + + + +
CSTYLE(1)General Commands ManualCSTYLE(1)
+
+
+

+

cstylecheck for + some common stylistic errors in C source files

+
+
+

+ + + + + +
cstyle[-chpvCP] + [file]…
+
+
+

+

cstyle inspects C source files (*.c and + *.h) for common stylistic errors. It attempts to check for the cstyle + documented in + http://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf. + Note that there is much in that document that + + be checked for; just because your code is + cstyle-clean does not mean that you've followed + Sun's C style. + .
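For example (the file name is hypothetical), a source file can be checked with the pickier rules and verbose output with:
% cstyle -p -v foo.c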

+
+
+

+
+
+
Check continuation line indentation inside of functions. Sun's C style + states that all statements must be indented to an appropriate tab stop, + and any continuation lines after them must be indented + + four spaces from the start line. This option enables a series of checks + designed to find continuation line problems within functions only. The + checks have some limitations; see + , below.
+
+
Performs some of the more picky checks. Includes ANSI + + and + + rules, and tries to detect spaces after casts. Used as part of the putback + checks.
+
+
Verbose output; includes the text of the line of error, and, for + -c, the first statement in the current + continuation block.
+
+
Check for use of non-POSIX types. Historically, types like + + and + + were used, but they are now deprecated in favor of the POSIX types + , + , + etc. This detects any use of the deprecated types. Used as part of the + putback checks.
+
+
Also print GitHub-Actions-style ::error + output.
+
+
+
+

+
+
+
If set and nonempty, equivalent to -g.
+
+
+
+

+

The continuation checker is a reasonably simple state machine that + knows something about how C is laid out, and can match parenthesis, etc. + over multiple lines. It does have some limitations:

+
    +
  1. Preprocessor macros which cause unmatched parenthesis will + confuse the checker for that line. To fix this, you'll need to make sure + that each branch of the + statement has + balanced parenthesis.
  2. +
  3. Some cpp(1) macros do not require + ;s after them. Any such macros + be ALL_CAPS; + any lower case letters will cause bad output. +

    The bad output will generally be corrected after the + next ;, + , + or + .

    +
  4. +
+Some continuation error messages deserve some additional explanation: +
+
+
A multi-line statement which is not broken at statement boundaries. For + example: +
+
if (this_is_a_long_variable == another_variable) a =
+    b + c;
+
+

Will trigger this error. Instead, do:

+
+
if (this_is_a_long_variable == another_variable)
+    a = b + c;
+
+
+
+
For visibility, empty bodies for if, for, and while statements should be + on their own line. For example: +
+
while (do_something(&x) == 0);
+
+

Will trigger this error. Instead, do:

+
+
while (do_something(&x) == 0)
+    ;
+
+
+
+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/index.html b/man/v2.2/1/index.html new file mode 100644 index 000000000..32b379f92 --- /dev/null +++ b/man/v2.2/1/index.html @@ -0,0 +1,159 @@ + + + + + + + User Commands (1) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

User Commands (1)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/raidz_test.1.html b/man/v2.2/1/raidz_test.1.html new file mode 100644 index 000000000..7930e17ba --- /dev/null +++ b/man/v2.2/1/raidz_test.1.html @@ -0,0 +1,254 @@ + + + + + + + raidz_test.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

raidz_test.1

+
+ + + + + +
RAIDZ_TEST(1)General Commands ManualRAIDZ_TEST(1)
+
+
+

+

raidz_testraidz + implementation verification and benchmarking tool

+
+
+

+ + + + + +
raidz_test[-StBevTD] [-a + ashift] [-o + zio_off_shift] [-d + raidz_data_disks] [-s + zio_size_shift] [-r + reflow_offset]
+
+
+

+

The purpose of this tool is to run all supported raidz + implementation and verify the results of all methods. It also contains a + parameter sweep option where all parameters affecting a RAID-Z block are + verified (like ashift size, data offset, data size, etc.). The tool also + supports a benchmarking mode using the -B + option.
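For example, the benchmark mode, or a one-minute parameter sweep (the timeout value is illustrative), can be run with:
# raidz_test -B
# raidz_test -S -t 60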

+
+
+

+
+
+
Print a help summary.
+
+ ashift (default: + )
+
Ashift value.
+
+ zio_off_shift (default: + )
+
ZIO offset for each raidz block. The offset's value is + .
+
+ raidz_data_disks (default: + )
+
Number of raidz data disks to use. Additional disks will be used for + parity.
+
+ zio_size_shift (default: + )
+
Size of data for raidz block. The real size is + .
+
+ reflow_offset (default: + )
+
Set raidz expansion offset. The expanded raidz map allocation function + will produce different map configurations depending on this value.
+
(weep)
+
Sweep parameter space while verifying the raidz implementations. This + option will exhaust most of the valid values for the + -aods options. Runtime using this option will be + long.
+
(imeout)
+
Wall time for sweep test in seconds. The actual runtime could be + longer.
+
(enchmark)
+
All implementations are benchmarked using increasing per disk data size. + Results are given as throughput per disk, measured in MiB/s.
+
(xpansion)
+
Use expanded raidz map allocation function.
+
(erbose)
+
Increase verbosity.
+
(est + the test)
+
Debugging option: fail all tests. This is to check if tests would properly + verify bit-exactness.
+
(ebug)
+
Debugging option: attach gdb(1) when + + or + + are received.
+
+
+
+

+

ztest(1)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/test-runner.1.html b/man/v2.2/1/test-runner.1.html new file mode 100644 index 000000000..cfb352b35 --- /dev/null +++ b/man/v2.2/1/test-runner.1.html @@ -0,0 +1,437 @@ + + + + + + + test-runner.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

test-runner.1

+
+ + + + + +
RUN(1)General Commands ManualRUN(1)
+
+
+

+

runfind, + execute, and log the results of tests

+
+
+

+ + + + + +
run[-dgq] [-o + outputdir] [-pP + script] [-t + -seconds] [-uxX + username] + pathname
+

+
+ + + + + +
run-w runfile + [-gq] [-o + outputdir] [-pP + script] [-t + -seconds] [-uxX + username] + pathname
+

+
+ + + + + +
run-c runfile + [-dq]
+

+
+ + + + + +
run[-h]
+
+
+

+

The run command has three basic modes of + operation. With neither -c nor + -w, run processes the + arguments provided on the command line, adding them to the list for this + run. If a specified pathname is an executable file, it + is added as a test. If a specified pathname is a + directory, the behavior depends upon the presence of + -g. If -g is specified, the + directory is treated as a test group. See the section on + below. Without -g, + run simply descends into the directory looking for + executable files. The tests are then executed, and the results are + logged.

+

With -w, run finds + tests in the manner described above. Rather than executing the tests and + logging the results, the test configuration is stored in a + runfile, which can be used in future invocations, or + edited to modify which tests are executed and which options are applied. + Options included on the command line with -w become + defaults in the runfile.

+

With -c, run + parses a runfile, which can specify a series of tests + and test groups to be executed. The tests are then executed, and the results + are logged.

+
+

+

A test group is comprised of a set of executable files, all of + which exist in one directory. The options specified on the command line or + in a runfile apply to individual tests in the group. + The exception is options pertaining to pre and post scripts, which act on + all tests as a group. Rather than running before and after each test, these + scripts are run only once each at the start and end of the test group.

+
+
+

+

The specified tests run serially, and are typically assigned + results according to exit values. Tests that exit zero and non-zero are + marked + and + , + respectively. When a pre script fails for a test group, only the post script + is executed, and the remaining tests are marked + . + Any test that exceeds its timeout is terminated, and + marked + .

+

By default, tests are executed with the credentials of the + run script. Executing tests with other credentials + is done via sudo(1m), which must be configured to allow + execution without prompting for a password. Environment variables from the + calling shell are available to individual tests. During test execution, the + working directory is changed to outputdir.

+
+
+

+

By default, run will print one line on standard output at the conclusion of each test indicating the test name, result and elapsed time. Additionally, for each invocation of run, a directory is created using the ISO 8601 date format. Within this directory is a file named log containing all the test output with timestamps, and a directory for each test. Within the test directories, there is one file each for standard output, standard error and merged output. The default location for the outputdir is /var/tmp/test_results.

+
+
+

+

The runfile is an INI-style configuration file that describes a test run. The file has one section named DEFAULT, which contains configuration option names and their values in name = value format. The values in this section apply to all the subsequent sections, unless they are also specified there, in which case the default is overridden. The remaining section names are the absolute pathnames of files and directories, describing tests and test groups respectively. The legal option names are:

+
+
outputdir = pathname
+
The name of the directory that holds test logs.
+
pre = script
+
Run script prior to the test or test group.
+
pre_user = username
+
Execute the pre script as username.
+
post = script
+
Run script after the test or test group.
+
post_user = username
+
Execute the post script as username.
+
quiet = True|False
+
If True, only the results summary is printed to standard + out.
+
tests = ['filename', …]
+
Specify a list of filenames for this test group. + Only the basename of the absolute path is required. This option is only + valid for test groups, and each filename must be + single quoted.
+
timeout = n
+
A timeout value of n seconds.
+
user = username
+
Execute the test or test group as username.
+
+
+
+
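Tying these options together, a hedged sketch of a hand-written runfile (the pathnames and test names are hypothetical): the group section overrides the [DEFAULT] timeout for its own tests only.

[DEFAULT]
outputdir = /var/tmp/test_results
quiet = False
timeout = 60

[/opt/my-tests/slow-group]
tests = ['test-01', 'test-02']
timeout = 300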
+

+
+
-c runfile
+
Specify a runfile to be consumed by the run + command.
+
-d
Dry run mode. Execute no tests, but print a description of each test that + would have been run.
+
-m
Enable kmemleak reporting (Linux only)
+
-g
Create test groups from any directories found while searching for + tests.
+
-o outputdir
+
Specify the directory in which to write test results.
+
-p script
+
Run script prior to any test or test group.
+
-P script
+
Run script after any test or test group.
+
-q
Print only the results summary to the standard output.
+
-s script
+
Run script as a failsafe after any test is + killed.
+
-S username
+
Execute the failsafe script as username.
+
-t n
+
Specify a timeout value of n seconds per test.
+
-u username
+
Execute tests or test groups as username.
+
-w runfile
+
Specify the name of the runfile to create.
+
-x username
+
Execute the pre script as username.
+
-X username
+
Execute the post script as username.
+
+
+
+

+
+
Example 1: Running ad-hoc tests.
+
This example demonstrates the simplest invocation of + run. +
+
% run my-tests
+Test: /home/jkennedy/my-tests/test-01                    [00:02] [PASS]
+Test: /home/jkennedy/my-tests/test-02                    [00:04] [PASS]
+Test: /home/jkennedy/my-tests/test-03                    [00:01] [PASS]
+
+Results Summary
+PASS       3
+
+Running Time:   00:00:07
+Percent passed: 100.0%
+Log directory:  /var/tmp/test_results/20120923T180654
+
+
+
Example 2: Creating a runfile for future use.
+
This example demonstrates creating a runfile with + non-default options. +
+
% run -p setup -x root -g -w new-tests.run new-tests
+% cat new-tests.run
+[DEFAULT]
+pre = setup
+post_user =
+quiet = False
+user =
+timeout = 60
+post =
+pre_user = root
+outputdir = /var/tmp/test_results
+
+[/home/jkennedy/new-tests]
+tests = ['test-01', 'test-02', 'test-03']
+
+
+
+
+
+

+

sudo(1m)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/zhack.1.html b/man/v2.2/1/zhack.1.html new file mode 100644 index 000000000..900c155d0 --- /dev/null +++ b/man/v2.2/1/zhack.1.html @@ -0,0 +1,297 @@ + + + + + + + zhack.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zhack.1

+
+ + + + + +
ZHACK(1)General Commands ManualZHACK(1)
+
+
+

+

zhack — libzpool debugging tool

+
+
+

+

This utility pokes configuration changes directly into a ZFS pool, + which is dangerous and can cause data corruption.

+
+
+

+
+
+ + + + + +
zhack feature stat pool
+
+
List feature flags.
+
+ + + + + +
zhack feature enable [-d description] [-r] pool guid
+
+
Add a new feature to pool that is uniquely + identified by guid, which is specified in the same + form as a zfs(8) user property. +

The description is a short human + readable explanation of the new feature.

+

The -r flag indicates that + pool can be safely opened in read-only mode by a + system that does not understand the guid + feature.

+
+
+ + + + + +
zhack feature ref [-d|-m] pool guid
+
+
Increment the reference count of the guid feature in + pool. +

The -d flag decrements the reference + count of the guid feature in + pool instead.

+

The -m flag indicates that the + guid feature is now required to read the pool + MOS.

+
+
+ + + + + +
zhack label repair [-cu] device
+
+
Repair labels of a specified device according to + options. +

Flags may be combined to do their functions + simultaneously.

+

The -c flag repairs corrupted label + checksums

+

The -u flag restores the label on a + detached device

+

Example:

+
+ + + + + +
zhack label repair + -cu device +
+ Fix checksums and undetach a device
+
+
+
+
+

+

The following can be passed to all zhack + invocations before any subcommand:

+
+
-c cachefile
+
Read pool configuration from the + cachefile, which is + /etc/zfs/zpool.cache by default.
+
-d dir
+
Search for pool members in + dir. Can be specified more than once.
+
+
+
+
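As a hedged complement to the examples below, the global options are given before the subcommand; the search directory here is only illustrative:

# zhack -d /dev/disk/by-id feature stat tank
# zhack -c /etc/zfs/zpool.cache feature stat tank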

+
+
# zhack feature stat tank
+for_read_obj:
+	org.illumos:lz4_compress = 0
+for_write_obj:
+	com.delphix:async_destroy = 0
+	com.delphix:empty_bpobj = 0
+descriptions_obj:
+	com.delphix:async_destroy = Destroy filesystems asynchronously.
+	com.delphix:empty_bpobj = Snapshots use less space.
+	org.illumos:lz4_compress = LZ4 compression algorithm support.
+
+# zhack feature enable -d 'Predict future disk failures.' tank com.example:clairvoyance
+# zhack feature ref tank com.example:clairvoyance
+
+
+
+

+

ztest(1), zpool-features(7), + zfs(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/ztest.1.html b/man/v2.2/1/ztest.1.html new file mode 100644 index 000000000..ca0b48f99 --- /dev/null +++ b/man/v2.2/1/ztest.1.html @@ -0,0 +1,386 @@ + + + + + + + ztest.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

ztest.1

+
+ + + + + +
ZTEST(1)General Commands ManualZTEST(1)
+
+
+

+

ztest — was written by the ZFS Developers as a ZFS unit test

+
+
+

+ + + + + +
ztest [-VEG] [-v vdevs] [-s size_of_each_vdev] [-a alignment_shift] [-m mirror_copies] [-r raidz_disks/draid_disks] [-R raid_parity] [-K raid_kind] [-D draid_data] [-S draid_spares] [-C vdev_class_state] [-d datasets] [-t threads] [-g gang_block_threshold] [-i initialize_pool_i_times] [-k kill_percentage] [-p pool_name] [-T time] [-z zil_failure_rate]
+
+
+

+

ztest was written by the ZFS Developers as a ZFS unit test. The tool was developed in tandem with the ZFS functionality and was executed nightly as one of the many regression tests against the daily build. As features were added to ZFS, unit tests were also added to ztest. In addition, a separate test development team wrote and executed more functional and stress tests.

+

By default ztest runs for ten minutes and + uses block files (stored in /tmp) to create pools + rather than using physical disks. Block files afford + ztest its flexibility to play around with zpool + components without requiring large hardware configurations. However, storing + the block files in /tmp may not work for you if you + have a small tmp directory.

+

By default, ztest is non-verbose. This is why entering the command above will result in ztest quietly executing for 5 minutes. The -V option can be used to increase the verbosity of the tool. Adding multiple -V options is allowed and the more you add the more chatty ztest becomes.

+

After the ztest run completes, you should + notice many ztest.* files lying around. Once the run + completes you can safely remove these files. Note that you shouldn't remove + these files during a run. You can re-use these files in your next + ztest run by using the -E + option.

+
+
+

+
+
-h, -?, --help
+
Print a help summary.
+
, + --vdevs= (default: + )
+
Number of vdevs.
+
, + --vdev-size= (default: + )
+
Size of each vdev.
+
, + --alignment-shift= (default: + ) + (use + + for random)
+
Alignment shift used in test.
+
, + --mirror-copies= (default: + )
+
Number of mirror copies.
+
, + --raid-disks= (default: 4 + for + raidz/ + for draid)
+
Number of raidz/draid disks.
+
, + --raid-parity= (default: 1)
+
Raid parity (raidz & draid).
+
, + --raid-kind=||random + (default: random)
+
The kind of RAID config to use. With random the kind + alternates between raidz and draid.
+
, + --draid-data= (default: 4)
+
Number of data disks in a dRAID redundancy group.
+
, + --draid-spares= (default: 1)
+
Number of dRAID distributed spare disks.
+
, + --datasets= (default: + )
+
Number of datasets.
+
, + --threads= (default: + )
+
Number of threads.
+
, + --gang-block-threshold= (default: + 32K)
+
Gang block threshold.
+
, + --init-count= (default: 1)
+
Number of pool initializations.
+
, + --kill-percentage= (default: + )
+
Kill percentage.
+
, + --pool-name= (default: + )
+
Pool name.
+
, + --vdev-file-directory= (default: + /tmp)
+
File directory for vdev files.
+
, + --multi-host
+
Multi-host; simulate pool imported on remote host.
+
, + --use-existing-pool
+
Use existing pool (use existing pool instead of creating new one).
+
, + --run-time= (default: + s)
+
Total test run time.
+
, + --pass-time= (default: + s)
+
Time per pass.
+
, + --freeze-loops= (default: + )
+
Max loops in spa_freeze().
+
, + --alt-ztest=
+
Path to alternate ("older") ztest to + drive, which will be used to initialise the pool, and, a stochastic half + the time, to run the tests. The parallel lib + directory is prepended to LD_LIBRARY_PATH; i.e. + given -B + ./chroots/lenny/usr/bin/ztest, + ./chroots/lenny/usr/lib will be loaded.
+
, + --vdev-class-state=||random + (default: random)
+
The vdev allocation class state.
+
, + --option=variable=value
+
Set global variable to an unsigned 32-bit integer + value (little-endian only).
+
, + --dump-debug
+
Dump zfs_dbgmsg buffer before exiting due to an error.
+
, + --verbose
+
Verbose (use multiple times for ever more verbosity).
+
+
+
+

+

To override /tmp as your location for + block files, you can use the -f option:

+
# ztest -f /
+

To get an idea of what ztest is actually + testing try this:

+
# ztest -f / -VVV
+

Maybe you'd like to run ztest for longer? + To do so simply use the -T option and specify the + runlength in seconds like so:

+
# ztest -f / -V -T 120
+
+
+

+
+
ZFS_HOSTID=id
+
Use id instead of the SPL hostid to identify this host. + Intended for use with ztest, but this environment + variable will affect any utility which uses libzpool, including + zpool(8). Since the kernel is unaware of this setting, + results with utilities other than ztest are undefined.
+
ZFS_STACK_SIZE=stacksize
+
Limit the default stack size to stacksize bytes for the purpose of detecting and debugging kernel stack overflows. This value defaults to 32K which is double the default 16K Linux kernel stack size.

In practice, setting the stack size slightly higher is needed + because differences in stack usage between kernel and user space can + lead to spurious stack overflows (especially when debugging is enabled). + The specified value will be rounded up to a floor of PTHREAD_STACK_MIN + which is the minimum stack required for a NULL procedure in user + space.

+

By default the stack size is limited to 256K.

+
+
+
+
+
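A hedged example of combining these variables with a short, more verbose run (the values shown are arbitrary):

# ZFS_HOSTID=8675309 ZFS_STACK_SIZE=262144 ztest -V -T 120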

+

zdb(1), zfs(1), + zpool(1), spl(4)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/1/zvol_wait.1.html b/man/v2.2/1/zvol_wait.1.html new file mode 100644 index 000000000..dc10e6a93 --- /dev/null +++ b/man/v2.2/1/zvol_wait.1.html @@ -0,0 +1,191 @@ + + + + + + + zvol_wait.1 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zvol_wait.1

+
+ + + + + +
ZVOL_WAIT(1)General Commands ManualZVOL_WAIT(1)
+
+
+

+

zvol_wait — wait for ZFS volume links to appear in /dev

+
+
+

+ + + + + +
zvol_wait
+
+
+

+

When a ZFS pool is imported, the volumes within it will appear as + block devices. As they're registered, udev(7) + asynchronously creates symlinks under /dev/zvol + using the volumes' names. zvol_wait will wait for + all those symlinks to be created before exiting.

+
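A hedged usage sketch (the pool and volume names are hypothetical): scripts that need the device nodes immediately after import typically run zvol_wait before touching /dev/zvol.

# zpool import tank
# zvol_wait
# ls /dev/zvol/tank/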
+
+

+

udev(7)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/4/index.html b/man/v2.2/4/index.html new file mode 100644 index 000000000..1122d5151 --- /dev/null +++ b/man/v2.2/4/index.html @@ -0,0 +1,149 @@ + + + + + + + Devices and Special Files (4) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Devices and Special Files (4)

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/4/spl.4.html b/man/v2.2/4/spl.4.html new file mode 100644 index 000000000..9d537ccc9 --- /dev/null +++ b/man/v2.2/4/spl.4.html @@ -0,0 +1,322 @@ + + + + + + + spl.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

spl.4

+
+ + + + + +
SPL(4)Device Drivers ManualSPL(4)
+
+
+

+

spl — parameters of the SPL kernel module

+
+
+

+
+
=4 + (uint)
+
The number of threads created for the spl_kmem_cache task queue. This task + queue is responsible for allocating new slabs for use by the kmem caches. + For the majority of systems and workloads only a small number of threads + are required.
+
= + (uint)
+
The preferred number of objects per slab in the cache. In general, a + larger value will increase the caches memory footprint while decreasing + the time required to perform an allocation. Conversely, a smaller value + will minimize the footprint and improve cache reclaim time but individual + allocations may take longer.
+
= + (64-bit) or 4 (32-bit) (uint)
+
The maximum size of a kmem cache slab in MiB. This effectively limits the + maximum cache object size to + spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab. +

Caches may not be created with objects sized larger than this limit.

+
+
= + (uint)
+
For small objects the Linux slab allocator should be used to make the most + efficient use of the memory. However, large objects are not supported by + the Linux slab and therefore the SPL implementation is preferred. This + value is used to determine the cutoff between a small and large object. +

Objects of size spl_kmem_cache_slab_limit or + smaller will be allocated using the Linux slab allocator, large objects + use the SPL allocator. A cutoff of 16K was determined to be optimal for + architectures using 4K pages.

+
+
= + (uint)
+
As a general rule kmem_alloc() allocations should be small, preferably just a few pages, since they must be physically contiguous. Therefore, a rate-limited warning will be printed to the console for any kmem_alloc() which exceeds a reasonable threshold.

The default warning threshold is set to eight pages but capped at 32K to accommodate systems using large pages. This value was selected to be small enough to ensure the largest allocations are quickly noticed and fixed, but large enough to avoid logging any warnings when an allocation size is larger than optimal but not a serious concern. Since this value is tunable, developers are encouraged to set it lower when testing so any new largish allocations are quickly caught. These warnings may be disabled by setting the threshold to zero.

+
+
=KMALLOC_MAX_SIZE/4 + (uint)
+
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE. Allocations which are marginally smaller than this limit may succeed but should still be avoided due to the expense of locating a contiguous range of free pages. Therefore, a maximum kmem size with a reasonable safety margin of 4x is set. kmem_alloc() allocations larger than this maximum will quickly fail. vmem_alloc() allocations less than or equal to this value will use kmalloc(), but shift to vmalloc() when exceeding this value.
+
=0 + (uint)
+
Cache magazines are an optimization designed to minimize the cost of + allocating memory. They do this by keeping a per-cpu cache of recently + freed objects, which can then be reallocated without taking a lock. This + can improve performance on highly contended caches. However, because + objects in magazines will prevent otherwise empty slabs from being + immediately released this may not be ideal for low memory machines. +

For this reason, + spl_kmem_cache_magazine_size can be used to set a + maximum magazine size. When this value is set to 0 the magazine size + will be automatically determined based on the object size. Otherwise + magazines will be limited to 2-256 objects per magazine (i.e per cpu). + Magazines may never be entirely disabled in this implementation.

+
+
=0 + (ulong)
+
The system hostid, when set this can be used to uniquely identify a + system. By default this value is set to zero which indicates the hostid is + disabled. It can be explicitly enabled by placing a unique non-zero value + in /etc/hostid.
+
=/etc/hostid + (charp)
+
The expected path to locate the system hostid when specified. This value + may be overridden for non-standard configurations.
+
=0 + (uint)
+
Cause a kernel panic on assertion failures. When not enabled, the thread + is halted to facilitate further debugging. +

Set to a non-zero value to enable.

+
+
=0 + (uint)
+
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it + will scan all the taskqs. If any of them have a pending task more than 5 + seconds old, it will kick it to spawn more threads. This can be used if + you find a rare deadlock occurs because one or more taskqs didn't spawn a + thread when it should.
+
=0 + (int)
+
Bind taskq threads to specific CPUs. When enabled all taskq threads will + be distributed evenly across the available CPUs. By default, this behavior + is disabled to allow the Linux scheduler the maximum flexibility to + determine where a thread should run.
+
=1 + (int)
+
Allow dynamic taskqs. When enabled taskqs which set the + + flag will by default create only a single thread. New threads will be + created on demand up to a maximum allowed number to facilitate the + completion of outstanding tasks. Threads which are no longer needed will + be promptly destroyed. By default this behavior is enabled but it can be + disabled to aid performance analysis or troubleshooting.
+
=1 + (int)
+
Allow newly created taskq threads to set a non-default scheduler priority. + When enabled, the priority specified when a taskq is created will be + applied to all threads created by that taskq. When disabled all threads + will use the default Linux kernel thread priority. By default, this + behavior is enabled.
+
=4 + (int)
+
The number of items a taskq worker thread must handle without interruption + before requesting a new worker thread be spawned. This is used to control + how quickly taskqs ramp up the number of threads processing the queue. + Because Linux thread creation and destruction are relatively inexpensive a + small default value has been selected. This means that normally threads + will be created aggressively which is desirable. Increasing this value + will result in a slower thread creation rate which may be preferable for + some configurations.
+
= + (uint)
+
The maximum number of tasks per pending list in each taskq shown in /proc/spl/taskq{,-all}. Write 0 to turn off the limit. The proc file will walk the lists with the lock held, so reading it could cause a lock-up if the list grows too large without limiting the output. "(truncated)" will be shown if the list is larger than the limit.
+
= + (uint)
+
(Linux-only) How long a taskq has to have had no work before we tear it + down. Previously, we would tear down a dynamic taskq worker as soon as we + noticed it had no work, but it was observed that this led to a lot of + churn in tearing down things we then immediately spawned anew. In + practice, it seems any nonzero value will remove the vast majority of this + churn, while the nontrivially larger value was chosen to help filter out + the little remaining churn on a mostly idle system. Setting this value to + 0 will revert to the previous behavior.
+
+
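As a hedged aside (not from the original page): on Linux these module parameters are normally read and, where writable, changed through the standard module-parameter interface, or set persistently via modprobe configuration. The parameter names and values below are only illustrative, and not every parameter can be changed at runtime:

# cat /sys/module/spl/parameters/spl_kmem_cache_slab_limit
# echo 0 > /sys/module/spl/parameters/spl_taskq_thread_bind
# echo 'options spl spl_taskq_thread_dynamic=0' > /etc/modprobe.d/spl.conf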
+
+ + + + + +
August 24, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/4/zfs.4.html b/man/v2.2/4/zfs.4.html new file mode 100644 index 000000000..abb1a54c3 --- /dev/null +++ b/man/v2.2/4/zfs.4.html @@ -0,0 +1,2698 @@ + + + + + + + zfs.4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.4

+
+ + + + + +
ZFS(4)Device Drivers ManualZFS(4)
+
+
+

+

zfs — tuning of the ZFS kernel module

+
+
+

+

The ZFS module supports these parameters:

+
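Before the individual parameters, a hedged note on how they are usually inspected and adjusted on Linux (the paths follow the standard module-parameter layout, the value shown is arbitrary, and not all parameters may be changed at runtime):

# cat /sys/module/zfs/parameters/zfs_arc_max
# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
# echo 'options zfs zfs_arc_max=8589934592' > /etc/modprobe.d/zfs.conf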
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the dbuf cache. The target size is determined by + the MIN versus + 1/2^dbuf_cache_shift (1/32nd) of + the target ARC size. The behavior of the dbuf cache and its associated + settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=UINT64_MAXB + (u64)
+
Maximum size in bytes of the metadata dbuf cache. The target size is + determined by the MIN versus + 1/2^dbuf_metadata_cache_shift + (1/64th) of the target ARC size. The behavior of the metadata dbuf cache + and its associated settings can be observed via the + /proc/spl/kstat/zfs/dbufstats kstat.
+
=10% + (uint)
+
The percentage over dbuf_cache_max_bytes when dbufs must + be evicted directly.
+
=10% + (uint)
+
The percentage below dbuf_cache_max_bytes when the evict + thread stops evicting dbufs.
+
=5 + (uint)
+
Set the size of the dbuf cache (dbuf_cache_max_bytes) to + a log2 fraction of the target ARC size.
+
= + (uint)
+
Set the size of the dbuf metadata cache + (dbuf_metadata_cache_max_bytes) to a log2 fraction of + the target ARC size.
+
=0 + (uint)
+
Set the size of the mutex array for the dbuf cache. When set to + 0 the array is dynamically sized based on total system + memory.
+
=7 + (128) (uint)
+
dnode slots allocated in a single operation as a power of 2. The default + value minimizes lock contention for the bulk operation performed.
+
=134217728B + (128 MiB) (uint)
+
Limit the amount we can prefetch with one call to this amount in bytes. + This helps to limit the amount of memory that can be used by + prefetching.
+
+ (int)
+
Alias for send_holes_without_birth_time.
+
=1|0 + (int)
+
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set + as fast as possible.
+
=200 + (u64)
+
Min feed interval in milliseconds. Requires + l2arc_feed_again=1 and only + applicable in related situations.
+
=1 + (u64)
+
Seconds between L2ARC writing.
+
=2 + (u64)
+
How far through the ARC lists to search for L2ARC cacheable content, + expressed as a multiplier of l2arc_write_max. ARC + persistence across reboots can be achieved with persistent L2ARC by + setting this parameter to 0, allowing the full length of + ARC lists to be searched for cacheable content.
+
=200% + (u64)
+
Scales l2arc_headroom by this percentage when L2ARC + contents are being successfully compressed before writing. A value of + 100 disables this feature.
+
=0|1 + (int)
+
Controls whether buffers present on special vdevs are eligible for caching + into L2ARC. If set to 1, exclude dbufs on special vdevs from being cached + to L2ARC.
+
=0|1 + (int)
+
Controls whether only MFU metadata and data are cached from ARC into + L2ARC. This may be desired to avoid wasting space on L2ARC when + reading/writing large amounts of data that are not expected to be accessed + more than once. +

The default is off, meaning both MRU and MFU data and metadata + are cached. When turning off this feature, some MRU buffers will still + be present in ARC and eventually cached on L2ARC. + If + l2arc_noprefetch=0, some prefetched + buffers will be cached to L2ARC, and those might later transition to + MRU, in which case the l2arc_mru_asize + arcstat will not be 0.

+

Regardless of l2arc_noprefetch, some MFU + buffers might be evicted from ARC, accessed later on as prefetches and + transition to MRU as prefetches. If accessed again they are counted as + MRU and the l2arc_mru_asize arcstat + will not be 0.

+

The ARC status of L2ARC buffers when they + were first cached in L2ARC can be seen in the + l2arc_mru_asize, + , + and + + arcstats when importing the pool or onlining a cache device if + persistent L2ARC is enabled.

+

The + + arcstat does not take into account if this option is enabled as the + information provided by the + + arcstats can be used to decide if toggling this option is appropriate + for the current workload.

+
+
=% + (uint)
+
Percent of ARC size allowed for L2ARC-only headers. Since L2ARC buffers + are not evicted on memory pressure, too many headers on a system with an + irrationally large L2ARC can render it slow or unusable. This parameter + limits L2ARC writes and rebuilds to achieve the target.
+
=0% + (u64)
+
Trims ahead of the current write size (l2arc_write_max) + on L2ARC devices by this percentage of write size if we have filled the + device. If set to 100 we TRIM twice the space required + to accommodate upcoming writes. A minimum of 64 MiB will + be trimmed. It also enables TRIM of the whole L2ARC device upon creation + or addition to an existing pool or if the header of the device is invalid + upon importing a pool or onlining a cache device. A value of + 0 disables TRIM on L2ARC altogether and is the default + as it can put significant stress on the underlying storage devices. This + will vary depending of how well the specific device handles these + commands.
+
=1|0 + (int)
+
Do not write buffers to L2ARC if they were prefetched but not used by + applications. In case there are prefetched buffers in L2ARC and this + option is later set, we do not read the prefetched buffers from L2ARC. + Unsetting this option is useful for caching sequential reads from the + disks to L2ARC and serve those reads from L2ARC later on. This may be + beneficial in case the L2ARC device is significantly faster in sequential + reads than the disks of the pool. +

Use 1 to disable and 0 to + enable caching/reading prefetches to/from L2ARC.

+
+
=0|1 + (int)
+
No reads during writes.
+
=8388608B + (8 MiB) (u64)
+
Cold L2ARC devices will have l2arc_write_max increased + by this amount while they remain cold.
+
=8388608B + (8 MiB) (u64)
+
Max write bytes per interval.
+
=1|0 + (int)
+
Rebuild the L2ARC when importing a pool (persistent L2ARC). This can be + disabled if there are problems importing a pool or attaching an L2ARC + device (e.g. the L2ARC device is slow in reading stored log metadata, or + the metadata has become somehow fragmented/unusable).
+
=1073741824B + (1 GiB) (u64)
+
Minimum size of an L2ARC device required in order to write log blocks in it. The log blocks are used upon importing the pool to rebuild the persistent L2ARC.

For L2ARC devices less than 1 GiB, the amount + of data + () + evicts is significant compared to the amount of restored L2ARC data. In + this case, do not write log blocks in L2ARC in order not to waste + space.

+
+
=1048576B + (1 MiB) (u64)
+
Metaslab granularity, in bytes. This is roughly similar to what would be + referred to as the "stripe size" in traditional RAID arrays. In + normal operation, ZFS will try to write this amount of data to each disk + before moving on to the next top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group biasing based on their vdevs' over- or + under-utilization relative to the pool.
+
=B + (16 MiB + 1 B) (u64)
+
Make some blocks above a certain size be gang blocks. This option is used + by the test suite to facilitate testing.
+
=3% + (uint)
+
For blocks that could be forced to be a gang block (due to + metaslab_force_ganging), force this many of them to be + gang blocks.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP data block size as a power of 2. Note that changing this + after creating a DDT on the pool will not affect existing DDTs, only newly + created ones.
+
=15 + (32 KiB) (int)
+
Default DDT ZAP indirect block size as a power of 2. Note that changing + this after creating a DDT on the pool will not affect existing DDTs, only + newly created ones.
+
=9 + (512 B) (int)
+
Default dnode block size as a power of 2.
+
= + (128 KiB) (int)
+
Default dnode indirect block size as a power of 2.
+
=1048576B + (1 MiB) (u64)
+
When attempting to log an output nvlist of an ioctl in the on-disk + history, the output will not be stored if it is larger than this size (in + bytes). This must be less than + + (64 MiB). This applies primarily to + () + (cf. zfs-program(8)).
+
=0|1 + (int)
+
Prevent log spacemaps from being destroyed during pool exports and + destroys.
+
=1|0 + (int)
+
Enable/disable segment-based metaslab selection.
+
=2 + (int)
+
When using segment-based metaslab selection, continue allocating from the + active metaslab until this option's worth of buckets have been + exhausted.
+
=0|1 + (int)
+
Load all metaslabs during pool import.
+
=0|1 + (int)
+
Prevent metaslabs from being unloaded.
+
=1|0 + (int)
+
Enable use of the fragmentation metric in computing metaslab weights.
+ +
Maximum distance to search forward from the last offset. Without this + limit, fragmented pools can see + + iterations and + () + becomes the performance limiting factor on high-performance storage. +

With the default setting of 16 + MiB, we typically see less than 500 iterations, + even with very fragmented ashift=9 + pools. The maximum number of iterations possible is + metaslab_df_max_search / 2^(ashift+1). With the + default setting of 16 MiB this is + (with + ashift=9) or + + (with + ashift=).

+
+
=0|1 + (int)
+
If not searching forward (due to metaslab_df_max_search, + , + or + ), + this tunable controls which segment is used. If set, we will use the + largest free segment. If unset, we will use a segment of at least the + requested size.
+
=s + (1 hour) (u64)
+
When we unload a metaslab, we cache the size of the largest free chunk. We + use that cached size to determine whether or not to load a metaslab for a + given allocation. As more frees accumulate in that metaslab while it's + unloaded, the cached max size becomes less and less accurate. After a + number of seconds controlled by this tunable, we stop considering the + cached max size and start considering only the histogram instead.
+
=25% + (uint)
+
When we are loading a new metaslab, we check the amount of memory being + used to store metaslab range trees. If it is over a threshold, we attempt + to unload the least recently used metaslab to prevent the system from + clogging all of its memory with range trees. This tunable sets the + percentage of total system memory that is the threshold.
+
=0|1 + (int)
+
+
    +
  • If unset, we will first try normal allocation.
  • +
  • If that fails then we will do a gang allocation.
  • +
  • If that fails then we will do a "try hard" gang + allocation.
  • +
  • If that fails then we will have a multi-layer gang block.
  • +
+

+
    +
  • If set, we will first try normal allocation.
  • +
  • If that fails then we will do a "try hard" allocation.
  • +
  • If that fails we will do a gang allocation.
  • +
  • If that fails we will do a "try hard" gang allocation.
  • +
  • If that fails then we will have a multi-layer gang block.
  • +
+
+
=100 + (uint)
+
When not trying hard, we only consider this number of the best metaslabs. + This improves performance, especially when there are many metaslabs per + vdev and the allocation can't actually be satisfied (so we would otherwise + iterate all metaslabs).
+
=200 + (uint)
+
When a vdev is added, target this number of metaslabs per top-level + vdev.
+
= + (512 MiB) (uint)
+
Default lower limit for metaslab size.
+
= + (16 GiB) (uint)
+
Default upper limit for metaslab size.
+
= + (uint)
+
Maximum ashift used when optimizing for logical → physical sector + size on new top-level vdevs. May be increased up to + + (16), but this may negatively impact pool space efficiency.
+
= + (9) (uint)
+
Minimum ashift used when creating new top-level vdevs.
+
=16 + (uint)
+
Minimum number of metaslabs to create in a top-level vdev.
+
=0|1 + (int)
+
Skip label validation steps during pool import. Changing is not + recommended unless you know what you're doing and are recovering a damaged + label.
+
=131072 + (128k) (uint)
+
Practical upper limit of total metaslabs per top-level vdev.
+
=1|0 + (int)
+
Enable metaslab group preloading.
+
=10 + (uint)
+
Maximum number of metaslabs per group to preload
+
=50 + (uint)
+
Percentage of CPUs to run a metaslab preload taskq
+
=1|0 + (int)
+
Give more weight to metaslabs with lower LBAs, assuming they have greater + bandwidth, as is typically the case on a modern constant angular velocity + disk drive.
+
=32 + (uint)
+
After a metaslab is used, we keep it loaded for this many TXGs, to attempt + to reduce unnecessary reloading. Note that both this many TXGs and + metaslab_unload_delay_ms milliseconds must pass before + unloading will occur.
+
=600000ms + (10 min) (uint)
+
After a metaslab is used, we keep it loaded for this many milliseconds, to + attempt to reduce unnecessary reloading. Note, that both this many + milliseconds and metaslab_unload_delay TXGs must pass + before unloading will occur.
+
=3 + (uint)
+
Maximum reference holders being tracked when reference_tracking_enable is + active.
+
=0|1 + (int)
+
Track reference holders to + + objects (debug builds only).
+
=1|0 + (int)
+
When set, the hole_birth optimization will not be used, + and all holes will always be sent during a zfs + send. This is useful if you suspect your datasets + are affected by a bug in hole_birth.
+
=/etc/zfs/zpool.cache + (charp)
+
SPA config file.
+
= + (uint)
+
Multiplication factor used to estimate actual disk consumption from the + size of data being written. The default value is a worst case estimate, + but lower values may be valid for a given pool depending on its + configuration. Pool administrators who understand the factors involved may + wish to specify a more realistic inflation factor, particularly if they + operate close to quota or capacity limits.
+
=0|1 + (int)
+
Whether to print the vdev tree in the debugging message buffer during pool + import.
+
=1|0 + (int)
+
Whether to traverse data blocks during an "extreme rewind" + (-X) import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal skips non-metadata blocks. It can be toggled once the import + has started to stop or start the traversal of non-metadata blocks.

+
+
=1|0 + (int)
+
Whether to traverse blocks during an "extreme rewind" + (-X) pool import. +

An extreme rewind import normally performs a full traversal of + all blocks in the pool for verification. If this parameter is unset, the + traversal is not performed. It can be toggled once the import has + started to stop or start the traversal.

+
+
=4 + (1/16th) (uint)
+
Sets the maximum number of bytes to consume during pool import to the log2 + fraction of the target ARC size.
+
=5 + (1/32nd) (int)
+
Normally, we don't allow the last + + () + of space in the pool to be consumed. This ensures that we don't run the + pool completely out of space, due to unaccounted changes (e.g. to the + MOS). It also limits the worst-case time to allocate space. If we have + less than this amount of free space, most ZPL operations (e.g. write, + create) will return + .
+
=0 + (uint)
+
Limits the number of on-disk error log entries that will be converted to + the new format when enabling the + + feature. The default is to convert all log entries.
+
=32768B + (32 KiB) (uint)
+
During top-level vdev removal, chunks of data are copied from the vdev + which may include free space in order to trade bandwidth for IOPS. This + parameter determines the maximum span of free space, in bytes, which will + be included as "unnecessary" data in a chunk of copied data. +

The default value here was chosen to align with + zfs_vdev_read_gap_limit, which is a similar concept + when doing regular reads (but there's no reason it has to be the + same).

+
+
=9 + (512 B) (u64)
+
Logical ashift for file-based devices.
+
=9 + (512 B) (u64)
+
Physical ashift for file-based devices.
+
=1|0 + (int)
+
If set, when we start iterating over a ZAP object, prefetch the entire + object (all leaf blocks). However, this is limited by + dmu_prefetch_max.
+
=131072B + (128 KiB) (int)
+
Maximum micro ZAP size. A micro ZAP is upgraded to a fat ZAP, once it + grows beyond the specified size.
+
=4194304B + (4 MiB) (uint)
+
Min bytes to prefetch per stream. Prefetch distance starts from the demand + access size and quickly grows to this value, doubling on each hit. After + that it may grow further by 1/8 per hit, but only if some prefetch since + last time haven't completed in time to satisfy demand request, i.e. + prefetch depth didn't cover the read latency or the pool got + saturated.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch per stream.
+
=67108864B + (64 MiB) (uint)
+
Max bytes to prefetch indirects for per stream.
+
=8 + (uint)
+
Max number of streams per zfetch (prefetch streams per file).
+
=1 + (uint)
+
Min time before inactive prefetch stream can be reclaimed
+
=2 + (uint)
+
Max time before inactive prefetch stream can be deleted
+
=1|0 + (int)
+
Enables ARC from using scatter/gather lists and forces all allocations to + be linear in kernel memory. Disabling can improve performance in some code + paths at the expense of fragmented kernel memory.
+
=MAX_ORDER-1 + (uint)
+
Maximum number of consecutive memory pages allocated in a single block for + scatter/gather lists. +

The value of MAX_ORDER depends on kernel + configuration.

+
+
=B + (1.5 KiB) (uint)
+
This is the minimum allocation size that will use scatter (page-based) + ABDs. Smaller allocations will use linear ABDs.
+
=0B + (u64)
+
When the number of bytes consumed by dnodes in the ARC exceeds this number + of bytes, try to unpin some of it in response to demand for non-metadata. + This value acts as a ceiling to the amount of dnode metadata, and defaults + to 0, which indicates that a percent which is based on + zfs_arc_dnode_limit_percent of the ARC meta buffers that + may be used for dnodes.
+
=10% + (u64)
+
Percentage that can be consumed by dnodes of ARC meta buffers. +

See also zfs_arc_dnode_limit, which serves a + similar purpose but has a higher priority if nonzero.

+
+
=10% + (u64)
+
Percentage of ARC dnodes to try to scan in response to demand for + non-metadata when the number of bytes consumed by dnodes exceeds + zfs_arc_dnode_limit.
+
=B + (8 KiB) (uint)
+
The ARC's buffer hash table is sized based on the assumption of an average + block size of this value. This works out to roughly 1 MiB of hash table + per 1 GiB of physical memory with 8-byte pointers. For configurations with + a known larger average block size, this value can be increased to reduce + the memory footprint.
+
=200% + (uint)
+
When + (), + () + waits for this percent of the requested amount of data to be evicted. For + example, by default, for every 2 KiB that's evicted, + 1 KiB of it may be "reused" by a new + allocation. Since this is above 100%, it ensures that + progress is made towards getting arc_size + under arc_c. Since this is + finite, it ensures that allocations can still happen, even during the + potentially long time that arc_size is + more than arc_c.
+
=10 + (uint)
+
Number ARC headers to evict per sub-list before proceeding to another + sub-list. This batch-style operation prevents entire sub-lists from being + evicted at once but comes at a cost of additional unlocking and + locking.
+
=0s + (uint)
+
If set to a non zero value, it will replace the + arc_grow_retry value with this value. The + arc_grow_retry value (default + 5s) is the number of seconds the ARC will wait before + trying to resume growth after a memory pressure event.
+
=10% + (int)
+
Throttle I/O when free system memory drops below this percentage of total + system memory. Setting this value to 0 will disable the + throttle.
+
=0B + (u64)
+
Max size of ARC in bytes. If 0, then the max size of ARC is determined by the amount of system memory installed. Under Linux, half of system memory will be used as the limit. Under FreeBSD, the larger of all_system_memory - 1 GiB and 5/8 × all_system_memory will be used as the limit. This value must be at least 67108864B (64 MiB).

This value can be changed dynamically, with some caveats. It + cannot be set back to 0 while running, and reducing it + below the current ARC size will not cause the ARC to shrink without + memory pressure to induce shrinking.

+
+
=500 + (uint)
+
Balance between metadata and data on ghost hits. Values above 100 increase + metadata caching by proportionally reducing effect of ghost data hits on + target data/metadata rate.
+
=0B + (u64)
+
Min size of ARC in bytes. If set to + 0, + + will default to consuming the larger of 32 MiB and + all_system_memory / + 32.
+
=0ms(≡1s) + (uint)
+
Minimum time prefetched blocks are locked in the ARC.
+
=0ms(≡6s) + (uint)
+
Minimum time "prescient prefetched" blocks are locked in the + ARC. These blocks are meant to be prefetched fairly aggressively ahead of + the code that may use them.
+
=1 + (int)
+
Number of arc_prune threads. FreeBSD does not need + more than one. Linux may theoretically use one per mount point up to + number of CPUs, but that was not proven to be useful.
+
=0 + (int)
+
Number of missing top-level vdevs which will be allowed during pool import + (only in read-only mode).
+
= + 0 (u64)
+
Maximum size in bytes allowed to be passed as + + for ioctls on /dev/zfs. This prevents a user from + causing the kernel to allocate an excessive amount of memory. When the + limit is exceeded, the ioctl fails with + + and a description of the error is sent to the + zfs-dbgmsg log. This parameter should not need to + be touched under normal circumstances. If 0, equivalent + to a quarter of the user-wired memory limit under + FreeBSD and to 134217728B (128 + MiB) under Linux.
+
=0 + (uint)
+
To allow more fine-grained locking, each ARC state contains a series of + lists for both data and metadata objects. Locking is performed at the + level of these "sub-lists". This parameters controls the number + of sub-lists per ARC state, and also applies to other uses of the + multilist data structure. +

If 0, equivalent to the greater of the + number of online CPUs and 4.

+
+
=8 + (int)
+
The ARC size is considered to be overflowing if it exceeds the current ARC + target size (arc_c) by thresholds determined by this + parameter. Exceeding by (arc_c + >> zfs_arc_overflow_shift) + / 2 starts ARC reclamation + process. If that appears insufficient, exceeding by + (arc_c >> + zfs_arc_overflow_shift) × + blocks + new buffer allocation until the reclaim thread catches up. Started + reclamation process continues till ARC size returns below the target size. +

The default value of 8 causes the ARC to start reclamation if it exceeds the target size by 0.2% of the target size, and block allocations by 0.6%.

+
+
=0 + (uint)
+
If nonzero, this will update + + (default 7) with the new value.
+
=0% + (off) (uint)
+
Percent of pagecache to reclaim ARC to. +

This tunable allows the ZFS ARC to play + more nicely with the kernel's LRU pagecache. It can guarantee that the + ARC size won't collapse under scanning pressure on the pagecache, yet + still allows the ARC to be reclaimed down to + zfs_arc_min if necessary. This value is specified as + percent of pagecache size (as measured by + ), + where that percent may exceed 100. This only operates + during memory pressure/reclaim.

+
+
=10000 + (int)
+
This is a limit on how many pages the ARC shrinker makes available for + eviction in response to one page allocation attempt. Note that in + practice, the kernel's shrinker can ask us to evict up to about four times + this for one allocation attempt. +

The default limit of 10000 (in + practice, + per allocation attempt with 4 KiB pages) limits + the amount of time spent attempting to reclaim ARC memory to less than + 100 ms per allocation attempt, even with a small average compressed + block size of ~8 KiB.

+

The parameter can be set to 0 (zero) to disable the limit, and + only applies on Linux.

+
+
=0B + (u64)
+
The target number of bytes the ARC should leave as free memory on the + system. If zero, equivalent to the bigger of 512 KiB + and + .
+
=1|0 + (int)
+
Disable pool import at module load by ignoring the cache file + (spa_config_path).
+
=20/s + (uint)
+
Rate limit checksum events to this many per second. Note that this should + not be set below the ZED thresholds (currently 10 checksums over 10 + seconds) or else the daemon may not trigger any action.
+
=5% + (uint)
+
This controls the amount of time that a ZIL block (lwb) will remain + "open" when it isn't "full", and it has a thread + waiting for it to be committed to stable storage. The timeout is scaled + based on a percentage of the last lwb latency to avoid significantly + impacting the latency of each individual transaction record (itx).
+
=0ms + (int)
+
Vdev indirection layer (used for device removal) sleeps for this many + milliseconds during mapping generation. Intended for use with the test + suite to throttle vdev removal speed.
+
=25% + (uint)
+
Minimum percent of obsolete bytes in vdev mapping required to attempt to + condense (see zfs_condense_indirect_vdevs_enable). + Intended for use with the test suite to facilitate triggering condensing + as needed.
+
=1|0 + (int)
+
Enable condensing indirect vdev mappings. When set, attempt to condense + indirect vdev mappings if the mapping uses more than + zfs_condense_min_mapping_bytes bytes of memory and if + the obsolete space map object uses more than + zfs_condense_max_obsolete_bytes bytes on-disk. The + condensing process is an attempt to save memory by removing obsolete + mappings.
+
=1073741824B + (1 GiB) (u64)
+
Only attempt to condense indirect vdev mappings if the on-disk size of the + obsolete space map object is greater than this number of bytes (see + zfs_condense_indirect_vdevs_enable).
+
=131072B + (128 KiB) (u64)
+
Minimum size vdev mapping to attempt to condense (see + zfs_condense_indirect_vdevs_enable).
+
=1|0 + (int)
+
Internally ZFS keeps a small log to facilitate debugging. The log is + enabled by default, and can be disabled by unsetting this option. The + contents of the log can be accessed by reading + /proc/spl/kstat/zfs/dbgmsg. Writing + 0 to the file clears the log. +

This setting does not influence debug prints due to + zfs_flags.

+
+
=4194304B + (4 MiB) (uint)
+
Maximum size of the internal ZFS debug log.
+
=0 + (int)
+
Historically used for controlling what reporting was available under + /proc/spl/kstat/zfs. No effect.
+
=1|0 + (int)
+
When a pool sync operation takes longer than + zfs_deadman_synctime_ms, or when an individual I/O + operation takes longer than zfs_deadman_ziotime_ms, then + the operation is considered to be "hung". If + zfs_deadman_enabled is set, then the deadman behavior is + invoked as described by zfs_deadman_failmode. By + default, the deadman is enabled and set to wait which + results in "hung" I/O operations only being logged. The deadman + is automatically disabled when a pool gets suspended.
+
=wait + (charp)
+
Controls the failure behavior when the deadman detects a "hung" + I/O operation. Valid values are: +
+
+
+
Wait for a "hung" operation to complete. For each + "hung" operation a "deadman" event will be posted + describing that operation.
+
+
Attempt to recover from a "hung" operation by re-dispatching + it to the I/O pipeline if possible.
+
+
Panic the system. This can be used to facilitate automatic fail-over + to a properly configured fail-over partner.
+
+
+
+
=ms + (1 min) (u64)
+
Check time in milliseconds. This defines the frequency at which we check + for hung I/O requests and potentially invoke the + zfs_deadman_failmode behavior.
+
=600000ms + (10 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and also the + interval after which a pool sync operation is considered to be + "hung". Once this limit is exceeded the deadman will be invoked + every zfs_deadman_checktime_ms milliseconds until the + pool sync completes.
+
=ms + (5 min) (u64)
+
Interval in milliseconds after which the deadman is triggered and an + individual I/O operation is considered to be "hung". As long as + the operation remains "hung", the deadman will be invoked every + zfs_deadman_checktime_ms milliseconds until the + operation completes.
+
=0|1 + (int)
+
Enable prefetching dedup-ed blocks which are going to be freed.
+
=60% + (uint)
+
Start to delay each transaction once there is this amount of dirty data, + expressed as a percentage of zfs_dirty_data_max. This + value should be at least + zfs_vdev_async_write_active_max_dirty_percent. + See + ZFS TRANSACTION + DELAY.
+
=500000 + (int)
+
This controls how quickly the transaction delay approaches infinity. + Larger values cause longer delays for a given amount of dirty data. +

For the smoothest delay, this value should be about 1 billion + divided by the maximum number of operations per second. This will + smoothly handle between ten times and a tenth of this number. + See + ZFS TRANSACTION + DELAY.

+

zfs_delay_scale × zfs_dirty_data_max must be smaller than 2^64.

+
+
=0|1 + (int)
+
Disables requirement for IVset GUIDs to be present and match when doing a + raw receive of encrypted datasets. Intended for users whose pools were + created with OpenZFS pre-release versions and now have compatibility + issues.
+
= + (4*10^8) (ulong)
+
Maximum number of uses of a single salt value before generating a new one + for encrypted datasets. The default value is also the maximum.
+
=64 + (uint)
+
Size of the znode hashtable used for holds. +

Due to the need to hold locks on objects that may not exist + yet, kernel mutexes are not created per-object and instead a hashtable + is used where collisions will result in objects waiting when there is + not actually contention on the same object.

+
+
=20/s + (int)
+
Rate limit delay and deadman zevents (which report slow I/O operations) to + this many per second.
+
=1073741824B + (1 GiB) (u64)
+
Upper-bound limit for unflushed metadata changes to be held by the log + spacemap in memory, in bytes.
+
=1000ppm + (0.1%) (u64)
+
Part of overall system memory that ZFS allows to be used for unflushed + metadata changes by the log spacemap, in millionths.
+
=131072 + (128k) (u64)
+
Describes the maximum number of log spacemap blocks allowed for each pool. + The default value means that the space in all the log spacemaps can add up + to no more than 131072 blocks (which means + 16 GiB of logical space before compression and ditto + blocks, assuming that blocksize is 128 KiB). +

This tunable is important because it involves a trade-off + between import time after an unclean export and the frequency of + flushing metaslabs. The higher this number is, the more log blocks we + allow when the pool is active which means that we flush metaslabs less + often and thus decrease the number of I/O operations for spacemap + updates per TXG. At the same time though, that means that in the event + of an unclean export, there will be more log spacemap blocks for us to + read, inducing overhead in the import time of the pool. The lower the + number, the amount of flushing increases, destroying log blocks quicker + as they become obsolete faster, which leaves less blocks to be read + during import time after a crash.

+

Each log spacemap block existing during pool import leads to + approximately one extra logical I/O issued. This is the reason why this + tunable is exposed in terms of blocks rather than space used.

+
+
=1000 + (u64)
+
If the number of metaslabs is small and our incoming rate is high, we + could get into a situation that we are flushing all our metaslabs every + TXG. Thus we always allow at least this many log blocks.
+
=% + (u64)
+
Tunable used to determine the number of blocks that can be used for the + spacemap log, expressed as a percentage of the total number of unflushed + metaslabs in the pool.
+
=1000 + (u64)
+
Tunable limiting maximum time in TXGs any metaslab may remain unflushed. + It effectively limits maximum number of unflushed per-TXG spacemap logs + that need to be read after unclean pool export.
+ +
When enabled, files will not be asynchronously removed from the list of + pending unlinks and the space they consume will be leaked. Once this + option has been disabled and the dataset is remounted, the pending unlinks + will be processed and the freed space returned to the pool. This option is + used by the test suite.
+
= + (ulong)
+
This is the used to define a large file for the purposes of deletion. + Files containing more than zfs_delete_blocks will be + deleted asynchronously, while smaller files are deleted synchronously. + Decreasing this value will reduce the time spent in an + unlink(2) system call, at the expense of a longer delay + before the freed space is available. This only applies on Linux.
+
= + (int)
+
Determines the dirty space limit in bytes. Once this limit is exceeded, + new writes are halted until space frees up. This parameter takes + precedence over zfs_dirty_data_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to + , + capped at zfs_dirty_data_max_max.

+
+
= + (int)
+
Maximum allowable value of zfs_dirty_data_max, expressed + in bytes. This limit is only enforced at module load time, and will be + ignored if zfs_dirty_data_max is later changed. This + parameter takes precedence over + zfs_dirty_data_max_max_percent. + See + ZFS TRANSACTION DELAY. +

Defaults to min(physical_ram/4, 4GiB), or + min(physical_ram/4, 1GiB) for 32-bit systems.

+
+
=25% + (uint)
+
Maximum allowable value of zfs_dirty_data_max, expressed + as a percentage of physical RAM. This limit is only enforced at module + load time, and will be ignored if zfs_dirty_data_max is + later changed. The parameter zfs_dirty_data_max_max + takes precedence over this one. See + ZFS TRANSACTION + DELAY.
+
=10% + (uint)
+
Determines the dirty space limit, expressed as a percentage of all memory. + Once this limit is exceeded, new writes are halted until space frees up. + The parameter zfs_dirty_data_max takes precedence over + this one. See + ZFS TRANSACTION DELAY. +

Subject to zfs_dirty_data_max_max.

+
+
=20% + (uint)
+
Start syncing out a transaction group if there's at least this much dirty + data (as a percentage of zfs_dirty_data_max). This + should be less than + zfs_vdev_async_write_active_min_dirty_percent.
+
= + (int)
+
The upper limit of write-transaction zil log data size in bytes. Write + operations are throttled when approaching the limit until log data is + cleared out after transaction group sync. Because of some overhead, it + should be set at least 2 times the size of + zfs_dirty_data_max to prevent harming + normal write throughput. It also should be smaller than the size of + the slog device if slog is present. +

Defaults to +

+
+
=% + (uint)
+
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be + preallocated for a file in order to guarantee that later writes will not + run out of space. Instead, fallocate(2) space + preallocation only checks that sufficient space is currently available in + the pool or the user's project quota allocation, and then creates a sparse + file of the requested size. The requested space is multiplied by + zfs_fallocate_reserve_percent to allow additional space + for indirect blocks and other internal metadata. Setting this to + 0 disables support for fallocate(2) + and causes it to return + .
+
=fastest + (string)
+
Select a fletcher 4 implementation. +

Supported selectors are: fastest, + scalar, sse2, + , + avx2, + , + , + and + . + All except fastest and + scalar require instruction set extensions to be + available, and will only appear if ZFS detects that they are present at + runtime. If multiple implementations of fletcher 4 are available, the + fastest will be chosen using a micro benchmark. + Selecting scalar results in the original CPU-based + calculation being used. Selecting any option other than + fastest or + scalar results in vector instructions from the + respective CPU instruction set being used.

+
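As a hedged example (the tunable's name is not shown above; it is assumed here to be zfs_fletcher_4_impl), the implementation can be listed and switched at runtime:

    # List implementations detected at runtime; the active one is shown in brackets.
    cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
    # Switch to AVX2, only valid if it appears in the list above.
    echo avx2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl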
+
=1|0 + (int)
+
Enable the experimental block cloning feature. If this setting is 0, then + even if feature@block_cloning is enabled, attempts to clone blocks will + act as though the feature is disabled.
+
=0|1 + (int)
+
When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to + be written to disk. This allows the clone operation to reliably succeed + when a file is modified and then immediately cloned. For small files this + may be slower than making a copy of the file. Therefore, this setting + defaults to 0 which causes a clone operation to immediately fail when + encountering a dirty block.
+
=fastest + (string)
+
Select a BLAKE3 implementation. +

Supported selectors are: cycle, + fastest, generic, + sse2, + , + avx2, + . + All except cycle, fastest + and generic require + instruction set extensions to be available, and will only appear if ZFS + detects that they are present at runtime. If multiple implementations of + BLAKE3 are available, the fastest will be chosen using a + micro benchmark. You can see the benchmark results by reading this + kstat file: + /proc/spl/kstat/zfs/chksum_bench.

+
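To see which implementation the benchmark preferred, the kstat file named above can be read directly; selecting an implementation is sketched under the assumption that the tunable is exposed as zfs_blake3_impl:

    cat /proc/spl/kstat/zfs/chksum_bench                     # benchmark results mentioned above
    echo sse2 > /sys/module/zfs/parameters/zfs_blake3_impl   # assumed tunable name; only if sse2 is listed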
+
=1|0 + (int)
+
Enable/disable the processing of the free_bpobj object.
+
=UINT64_MAX + (unlimited) (u64)
+
Maximum number of blocks freed in a single TXG.
+
= + (10^5) (u64)
+
Maximum number of dedup blocks freed in a single TXG.
+
=3 + (uint)
+
Maximum asynchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum asynchronous read I/O operation active to each device. + See ZFS + I/O SCHEDULER.
+
=60% + (uint)
+
When the pool has more than this much dirty data, use + zfs_vdev_async_write_max_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=30% + (uint)
+
When the pool has less than this much dirty data, use + zfs_vdev_async_write_min_active to limit active async + writes. If the dirty data is between the minimum and maximum, the active + I/O limit is linearly interpolated. See + ZFS I/O SCHEDULER.
+
=10 + (uint)
+
Maximum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Minimum asynchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER. +

Lower values are associated with better latency on rotational + media but poorer resilver performance. The default value of + 2 was chosen as a compromise. A value of + 3 has been shown to improve resilver performance + further at a cost of further increasing latency.

+
+
=1 + (uint)
+
Maximum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum initializing I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
The maximum number of I/O operations active to each device. Ideally, this + will be at least the sum of each queue's max_active. + See ZFS + I/O SCHEDULER.
+
=1000 + (uint)
+
Timeout value to wait before determining a device is missing during + import. This is helpful for transient missing paths due to links being + briefly removed and recreated in response to udev events.
+
=3 + (uint)
+
Maximum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum sequential resilver I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum removal I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum scrub I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous read I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Maximum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=10 + (uint)
+
Minimum synchronous write I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=2 + (uint)
+
Maximum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=1 + (uint)
+
Minimum trim/discard I/O operations active to each device. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
For non-interactive I/O (scrub, resilver, removal, initialize and + rebuild), the number of concurrently-active I/O operations is limited to + , + unless the vdev is "idle". When there are no interactive I/O + operations active (synchronous or otherwise), and + zfs_vdev_nia_delay operations have completed since the + last interactive operation, then the vdev is considered to be + "idle", and the number of concurrently-active non-interactive + operations is increased to zfs_*_max_active. + See ZFS + I/O SCHEDULER.
+
=5 + (uint)
+
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent + random I/O latency reaches several seconds. On some HDDs this happens even + if sequential I/O operations are submitted one at a time, and so setting + zfs_*_max_active= 1 does not help. To + prevent non-interactive I/O, like scrub, from monopolizing the device, no + more than zfs_vdev_nia_credit operations can be sent + while there are outstanding incomplete interactive operations. This + enforced wait ensures the HDD services the interactive I/O within a + reasonable amount of time. See + ZFS I/O SCHEDULER.
+
=1000% + (uint)
+
Maximum number of queued allocations per top-level vdev expressed as a + percentage of zfs_vdev_async_write_max_active, which + allows the system to detect devices that are more capable of handling + allocations and to allocate more blocks to those devices. This allows for + dynamic allocation distribution when devices are imbalanced, as fuller + devices will tend to be slower than empty devices. +

Also see zio_dva_throttle_enabled.

+
+
=32 + (uint)
+
Default queue depth for each vdev IO allocator. Higher values allow for + better coalescing of sequential writes before sending them to the disk, + but can increase transaction commit times.
+
=1 + (uint)
+
Defines if the driver should retire on a given error type. The following + options may be bitwise-ored together: + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Name        Description
1       Device      No driver retries on device errors.
2       Transport   No driver retries on transport errors.
4       Driver      No driver retries on driver errors.
+
+
=s + (int)
+
Time before expiring .zfs/snapshot.
+
=0|1 + (int)
+
Allow the creation, removal, or renaming of entries in the + + directory to cause the creation, destruction, or renaming of snapshots. + When enabled, this functionality works both locally and over NFS exports + which have the + + option set.
+
=0 + (int)
+
Set additional debugging flags. The following flags may be bitwise-ored + together: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Value   Name                          Description
1       ZFS_DEBUG_DPRINTF             Enable dprintf entries in the debug log.
*2      ZFS_DEBUG_DBUF_VERIFY         Enable extra dbuf verifications.
*4      ZFS_DEBUG_DNODE_VERIFY        Enable extra dnode verifications.
8       ZFS_DEBUG_SNAPNAMES           Enable snapshot name verification.
*16     ZFS_DEBUG_MODIFY              Check for illegally modified ARC buffers.
64      ZFS_DEBUG_ZIO_FREE            Enable verification of block frees.
128     ZFS_DEBUG_HISTOGRAM_VERIFY    Enable extra spacemap histogram verifications.
256     ZFS_DEBUG_METASLAB_VERIFY     Verify space accounting on disk matches in-memory range_trees.
512     ZFS_DEBUG_SET_ERROR           Enable SET_ERROR and dprintf entries in the debug log.
1024    ZFS_DEBUG_INDIRECT_REMAP      Verify split blocks created by device removal.
2048    ZFS_DEBUG_TRIM                Verify TRIM ranges are always within the allocatable range tree.
4096    ZFS_DEBUG_LOG_SPACEMAP        Verify that the log summary is consistent with the spacemap log and enable zfs_dbgmsgs for metaslab loading and flushing.
* Requires debug build.
+
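Since the flags are bitwise-ored, several can be combined in one write; the sketch below assumes the tunable is exposed as zfs_flags and uses values from the table above:

    # Enable ZFS_DEBUG_DPRINTF (1), ZFS_DEBUG_SNAPNAMES (8) and ZFS_DEBUG_SET_ERROR (512).
    echo $(( 1 | 8 | 512 )) > /sys/module/zfs/parameters/zfs_flags   # writes 521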
=0 + (uint)
+
Enables btree verification. The following settings are cumulative:
Value   Description
1       Verify height.
2       Verify pointers from children to parent.
3       Verify element counts.
4       Verify element order. (expensive)
*5      Verify unused memory is poisoned. (expensive)
+ * Requires debug build.
+
=0|1 + (int)
+
If destroy encounters an EIO while reading metadata + (e.g. indirect blocks), space referenced by the missing metadata can not + be freed. Normally this causes the background destroy to become + "stalled", as it is unable to make forward progress. While in + this stalled state, all remaining space to free from the + error-encountering filesystem is "temporarily leaked". Set this + flag to cause it to ignore the EIO, permanently leak the + space from indirect blocks that can not be read, and continue to free + everything else that it can. +

The default "stalling" behavior is useful if the + storage partially fails (i.e. some but not all I/O operations fail), and + then later recovers. In this case, we will be able to continue pool + operations while it is partially failed, and when it recovers, we can + continue to free the space, with no leaks. Note, however, that this case + is actually fairly rare.

+

Typically pools either

+
    +
1. fail completely (but perhaps temporarily, e.g. due to a top-level vdev going offline), or
2. have localized, permanent errors (e.g. disk returns the wrong data due to bit flip or firmware bug).
+ In the former case, this setting does not matter because the pool will be + suspended and the sync thread will not be able to make forward progress + regardless. In the latter, because the error is permanent, the best we can + do is leak the minimum amount of space, which is what setting this flag + will do. It is therefore reasonable for this flag to normally be set, but + we chose the more conservative approach of not setting it, so that there + is no possibility of leaking space in the "partial temporary" + failure case.
+
=1000ms + (1s) (uint)
+
During a zfs destroy + operation using the + + feature, a minimum of this much time will be spent working on freeing + blocks per TXG.
+
=500ms + (uint)
+
Similar to zfs_free_min_time_ms, but for cleanup of old + indirection records for removed vdevs.
+
=32768B + (32 KiB) (s64)
+
Largest data block to write to the ZIL. Larger blocks will be treated as + if the dataset being written to had the + = + property set.
+
= + (0xDEADBEEFDEADBEEE) (u64)
+
Pattern written to vdev free space by + zpool-initialize(8).
+
=1048576B + (1 MiB) (u64)
+
Size of writes used by zpool-initialize(8). This option + is used by the test suite.
+
=500000 + (5*10^5) (u64)
+
The threshold size (in block pointers) at which we create a new + sub-livelist. Larger sublists are more costly from a memory perspective + but the fewer sublists there are, the lower the cost of insertion.
+
=75% + (int)
+
If the amount of shared space between a snapshot and its clone drops below this threshold, the clone turns off the livelist and reverts to the old deletion method. This is in place because livelists no longer give us a benefit once a clone has been overwritten enough.
+
=0 + (int)
+
Incremented each time an extra ALLOC blkptr is added to a livelist entry + while it is being condensed. This option is used by the test suite to + track race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the synctask — + spa_livelist_condense_sync(). This option is used + by the test suite to trigger race conditions.
+
=0 + (int)
+
Incremented each time livelist condensing is canceled while in + (). + This option is used by the test suite to track race conditions.
+
=0|1 + (int)
+
When set, the livelist condense process pauses indefinitely before + executing the open context condensing work in + spa_livelist_condense_cb(). This option is used by + the test suite to trigger race conditions.
+
= + (10^8) (u64)
+
The maximum execution time limit that can be set for a ZFS channel + program, specified as a number of Lua instructions.
+
= + (100 MiB) (u64)
+
The maximum memory limit that can be set for a ZFS channel program, + specified in bytes.
+
=50 + (int)
+
The maximum depth of nested datasets. This value can be tuned temporarily + to fix existing datasets that exceed the predefined limit.
+
=5 + (u64)
+
The number of past TXGs that the flushing algorithm of the log spacemap + feature uses to estimate incoming log blocks.
+
=10 + (u64)
+
Maximum number of rows allowed in the summary of the spacemap log.
+
=16777216 + (16 MiB) (uint)
+
We currently support block sizes from 512 (512 B) + to 16777216 (16 MiB). The + benefits of larger blocks, and thus larger I/O, need to be weighed against + the cost of COWing a giant block to modify one byte. Additionally, very + large blocks can have an impact on I/O latency, and also potentially on + the memory allocator. Therefore, we formerly forbade creating blocks + larger than 1M. Larger blocks could be created by changing it, and pools + with larger blocks can always be imported and used, regardless of this + setting.
+
=0|1 + (int)
+
Allow datasets received with redacted send/receive to be mounted. Normally + disabled because these datasets may be missing key data.
+
=1 + (u64)
+
Minimum number of metaslabs to flush per dirty TXG.
+
=% + (uint)
+
Allow metaslabs to keep their active state as long as their fragmentation + percentage is no more than this value. An active metaslab that exceeds + this threshold will no longer keep its active status allowing better + metaslabs to be selected.
+
=% + (uint)
+
Metaslab groups are considered eligible for allocations if their + fragmentation metric (measured as a percentage) is less than or equal to + this value. If a metaslab group exceeds this threshold then it will be + skipped unless all metaslab groups within the metaslab class have also + crossed this threshold.
+
=0% + (uint)
+
Defines a threshold at which metaslab groups should be eligible for + allocations. The value is expressed as a percentage of free space beyond + which a metaslab group is always eligible for allocations. If a metaslab + group's free space is less than or equal to the threshold, the allocator + will avoid allocating to that group unless all groups in the pool have + reached the threshold. Once all groups have reached the threshold, all + groups are allowed to accept allocations. The default value of + 0 disables the feature and causes all metaslab groups to + be eligible for allocations. +

This parameter allows one to deal + with pools having heavily imbalanced vdevs such as would be the case + when a new vdev has been added. Setting the threshold to a non-zero + percentage will stop allocations from being made to vdevs that aren't + filled to the specified percentage and allow lesser filled vdevs to + acquire more allocations than they otherwise would under the old + + facility.

+
+
=1|0 + (int)
+
If enabled, ZFS will place DDT data into the special allocation + class.
+
=1|0 + (int)
+
If enabled, ZFS will place user data indirect blocks into the special + allocation class.
+
=0 + (uint)
+
Historical statistics for this many latest multihost updates will be + available in + /proc/spl/kstat/zfs/pool/multihost.
+
=1000ms + (1 s) (u64)
+
Used to control the frequency of multihost writes which are performed when + the + + pool property is on. This is one of the factors used to determine the + length of the activity check during import. +

The multihost write period is + zfs_multihost_interval / + . + On average a multihost write will be issued for each leaf vdev every + zfs_multihost_interval milliseconds. In practice, the + observed period can vary with the I/O load and this observed value is + the delay which is stored in the uberblock.

+
+
=20 + (uint)
+
Used to control the duration of the activity test on import. Smaller + values of zfs_multihost_import_intervals will reduce the + import time but increase the risk of failing to detect an active pool. The + total activity check time is never allowed to drop below one second. +

On import the activity check waits a minimum amount of time + determined by zfs_multihost_interval + × + zfs_multihost_import_intervals, or the same product + computed on the host which last had the pool imported, whichever is + greater. The activity check time may be further extended if the value of + MMP delay found in the best uberblock indicates actual multihost updates + happened at longer intervals than + zfs_multihost_interval. A minimum of 100 + ms is enforced.

+

0 is equivalent to + 1.

+
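As a quick sanity check of the product described above, with an interval of 1000 ms and 20 import intervals the minimum activity check works out to roughly:

    echo "$(( 1000 * 20 )) ms"   # 20000 ms = 20 s; never allowed to drop below 1 s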
+
=10 + (uint)
+
Controls the behavior of the pool when multihost write failures or delays + are detected. +

When 0, multihost write failures or delays + are ignored. The failures will still be reported to the ZED which + depending on its configuration may take action such as suspending the + pool or offlining a device.

+

Otherwise, the pool will be suspended if + zfs_multihost_fail_intervals + × + zfs_multihost_interval milliseconds pass without a + successful MMP write. This guarantees the activity test will see MMP + writes if the pool is imported. 1 is + equivalent to 2; this is necessary to prevent + the pool from being suspended due to normal, small I/O latency + variations.

+
+
=0|1 + (int)
+
Set to disable scrub I/O. This results in scrubs not actually scrubbing + data and simply doing a metadata crawl of the pool instead.
+
=0|1 + (int)
+
Set to disable block prefetching for scrubs.
+
=0|1 + (int)
+
Disable cache flush operations on disks when writing. Setting this will + cause pool corruption on power loss if a volatile out-of-order write cache + is enabled.
+
=1|0 + (int)
+
Allow no-operation writes. The occurrence of nopwrites will further depend + on other pool properties (i.a. the checksumming and compression + algorithms).
+
=1|0 + (int)
+
Enable forcing TXG sync to find holes. When enabled forces ZFS to sync + data when + + or + + flags are used allowing holes in a file to be accurately reported. When + disabled holes will not be reported in recently dirtied files.
+
=B + (50 MiB) (int)
+
The number of bytes which should be prefetched during a pool traversal, + like zfs send or other + data crawling operations.
+
=32 + (uint)
+
The number of blocks pointed by indirect (non-L0) block which should be + prefetched during a pool traversal, like zfs + send or other data crawling operations.
+
=30% + (u64)
+
Control percentage of dirtied indirect blocks from frees allowed into one + TXG. After this threshold is crossed, additional frees will wait until the + next TXG. 0 disables this + throttle.
+
=0|1 + (int)
+
Disable predictive prefetch. Note that it leaves "prescient" + prefetch (for, e.g., zfs + send) intact. Unlike predictive prefetch, + prescient prefetch never issues I/O that ends up not being needed, so it + can't hurt performance.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for SHA256 checksums. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for gzip compression. May be unset after + the ZFS modules have been loaded to initialize the QAT hardware as long as + support is compiled in and the QAT driver is present.
+
=0|1 + (int)
+
Disable QAT hardware acceleration for AES-GCM encryption. May be unset + after the ZFS modules have been loaded to initialize the QAT hardware as + long as support is compiled in and the QAT driver is present.
+
=1048576B + (1 MiB) (u64)
+
Bytes to read per chunk.
+
=0 + (uint)
+
Historical statistics for this many latest reads will be available in + /proc/spl/kstat/zfs/pool/reads.
+
=0|1 + (int)
+
Include cache hits in read history
+
=1048576B + (1 MiB) (u64)
+
Maximum read segment size to issue when sequentially resilvering a + top-level vdev.
+
=1|0 + (int)
+
Automatically start a pool scrub when the last active sequential resilver + completes in order to verify the checksums of all blocks which have been + resilvered. This is enabled by default and strongly recommended.
+
=67108864B + (64 MiB) (u64)
+
Maximum amount of I/O that can be concurrently issued for a sequential + resilver per leaf device, given in bytes.
+
=4096 + (int)
+
If an indirect split block contains more than this many possible unique + combinations when being reconstructed, consider it too computationally + expensive to check them all. Instead, try at most this many randomly + selected combinations each time the block is accessed. This allows all + segment copies to participate fairly in the reconstruction when all + combinations cannot be checked and prevents repeated use of one bad + copy.
+
=0|1 + (int)
+
Set to attempt to recover from fatal errors. This should only be used as a + last resort, as it typically results in leaked space, or worse.
+
=0|1 + (int)
+
Ignore hard I/O errors during device removal. When set, if a device + encounters a hard I/O error during the removal process the removal will + not be cancelled. This can result in a normally recoverable block becoming + permanently damaged and is hence not recommended. This should only be used + as a last resort when the pool cannot be returned to a healthy state prior + to removing the device.
+
=0|1 + (uint)
+
This is used by the test suite so that it can ensure that certain actions + happen while in the middle of a removal.
+
=16777216B + (16 MiB) (uint)
+
The largest contiguous segment that we will attempt to allocate when + removing a device. If there is a performance problem with attempting to + allocate large blocks, consider decreasing this. The default value is also + the maximum.
+
=0|1 + (int)
+
Ignore the + + feature, causing an operation that would start a resilver to immediately + restart the one in progress.
+
=ms + (3 s) (uint)
+
Resilvers are processed by the sync thread. While resilvering, it will + spend at least this much time working on a resilver between TXG + flushes.
+
=0|1 + (int)
+
If set, remove the DTL (dirty time list) upon completion of a pool scan + (scrub), even if there were unrepairable errors. Intended to be used + during pool repair or recovery to stop resilvering when the pool is next + imported.
+
=1000ms + (1 s) (uint)
+
Scrubs are processed by the sync thread. While scrubbing, it will spend at + least this much time working on a scrub between TXG flushes.
+
=4096 + (uint)
+
Error blocks to be scrubbed in one txg.
+
=s + (2 hour) (uint)
+
To preserve progress across reboots, the sequential scan algorithm + periodically needs to stop metadata scanning and issue all the + verification I/O to disk. The frequency of this flushing is determined by + this tunable.
+
=3 + (uint)
+
This tunable affects how scrub and resilver I/O segments are ordered. A + higher number indicates that we care more about how filled in a segment + is, while a lower number indicates we care more about the size of the + extent without considering the gaps within a segment. This value is only + tunable upon module insertion. Changing the value afterwards will have no + effect on scrub or resilver performance.
+
=0 + (uint)
+
Determines the order that data will be verified while scrubbing or + resilvering: +
+
+
+
Data will be verified as sequentially as possible, given the amount of + memory reserved for scrubbing (see + zfs_scan_mem_lim_fact). This may improve scrub + performance if the pool's data is very fragmented.
+
+
The largest mostly-contiguous chunk of found data will be verified + first. By deferring scrubbing of small segments, we may later find + adjacent data to coalesce and increase the segment size.
+
+
Use strategy 1 during normal verification and strategy 2 while taking a checkpoint.
+
+
+
+
=0|1 + (int)
+
If unset, indicates that scrubs and resilvers will gather metadata in + memory before issuing sequential I/O. Otherwise indicates that the legacy + algorithm will be used, where I/O is initiated as soon as it is + discovered. Unsetting will not affect scrubs or resilvers that are already + in progress.
+
=B + (2 MiB) (int)
+
Sets the largest gap in bytes between scrub/resilver I/O operations that + will still be considered sequential for sorting purposes. Changing this + value will not affect scrubs or resilvers that are already in + progress.
+
=20^-1 + (uint)
+
Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. + This tunable determines the hard limit for I/O sorting memory usage. When + the hard limit is reached we stop scanning metadata and start issuing data + verification I/O. This is done until we get below the soft limit.
+
=20^-1 + (uint)
+
The fraction of the hard limit used to determine the soft limit for I/O sorting by the sequential scan algorithm. When we cross this limit from below, no action is taken. When we cross this limit from above, it is because we are issuing verification I/O. In this case (unless the metadata scan is done) we stop issuing verification I/O and start scanning metadata again until we get to the hard limit.
+
=0|1 + (uint)
+
When reporting resilver throughput and estimated completion time use the + performance observed over roughly the last + zfs_scan_report_txgs TXGs. When set to zero performance + is calculated over the time between checkpoints.
+
=0|1 + (int)
+
Enforce tight memory limits on pool scans when a sequential scan is in + progress. When disabled, the memory limit may be exceeded by fast + disks.
+
=0|1 + (int)
+
Freezes a scrub/resilver in progress without actually pausing it. Intended + for testing/debugging.
+
=16777216B + (16 MiB) (int)
+
Maximum amount of data that can be concurrently issued at once for scrubs + and resilvers per leaf device, given in bytes.
+
=0|1 + (int)
+
Allow sending of corrupt data (ignore read/checksum errors when + sending).
+
=1|0 + (int)
+
Include unmodified spill blocks in the send stream. Under certain + circumstances, previous versions of ZFS could incorrectly remove the spill + block from an existing object. Including unmodified copies of the spill + blocks creates a backwards-compatible stream which will recreate a spill + block if it was incorrectly removed.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send internal queues. The fill fraction controls + the timing with which internal threads are woken up.
+
=1048576B + (1 MiB) (uint)
+
The maximum number of bytes allowed in zfs + send's internal queues.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + send prefetch queue. The fill fraction controls + the timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed that will be prefetched by + zfs send. This value must + be at least twice the maximum block size in use.
+
=20^-1 + (uint)
+
The fill fraction of the zfs + receive queue. The fill fraction controls the + timing with which internal threads are woken up.
+
=16777216B + (16 MiB) (uint)
+
The maximum number of bytes allowed in the zfs + receive queue. This value must be at least twice + the maximum block size in use.
+
=1048576B + (1 MiB) (uint)
+
The maximum amount of data, in bytes, that zfs + receive will write in one DMU transaction. This is + the uncompressed size, even when receiving a compressed send stream. This + setting will not reduce the write size below a single block. Capped at a + maximum of 32 MiB.
+
=0 + (int)
+
When this variable is set to non-zero a corrective receive: +
    +
1. Does not enforce the restriction of source & destination snapshot GUIDs matching.
2. If there is an error during healing, the healing receive is not terminated; instead, it moves on to the next record.
+
+
=0|1 + (uint)
+
Setting this variable overrides the default logic for estimating block + sizes when doing a zfs + send. The default heuristic is that the average + block size will be the current recordsize. Override this value if most + data in your dataset is not of that size and you require accurate zfs send + size estimates.
+
=2 + (uint)
+
Flushing of data to disk is done in passes. Defer frees starting in this + pass.
+
=16777216B + (16 MiB) (int)
+
Maximum memory used for prefetching a checkpoint's space map on each vdev + while discarding the checkpoint.
+
=25% + (uint)
+
Only allow small data blocks to be allocated on the special and dedup vdev + types when the available free space percentage on these vdevs exceeds this + value. This ensures reserved space is available for pool metadata as the + special vdevs approach capacity.
+
=8 + (uint)
+
Starting in this sync pass, disable compression (including of metadata). + With the default setting, in practice, we don't have this many sync + passes, so this has no effect. +

The original intent was that disabling compression would help + the sync passes to converge. However, in practice, disabling compression + increases the average number of sync passes; because when we turn + compression off, many blocks' size will change, and thus we have to + re-allocate (not overwrite) them. It also increases the number of + 128 KiB allocations (e.g. for indirect blocks and + spacemaps) because these will not be compressed. The 128 + KiB allocations are especially detrimental to performance on highly + fragmented systems, which may have very few free segments of this size, + and may need to load new metaslabs to satisfy these allocations.

+
+
=2 + (uint)
+
Rewrite new block pointers starting in this pass.
+
=75% + (int)
+
This controls the number of threads used by + . + The default value of + will + create a maximum of one thread per CPU.
+
=134217728B + (128 MiB) (uint)
+
Maximum size of TRIM command. Larger ranges will be split into chunks no + larger than this value before issuing.
+
=32768B + (32 KiB) (uint)
+
Minimum size of TRIM commands. TRIM ranges smaller than this will be + skipped, unless they're part of a larger range which was chunked. This is + done because it's common for these small TRIMs to negatively impact + overall performance.
+
=0|1 + (uint)
+
Skip uninitialized metaslabs during the TRIM process. This option is + useful for pools constructed from large thinly-provisioned devices where + TRIM operations are slow. As a pool ages, an increasing fraction of the + pool's metaslabs will be initialized, progressively degrading the + usefulness of this option. This setting is stored when starting a manual + TRIM and will persist for the duration of the requested TRIM.
+
=10 + (uint)
+
Maximum number of queued TRIMs outstanding per leaf vdev. The number of + concurrent TRIM commands issued to the device is controlled by + zfs_vdev_trim_min_active and + zfs_vdev_trim_max_active.
+
=32 + (uint)
+
The number of transaction groups' worth of frees which should be + aggregated before TRIM operations are issued to the device. This setting + represents a trade-off between issuing larger, more efficient TRIM + operations and the delay before the recently trimmed space is available + for use by the device. +

Increasing this value will allow frees to be aggregated for a longer time. This will result in larger TRIM operations and potentially increased memory usage. Decreasing this value will have the opposite effect. The default of 32 was determined to be a reasonable compromise.

+
+
=0 + (uint)
+
Historical statistics for this many latest TXGs will be available in + /proc/spl/kstat/zfs/pool/TXGs.
+
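For example, the per-pool history can be read from the path given above, taken verbatim from this page ("tank" is a placeholder pool name, and the history length must be non-zero for entries to appear):

    cat /proc/spl/kstat/zfs/tank/TXGs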
=5s + (uint)
+
Flush dirty data to disk at least every this many seconds (maximum TXG + duration).
+
=1048576B + (1 MiB) (uint)
+
Max vdev I/O aggregation size.
+
=131072B + (128 KiB) (uint)
+
Max vdev I/O aggregation size for non-rotating media.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation, for the purpose of selecting the least busy mirror member, when an I/O operation immediately follows its predecessor on rotational vdevs.
+
=5 + (int)
+
A number by which the balancing algorithm increments the load calculation, for the purpose of selecting the least busy mirror member, when an I/O operation lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. Operations within this window that do not immediately follow the previous operation are incremented by half.
+
=1048576B + (1 MiB) (int)
+
The maximum distance for the last queued I/O operation in which the + balancing algorithm considers an operation to have locality. + See ZFS + I/O SCHEDULER.
+
=0 + (int)
+
A number by which the balancing algorithm increments the load calculation + for the purpose of selecting the least busy mirror member on + non-rotational vdevs when I/O operations do not immediately follow one + another.
+
=1 + (int)
+
A number by which the balancing algorithm increments the load calculation, for the purpose of selecting the least busy mirror member, when an I/O operation lacks locality as defined by zfs_vdev_mirror_rotating_seek_offset. Operations within this window that do not immediately follow the previous operation are incremented by half.
+
=32768B + (32 KiB) (uint)
+
Aggregate read I/O operations if the on-disk gap between them is within + this threshold.
+
=4096B + (4 KiB) (uint)
+
Aggregate write I/O operations if the on-disk gap between them is within + this threshold.
+
=fastest + (string)
+
Select the raidz parity implementation to use. +

Variants that don't depend on CPU-specific features may be + selected on module load, as they are supported on all systems. The + remaining options may only be set after the module is loaded, as they + are available only if the implementations are compiled in and supported + on the running system.

+

Once the module is loaded, + /sys/module/zfs/parameters/zfs_vdev_raidz_impl + will show the available options, with the currently selected one + enclosed in square brackets.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
fastest           selected by built-in benchmark
original          original implementation
scalar            scalar implementation
sse2              SSE2 instruction set                   64-bit x86
ssse3             SSSE3 instruction set                  64-bit x86
avx2              AVX2 instruction set                   64-bit x86
avx512f           AVX512F instruction set                64-bit x86
avx512bw          AVX512F & AVX512BW instruction sets    64-bit x86
aarch64_neon      NEON                                   Aarch64/64-bit ARMv8
aarch64_neonx2    NEON with more unrolling               Aarch64/64-bit ARMv8
powerpc_altivec   Altivec                                PowerPC
+
+
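A short usage sketch for the sysfs file named above; the write only succeeds after module load, and only for implementations compiled in and supported on the running system:

    cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl     # selected option shown in [brackets]
    echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl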
+ (charp)
+
. + Prints warning to kernel log for compatibility.
+
=512 + (uint)
+
Max event queue length. Events in the queue can be viewed with + zpool-events(8).
+
=2000 + (int)
+
Maximum recent zevent records to retain for duplicate checking. Setting + this to 0 disables duplicate detection.
+
=s + (15 min) (int)
+
Lifespan for a recent ereport that was retained for duplicate + checking.
+
=1048576 + (int)
+
The maximum number of taskq entries that are allowed to be cached. When + this limit is exceeded transaction records (itxs) will be cleaned + synchronously.
+
= + (int)
+
The number of taskq entries that are pre-populated when the taskq is first + created and are immediately available for use.
+
=100% + (int)
+
This controls the number of threads used by + . + The default value of + + will create a maximum of one thread per cpu.
+
=131072B + (128 KiB) (uint)
+
This sets the maximum block size used by the ZIL. On very fragmented + pools, lowering this (typically to + ) can + improve performance.
+
=B + (7.5 KiB) (uint)
+
This sets the maximum number of write bytes logged via WR_COPIED. It tunes + a tradeoff between additional memory copy and possibly worse log space + efficiency vs additional range lock/unlock.
+
= + (u64)
+
This sets the minimum delay, in nanoseconds, that the ZIL will wait for more records before committing a block. If ZIL writes are too fast, the kernel may not be able to sleep for such a short interval, increasing log latency above that allowed by zfs_commit_timeout_pct.
+
=0|1 + (int)
+
Disable the cache flush commands that are normally sent to disk by the ZIL + after an LWB write has completed. Setting this will cause ZIL corruption + on power loss if a volatile out-of-order write cache is enabled.
+
=0|1 + (int)
+
Disable intent logging replay. Can be disabled for recovery from corrupted + ZIL.
+
=67108864B + (64 MiB) (u64)
+
Limit SLOG write size per commit executed with synchronous priority. Any + writes above that will be executed with lower (asynchronous) priority to + limit potential SLOG device abuse by single active ZIL writer.
+
=1|0 + (int)
+
Setting this tunable to zero disables ZIL logging of new + = + records if the + + feature is enabled on the pool. This would only be necessary to work + around bugs in the ZIL logging or replay code for this record type. The + tunable has no effect if the feature is disabled.
+
=64 + (uint)
+
Usually, one metaslab from each normal-class vdev is dedicated for use by + the ZIL to log synchronous writes. However, if there are fewer than + zfs_embedded_slog_min_ms metaslabs in the vdev, this + functionality is disabled. This ensures that we don't set aside an + unreasonable amount of space for the ZIL.
+
=1 + (uint)
+
Whether the heuristic for detecting incompressible data with zstd levels >= 3, using LZ4 and zstd-1 passes, is enabled.
+
=131072 + (uint)
+
Minimum uncompressed size (inclusive) of a record before the early-abort heuristic will be attempted.
+
=0|1 + (int)
+
If non-zero, the zio deadman will produce debugging messages (see + zfs_dbgmsg_enable) for all zios, rather than only for + leaf zios possessing a vdev. This is meant to be used by developers to + gain diagnostic information for hang conditions which don't involve a + mutex or other locking primitive: typically conditions in which a thread + in the zio pipeline is looping indefinitely.
+
=ms + (30 s) (int)
+
When an I/O operation takes more than this much time to complete, it's + marked as slow. Each slow operation causes a delay zevent. Slow I/O + counters can be seen with zpool + status -s.
+
=1|0 + (int)
+
Throttle block allocations in the I/O pipeline. This allows for dynamic + allocation distribution when devices are imbalanced. When enabled, the + maximum number of pending allocations per top-level vdev is limited by + zfs_vdev_queue_depth_pct.
+
=0|1 + (int)
+
Control the naming scheme used when setting new xattrs in the user + namespace. If 0 (the default on Linux), user namespace + xattr names are prefixed with the namespace, to be backwards compatible + with previous versions of ZFS on Linux. If 1 (the + default on FreeBSD), user namespace xattr names + are not prefixed, to be backwards compatible with previous versions of ZFS + on illumos and FreeBSD. +

Either naming scheme can be read on this and future versions + of ZFS, regardless of this tunable, but legacy ZFS on illumos or + FreeBSD are unable to read user namespace xattrs + written in the Linux format, and legacy versions of ZFS on Linux are + unable to read user namespace xattrs written in the legacy ZFS + format.

+

An existing xattr with the alternate naming scheme is removed + when overwriting the xattr so as to not accumulate duplicates.

+
+
=0|1 + (int)
+
Prioritize requeued I/O.
+
=% + (uint)
+
Percentage of online CPUs which will run a worker thread for I/O. These + workers are responsible for I/O work such as compression and checksum + calculations. Fractional number of CPUs will be rounded down. +

The default value of + was chosen to + avoid using all CPUs which can result in latency issues and inconsistent + application performance, especially when slower compression and/or + checksumming is enabled.

+
+
=0 + (uint)
+
Number of worker threads per taskq. Lower values improve I/O ordering and CPU utilization, while higher values reduce lock contention.

If 0, generate a system-dependent value + close to 6 threads per taskq.

+
+
= (charp)
+
Set the queue and thread configuration for the IO read queues. This is an + advanced debugging parameter. Don't change this unless you understand what + it does.
+
= (charp)
+
Set the queue and thread configuration for the IO write queues. This is an + advanced debugging parameter. Don't change this unless you understand what + it does.
+
=0|1 + (uint)
+
Do not create zvol device nodes. This may slightly improve startup time on + systems with a very large number of zvols.
+
= + (uint)
+
Major number for zvol block devices.
+
= + (long)
+
Discard (TRIM) operations done on zvols will be done in batches of this + many blocks, where block size is determined by the + volblocksize property of a zvol.
+
=131072B + (128 KiB) (uint)
+
When adding a zvol to the system, prefetch this many bytes from the start + and end of the volume. Prefetching these regions of the volume is + desirable, because they are likely to be accessed immediately by + blkid(8) or the kernel partitioner.
+
=0|1 + (uint)
+
When processing I/O requests for a zvol, submit them synchronously. This + effectively limits the queue depth to 1 for each I/O + submitter. When unset, requests are handled asynchronously by a thread + pool. The number of requests which can be handled concurrently is + controlled by zvol_threads. + zvol_request_sync is ignored when running on a kernel + that supports block multiqueue (blk-mq).
+
=0 + (uint)
+
The number of system wide threads to use for processing zvol block IOs. If + 0 (the default) then internally set + zvol_threads to the number of CPUs present or 32 + (whichever is greater).
+
=0 + (uint)
+
The number of threads per zvol to use for queuing IO requests. This + parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time. If 0 (the default) then internally set + zvol_blk_mq_threads to the number of CPUs present.
+
=0|1 + (uint)
+
Set to 1 to use the blk-mq API + for zvols. Set to 0 (the default) to use the legacy zvol + APIs. This setting can give better or worse zvol performance depending on + the workload. This parameter will only appear if your kernel supports + blk-mq and is only read and assigned to a zvol at + zvol load time.
+
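Because the setting is only read when a zvol is loaded, it is typically set before pools containing zvols are imported; a hedged sketch, assuming the tunable is exposed as zvol_use_blk_mq (the name referenced in the next entry) and that the kernel supports blk-mq:

    echo 1 > /sys/module/zfs/parameters/zvol_use_blk_mq            # takes effect for zvols loaded afterwards
    echo "options zfs zvol_use_blk_mq=1" >> /etc/modprobe.d/zfs.conf   # persistent; file name is an assumption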
=8 + (uint)
+
If zvol_use_blk_mq is enabled, then process this number of volblocksize-sized blocks per zvol thread. This tunable can be used to favor better performance for zvol reads (lower values) or writes (higher values). If set to 0, then the zvol layer will process the maximum number of blocks per thread that it can. This parameter will only appear if your kernel supports blk-mq and is only applied at each zvol's load time.
+
=0 + (uint)
+
The queue_depth value for the zvol blk-mq + interface. This parameter will only appear if your kernel supports + blk-mq and is only applied at each zvol's load + time. If 0 (the default) then use the kernel's default + queue depth. Values are clamped to the kernel's + BLKDEV_MIN_RQ and + BLKDEV_MAX_RQ/BLKDEV_DEFAULT_RQ + limits.
+
=1 + (uint)
+
Defines zvol block devices behaviour when + =: + +
+
=0|1 + (uint)
+
Enable strict ZVOL quota enforcement. The strict quota enforcement may + have a performance impact.
+
+
+
+

+

ZFS issues I/O operations to leaf vdevs to satisfy and complete + I/O operations. The scheduler determines when and in what order those + operations are issued. The scheduler divides operations into five I/O + classes, prioritized in the following order: sync read, sync write, async + read, async write, and scrub/resilver. Each queue defines the minimum and + maximum number of concurrent operations that may be issued to the device. In + addition, the device has an aggregate maximum, + zfs_vdev_max_active. Note that the sum of the per-queue + minima must not exceed the aggregate maximum. If the sum of the per-queue + maxima exceeds the aggregate maximum, then the number of active operations + may reach zfs_vdev_max_active, in which case no further + operations will be issued, regardless of whether all per-queue minima have + been met.

+

For many physical devices, throughput increases with the number of + concurrent operations, but latency typically suffers. Furthermore, physical + devices typically have a limit at which more concurrent operations have no + effect on throughput or can actually cause it to decrease.

+

The scheduler selects the next operation to issue by first looking + for an I/O class whose minimum has not been satisfied. Once all are + satisfied and the aggregate maximum has not been hit, the scheduler looks + for classes whose maximum has not been satisfied. Iteration through the I/O + classes is done in the order specified above. No further operations are + issued if the aggregate maximum number of concurrent operations has been + hit, or if there are no operations queued for an I/O class that has not hit + its maximum. Every time an I/O operation is queued or an operation + completes, the scheduler looks for new operations to issue.

+

In general, smaller max_actives will lead to + lower latency of synchronous operations. Larger + max_actives may lead to higher overall throughput, + depending on underlying storage.

+

The ratio of the queues' max_actives determines + the balance of performance between reads, writes, and scrubs. For example, + increasing zfs_vdev_scrub_max_active will cause the scrub + or resilver to complete more quickly, but reads and writes to have higher + latency and lower throughput.

+

All I/O classes have a fixed maximum number of outstanding + operations, except for the async write class. Asynchronous writes represent + the data that is committed to stable storage during the syncing stage for + transaction groups. Transaction groups enter the syncing state periodically, + so the number of queued async writes will quickly burst up and then bleed + down to zero. Rather than servicing them as quickly as possible, the I/O + scheduler changes the maximum number of active async write operations + according to the amount of dirty data in the pool. Since both throughput and + latency typically increase with the number of concurrent operations issued + to physical devices, reducing the burstiness in the number of simultaneous + operations also stabilizes the response time of operations from other + queues, in particular synchronous ones. In broad strokes, the I/O scheduler + will issue more concurrent operations from the async write queue as there is + more dirty data in the pool.

+
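To see the current per-class minima and maxima described above on a running system, the matching module parameters can be listed together (a hedged sketch; the glob relies on the zfs_vdev_*_active naming used throughout this section):

    grep . /sys/module/zfs/parameters/zfs_vdev_*_active    # prints each parameter with its current value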
+

+

The number of concurrent operations issued for the async write I/O + class follows a piece-wise linear function defined by a few adjustable + points:

+
+
       |              o---------| <-- zfs_vdev_async_write_max_active
+  ^    |             /^         |
+  |    |            / |         |
+active |           /  |         |
+ I/O   |          /   |         |
+count  |         /    |         |
+       |        /     |         |
+       |-------o      |         | <-- zfs_vdev_async_write_min_active
+      0|_______^______|_________|
+       0%      |      |       100% of zfs_dirty_data_max
+               |      |
+               |      `-- zfs_vdev_async_write_active_max_dirty_percent
+               `--------- zfs_vdev_async_write_active_min_dirty_percent
+
+

Until the amount of dirty data exceeds a minimum percentage of the + dirty data allowed in the pool, the I/O scheduler will limit the number of + concurrent operations to the minimum. As that threshold is crossed, the + number of concurrent operations issued increases linearly to the maximum at + the specified maximum percentage of the dirty data allowed in the pool.

+

Ideally, the amount of dirty data on a busy pool will stay in the + sloped part of the function between + zfs_vdev_async_write_active_min_dirty_percent and + zfs_vdev_async_write_active_max_dirty_percent. If it + exceeds the maximum percentage, this indicates that the rate of incoming + data is greater than the rate that the backend storage can handle. In this + case, we must further throttle incoming writes, as described in the next + section.

+
+
+
+

+

We delay transactions when we've determined that the backend + storage isn't able to accommodate the rate of incoming writes.

+

If there is already a transaction waiting, we delay relative to + when that transaction will finish waiting. This way the calculated delay + time is independent of the number of threads concurrently executing + transactions.

+

If we are the only waiter, wait relative to when the transaction + started, rather than the current time. This credits the transaction for + "time already served", e.g. reading indirect blocks.

+

The minimum time for a transaction to take is calculated as

+
min_time = min(zfs_delay_scale × (dirty - min) / (max - dirty), 100ms)
+

The delay has two degrees of freedom that can be adjusted via + tunables. The percentage of dirty data at which we start to delay is defined + by zfs_delay_min_dirty_percent. This should typically be + at or above zfs_vdev_async_write_active_max_dirty_percent, + so that we only start to delay after writing at full speed has failed to + keep up with the incoming write rate. The scale of the curve is defined by + zfs_delay_scale. Roughly speaking, this variable + determines the amount of delay at the midpoint of the curve.

+
+
delay
+ 10ms +-------------------------------------------------------------*+
+      |                                                             *|
+  9ms +                                                             *+
+      |                                                             *|
+  8ms +                                                             *+
+      |                                                            * |
+  7ms +                                                            * +
+      |                                                            * |
+  6ms +                                                            * +
+      |                                                            * |
+  5ms +                                                           *  +
+      |                                                           *  |
+  4ms +                                                           *  +
+      |                                                           *  |
+  3ms +                                                          *   +
+      |                                                          *   |
+  2ms +                                              (midpoint) *    +
+      |                                                  |    **     |
+  1ms +                                                  v ***       +
+      |             zfs_delay_scale ---------->     ********         |
+    0 +-------------------------------------*********----------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note that, since the delay is added to the outstanding time remaining on the most recent transaction, it's effectively the inverse of IOPS. Here, the midpoint of 500 us translates to 2000 IOPS. The shape of the curve was chosen such that small changes in the amount of accumulated dirty data in the first three quarters of the curve yield relatively small differences in the amount of delay.

+

The effects can be easier to understand when the amount of delay + is represented on a logarithmic scale:

+
+
delay
+100ms +-------------------------------------------------------------++
+      +                                                              +
+      |                                                              |
+      +                                                             *+
+ 10ms +                                                             *+
+      +                                                           ** +
+      |                                              (midpoint)  **  |
+      +                                                  |     **    +
+  1ms +                                                  v ****      +
+      +             zfs_delay_scale ---------->        *****         +
+      |                                             ****             |
+      +                                          ****                +
+100us +                                        **                    +
+      +                                       *                      +
+      |                                      *                       |
+      +                                     *                        +
+ 10us +                                     *                        +
+      +                                                              +
+      |                                                              |
+      +                                                              +
+      +--------------------------------------------------------------+
+      0%                    <- zfs_dirty_data_max ->               100%
+
+

Note here that only as the amount of dirty data approaches its + limit does the delay start to increase rapidly. The goal of a properly tuned + system should be to keep the amount of dirty data out of that range by first + ensuring that the appropriate limits are set for the I/O scheduler to reach + optimal throughput on the back-end storage, and then by changing the value + of zfs_delay_scale to increase the steepness of the + curve.

+
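A hedged tuning sketch following the guidance above; the exact values are illustrative, not recommendations:

    # Start delaying later and make the curve steeper once delaying begins.
    echo 70 > /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
    echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale   # larger value = more delay at the midpoint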
+
+ + + + + +
July 21, 2023        Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/5/index.html b/man/v2.2/5/index.html new file mode 100644 index 000000000..16c2659b6 --- /dev/null +++ b/man/v2.2/5/index.html @@ -0,0 +1,147 @@ + + + + + + + File Formats and Conventions (5) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

File Formats and Conventions (5)

+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/5/vdev_id.conf.5.html b/man/v2.2/5/vdev_id.conf.5.html new file mode 100644 index 000000000..00c740676 --- /dev/null +++ b/man/v2.2/5/vdev_id.conf.5.html @@ -0,0 +1,367 @@ + + + + + + + vdev_id.conf.5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdev_id.conf.5

+
+ + + + + +
VDEV_ID.CONF(5)        File Formats Manual        VDEV_ID.CONF(5)
+
+
+

+

vdev_id.conf — + configuration file for vdev_id(8)

+
+
+

+

vdev_id.conf is the configuration file for + vdev_id(8). It controls the default behavior of + vdev_id(8) while it is mapping a disk device name to an + alias.

+

The vdev_id.conf file uses a simple format + consisting of a keyword followed by one or more values on a single line. Any + line not beginning with a recognized keyword is ignored. Comments may + optionally begin with a hash character.

+

The following keywords and values are used.

+
+
+ name devlink
+
Maps a device link in the /dev directory hierarchy + to a new device name. The udev rule defining the device link must have run + prior to vdev_id(8). A defined alias takes precedence + over a topology-derived name, but the two naming methods can otherwise + coexist. For example, one might name drives in a JBOD with the + sas_direct topology while naming an internal L2ARC + device with an alias. +

name is the name of the link to the + device that will by created under + /dev/disk/by-vdev.

+

devlink is the name of the device link + that has already been defined by udev. This may be an absolute path or + the base filename.

+
+
+ [pci_slot] port + name
+
Maps a physical path to a channel name (typically representing a single + disk enclosure).
+ +
Additionally create /dev/by-enclosure symlinks to + the disk enclosure + devices + using the naming scheme from vdev_id.conf. + enclosure_symlinks is only allowed for + sas_direct mode.
+ +
Specify the prefix for the enclosure symlinks in the form + /dev/by-enclosure/prefix⟩-⟨channel⟩⟨num⟩ +

Defaults to + “”.

+
+
+ prefix new + [channel]
+
Maps a disk slot number as reported by the operating system to an + alternative slot number. If the channel parameter is + specified then the mapping is only applied to slots in the named channel, + otherwise the mapping is applied to all channels. The first-specified + slot rule that can match a slot takes precedence. + Therefore a channel-specific mapping for a given slot should generally + appear before a generic mapping for the same slot. In this way a custom + mapping may be applied to a particular channel and a default mapping + applied to the others.
+
+ yes|no
+
Specifies whether vdev_id(8) will handle only + dm-multipath devices. If set to yes then + vdev_id(8) will examine the first running component disk + of a dm-multipath device as provided by the driver command to determine + the physical path.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
sas_direct and scsi
channels are uniquely identified by a PCI slot and HBA port number
sas_switch
channels are uniquely identified by a SAS switch port number
+
+
+
+ num
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS + switch port. vdev_id(8) internally uses this value to + determine which HBA or switch port a device is connected to. The default + is .
+
+ bay|phy|port|id|lun|ses
+
Specifies from which element of a SAS identifier the slot number is taken. + The default is bay: +
+
+
read the slot number from the bay identifier.
+
+
read the slot number from the phy identifier.
+
+
use the SAS port as the slot number.
+
+
use the scsi id as the slot number.
+
+
use the scsi lun as the slot number.
+
+
use the SCSI Enclosure Services (SES) enclosure device slot number, as + reported by sg_ses(8). Intended for use only on + systems where bay is unsupported, noting that + port and id may be unstable across + disk replacement.
+
+
+
+
+
+

+
+
/etc/zfs/vdev_id.conf
+
The configuration file for vdev_id(8).
+
+
+
+

+

A non-multipath configuration with direct-attached SAS enclosures + and an arbitrary slot re-mapping:

+
+
multipath     no
+topology      sas_direct
+phys_per_port 4
+slot          bay
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         C
+channel 86:00.0  0         D
+
+# Custom mapping for Channel A
+
+#    Linux      Mapped
+#    Slot       Slot      Channel
+slot 1          7         A
+slot 2          10        A
+slot 3          3         A
+slot 4          6         A
+
+# Default mapping for B, C, and D
+
+slot 1          4
+slot 2          2
+slot 3          1
+slot 4          3
+
+

A SAS-switch topology. Note that the channel keyword takes only two arguments in this example:

+
+
topology      sas_switch
+
+#       SWITCH PORT  CHANNEL NAME
+channel 1            A
+channel 2            B
+channel 3            C
+channel 4            D
+
+

A multipath configuration. Note that channel names have multiple + definitions - one per physical path:

+
+
multipath yes
+
+#       PCI_SLOT HBA PORT  CHANNEL NAME
+channel 85:00.0  1         A
+channel 85:00.0  0         B
+channel 86:00.0  1         A
+channel 86:00.0  0         B
+
+

A configuration with enclosure_symlinks enabled:

+
+
multipath yes
+enclosure_symlinks yes
+
+#          PCI_ID      HBA PORT     CHANNEL NAME
+channel    05:00.0     1            U
+channel    05:00.0     0            L
+channel    06:00.0     1            U
+channel    06:00.0     0            L
+
+In addition to the disk symlinks, this configuration will create: +
+
/dev/by-enclosure/enc-L0
+/dev/by-enclosure/enc-L1
+/dev/by-enclosure/enc-U0
+/dev/by-enclosure/enc-U1
+
+

A configuration using device link aliases:

+
+
#     by-vdev
+#     name     fully qualified or base name of device link
+alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
+alias d2       wwn-0x5000c5002def789e
+
+
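After editing vdev_id.conf, one way to confirm that the aliases took effect is to re-trigger the udev rules and list the generated links (a sketch; the paths are examples):

# Re-run udev rules for block devices so the by-vdev links are regenerated
udevadm trigger --subsystem-match=block
udevadm settle

# The names defined in vdev_id.conf should now appear here
ls -l /dev/disk/by-vdev/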
+
+

+

vdev_id(8)

+
+
+ + + + + +
May 26, 2021                                                               Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/dracut.zfs.7.html b/man/v2.2/7/dracut.zfs.7.html new file mode 100644 index 000000000..840c3b9d0 --- /dev/null +++ b/man/v2.2/7/dracut.zfs.7.html @@ -0,0 +1,403 @@ + + + + + + + dracut.zfs.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

dracut.zfs.7

+
+ + + + + +
DRACUT.ZFS(7)            Miscellaneous Information Manual            DRACUT.ZFS(7)
+
+
+

+

dracut.zfs — + overview of ZFS dracut hooks

+
+
+

+
+
                      parse-zfs.sh → dracut-cmdline.service
+                          |                     ↓
+                          |                     …
+                          |                     ↓
+                          \————————→ dracut-initqueue.service
+                                                |                      zfs-import-opts.sh
+   zfs-load-module.service                      ↓                          |       |
+     |                  |                sysinit.target                    ↓       |
+     ↓                  |                       |        zfs-import-scan.service   ↓
+zfs-import-scan.service ↓                       ↓           | zfs-import-cache.service
+     |   zfs-import-cache.service         basic.target      |     |
+     \__________________|                       |           ↓     ↓
+                        ↓                       |     zfs-load-key.sh
+     zfs-env-bootfs.service                     |         |
+                        ↓                       ↓         ↓
+                 zfs-import.target → dracut-pre-mount.service
+                        |          ↑            |
+                        | dracut-zfs-generator  |
+                        | _____________________/|
+                        |/                      ↓
+                        |                   sysroot.mount ←——— dracut-zfs-generator
+                        |                       |
+                        |                       ↓
+                        |             initrd-root-fs.target ←— zfs-nonroot-necessities.service
+                        |                       |                                 |
+                        |                       ↓                                 |
+                        ↓             dracut-mount.service                        |
+       zfs-snapshot-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        ↓                       …                                 |
+       zfs-rollback-bootfs.service              |                                 |
+                        |                       ↓                                 |
+                        |          /sysroot/{usr,etc,lib,&c.} ←———————————————————/
+                        |                       |
+                        |                       ↓
+                        |                initrd-fs.target
+                        \______________________ |
+                                               \|
+                                                ↓
+        export-zfs.sh                      initrd.target
+              |                                 |
+              ↓                                 ↓
+   dracut-shutdown.service                      …
+                                                |
+                                                ↓
+                 zfs-needshutdown.sh → initrd-cleanup.service
+
+

Compare dracut.bootup(7) for the full + flowchart.

+
+
+

+

Under dracut, booting with + ZFS-on-/ is facilitated by a + number of hooks in the 90zfs module.

+

Booting into a ZFS dataset requires + mountpoint=/ to be set on the + dataset containing the root filesystem (henceforth "the boot + dataset") and at the very least either the bootfs + property to be set to that dataset, or the root= kernel + cmdline (or dracut drop-in) argument to specify it.
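As a sketch (the pool and dataset names rpool/ROOT/debian are placeholders), those two requirements typically look like this:

# Make the boot dataset mount at / and record it as the pool's bootfs
zfs set mountpoint=/ rpool/ROOT/debian
zpool set bootfs=rpool/ROOT/debian rpool

# …or name the dataset explicitly on the kernel command line instead:
# root=zfs:rpool/ROOT/debian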

+

All children of the boot dataset with canmount=on and with mountpoints matching the /etc, /bin, /lib, /lib??, /libx32, and /usr globs are deemed essential and will be mounted as well.

+

zfs-mount-generator(8) is recommended for proper + functioning of the system afterward (correct mount properties, remounting, + &c.).

+
+
+

+
+

+
+
dataset, + dataset
+
Use dataset as the boot dataset. All pluses + (‘+’) are replaced with spaces + (‘ ’).
+
, + root=zfs:, + , + [root=]
+
After import, search for the first pool with the bootfs + property set, use its value as-if specified as the + dataset above.
+
rootfstype=zfs root=dataset
+
Equivalent to + root=zfs:dataset.
+
+ [root=]
+
Equivalent to root=zfs:AUTO.
+
flags
+
Mount the boot dataset with -o + flags; cf. + Temporary Mount + Point Properties in zfsprops(7). These properties + will not last, since all filesystems will be re-mounted from the real + root.
+
+
If specified, dracut-zfs-generator logs to the + journal.
+
+

Be careful about setting neither rootfstype=zfs + nor root=zfs:dataset — other + automatic boot selection methods, like + systemd-gpt-auto-generator and + systemd-fstab-generator might take precedent.

+
+
+

+
+
[=snapshot-name]
+
Execute zfs snapshot + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
[=snapshot-name]
+
Execute zfs snapshot + -Rf + boot-dataset@snapshot-name + before pivoting to the real root. snapshot-name + defaults to the current kernel release.
+
host-id
+
Use zgenhostid(8) to set the host ID to + host-id; otherwise, + /etc/hostid inherited from the real root is + used.
+
, + zfs.force, zfsforce
+
Appends -f to all zpool + import invocations; primarily useful in + conjunction with spl_hostid=, or if no host ID was + inherited.
+
+
+
+
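Putting a few of the options above together, a kernel command line for a ZFS root might look like the following (the dataset name and host ID are placeholders):

root=zfs:rpool/ROOT/debian bootfs.snapshot spl_hostid=0x00bab10c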
+

+
+
parse-zfs.sh + ()
+
Processes spl_hostid=. If root= matches a known pattern, above, provides /dev/root and delays the initqueue until zfs(4) is loaded.
+
zfs-import-opts.sh + (systemd environment + generator)
+
Turns zfs_force, zfs.force, + or zfsforce into + ZPOOL_IMPORT_OPTS=-f for + zfs-import-scan.service or + zfs-import-cache.service.
+
zfs-load-key.sh + ()
+
Loads encryption keys for the boot dataset and its essential descendants. +
+
+
=
+
Is prompted for via systemd-ask-password + thrice.
+
=URL, + keylocation=URL
+
network-online.target is started before + loading.
+
=path
+
If path doesn't exist, + udevadm is + settled. If it still doesn't, it's waited for + for up to + s.
+
+
+
+
zfs-env-bootfs.service + (systemd service)
+
After pool import, sets BOOTFS= in the systemd + environment to the first non-null bootfs value in + iteration order.
+
dracut-zfs-generator + (systemd generator)
+
Generates sysroot.mount (using + rootflags=, if any). If an + explicit boot dataset was specified, also generates essential mountpoints + (sysroot-etc.mount, + sysroot-bin.mount, + &c.), otherwise generates + zfs-nonroot-necessities.service which mounts them + explicitly after /sysroot using + BOOTFS=.
+
zfs-snapshot-bootfs.service, + zfs-rollback-bootfs.service + (systemd services)
+
Consume bootfs.snapshot and + bootfs.rollback as described in + CMDLINE. Use + BOOTFS= if no explicit boot dataset was + specified.
+
zfs-needshutdown.sh + ()
+
If any pools were imported, signals that shutdown hooks are required.
+
export-zfs.sh + ()
+
Forcibly exports all pools.
+
/etc/hostid, + /etc/zfs/zpool.cache, + /etc/zfs/vdev_id.conf (regular files)
+
Included verbatim, hostonly.
+
mount-zfs.sh + ()
+
Does nothing on systemd systems (if + dracut-zfs-generator + succeeded). Otherwise, loads encryption key for + the boot dataset from the console or via plymouth. It may not work at + all!
+
+
+
+

+

zfsprops(7), + zpoolprops(7), + dracut-shutdown.service(8), + systemd-fstab-generator(8), + systemd-gpt-auto-generator(8), + zfs-mount-generator(8), + zgenhostid(8)

+
+
+ + + + + +
March 28, 2023                                                             Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/index.html b/man/v2.2/7/index.html new file mode 100644 index 000000000..bd15261e9 --- /dev/null +++ b/man/v2.2/7/index.html @@ -0,0 +1,159 @@ + + + + + + + Miscellaneous (7) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/man/v2.2/7/vdevprops.7.html b/man/v2.2/7/vdevprops.7.html new file mode 100644 index 000000000..9a96c784f --- /dev/null +++ b/man/v2.2/7/vdevprops.7.html @@ -0,0 +1,330 @@ + + + + + + + vdevprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

vdevprops.7

+
+ + + + + +
VDEVPROPS(7)             Miscellaneous Information Manual             VDEVPROPS(7)
+
+
+

+

vdevpropsnative + and user-defined properties of ZFS vdevs

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate vdevs in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every vdev has a set of properties that export statistics about + the vdev as well as control various behaviors. Properties are not inherited + from top-level vdevs, with the exception of checksum_n, checksum_t, io_n, + and io_t.

+

The values of numeric properties can be specified using + human-readable suffixes (for example, + , + , + , + , and so + forth, up to + for zettabyte). The following are all valid (and equal) specifications: + 1536M, 1.5g, + 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase.

+

The following native properties consist of read-only statistics + about the vdev. These properties can not be changed.

+
+
+
Percentage of vdev space used
+
+
state of this vdev such as online, faulted, or offline
+
+
globally unique id of this vdev
+
+
The allocable size of this vdev
+
+
The physical size of this vdev
+
+
The physical sector size of this vdev expressed as the power of two
+
+
The total size of this vdev
+
+
The amount of remaining free space on this vdev
+
+
The amount of allocated space on this vdev
+
+
How much this vdev can expand by
+
+
Percent of fragmentation in this vdev
+
+
The level of parity for this vdev
+
+
The device id for this vdev
+
+
The physical path to the device
+
+
The enclosure path to the device
+
+
Field Replaceable Unit, usually a model number
+
+
Parent of this vdev
+
+
Comma separated list of children of this vdev
+
+
The number of children belonging to this vdev
+
, + , + , +
+
The number of errors of each type encountered by this vdev
+
, + , + , + , + , +
+
The number of I/O operations of each type performed by this vdev
+
, + , + , + , + , +
+
The cumulative size of all operations of each type performed by this + vdev
+
+
If this device is currently being removed from the pool
+
+

The following native properties can be used to change the behavior + of a vdev.

+
+
, + , + , +
+
Tune the fault management daemon by specifying checksum/io thresholds of <N> errors in <T> seconds, respectively. These properties can be set on leaf and top-level vdevs. If a property is set on both a leaf vdev and its top-level vdev, the value on the leaf vdev is used; if it is set only on the top-level vdev, that value applies. The values of these properties do not persist across vdev replacement, so it is advisable to set the property on the top-level vdev rather than on the leaf vdev itself. The default values are 10 errors in 600 seconds (see the example after this list).
+
+
A text comment up to 8192 characters long
+
+
The amount of space to reserve for the EFI system partition
+
+
If this device should propagate BIO errors back to ZFS; used to disable failfast.
+
+
The path to the device for this vdev
+
+
If this device should perform new allocations, used to disable a device + when it is scheduled for later removal. See + zpool-remove(8).
+
+
+
+
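For example, the checksum error thresholds described above can be adjusted with zpool set (a sketch; the pool and vdev names tank and raidz2-0 are placeholders):

# Trip the fault management logic after 20 checksum errors within 600 seconds
zpool set checksum_n=20 tank raidz2-0
zpool set checksum_t=600 tank raidz2-0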

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate vdevs.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. Use the zpool set + command with a blank value to clear a user property. Property values are + limited to 8192 bytes.
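A sketch of setting, reading, and clearing a vdev user property (the pool, vdev, and property names are placeholders):

# Annotate a leaf vdev with its physical location
zpool set com.example:rack=rack12 tank sda

# Read the property back
zpool get com.example:rack tank sda

# Clear it by setting a blank value
zpool set com.example:rack= tank sda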

+
+
+
+

+

zpoolprops(7), + zpool-set(8)

+
+
+ + + + + +
October 30, 2022                                                           Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zfsconcepts.7.html b/man/v2.2/7/zfsconcepts.7.html new file mode 100644 index 000000000..8af2a3c16 --- /dev/null +++ b/man/v2.2/7/zfsconcepts.7.html @@ -0,0 +1,326 @@ + + + + + + + zfsconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsconcepts.7

+
+ + + + + +
ZFSCONCEPTS(7)           Miscellaneous Information Manual           ZFSCONCEPTS(7)
+
+
+

+

zfsconcepts — + overview of ZFS concepts

+
+
+

+
+

+

A ZFS storage pool is a logical collection of devices that provide + space for datasets. A storage pool is also the root of the ZFS file system + hierarchy.

+

The root of the pool can be accessed as a file system, such as + mounting and unmounting, taking snapshots, and setting properties. The + physical storage characteristics, however, are managed by the + zpool(8) command.

+

See zpool(8) for more information on creating + and administering pools.

+
+
+

+

A snapshot is a read-only copy of a file system or volume. Snapshots can be created extremely quickly, and initially consume no additional space within the pool. As data within the active dataset changes, the snapshot consumes more space, because it continues to reference data that is no longer shared with the active dataset.

+

Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled back; their visibility is determined by the snapdev property of the parent volume.

+

File system snapshots can be accessed under the + .zfs/snapshot directory in the root of the file + system. Snapshots are automatically mounted on demand and may be unmounted + at regular intervals. The visibility of the .zfs + directory can be controlled by the + + property.
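For instance (the dataset and snapshot names are placeholders), a snapshot's contents can be read back through the hidden directory:

# Take a snapshot, then browse it read-only via .zfs
zfs snapshot tank/home@monday
ls /tank/home/.zfs/snapshot/monday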

+
+
+

+

A bookmark is like a snapshot, a read-only copy of a file system + or volume. Bookmarks can be created extremely quickly, compared to + snapshots, and they consume no additional space within the pool. Bookmarks + can also have arbitrary names, much like snapshots.

+

Unlike snapshots, bookmarks can not be accessed through the filesystem in any way. From a storage standpoint, a bookmark just provides a way to reference, as a distinct object, when a snapshot was created. Bookmarks are initially tied to a snapshot, not the filesystem or volume, and they will survive if the snapshot itself is destroyed. Since they are very lightweight, there's little incentive to destroy them.

+
+
+

+

A clone is a writable volume or file system whose initial contents + are the same as another dataset. As with snapshots, creating a clone is + nearly instantaneous, and initially consumes no additional space.

+

Clones can only be created from a snapshot. When a + snapshot is cloned, it creates an implicit dependency between the parent and + child. Even though the clone is created somewhere else in the dataset + hierarchy, the original snapshot cannot be destroyed as long as a clone + exists. The + property exposes this dependency, and the destroy + command lists any such dependencies, if they exist.

+

The clone parent-child dependency relationship can be reversed by + using the promote subcommand. This causes the + "origin" file system to become a clone of the specified file + system, which makes it possible to destroy the file system that the clone + was created from.
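A sketch of that workflow (the names are placeholders):

# Create a clone from an existing snapshot
zfs clone tank/home@monday tank/home-dev

# Later, reverse the dependency so the original file system can be destroyed
zfs promote tank/home-dev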

+
+
+

+

Creating a ZFS file system is a simple operation, so the number of + file systems per system is likely to be numerous. To cope with this, ZFS + automatically manages mounting and unmounting file systems without the need + to edit the /etc/fstab file. All automatically + managed file systems are mounted by ZFS at boot time.

+

By default, file systems are mounted under + /path, where path is the name + of the file system in the ZFS namespace. Directories are created and + destroyed as needed.

+

A file system can also have a mount point set in + the mountpoint property. This directory is created as + needed, and ZFS automatically mounts the file system when the + zfs mount + -a command is invoked (without editing + /etc/fstab). The mountpoint + property can be inherited, so if + has a + mount point of /export/stuff, then + + automatically inherits a mount point of + /export/stuff/user.

+

A file system mountpoint property of + prevents the + file system from being mounted.

+

If needed, ZFS file systems can also be managed with + traditional tools (mount, + umount, /etc/fstab). If a + file system's mount point is set to + , ZFS makes + no attempt to manage the file system, and the administrator is responsible + for mounting and unmounting the file system. Because pools must be imported + before a legacy mount can succeed, administrators should ensure that legacy + mounts are only attempted after the zpool import process finishes at boot + time. For example, on machines using systemd, the mount option

+

x-systemd.requires=zfs-import.target

+

will ensure that the zfs-import completes before systemd attempts + mounting the filesystem. See systemd.mount(5) for + details.
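A sketch of a legacy-managed dataset (the dataset name and mount point are placeholders): the dataset is switched to legacy mounting and the mount is declared in /etc/fstab with the ordering option above.

# Hand mount management over to /etc/fstab
zfs set mountpoint=legacy tank/data

# /etc/fstab entry
tank/data  /data  zfs  defaults,x-systemd.requires=zfs-import.target  0  0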

+
+
+

+

Deduplication is the process for removing redundant data at the + block level, reducing the total amount of data stored. If a file system has + the + + property enabled, duplicate data blocks are removed synchronously. The + result is that only unique data is stored and common components are shared + among files.

+

Deduplicating data is a very resource-intensive operation. It is + generally recommended that you have at least 1.25 GiB of RAM per 1 TiB of + storage when you enable deduplication. Calculating the exact requirement + depends heavily on the type of data stored in the pool.

+

Enabling deduplication on an improperly-designed system can result + in performance issues (slow I/O and administrative operations). It can + potentially lead to problems importing a pool due to memory exhaustion. + Deduplication can consume significant processing power (CPU) and memory as + well as generate additional disk I/O.

+

Before creating a pool with deduplication + enabled, ensure that you have planned your hardware requirements + appropriately and implemented appropriate recovery practices, such as + regular backups. Consider using the + + property as a less resource-intensive alternative.
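For example (the dataset name is a placeholder), compression can be enabled instead of deduplication and the resulting settings verified:

# Enable lz4 compression rather than dedup on a dataset
zfs set compression=lz4 tank/data

# Confirm the property values and observed ratio
zfs get dedup,compression,compressratio tank/data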

+
+
+

+

Block cloning is a facility that allows a file (or parts of a + file) to be "cloned", that is, a shallow copy made where the + existing data blocks are referenced rather than copied. Later modifications + to the data will cause a copy of the data block to be taken and that copy + modified. This facility is used to implement "reflinks" or + "file-level copy-on-write".

+

Cloned blocks are tracked in a special on-disk structure called the Block Reference Table (BRT). Unlike the deduplication table, the BRT has minimal overhead, so block cloning can be left enabled at all times.

+

Also unlike deduplication, cloning must be requested by a user + program. Many common file copying programs, including newer versions of + /bin/cp, will try to create clones automatically. + Look for "clone", "dedupe" or "reflink" in the + documentation for more information.

+

There are some limitations to block cloning. Only + whole blocks can be cloned, and blocks can not be cloned if they are not yet + written to disk, or if they are encrypted, or the source and destination + + properties differ. The OS may add additional restrictions; for example, most + versions of Linux will not allow clones across datasets.
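On Linux with a recent coreutils (the file paths are placeholders), a clone can be requested explicitly; with --reflink=always the copy fails outright if block cloning is unavailable instead of falling back to a full copy:

cp --reflink=always /tank/data/bigfile /tank/data/bigfile.copy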

+
+
+
+ + + + + +
October 6, 2023                                                            Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zfsprops.7.html b/man/v2.2/7/zfsprops.7.html new file mode 100644 index 000000000..61b80b125 --- /dev/null +++ b/man/v2.2/7/zfsprops.7.html @@ -0,0 +1,1535 @@ + + + + + + + zfsprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfsprops.7

+
+ + + + + +
ZFSPROPS(7)              Miscellaneous Information Manual              ZFSPROPS(7)
+
+
+

+

zfspropsnative + and user-defined properties of ZFS datasets

+
+
+

+

Properties are divided into two types, native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about user properties, + see the User Properties section, + below.

+
+

+

Every dataset has a set of properties that export statistics about + the dataset as well as control various behaviors. Properties are inherited + from the parent unless overridden by the child. Some properties apply only + to certain types of datasets (file systems, volumes, or snapshots).

+

The values of numeric properties can be specified using + human-readable suffixes (for example, + , + , + , + , and so + forth, up to + for zettabyte). The following are all valid (and equal) specifications: + 1536M, 1.5g, + 1.50GB.

+

The values of non-numeric properties are case sensitive and must + be lowercase, except for mountpoint, + sharenfs, and sharesmb.

+

The following native properties consist of read-only statistics + about the dataset. These properties can be neither set, nor inherited. + Native properties apply to all dataset types unless otherwise noted.

+
+
+
The amount of space available to the dataset and all its children, + assuming that there is no other activity in the pool. Because space is + shared within a pool, availability can be limited by any number of + factors, including physical pool size, quotas, reservations, or other + datasets within the pool. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For non-snapshots, the compression ratio achieved for the + used space of this dataset, expressed as a multiplier. + The used property includes descendant datasets, and, for + clones, does not include the space shared with the origin snapshot. For + snapshots, the compressratio is the same as the + refcompressratio property. Compression can be turned on + by running: zfs set + compression=on + dataset. The default value is + off.
+
+
The transaction group (txg) in which the dataset was created. Bookmarks + have the same createtxg as the snapshot they are + initially tied to. This property is suitable for ordering a list of + snapshots, e.g. for incremental send and receive.
+
+
The time this dataset was created.
+
+
For snapshots, this property is a comma-separated list of filesystems or + volumes which are clones of this snapshot. The clones' + origin property is this snapshot. If the + clones property is not empty, then this snapshot can not + be destroyed (even with the -r or + -f options). The roles of origin and clone can be + swapped by promoting the clone with the zfs + promote command.
+
+
This property is on if the snapshot has been marked for + deferred destroy by using the zfs + destroy -d command. + Otherwise, the property is off.
+
+
For encrypted datasets, indicates where the dataset is currently + inheriting its encryption key from. Loading or unloading a key for the + encryptionroot will implicitly load / unload the key for + any inheriting datasets (see zfs + load-key and zfs + unload-key for details). Clones will always share + an encryption key with their origin. See the + Encryption section of + zfs-load-key(8) for details.
+
+
The total number of filesystems and volumes that exist under this location + in the dataset tree. This value is only available when a + filesystem_limit has been set somewhere in the tree + under which the dataset resides.
+
+
Indicates if an encryption key is currently loaded into ZFS. The possible + values are none, available, and + . + See zfs load-key and + zfs unload-key.
+
+
The 64 bit GUID of this dataset or bookmark which does not change over its + entire lifetime. When a snapshot is sent to another pool, the received + snapshot has the same GUID. Thus, the guid is suitable + to identify a snapshot across pools.
+
+
The amount of space that is "logically" accessible by this + dataset. See the referenced property. The logical space + ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space that is "logically" consumed by this dataset + and all its descendents. See the used property. The + logical space ignores the effect of the compression and + copies properties, giving a quantity closer to the + amount of data that applications see. However, it does include space + consumed by metadata. +

This property can also be referred to by its + shortened column name, + .

+
+
+
For file systems, indicates whether the file system is currently mounted. + This property can be either + or + .
+
+
A unique identifier for this dataset within the pool. Unlike the dataset's + guid, the + objsetid of a dataset is not transferred to other pools + when the snapshot is copied with a send/receive operation. The + objsetid can be reused (for a new dataset) after the + dataset is deleted.
+
+
For cloned file systems or volumes, the snapshot from which the clone was + created. See also the clones property.
+
+
For filesystems or volumes which have saved partially-completed state from + zfs receive + -s, this opaque token can be provided to + zfs send + -t to resume and complete the + zfs receive.
+
+
For bookmarks, this is the list of snapshot guids the bookmark contains a + redaction list for. For snapshots, this is the list of snapshot guids the + snapshot is redacted with respect to.
+
+
The amount of data that is accessible by this dataset, which may or may + not be shared with other datasets in the pool. When a snapshot or clone is + created, it initially references the same amount of space as the file + system or snapshot it was created from, since its contents are identical. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The compression ratio achieved for the referenced space + of this dataset, expressed as a multiplier. See also the + compressratio property.
+
+
The total number of snapshots that exist under this location in the + dataset tree. This value is only available when a + snapshot_limit has been set somewhere in the tree under + which the dataset resides.
+
+
The type of dataset: + , + , + , + or + .
+
+
The amount of space consumed by this dataset and all its descendents. This + is the value that is checked against this dataset's quota and reservation. + The space used does not include this dataset's reservation, but does take + into account the reservations of any descendent datasets. The amount of + space that a dataset consumes from its parent, as well as the amount of + space that is freed if this dataset is recursively destroyed, is the + greater of its space used and its reservation. +

The used space of a snapshot (see the + Snapshots section of + zfsconcepts(7)) is space that is referenced + exclusively by this snapshot. If this snapshot is destroyed, the amount + of used space will be freed. Space that is shared by + multiple snapshots isn't accounted for in this metric. When a snapshot + is destroyed, space that was previously shared with this snapshot can + become unique to snapshots adjacent to it, thus changing the used space + of those snapshots. The used space of the latest snapshot can also be + affected by changes in the file system. Note that the + used space of a snapshot is a subset of the + written space of the snapshot.

+

The amount of space used, available, or referenced + does not take into account pending changes. Pending changes are + generally accounted for within a few seconds. Committing a change to a + disk using fsync(2) or + does + not necessarily guarantee that the space usage information is updated + immediately.

+
+
+
The usedby* properties decompose the + used properties into the various reasons that space is + used. Specifically, used = + usedbychildren + + usedbydataset + + usedbyrefreservation + + usedbysnapshots. These properties are only available for + datasets created on zpool "version 13" + pools.
+
+
The amount of space used by children of this dataset, which would be freed + if all the dataset's children were destroyed.
+
+
The amount of space used by this dataset itself, which would be freed if + the dataset were destroyed (after first removing any + refreservation and destroying any necessary snapshots or + descendents).
+
+
The amount of space used by a refreservation set on this + dataset, which would be freed if the refreservation was + removed.
+
+
The amount of space consumed by snapshots of this dataset. In particular, + it is the amount of space that would be freed if all of this dataset's + snapshots were destroyed. Note that this is not simply the sum of the + snapshots' used properties because space can be shared + by multiple snapshots.
+
@user
+
The amount of space consumed by the specified user in this dataset. Space + is charged to the owner of each file, as displayed by + ls -l. The amount of space + charged is displayed by du + and ls + -s. See the zfs + userspace command for more information. +

Unprivileged users can access only their own space usage. The + root user, or a user who has been granted the userused + privilege with zfs + allow, can access everyone's usage.

+

The userused@ + properties are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.

+
+
@user
+
The userobjused property is similar to userused, but instead it counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user, so it may differ from the results of system tools such as df -i.

When the property xattr=on + is set on a file system additional objects will be created per-file to + store extended attributes. These additional objects are reflected in the + userobjused value and are counted against the user's + userobjquota. When a file system is configured to use + xattr=sa no additional internal + objects are normally required.

+
+
+
This property is set to the number of user holds on this snapshot. User + holds are set by using the zfs + hold command.
+
@group
+
The amount of space consumed by the specified group in this dataset. Space + is charged to the group of each file, as displayed by + ls -l. See the + userused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupused privilege with zfs + allow, can access all groups' usage.

+
+
@group
+
The number of objects consumed by the specified group in this dataset. + Multiple objects may be charged to the group for each file when extended + attributes are in use. See the + userobjused@user property for more + information. +

Unprivileged users can only access their own groups' space + usage. The root user, or a user who has been granted the + groupobjused privilege with + zfs allow, can access + all groups' usage.

+
+
@project
+
The amount of space consumed by the specified project in this dataset. A project is identified via the project identifier (ID), a numeric attribute stored on each object. An object can inherit the project ID from its parent object at creation time, provided the parent has the inherit-project-ID flag set (which can be set and changed via chattr -/+P or zfs project -s). A privileged user can set and change an object's project ID via chattr -p or zfs project -s at any time. Space is charged to the project of each file, as displayed by lsattr -p or zfs project. See the userused@user property for more information.

The root user, or a user who has been granted the + projectused privilege with zfs + allow, can access all projects' usage.

+
+
@project
+
The projectobjused is similar to + projectused but instead it counts the number of objects + consumed by project. When the property + xattr=on is set on a fileset, ZFS will + create additional objects per-file to store extended attributes. These + additional objects are reflected in the projectobjused + value and are counted against the project's + projectobjquota. When a filesystem is configured to use + xattr=sa no additional internal + objects are required. See the + userobjused@user property for more + information. +

The root user, or a user who has been granted the + projectobjused privilege with zfs + allow, can access all projects' objects usage.

+
+
+
Provides a mechanism to quickly determine whether the snapshot list has changed without having to mount a dataset or iterate the snapshot list. Specifies the time at which a snapshot for a dataset was last created or deleted.

This allows consumers to be more efficient in how often they query snapshots. The property is persistent across mount and unmount operations only if the corresponding pool feature is enabled.

+
+
+
For volumes, specifies the block size of the volume. The + blocksize cannot be changed once the volume has been + written, so it should be set at volume creation time. The default + blocksize for volumes is 16 Kbytes. Any power of 2 from + 512 bytes to 128 Kbytes is valid. +

This property can also be referred to by its + shortened column name, + .

+
+
+
The amount of space referenced by this dataset, that was + written since the previous snapshot (i.e. that is not referenced by the + previous snapshot).
+
@snapshot
+
The amount of referenced space written to this dataset + since the specified snapshot. This is the space that is referenced by this + dataset but was not referenced by the specified snapshot. +

The snapshot may be specified as a short + snapshot name (just the part after the @), in which + case it will be interpreted as a snapshot in the same filesystem as this + dataset. The snapshot may be a full snapshot name + (filesystem@snapshot), which + for clones may be a snapshot in the origin's filesystem (or the origin + of the origin's filesystem, etc.)

+
+
+
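To inspect a few of the read-only statistics described above (the dataset name is a placeholder):

# Selected read-only statistics for one dataset
zfs get used,available,referenced,compressratio,logicalused tank/home

# Per-user space accounting for the same dataset
zfs userspace tank/home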

The following native properties can be used to change the behavior + of a ZFS dataset.

+
+
=discard|noallow|restricted|passthrough|passthrough-x
+
Controls how ACEs are inherited when files and directories are created. +
+
+
+
does not inherit any ACEs.
+
+
only inherits inheritable ACEs that specify "deny" + permissions.
+
+
default, removes the + + and + + permissions when the ACE is inherited.
+
+
inherits all inheritable ACEs without any modifications.
+
+
same meaning as passthrough, except that the + , + , + and + + ACEs inherit the execute permission only if the file creation mode + also requests the execute bit.
+
+
+

When the property value is set to + passthrough, files are created with a mode determined + by the inheritable ACEs. If no inheritable ACEs exist that affect the + mode, then the mode is set in accordance to the requested mode from the + application.

+

The aclinherit property does not apply to + POSIX ACLs.

+
+
=discard|groupmask|passthrough|restricted
+
Controls how an ACL is modified during chmod(2) and how inherited ACEs are + modified by the file creation mode: +
+
+
+
default, deletes all + + except for those representing the mode of the file or directory + requested by chmod(2).
+
+
reduces permissions granted in all + + entries found in the + + such that they are no greater than the group permissions specified by + chmod(2).
+
+
indicates that no changes are made to the ACL other than creating or + updating the necessary ACL entries to represent the new mode of the + file or directory.
+
+
will cause the chmod(2) operation to return an error + when used on any file or directory which has a non-trivial ACL whose + entries can not be represented by a mode. chmod(2) + is required to change the set user ID, set group ID, or sticky bits on + a file or directory, as they do not have equivalent ACL entries. In + order to use chmod(2) on a file or directory with a + non-trivial ACL when aclmode is set to + restricted, you must first remove all ACL entries + which do not represent the current mode.
+
+
+
+
=off|nfsv4|posix
+
Controls whether ACLs are enabled and if so what type of ACL to use. When + this property is set to a type of ACL not supported by the current + platform, the behavior is the same as if it were set to + off. +
+
+
+
default on Linux, when a file system has the acltype + property set to off then ACLs are disabled.
+
+
an alias for off
+
+
default on FreeBSD, indicates that NFSv4-style + ZFS ACLs should be used. These ACLs can be managed with the + getfacl(1) and setfacl(1). The + nfsv4 ZFS ACL type is not yet supported on + Linux.
+
+
indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux + and are not functional on other platforms. POSIX ACLs are stored as an + extended attribute and therefore will not overwrite any existing NFSv4 + ACLs which may be set.
+
+
an alias for posix
+
+
+

To obtain the best performance when setting + posix users are strongly encouraged to set the + xattr=sa property. This will result + in the POSIX ACL being stored more efficiently on disk. But as a + consequence, all new extended attributes will only be accessible from + OpenZFS implementations which support the + xattr=sa property. See the + xattr property for more details.

+
+
=on|off
+
Controls whether the access time for files is updated when they are read. + Turning this property off avoids producing write traffic when reading + files and can result in significant performance gains, though it might + confuse mailers and other similar utilities. The values + on and off are equivalent to the + atime and + + mount options. The default value is on. See also + relatime below.
+
=on|off|noauto
+
If this property is set to off, the file system cannot + be mounted, and is ignored by zfs + mount -a. Setting this + property to off is similar to setting the + mountpoint property to none, except + that the dataset still has a normal mountpoint property, + which can be inherited. Setting this property to off + allows datasets to be used solely as a mechanism to inherit properties. + One example of setting canmount=off is + to have two datasets with the same mountpoint, so that + the children of both datasets appear in the same directory, but might have + different inherited characteristics. +

When set to noauto, a dataset can only be + mounted and unmounted explicitly. The dataset is not mounted + automatically when the dataset is created or imported, nor is it mounted + by the zfs mount + -a command or unmounted by the + zfs unmount + -a command.

+

This property is not inherited.

+
+
=on|off||fletcher4|sha256|noparity|sha512|skein|edonr|blake3
+
Controls the checksum used to verify data integrity. The default value is + on, which automatically selects an appropriate algorithm + (currently, fletcher4, but this may change in future + releases). The value off disables integrity checking on + user data. The value noparity not only disables + integrity but also disables maintaining parity for user data. This setting + is used internally by a dump device residing on a RAID-Z pool and should + not be used by any other dataset. Disabling checksums is + NOT a recommended practice. +

The sha512, skein, + edonr, and blake3 checksum + algorithms require enabling the appropriate features on the pool.

+

Please see zpool-features(7) for more + information on these algorithms.

+

Changing this property affects only newly-written data.

+
+
=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N
+
Controls the compression algorithm used for this dataset. +

When set to on (the default), indicates that + the current default compression algorithm should be used. The default + balances compression and decompression speed, with compression ratio and + is expected to work well on a wide variety of workloads. Unlike all + other settings for this property, on does not select a + fixed compression type. As new compression algorithms are added to ZFS + and enabled on a pool, the default compression algorithm may change. The + current default compression algorithm is either lzjb + or, if the lz4_compress feature is enabled, + lz4.

+

The lz4 compression algorithm + is a high-performance replacement for the lzjb + algorithm. It features significantly faster compression and + decompression, as well as a moderately higher compression ratio than + lzjb, but can only be used on pools with the + lz4_compress feature set to + . See + zpool-features(7) for details on ZFS feature flags and + the lz4_compress feature.

+

The lzjb compression algorithm is optimized + for performance while providing decent data compression.

+

The gzip compression algorithm + uses the same compression as the gzip(1) command. You + can specify the gzip level by using the value + gzip-N, where + N is an integer from 1 (fastest) to 9 (best + compression ratio). Currently, gzip is equivalent to + (which + is also the default for gzip(1)).

+

The zstd compression algorithm + provides both high compression ratios and good performance. You can + specify the zstd level by using the value + zstd-N, where + N is an integer from 1 (fastest) to 19 (best + compression ratio). zstd is equivalent to + .

+

Faster speeds at the cost of the compression ratio can + be requested by setting a negative zstd level. This is + done using zstd-fast-N, where + N is an integer in + [1-, + , + , + , + , + , + 1000] which maps to a negative zstd + level. The lower the level the faster the compression — + 1000 provides the fastest compression and lowest + compression ratio. zstd-fast is equivalent to + zstd-fast-1.

+

The zle compression algorithm compresses + runs of zeros.

+

This property can also be referred to by its + shortened column name + . + Changing this property affects only newly-written data.

+

When any setting except off is selected, + compression will explicitly check for blocks consisting of only zeroes + (the NUL byte). When a zero-filled block is detected, it is stored as a + hole and not compressed using the indicated compression algorithm.

+

Any block being compressed must be no larger than 7/8 of its + original size after compression, otherwise the compression will not be + considered worthwhile and the block saved uncompressed. Note that when + the logical block is less than 8 times the disk sector size this + effectively reduces the necessary compression ratio; for example, 8 KiB + blocks on disks with 4 KiB disk sectors must compress to 1/2 or less of + their original size.

+
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for all files in the file system under + a mount point for that file system. See selinux(8) for + more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the file system file system being + mounted. See selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux default context for unlabeled files. See + selinux(8) for more information.
+
=none|SELinux-User:SELinux-Role:SELinux-Type:Sensitivity-Level
+
This flag sets the SELinux context for the root inode of the file system. + See selinux(8) for more information.
+
=1||
+
Controls the number of copies of data stored for this dataset. These + copies are in addition to any redundancy provided by the pool, for + example, mirroring or RAID-Z. The copies are stored on different disks, if + possible. The space used by multiple copies is charged to the associated + file and dataset, changing the used property and + counting against quotas and reservations. +

Changing this property only affects newly-written data. + Therefore, set this property at file system creation time by using the + -o + copies=N option.

+

Remember that ZFS will not import a pool with a missing + top-level vdev. Do NOT create, for example a two-disk + striped pool and set copies=2 on + some datasets thinking you have setup redundancy for them. When a disk + fails you will not be able to import the pool and will have lost all of + your data.

+

Encrypted datasets may not have + copies=3 since the + implementation stores some encryption metadata where the third copy + would normally be.

+
+
=on|off
+
Controls whether device nodes can be opened on this file system. The + default value is on. The values on and + off are equivalent to the dev and + + mount options.
+
=off|on|verify|sha256[,verify]|sha512[,verify]|skein[,verify]|edonr,verify|blake3[,verify]
+
Configures deduplication for a dataset. The default value is + off. The default deduplication checksum is + sha256 (this may change in the future). When + dedup is enabled, the checksum defined here overrides + the checksum property. Setting the value to + verify has the same effect as the setting + sha256,verify. +

If set to verify, ZFS will do a byte-to-byte + comparison in case of two blocks having the same signature to make sure + the block contents are identical. Specifying verify is + mandatory for the edonr algorithm.

+

Unless necessary, deduplication should + be enabled on + a system. See the Deduplication + section of zfsconcepts(7).

+
+
=legacy|auto|||||
+
Specifies a compatibility mode or literal value for the size of dnodes in + the file system. The default value is legacy. Setting + this property to a value other than legacy + requires the large_dnode + pool feature to be enabled. +

Consider setting dnodesize to + auto if the dataset uses the + xattr=sa property setting and the + workload makes heavy use of extended attributes. This may be applicable + to SELinux-enabled systems, Lustre servers, and Samba servers, for + example. Literal values are supported for cases where the optimal size + is known in advance and for performance testing.

+

Leave dnodesize set to + legacy if you need to receive a send stream of this + dataset on a pool that doesn't enable the large_dnode + feature, or if you need to import this pool on a system that doesn't + support the large_dnode + feature.

+

This property can also be referred to by its + shortened column name, + .

+
+
=off|on||||||aes-256-gcm
+
Controls the encryption cipher suite (block cipher, key length, and mode) + used for this dataset. Requires the encryption feature + to be enabled on the pool. Requires a keyformat to be + set at dataset creation time. +

Selecting encryption=on + when creating a dataset indicates that the default encryption suite will + be selected, which is currently aes-256-gcm. In order + to provide consistent data protection, encryption must be specified at + dataset creation time and it cannot be changed afterwards.

+

For more details and caveats about encryption see the + Encryption section of + zfs-load-key(8).

+
+
=||passphrase
+
Controls what format the user's encryption key will be provided as. This + property is only set when the dataset is encrypted. +

Raw keys and hex keys must be 32 bytes long (regardless of the + chosen encryption suite) and must be randomly generated. A raw key can + be generated with the following command:

+
# dd + + /path/to/output/key
+

Passphrases must be between 8 and 512 bytes long and will be + processed through PBKDF2 before being used (see the + pbkdf2iters property). Even though the encryption + suite cannot be changed after dataset creation, the keyformat can be + with zfs change-key.

+
+
=prompt|/absolute/file/path|address|address
+
Controls where the user's encryption key will be loaded from by default + for commands such as zfs + load-key and zfs + mount -l. This property is + only set for encrypted datasets which are encryption roots. If + unspecified, the default is prompt. +

Even though the encryption suite cannot + be changed after dataset creation, the keylocation can be with either + zfs set or + zfs change-key. If + prompt is selected ZFS will ask for the key at the + command prompt when it is required to access the encrypted data (see + zfs load-key for + details). This setting will also allow the key to be passed in via the + standard input stream, but users should be careful not to place keys + which should be kept secret on the command line. If a file URI is + selected, the key will be loaded from the specified absolute file path. + If an HTTPS or HTTP URL is selected, it will be GETted using + fetch(3), libcurl, or nothing, depending on + compile-time configuration and run-time availability. The + + environment variable can be set to set the location of the concatenated + certificate store. The + + environment variable can be set to override the location of the + directory containing the certificate authority bundle. The + + and + + environment variables can be set to configure the path to the client + certificate and its key.

+
+
=iterations
+
Controls the number of PBKDF2 iterations that a + passphrase encryption key should be run through when + processing it into an encryption key. This property is only defined when + encryption is enabled and a keyformat of passphrase is + selected. The goal of PBKDF2 is to significantly increase the + computational difficulty needed to brute force a user's passphrase. This + is accomplished by forcing the attacker to run each passphrase through a + computationally expensive hashing function many times before they arrive + at the resulting key. A user who actually knows the passphrase will only + have to pay this cost once. As CPUs become better at processing, this + number should be raised to ensure that a brute force attack is still not + possible. The current default is + + and the minimum is + . + This property may be changed with zfs + change-key.
+
=on|off
+
Controls whether processes can be executed from within this file system. + The default value is on. The values on + and off are equivalent to the exec and + + mount options.
+
=count|none
+
Limits the number of filesystems and volumes that can exist under this + point in the dataset tree. The limit is not enforced if the user is + allowed to change the limit. Setting a filesystem_limit + to on a descendent of a filesystem that already has a + filesystem_limit does not override the ancestor's + filesystem_limit, but rather imposes an additional + limit. This feature must be enabled to be used (see + zpool-features(7)).
+
=size
+
This value represents the threshold block size for including small file + blocks into the special allocation class. Blocks smaller than or equal to + this value will be assigned to the special allocation class while greater + blocks will be assigned to the regular class. Valid values are zero or a + power of two from 512 up to 1048576 (1 MiB). The default size is 0 which + means no small file blocks will be allocated in the special class. +

Before setting this property, a special class vdev must be + added to the pool. See zpoolconcepts(7) for more + details on the special allocation class.

+
+
=path|none|legacy
+
Controls the mount point used for this file system. See the + Mount Points section of + zfsconcepts(7) for more information on how this property + is used. +

When the mountpoint property is changed for + a file system, the file system and any children that inherit the mount + point are unmounted. If the new value is legacy, then + they remain unmounted. Otherwise, they are automatically remounted in + the new location if the property was previously legacy + or none. In addition, any shared file systems are + unshared and shared in the new location.

+

When the mountpoint property is set with zfs set -u, the mountpoint property is updated but the dataset is not mounted or unmounted and remains as it was before.
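For instance (paths and dataset names are illustrative), the stored mount point can be updated without triggering a remount:
zfs set -u mountpoint=/export/data tank/data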

+
+
=on|off
+
Controls whether the file system should be mounted with nbmand (Non-blocking mandatory locks). Changes to this property only take effect when the file system is unmounted and remounted. This was only supported by Linux prior to 5.15, and was buggy there, and is not supported by FreeBSD. On Solaris it's used for SMB clients.
+
=on|off
+
Allow mounting on a busy directory or a directory which already contains + files or directories. This is the default mount behavior for Linux and + FreeBSD file systems. On these platforms the + property is on by default. Set to off + to disable overlay mounts for consistency with OpenZFS on other + platforms.
+
=all|none|metadata
+
Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+
=size|none
+
Limits the amount of space a dataset and its descendents can consume. This + property enforces a hard limit on the amount of space used. This includes + all space consumed by descendents, including file systems and snapshots. + Setting a quota on a descendent of a dataset that already has a quota does + not override the ancestor's quota, but rather imposes an additional limit. +

Quotas cannot be set on volumes, as the + volsize property acts as an implicit quota.
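As an illustrative example (names and sizes are hypothetical), a hard cap on a home dataset and everything beneath it:
zfs set quota=100G tank/home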

+
+
=count|none
+
Limits the number of snapshots that can be created on a dataset and its + descendents. Setting a snapshot_limit on a descendent of + a dataset that already has a snapshot_limit does not + override the ancestor's snapshot_limit, but rather + imposes an additional limit. The limit is not enforced if the user is + allowed to change the limit. For example, this means that recursive + snapshots taken from the global zone are counted against each delegated + dataset within a zone. This feature must be enabled to be used (see + zpool-features(7)).
+
user=size|none
+
Limits the amount of space consumed by the specified user. User space + consumption is identified by the + user + property. +

Enforcement of user quotas may be delayed by several seconds. + This delay means that a user might exceed their quota before the system + notices that they are over quota and begins to refuse additional writes + with the EDQUOT error message. See the + zfs userspace command + for more information.

+

Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota.

+

This property is not available on volumes, on file systems + before version 4, or on pools before version 15. The + userquota@ properties + are not displayed by zfs + get all. The user's name must + be appended after the @ symbol, using one of the + following forms:

+
    +
  • POSIX name ("joe")
  • +
  • POSIX numeric ID ("789")
  • +
  • SID name ("joe.smith@mydomain")
  • +
  • SID numeric ID ("S-1-123-456-789")
  • +
+

Files created on Linux always have POSIX owners.
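A brief sketch using a hypothetical POSIX user name and dataset:
zfs set userquota@joe=50G tank/home
zfs get userquota@joe tank/home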

+
+
user=size|none
+
The userobjquota is similar to + userquota but it limits the number of objects a user can + create. Please refer to userobjused for more information + about how objects are counted.
+
group=size|none
+
Limits the amount of space consumed by the specified group. Group space + consumption is identified by the + group + property. +

Unprivileged users can access only their own groups' space + usage. The root user, or a user who has been granted the + groupquota privilege with zfs + allow, can get and set all groups' quotas.

+
+
group=size|none
+
The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
+
project=size|none
+
Limits the amount of space consumed by the specified project. Project + space consumption is identified by the + project + property. Please refer to projectused for more + information about how project is identified and set/changed. +

The root user, or a user who has been granted the + projectquota privilege with zfs + allow, can access all projects' quota.

+
+
project=size|none
+
The projectobjquota is similar to projectquota but it limits the number of objects a project can consume. Please refer to userobjused for more information about how objects are counted.
+
=on|off
+
Controls whether this dataset can be modified. The default value is + off. The values on and + off are equivalent to the + and + mount + options. +

This property can also be referred to by its + shortened column name, + .

+
+
=size
+
Specifies a suggested block size for files in the file system. This + property is designed solely for use with database workloads that access + files in fixed-size records. ZFS automatically tunes block sizes according + to internal algorithms optimized for typical access patterns. +

For databases that create very large files but access them in + small random chunks, these algorithms may be suboptimal. Specifying a + recordsize greater than or equal to the record size of + the database can result in significant performance gains. Use of this + property for general purpose file systems is strongly discouraged, and + may adversely affect performance.

+

The size specified must be a power of two + greater than or equal to 512 B and less than or + equal to 128 KiB. If the + + feature is enabled on the pool, the size may be up to 1 + MiB. See zpool-features(7) for details on ZFS + feature flags.

+

Changing the file system's recordsize + affects only files created afterward; existing files are unaffected.
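For example (the dataset name is hypothetical, and 16K is chosen to match a database that does 16 KiB I/O):
zfs set recordsize=16K tank/db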

+

This property can also be referred to by its + shortened column name, + .

+
+
=all|most|some|none
+
Controls what types of metadata are stored redundantly. ZFS stores an + extra copy of metadata, so that if a single block is corrupted, the amount + of user data lost is limited. This extra copy is in addition to any + redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and + is in addition to an extra copy specified by the copies + property (up to a total of 3 copies). For example if the pool is mirrored, + copies=2, and + redundant_metadata=most, then ZFS + stores 6 copies of most metadata, and 4 copies of data and some metadata. +

When set to all, ZFS stores an extra copy of + all metadata. If a single on-disk block is corrupt, at worst a single + block of user data (which is recordsize bytes long) + can be lost.

+

When set to most, ZFS stores an extra copy + of most types of metadata. This can improve performance of random + writes, because less metadata must be written. In practice, at worst + about 1000 blocks (of recordsize bytes each) of user + data can be lost if a single on-disk block is corrupt. The exact + behavior of which metadata blocks are stored redundantly may change in + future releases.

+

When set to some, ZFS stores an extra copy + of only critical metadata. This can improve file create performance + since less metadata needs to be written. If a single on-disk block is + corrupt, at worst a single user file can be lost.

+

When set to none, ZFS does not store any + copies of metadata redundantly. If a single on-disk block is corrupt, an + entire dataset can be lost.

+

The default value is all.
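As a sketch (the dataset name is hypothetical), a dataset holding easily re-created data could trade some metadata redundancy for better random-write performance:
zfs set redundant_metadata=most tank/scratch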

+
+
=size|none
+
Limits the amount of space a dataset can consume. This property enforces a + hard limit on the amount of space used. This hard limit does not include + space used by descendents, including file systems and snapshots.
+
=size|none|auto
+
The minimum amount of space guaranteed to a dataset, not including its + descendents. When the amount of space used is below this value, the + dataset is treated as if it were taking up the amount of space specified + by refreservation. The refreservation + reservation is accounted for in the parent datasets' space used, and + counts against the parent datasets' quotas and reservations. +

If refreservation is set, a snapshot is only + allowed if there is enough free pool space outside of this reservation + to accommodate the current number of "referenced" bytes in the + dataset.

+

If refreservation is set to + auto, a volume is thick provisioned (or "not + sparse"). refreservation=auto + is only supported on volumes. See volsize in the + Native Properties section + for more information about sparse volumes.
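For illustration (the volume name is hypothetical), an existing sparse volume can be made thick provisioned:
zfs set refreservation=auto tank/vol1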

+

This property can also be referred to by its + shortened column name, + .

+
+
=on|off
+
Controls the manner in which the access time is updated when + atime=on is set. Turning this property + on causes the access time to be updated relative to the modify or change + time. Access time is only updated if the previous access time was earlier + than the current modify or change time or if the existing access time + hasn't been updated within the past 24 hours. The default value is + on. The values on and + off are equivalent to the relatime and + + mount options.
+
=size|none
+
The minimum amount of space guaranteed to a dataset and its descendants. + When the amount of space used is below this value, the dataset is treated + as if it were taking up the amount of space specified by its reservation. + Reservations are accounted for in the parent datasets' space used, and + count against the parent datasets' quotas and reservations. +

This property can also be referred to by its + shortened column name, + .

+
+
=all|none|metadata
+
Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata are cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
+
=on|off
+
Controls whether the setuid bit is respected for the file system. The + default value is on. The values on and + off are equivalent to the + and + nosuid mount options.
+
=on|off|opts
+
Controls whether the file system is shared by using + and what options are to be used. Otherwise, the file + system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the net(8) command is invoked + to create a + . +

Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that the characters in the dataset name, which would be invalid in the resource name, are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris.

+

If the sharesmb property is set to + off, the file systems are unshared.

+

The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user via passwd(5)/shadow(5)-, LDAP- or smbpasswd(5)-based authentication) by default. This means that any additional access control (e.g. disallowing access for specific users) must be done on the underlying file system.

+

When the sharesmb property is updated with zfs set -u, the property is set to the desired value, but the operation to share, reshare or unshare the dataset is not performed.
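A minimal example (the dataset name is hypothetical); this assumes Samba usershares are configured on the host:
zfs set sharesmb=on tank/srv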

+
+
=on|off|opts
+
Controls whether the file system is shared via NFS, and what options are + to be used. A file system with a sharenfs property of + off is managed with the exportfs(8) + command and entries in the /etc/exports file. + Otherwise, the file system is automatically shared and unshared with the + zfs share and + zfs unshare commands. If + the property is set to on, the dataset is shared using + the default options: +
sec=sys,rw,crossmnt,no_subtree_check
+

Please note that the options are comma-separated, unlike those + found in exports(5). This is done to negate the need + for quoting, as well as to make parsing with scripts easier.

+

See exports(5) for the meaning of the + default options. Otherwise, the exportfs(8) command is + invoked with options equivalent to the contents of this property.

+

When the sharenfs property is changed for a + dataset, the dataset and any children inheriting the property are + re-shared with the new options, only if the property was previously + off, or if they were shared before the property was + changed. If the new property is off, the file systems + are unshared.

+

When the sharenfs property is updated with zfs set -u, the property is set to the desired value, but the operation to share, reshare or unshare the dataset is not performed.
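For example (the dataset name is hypothetical), sharing with the default options, or updating the stored property without re-sharing:
zfs set sharenfs=on tank/export
zfs set -u sharenfs=on tank/export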

+
+
=latency|throughput
+
Provide a hint to ZFS about handling of synchronous requests in this + dataset. If logbias is set to latency + (the default), ZFS will use pool log devices (if configured) to handle the + requests at low latency. If logbias is set to + throughput, ZFS will not use configured pool log + devices. ZFS will instead optimize synchronous operations for global pool + throughput and efficient use of resources.
+
=hidden|visible
+
Controls whether the volume snapshot devices under /dev/zvol/⟨pool⟩ are hidden or visible. The default value is hidden.
+
=hidden|visible
+
Controls whether the .zfs directory is hidden or + visible in the root of the file system as discussed in the + Snapshots section of + zfsconcepts(7). The default value is + hidden.
+
=standard|always|disabled
+
Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). + standard is the POSIX-specified behavior of ensuring all + synchronous requests are written to stable storage and all devices are + flushed to ensure data is not cached by device controllers (this is the + default). always causes every file system transaction to + be written and flushed before its system call returns. This has a large + performance penalty. disabled disables synchronous + requests. File system transactions are only committed to stable storage + periodically. This option will give the highest performance. However, it + is very dangerous as ZFS would be ignoring the synchronous transaction + demands of applications such as databases or NFS. Administrators should + only use this option when the risks are understood.
+
=N|
+
The on-disk version of this file system, which is independent of the pool + version. This property can only be set to later supported versions. See + the zfs upgrade + command.
+
=size
+
For volumes, specifies the logical size of the volume. By default, + creating a volume establishes a reservation of equal size. For storage + pools with a version number of 9 or higher, a + refreservation is set instead. Any changes to + volsize are reflected in an equivalent change to the + reservation (or refreservation). The + volsize can only be set to a multiple of + volblocksize, and cannot be zero. +

The reservation is kept equal to the volume's logical size to + prevent unexpected behavior for consumers. Without the reservation, the + volume could run out of space, resulting in undefined behavior or data + corruption, depending on how the volume is used. These effects can also + occur when the volume size is changed while it is in use (particularly + when shrinking the size). Extreme care should be used when adjusting the + volume size.

+

Though not recommended, a "sparse volume" (also + known as "thin provisioned") can be created by specifying the + -s option to the zfs + create -V command, or by + changing the value of the refreservation property (or + reservation property on pool version 8 or earlier) + after the volume has been created. A "sparse volume" is a + volume where the value of refreservation is less than + the size of the volume plus the space required to store its metadata. + Consequently, writes to a sparse volume can fail with + ENOSPC when the pool is low on space. For a + sparse volume, changes to volsize are not reflected in + the refreservation. A volume that is not sparse is + said to be "thick provisioned". A sparse volume can become + thick provisioned by setting refreservation to + auto.
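As a sketch (pool, volume name, and size are hypothetical), a sparse (thin-provisioned) volume can be created with the -s option:
zfs create -s -V 100G tank/vm-disk0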

+
+
=default|full|geom|dev|none
+
This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides its partitions. Volumes with the property set to none are not exposed outside ZFS, but can still be snapshotted, cloned, replicated, etc., which can be suitable for backup purposes. The value default means that volume exposure is controlled by a system-wide tunable, where full, dev and none are encoded as 1, 2 and 3 respectively. The default value is full.
+
=on|off
+
Controls whether regular files should be scanned for viruses when a file + is opened and closed. In addition to enabling this property, the virus + scan service must also be enabled for virus scanning to occur. The default + value is off. This property is not used by OpenZFS.
+
=on|off|sa
+
Controls whether extended attributes are enabled for this file system. Two + styles of extended attributes are supported: either directory-based or + system-attribute-based. +

The default value of on enables directory-based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file, although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all ZFS implementations.

+

System-attribute-based xattrs can be enabled by setting the + value to sa. The key advantage of this type of xattr + is improved performance. Storing extended attributes as system + attributes significantly decreases the amount of disk I/O required. Up + to 64K of data may be stored per-file in the space + reserved for system attributes. If there is not enough space available + for an extended attribute then it will be automatically written as a + directory-based xattr. System-attribute-based extended attributes are + not accessible on platforms which do not support the + xattr=sa feature. OpenZFS supports + xattr=sa on both + FreeBSD and Linux.

+

The use of system-attribute-based xattrs is strongly + encouraged for users of SELinux or POSIX ACLs. Both of these features + heavily rely on extended attributes and benefit significantly from the + reduced access time.
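For example (the dataset name is hypothetical), on a Linux or FreeBSD system using SELinux or POSIX ACLs:
zfs set xattr=sa tank/fs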

+

The values on and + off are equivalent to the xattr and + mount + options.

+
+
=off|on
+
Controls whether the dataset is managed from a jail. See + zfs-jail(8) for more information. Jails are a + FreeBSD feature and this property is not available + on other platforms.
+
=off|on
+
Controls whether the dataset is managed from a non-global zone or + namespace. See zfs-zone(8) for more information. Zoning + is a Linux feature and this property is not available on other + platforms.
+
+

The following three properties cannot be changed after the file + system is created, and therefore, should be set when the file system is + created. If the properties are not set with the zfs + create or zpool + create commands, these properties are inherited from + the parent dataset. If the parent dataset lacks these properties due to + having been created prior to these features being supported, the new file + system will have the default values for these properties.

+
+
=sensitive||mixed
+
Indicates whether the file name matching algorithm used by the file system + should be case-sensitive, case-insensitive, or allow a combination of both + styles of matching. The default value for the + casesensitivity property is sensitive. + Traditionally, UNIX and POSIX file systems have + case-sensitive file names. +

The mixed value for the + casesensitivity property indicates that the file + system can support requests for both case-sensitive and case-insensitive + matching behavior. Currently, case-insensitive matching behavior on a + file system that supports mixed behavior is limited to the SMB server + product. For more information about the mixed value + behavior, see the "ZFS Administration Guide".

+
+
=none||||
+
Indicates whether the file system should perform a normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
+
=on|off
+
Indicates whether the file system should reject file names that include + characters that are not present in the + + character code set. If this property is explicitly set to + off, the normalization property must either not be + explicitly set or be set to none. The default value for + the utf8only property is off. This + property cannot be changed after the file system is created.
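Since these properties cannot be changed later, they are typically supplied at creation time; a hedged example with hypothetical names:
zfs create -o casesensitivity=mixed -o normalization=formD -o utf8only=on tank/smbshare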
+
+

The casesensitivity, + normalization, and utf8only properties + are also new permissions that can be assigned to non-privileged users by + using the ZFS delegated administration feature.

+
+
+

+

When a file system is mounted, either through + mount(8) for legacy mounts or the + zfs mount command for normal + file systems, its mount options are set according to its properties. The + correlation between properties and mount options is as follows:

+
+
+
+
atime/noatime
+
+
auto/noauto
+
+
dev/nodev
+
+
exec/noexec
+
+
ro/rw
+
+
relatime/norelatime
+
+
suid/nosuid
+
+
xattr/noxattr
+
+
mand/nomand
+
=
+
context=
+
=
+
fscontext=
+
=
+
defcontext=
+
=
+
rootcontext=
+
+
+

In addition, these options can be set on a + per-mount basis using the -o option, without + affecting the property that is stored on disk. The values specified on the + command line override the values stored in the dataset. The + nosuid option is an alias for + ,. + These properties are reported as "temporary" by the + zfs get command. If the + properties are changed while the dataset is mounted, the new setting + overrides any temporary settings.

+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate datasets (file + systems, volumes, and snapshots).

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings, are always + inherited, and are never validated. All of the commands that operate on + properties (zfs list, + zfs get, + zfs set, and so forth) can + be used to manipulate both native properties and user properties. Use the + zfs inherit command to clear + a user property. If the property is not defined in any parent dataset, it is + removed entirely. Property values are limited to 8192 bytes.
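An illustrative sketch (the com.example:backup-policy property name and dataset are hypothetical):
zfs set com.example:backup-policy=daily tank/data
zfs get com.example:backup-policy tank/data
zfs inherit com.example:backup-policy tank/data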

+
+
+
+ + + + + +
August 8, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zpool-features.7.html b/man/v2.2/7/zpool-features.7.html new file mode 100644 index 000000000..111db6457 --- /dev/null +++ b/man/v2.2/7/zpool-features.7.html @@ -0,0 +1,1222 @@ + + + + + + + zpool-features.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-features.7

+
+ + + + + +
ZPOOL-FEATURES(7)Miscellaneous Information ManualZPOOL-FEATURES(7)
+
+
+

+

zpool-features — + description of ZFS pool features

+
+
+

+

ZFS pool on-disk format versions are specified via “features” which replace the old on-disk format numbers (the last supported on-disk format number is 28). To enable a feature on a pool use the zpool upgrade command, or set the feature@feature-name property to enabled. Please also see the Compatibility feature sets section for information on how sets of features may be enabled together.

+

The pool format does not affect file system version compatibility + or the ability to send file systems between pools.

+

Since most features can be enabled independently of each other, + the on-disk format of the pool is specified by the set of all features + marked as active on the pool. If the pool was created by + another software version this set may include unsupported features.

+
+

+

Every feature has a GUID of the form + com.example:feature-name. The + reversed DNS name ensures that the feature's GUID is unique across all ZFS + implementations. When unsupported features are encountered on a pool they + will be identified by their GUIDs. Refer to the documentation for the ZFS + implementation that created the pool for information about those + features.

+

Each supported feature also has a short name. By convention a + feature's short name is the portion of its GUID which follows the + ‘:’ (i.e. + com.example:feature-name would + have the short name feature-name), however a feature's + short name may differ across ZFS implementations if following the convention + would result in name conflicts.

+
+
+

+

Features can be in one of three states:

+
+
+
This feature's on-disk format changes are in effect on the pool. Support + for this feature is required to import the pool in read-write mode. If + this feature is not read-only compatible, support is also required to + import the pool in read-only mode (see + Read-only + compatibility).
+
+
An administrator has marked this feature as enabled on the pool, but the + feature's on-disk format changes have not been made yet. The pool can + still be imported by software that does not support this feature, but + changes may be made to the on-disk format at any time which will move the + feature to the active state. Some features may support + returning to the enabled state after becoming + active. See feature-specific documentation for + details.
+
+
This feature's on-disk format changes have not been made and will not be + made unless an administrator moves the feature to the + enabled state. Features cannot be disabled once they + have been enabled.
+
+

The state of supported features is exposed through pool properties + of the form feature@short-name.
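For example (the pool name is hypothetical), the state of a single feature can be queried through its pool property:
zpool get feature@hole_birth tank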

+
+
+

+

Some features may make on-disk format changes that do not + interfere with other software's ability to read from the pool. These + features are referred to as “read-only compatible”. If all + unsupported features on a pool are read-only compatible, the pool can be + imported in read-only mode by setting the readonly + property during import (see zpool-import(8) for details on + importing pools).

+
+
+

+

For each unsupported feature enabled on an imported pool, a pool + property named + @feature-name + will indicate why the import was allowed despite the unsupported feature. + Possible values for this property are:

+
+
+
The feature is in the enabled state and therefore the + pool's on-disk format is still compatible with software that does not + support this feature.
+
+
The feature is read-only compatible and the pool has been imported in + read-only mode.
+
+
+
+

+

Some features depend on other features being enabled in order to + function. Enabling a feature will automatically enable any features it + depends on.

+
+
+

+

It is sometimes necessary for a pool to maintain compatibility + with a specific on-disk format, by enabling and disabling particular + features. The compatibility feature facilitates this by + allowing feature sets to be read from text files. When set to + (the + default), compatibility feature sets are disabled (i.e. all features are + enabled); when set to legacy, no features are enabled. + When set to a comma-separated list of filenames (each filename may either be + an absolute path, or relative to + /etc/zfs/compatibility.d or + /usr/share/zfs/compatibility.d), the lists of + requested features are read from those files, separated by whitespace and/or + commas. Only features present in all files are enabled.

+

Simple sanity checks are applied to the files: they must be + between 1 B and 16 KiB in size, and must end with a newline character.

+

The requested features are applied when a pool is created using + zpool create + -o + compatibility= and controls + which features are enabled when using zpool + upgrade. zpool + status will not show a warning about disabled + features which are not part of the requested feature set.

+

The special value legacy prevents any features + from being enabled, either via zpool + upgrade or zpool + set + feature@feature-name=enabled. + This setting also prevents pools from being upgraded to newer on-disk + versions. This is a safety measure to prevent new features from being + accidentally enabled, breaking compatibility.

+

By convention, compatibility files in + /usr/share/zfs/compatibility.d are provided by the + distribution, and include feature sets supported by important versions of + popular distributions, and feature sets commonly supported at the start of + each year. Compatibility files in + /etc/zfs/compatibility.d, if present, will take + precedence over files with the same name in + /usr/share/zfs/compatibility.d.

+

If an unrecognized feature is found in these files, an error + message will be shown. If the unrecognized feature is in a file in + /etc/zfs/compatibility.d, this is treated as an + error and processing will stop. If the unrecognized feature is under + /usr/share/zfs/compatibility.d, this is treated as a + warning and processing will continue. This difference is to allow + distributions to include features which might not be recognized by the + currently-installed binaries.

+

Compatibility files may include comments: any text from + ‘#’ to the end of the line is ignored.

+

Example:

+
+
example# cat /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2
+allocation_classes
+async_destroy
+block_cloning
+bookmarks
+device_rebuild
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+livelist
+log_spacemap
+lz4_compress
+project_quota
+resilver_defer
+spacemap_histogram
+spacemap_v2
+userobj_accounting
+zilsaxattr
+zpool_checkpoint
+
+example# zpool create -o compatibility=grub2 bootpool vdev
+
+

See zpool-create(8) and + zpool-upgrade(8) for more information on how these + commands are affected by feature sets.

+
+
+
+

+

The following features are supported on this system:

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables support for separate allocation + classes.

+

This feature becomes active when a dedicated + allocation class vdev (dedup or special) is created with the + zpool create + or zpool + add commands. With + device removal, it can be returned to the enabled + state if all the dedicated allocation class vdevs are removed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Destroying a file system requires traversing all of its data + in order to return its used space to the pool. Without + async_destroy, the file system is not fully removed + until all space has been reclaimed. If the destroy operation is + interrupted by a reboot or power outage, the next attempt to open the + pool will need to complete the destroy operation synchronously.

+

When async_destroy is enabled, the file + system's data will be reclaimed by a background process, allowing the + destroy operation to complete without traversing the entire file system. + The background process is able to resume interrupted destroys after the + pool has been opened, eliminating the need to finish interrupted + destroys as part of the open operation. The amount of space remaining to + be reclaimed by the background process is available through the + freeing property.

+

This feature is only active while + freeing is non-zero.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the BLAKE3 hash algorithm for + checksum and dedup. BLAKE3 is a secure hash algorithm focused on high + performance.

+

When the blake3 feature is set to + enabled, the administrator can turn on the + blake3 checksum on any dataset using + zfs set + checksum=blake3 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + blake3, and will return to being + enabled once all filesystems that have ever had their + checksum set to blake3 are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

When this feature is enabled ZFS will use block cloning for operations like (2). Block cloning allows creating multiple references to a single block. It is much faster than copying the data (as the actual data is neither read nor written) and takes no additional space. Blocks can be cloned across datasets under some conditions (like equal recordsize, the same master encryption key, etc.). ZFS tries its best to clone across datasets including encrypted ones. This is limited for various (nontrivial) reasons depending on the OS and/or ZFS internals.

+

This feature becomes active when first block + is cloned. When the last cloned block is freed, it goes back to the + enabled state.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables use of the zfs + bookmark command.

+

This feature is active while + any bookmarks exist in the pool. All bookmarks in the pool can be listed + by running zfs list + -t + + -r poolname.
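A brief sketch (names are hypothetical) of creating a bookmark from a snapshot and listing bookmarks:
zfs bookmark tank/fs@snap1 tank/fs#bm1
zfs list -t bookmark -r tank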

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of larger + bookmarks which are needed for other features in ZFS.

+

This feature becomes active when a v2 + bookmark is created and will be returned to the + enabled state when all v2 bookmarks are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark, extensible_dataset, bookmark_v2
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables additional bookmark + accounting fields, enabling the + #bookmark + property (space written since a bookmark) and estimates of send stream + sizes for incrementals from bookmarks.

+

This feature becomes active when a bookmark + is created and will be returned to the enabled state + when all bookmarks with these fields are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the ability for the + zpool attach and + zpool replace commands + to perform sequential reconstruction (instead of healing reconstruction) + when resilvering.

+

Sequential reconstruction resilvers a device in LBA order + without immediately verifying the checksums. Once complete, a scrub is + started, which then verifies the checksums. This approach allows full + redundancy to be restored to the pool in the minimum amount of time. + This two-phase approach will take longer than a healing resilver when + the time to verify the checksums is included. However, unless there is + additional pool damage, no checksum errors should be reported by the + scrub. This feature is incompatible with raidz configurations. This + feature becomes active while a sequential resilver is + in progress, and returns to enabled when the resilver + completes.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the zpool + remove command to remove top-level vdevs, + evacuating them to reduce the total size of the pool.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables use of the draid vdev + type. dRAID is a variant of RAID-Z which provides integrated distributed + hot spares that allow faster resilvering while retaining the benefits of + RAID-Z. Data, parity, and spare space are organized in redundancy groups + and distributed evenly over all of the devices.

+

This feature becomes active when creating a + pool which uses the draid vdev type, or when adding a + new draid vdev to an existing pool.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Edon-R hash + algorithm for checksum, including for nopwrite (if compression is also + enabled, an overwrite of a block whose checksum matches the data being + written will be ignored). In an abundance of caution, Edon-R requires + verification when used with dedup: zfs + set + =edonr, + (see zfs-set(8)).

+

Edon-R is a very high-performance hash algorithm that was part + of the NIST SHA-3 competition. It provides extremely high hash + performance (over 350% faster than SHA-256), but was not selected + because of its unsuitability as a general purpose secure hash algorithm. + This implementation utilizes the new salted checksumming functionality + in ZFS, which means that the checksum is pre-seeded with a secret + 256-bit random key (stored on the pool) before being fed the data block + to be checksummed. Thus the produced checksums are unique to a given + pool, preventing hash collision attacks on systems with dedup.

+

When the edonr feature is set to + enabled, the administrator can turn on the + edonr checksum on any dataset using + zfs set + checksum=edonr + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + edonr, and will return to being + enabled once all filesystems that have ever had their + checksum set to edonr are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature improves the performance and compression ratio of + highly-compressible blocks. Blocks whose contents can compress to 112 + bytes or smaller can take advantage of this feature.

+

When this feature is enabled, the contents of + highly-compressible blocks are stored in the block + “pointer” itself (a misnomer in this case, as it contains + the compressed data, rather than a pointer to its location on disk). + Thus the space of the block (one sector, typically 512 B or 4 KiB) is + saved, and no additional I/O is needed to read and write the data block. + This feature becomes active + as soon as it is enabled and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature increases the performance of creating and using a + large number of snapshots of a single filesystem or volume, and also + reduces the disk space required.

+

When there are many snapshots, each snapshot uses many Block + Pointer Objects (bpobjs) to track blocks associated with that snapshot. + However, in common use cases, most of these bpobjs are empty. This + feature allows us to create each bpobj on-demand, thus eliminating the + empty bpobjs.

+

This feature is active while there are any + filesystems, volumes, or snapshots which were created after enabling + this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

Once this feature is enabled, ZFS records the transaction + group number in which new features are enabled. This has no user-visible + impact, but other features may depend on this feature.

+

This feature becomes active as soon as it is + enabled and will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmark_v2, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the creation and management of natively + encrypted datasets.

+

This feature becomes active when an + encrypted dataset is created and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows more flexible use of internal ZFS data + structures, and exists for other features to depend on.

+

This feature will be active when the first + dependent feature uses it, and will be returned to the + enabled state when all datasets that use this feature + are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables filesystem and snapshot limits. These + limits can be used to control how many filesystems and/or snapshots can + be created at the point in the tree on which the limits are set.

+

This feature is active once either of the + limit properties has been set on a dataset and will never return to + being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the upgraded version of errlog, which required an on-disk error log format change. Now the error log of each head dataset is stored separately in the zap object and keyed by the head id. With this feature enabled, every dataset affected by an error block is listed in the output of zpool status. In case of encrypted filesystems with unloaded keys we are unable to check their snapshots or clones for errors and these will not be reported; an "access denied" error will be reported instead.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
enabled_txg
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature has/had bugs, + the result of which is that, if you do a zfs + send -i (or + -R, since it uses + -i) from an affected dataset, the receiving + party will not see any checksum or other errors, but the resulting + destination snapshot will not match the source. Its use by + zfs send + -i has been disabled by default (see + + in zfs(4)).

+

This feature improves performance of incremental sends + (zfs send + -i) and receives for objects with many holes. + The most common case of hole-filled objects is zvols.

+

An incremental send stream from snapshot A + to snapshot B contains + information about every block that changed between A + and B. Blocks which did not + change between those snapshots can be identified and omitted from the + stream using a piece of metadata called the “block birth + time”, but birth times are not recorded for holes (blocks filled + only with zeroes). Since holes created after A + cannot be distinguished from holes created + before A, information about every hole in the + entire filesystem or zvol is included in the send stream.

+

For workloads where holes are rare this is not a problem. + However, when incrementally replicating filesystems or zvols with many + holes (for example a zvol formatted with another filesystem) a lot of + time will be spent sending and receiving unnecessary information about + holes that already exist on the receiving side.

+

Once the hole_birth feature has been enabled + the block birth times of all new holes will be recorded. Incremental + sends between snapshots created after this feature is enabled will use + this new metadata to avoid sending information about holes that already + exist on the receiving side.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the record size on a dataset to be set + larger than 128 KiB.

+

This feature becomes active once a dataset + contains a file with a block size larger than 128 KiB, and will return + to being enabled once all filesystems that have ever + had their recordsize larger than 128 KiB are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows the size of dnodes in a + dataset to be set larger than 512 B. This feature becomes + active once a dataset contains an object with a dnode + larger than 512 B, which occurs as a result of setting the + + dataset property to a value other than legacy. The + feature will return to being enabled once all + filesystems that have ever contained a dnode larger than 512 B are + destroyed. Large dnodes allow more data to be stored in the bonus + buffer, thus potentially improving performance by avoiding the use of + spill blocks.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows clones to be deleted faster than the traditional method when a large number of random/sparse writes have been made to the clone. All blocks allocated and freed after a clone is created are tracked by the clone's livelist which is referenced during the deletion of the clone. The feature is activated when a clone is created and remains active until all clones have been destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
com.delphix:spacemap_v2
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature improves performance for heavily-fragmented + pools, especially when workloads are heavy in random-writes. It does so + by logging all the metaslab changes on a single spacemap every TXG + instead of scattering multiple writes to all the metaslab spacemaps.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

lz4 is a high-performance real-time + compression algorithm that features significantly faster compression and + decompression as well as a higher compression ratio than the older + lzjb compression. Typically, lz4 + compression is approximately 50% faster on compressible data and 200% + faster on incompressible data than lzjb. It is also + approximately 80% faster on decompression, while giving approximately a + 10% better compression ratio.

+

When the lz4_compress feature is set to + enabled, the administrator can turn on + lz4 compression on any dataset on the pool using the + zfs-set(8) command. All newly written metadata will be + compressed with the lz4 algorithm.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature allows a dump device to be configured with a pool + comprised of multiple vdevs. Those vdevs may be arranged in any mirrored + or raidz configuration.

+

When the multi_vdev_crash_dump feature is + set to enabled, the administrator can use + dumpadm(8) to configure a dump device on a pool + comprised of multiple vdevs.

+

Under FreeBSD and Linux this feature + is unused, but registered for compatibility. New pools created on these + systems will have the feature enabled but will never + transition to active, as this functionality is not + required for crash dump support. Existing pools where this feature is + active can be imported.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
device_removal
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature is an enhancement of + device_removal, which will over time reduce the memory + used to track removed devices. When indirect blocks are freed or + remapped, we note that their part of the indirect mapping is + “obsolete” – no longer needed.

+

This feature becomes active when the + zpool remove command is + used on a top-level vdev, and will never return to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account space and object usage against the project identifier (ID).

+

The project ID is an object-based attribute. When + upgrading an existing filesystem, objects without a project ID will be + assigned a zero project ID. When this feature is enabled, newly created + objects inherit their parent directories' project ID if the parent's + inherit flag is set (via chattr + + or zfs + project + -s|-C). Otherwise, the + new object's project ID will be zero. An object's project ID can be + changed at any time by the owner (or privileged user) via + chattr -p + prjid or zfs + project -p + prjid.
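As a hedged example (the path and project ID are hypothetical), a directory inside a mounted filesystem can be tagged with a project ID and marked so new entries inherit it, and its setting displayed:
zfs project -s -p 1001 /tank/fs/projA
zfs project -d /tank/fs/projA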

+

This feature will become active as soon as it is enabled and will never return to being enabled. Each filesystem will be upgraded automatically when remounted, or when a new file is created under that filesystem. The upgrade can also be triggered on filesystems via zfs set version=current fs. The upgrade process runs in the background and may take a while to complete for filesystems containing large amounts of files.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
bookmarks, extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of redacted + zfs sends, which create + redaction bookmarks storing the list of blocks redacted by the send that + created them. For more information about redacted sends, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the receiving of redacted + zfs send streams, which + create redacted datasets when received. These datasets are missing some + of their blocks, and so cannot be safely mounted, and their contents + cannot be safely read. For more information about redacted receives, see + zfs-send(8).

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to postpone new resilvers if an + existing one is already in progress. Without this feature, any new + resilvers will cause the currently running one to be immediately + restarted from the beginning.

+

This feature becomes active once a resilver + has been deferred, and returns to being enabled when + the deferred resilver begins.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the SHA-512/256 truncated hash + algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit + arithmetic of SHA-512 provides an approximate 50% performance boost over + SHA-256 on 64-bit hardware and is thus a good minimum-change replacement + candidate for systems where hash performance is important, but these + systems cannot for whatever reason utilize the faster + skein and + edonr algorithms.

+

When the sha512 feature is set to + enabled, the administrator can turn on the + sha512 checksum on any dataset using + zfs set + checksum=sha512 + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + sha512, and will return to being + enabled once all filesystems that have ever had their + checksum set to sha512 are destroyed.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature enables the use of the Skein hash algorithm for + checksum and dedup. Skein is a high-performance secure hash algorithm + that was a finalist in the NIST SHA-3 competition. It provides a very + high security margin and high performance on 64-bit hardware (80% faster + than SHA-256). This implementation also utilizes the new salted + checksumming functionality in ZFS, which means that the checksum is + pre-seeded with a secret 256-bit random key (stored on the pool) before + being fed the data block to be checksummed. Thus the produced checksums + are unique to a given pool, preventing hash collision attacks on systems + with dedup.

+

When the skein feature is set to + enabled, the administrator can turn on the + skein checksum on any dataset using + zfs set + checksum=skein + dset (see zfs-set(8)). This + feature becomes active once a + checksum property has been set to + skein, and will return to being + enabled once all filesystems that have ever had their + checksum set to skein are destroyed.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, it will be activated when a new space map object is created, or an existing space map is upgraded to the new format, and never returns to being enabled.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the use of the new space map encoding + which consists of two words (instead of one) whenever it is + advantageous. The new encoding allows space maps to represent large + regions of space more efficiently on-disk while also increasing their + maximum addressable offset.

+

This feature becomes active once it is + enabled, and never returns back to being + enabled.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature allows administrators to account object usage information by user and group.

+

This feature becomes + active as soon as it is enabled and + will never return to being enabled. + Each filesystem will be upgraded automatically when + remounted, or when a new file is created under that filesystem. The + upgrade can also be triggered on filesystems via + zfs set + version=current + fs. The upgrade process runs in + the background and may take a while to complete for filesystems + containing large amounts of files.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
no
+
+

This feature creates a ZAP object for the root vdev.

+

This feature becomes active after the next + zpool import or + zpool reguid. Properties can be retrieved or set + on the root vdev using zpool + get and zpool + set with + as the vdev + name which is an alias for + .

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables + xattr=sa extended attribute logging + in the ZIL. If enabled, extended attribute changes (both + = + and + xattr=sa) are guaranteed to be + durable if either the dataset had + = + set at the time the changes were made, or sync(2) is + called on the dataset after the changes were made.

+

This feature becomes active when a ZIL is + created for at least one dataset and will be returned to the + enabled state when it is destroyed for all datasets + that use this feature.

+
+
+
+
+
GUID
+
+
READ-ONLY COMPATIBLE
+
yes
+
+

This feature enables the zpool + checkpoint command that can checkpoint the state + of the pool at the time it was issued and later rewind back to it or + discard it.

+

This feature becomes active when the + zpool checkpoint command + is used to checkpoint the pool. The feature will only return back to + being enabled when the pool is rewound or the + checkpoint has been discarded.

+
+
+
+
+
GUID
+
+
DEPENDENCIES
+
extensible_dataset
+
READ-ONLY COMPATIBLE
+
no
+
+

zstd is a high-performance + compression algorithm that features a combination of high compression + ratios and high speed. Compared to + , + zstd offers slightly better compression at much higher + speeds. Compared to lz4, zstd offers + much better compression while being only modestly slower. Typically, + zstd compression speed ranges from 250 to 500 MB/s per + thread and decompression speed is over 1 GB/s per thread.

+

When the zstd feature is set to + enabled, the administrator can turn on + zstd compression of any dataset using + zfs set + compress=zstd + dset (see zfs-set(8)). This + feature becomes active once a + compress property has been set to + zstd, and will return to being + enabled once all filesystems that have ever had their + compress property set to zstd are + destroyed.

+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
June 23, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zpoolconcepts.7.html b/man/v2.2/7/zpoolconcepts.7.html new file mode 100644 index 000000000..22736821b --- /dev/null +++ b/man/v2.2/7/zpoolconcepts.7.html @@ -0,0 +1,605 @@ + + + + + + + zpoolconcepts.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolconcepts.7

+
+ + + + + +
ZPOOLCONCEPTS(7)Miscellaneous Information ManualZPOOLCONCEPTS(7)
+
+
+

+

zpoolconcepts — + overview of ZFS storage pools

+
+
+

+
+

+

A "virtual device" describes a single device or a + collection of devices, organized according to certain performance and fault + characteristics. The following virtual devices are supported:

+
+
+
A block device, typically located under /dev. ZFS + can use individual slices or partitions, though the recommended mode of + operation is to use whole disks. A disk can be specified by a full path, + or it can be a shorthand name (the relative portion of the path under + /dev). A whole disk can be specified by omitting + the slice or partition designation. For example, + sda is equivalent to + /dev/sda. When given a whole disk, ZFS + automatically labels the disk, if necessary.
+
+
A regular file. The use of files as a backing store is strongly + discouraged. It is designed primarily for experimental purposes, as the + fault tolerance of a file is only as good as the file system on which it + resides. A file must be specified by a full path.
+
+
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand N-1 devices failing, without losing data.
+
raidz, raidz1, raidz2, raidz3
+
A distributed-parity layout, similar to RAID-5/6, with improved + distribution of parity, and which does not suffer from the RAID-5/6 + "write hole", (in which data and parity become inconsistent + after a power loss). Data and parity is striped across all disks within a + raidz group, though not necessarily in a consistent stripe width. +

A raidz group can have single, double, or triple parity, + meaning that the raidz group can sustain one, two, or three failures, + respectively, without losing any data. The raidz1 vdev + type specifies a single-parity raidz group; the raidz2 + vdev type specifies a double-parity raidz group; and the + raidz3 vdev type specifies a triple-parity raidz + group. The raidz vdev type is an alias for + raidz1.

+

A raidz group with N disks of size X with P parity disks can hold approximately (N-P)×X bytes and can withstand P devices failing without losing data. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.
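As a sketch, assuming hypothetical disk names, a double-parity raidz group of six disks could be created with:
# zpool create tank raidz2 sda sdb sdc sdd sde sdf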

+
+
draid, draid1, draid2, draid3
+
A variant of raidz that provides integrated distributed hot spares, + allowing for faster resilvering, while retaining the benefits of raidz. A + dRAID vdev is constructed from multiple internal raidz groups, each with + D data devices and + P parity devices. These groups + are distributed over all of the children in order to fully utilize the + available disk performance. +

Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with zeros) to allow fully sequential resilvering. This fixed stripe width significantly affects both usable capacity and IOPS. For example, with the default D=8 and 4 KiB disk sectors the minimum allocation size is 32 KiB. If using compression, this relatively large allocation size can reduce the effective compression ratio. When using ZFS volumes (zvols) and dRAID, the default of the volblocksize property is increased to account for the allocation size. If a dRAID pool will hold a significant amount of small blocks, it is recommended to also add a mirrored special vdev to store those blocks.

+

In regards to I/O, performance is similar to raidz since, for any read, all D data disks must be accessed. Delivered random IOPS can be reasonably approximated as floor((N-S)/(D+P)) × single_drive_IOPS.

+

Like raidz, a dRAID can have single-, double-, or + triple-parity. The draid1, draid2, + and draid3 types can be used to specify the parity + level. The draid vdev type is an alias for + draid1.

+

A dRAID with N disks of size X, D data disks per redundancy group, P parity level, and S distributed hot spares can hold approximately (N-S)×(D/(D+P))×X bytes and can withstand P devices failing without losing data.
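As a sketch, assuming hypothetical disk names, a dRAID vdev with the default layout could be created with:
# zpool create tank draid sda sdb sdc sdd sde sdf sdg sdh sdi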

+
+
draid[parity][:datad][:childrenc][:sparess]
+
A non-default dRAID configuration can be specified by appending one or + more of the following optional arguments to the draid + keyword: +
+
parity
+
The parity level (1-3).
+
data
+
The number of data devices per redundancy group. In general, a smaller value of D will increase IOPS, improve the compression ratio, and speed up resilvering at the expense of total usable capacity. Defaults to 8, unless N-P-S is less than 8.
+
children
+
The expected number of children. Useful as a cross-check when listing + a large number of devices. An error is returned when the provided + number of children differs.
+
spares
+
The number of distributed hot spares. Defaults to zero.
+
+
+
+
A pseudo-vdev which keeps track of available hot spares for a pool. For + more information, see the Hot Spares + section.
+
+
A separate intent log device. If more than one log device is specified, + then writes are load-balanced between devices. Log devices can be + mirrored. However, raidz vdev types are not supported for the intent log. + For more information, see the Intent + Log section.
+
+
A device solely dedicated for deduplication tables. The redundancy of this + device should match the redundancy of the other normal devices in the + pool. If more than one dedup device is specified, then allocations are + load-balanced between those devices.
+
+
A device dedicated solely for allocating various kinds of internal + metadata, and optionally small file blocks. The redundancy of this device + should match the redundancy of the other normal devices in the pool. If + more than one special device is specified, then allocations are + load-balanced between those devices. +

For more information on special allocations, see the + Special Allocation + Class section.

+
+
+
A device used to cache storage pool data. A cache device cannot be + configured as a mirror or raidz group. For more information, see the + Cache Devices section.
+
+

Virtual devices cannot be nested arbitrarily. A mirror, raidz or + draid virtual device can only be created with files or disks. Mirrors of + mirrors or other such combinations are not allowed.

+

A pool can have any number of virtual devices at the top of the + configuration (known as "root vdevs"). Data is dynamically + distributed across all top-level devices to balance data among devices. As + new virtual devices are added, ZFS automatically places data on the newly + available devices.

+

Virtual devices are specified one at a time on the command line, + separated by whitespace. Keywords like mirror + and raidz are used to distinguish + where a group ends and another begins. For example, the following creates a + pool with two root vdevs, each a mirror of two disks:

+
# zpool create mypool mirror sda sdb mirror sdc sdd
+
+
+

+

ZFS supports a rich set of mechanisms for handling device failure + and data corruption. All metadata and data is checksummed, and ZFS + automatically repairs bad data from a good copy, when corruption is + detected.

+

In order to take advantage of these features, a pool must make use + of some form of redundancy, using either mirrored or raidz groups. While ZFS + supports running in a non-redundant configuration, where each root vdev is + simply a disk or file, this is strongly discouraged. A single case of bit + corruption can render some or all of your data unavailable.

+

A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

+

The health of the top-level vdev, such as a mirror or raidz + device, is potentially impacted by the state of its associated vdevs or + component devices. A top-level vdev or component device is in one of the + following states:

+
+
DEGRADED
One or more top-level vdevs is in the degraded state because one or more + component devices are offline. Sufficient replicas exist to continue + functioning. +

One or more component devices is in the degraded or faulted + state, but sufficient replicas exist to continue functioning. The + underlying conditions are as follows:

+
    +
  • The number of checksum errors exceeds acceptable levels and the device + is degraded as an indication that something may be wrong. ZFS + continues to use the device as necessary.
  • +
  • The number of I/O errors exceeds acceptable levels. The device could + not be marked as faulted because there are insufficient replicas to + continue functioning.
  • +
+
+
FAULTED
One or more top-level vdevs is in the faulted state because one or more + component devices are offline. Insufficient replicas exist to continue + functioning. +

One or more component devices is in the faulted state, and + insufficient replicas exist to continue functioning. The underlying + conditions are as follows:

+
    +
  • The device could be opened, but the contents did not match expected + values.
  • +
  • The number of I/O errors exceeds acceptable levels and the device is + faulted to prevent further use of the device.
  • +
+
+
OFFLINE
The device was explicitly taken offline by the + zpool offline + command.
+
ONLINE
The device is online and functioning.
+
REMOVED
The device was physically removed while the system was running. Device + removal detection is hardware-dependent and may not be supported on all + platforms.
+
UNAVAIL
The device could not be opened. If a pool is imported when a device was + unavailable, then the device will be identified by a unique identifier + instead of its path since the path was never correct in the first + place.
+
+

Checksum errors represent events where a disk returned data that + was expected to be correct, but was not. In other words, these are instances + of silent data corruption. The checksum errors are reported in + zpool status and + zpool events. When a block + is stored redundantly, a damaged block may be reconstructed (e.g. from raidz + parity or a mirrored copy). In this case, ZFS reports the checksum error + against the disks that contained damaged data. If a block is unable to be + reconstructed (e.g. due to 3 disks being damaged in a raidz2 group), it is + not possible to determine which disks were silently corrupted. In this case, + checksum errors are reported for all disks on which the block is stored.

+

If a device is removed and later re-attached to the system, ZFS + attempts to bring the device online automatically. Device attachment + detection is hardware-dependent and might not be supported on all + platforms.

+
+
+

+

ZFS allows devices to be associated with pools as "hot + spares". These devices are not actively used in the pool. But, when an + active device fails, it is automatically replaced by a hot spare. To create + a pool with hot spares, specify a spare vdev with any + number of devices. For example,

+
# zpool create pool mirror sda sdb spare sdc sdd
+

Spares can be shared across multiple pools, and can be added with + the zpool add command and + removed with the zpool + remove command. Once a spare replacement is + initiated, a new spare vdev is created within the + configuration that will remain there until the original device is replaced. + At this point, the hot spare becomes available again, if another device + fails.
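For example, assuming a hypothetical pool tank and disk sde, a spare can be added or removed with:
# zpool add tank spare sde
# zpool remove tank sde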

+

If a pool has a shared spare that is currently being used, the + pool cannot be exported, since other pools may use this shared spare, which + may lead to potential data corruption.

+

Shared spares add some risk. If the pools are imported on + different hosts, and both pools suffer a device failure at the same time, + both could attempt to use the spare at the same time. This may not be + detected, resulting in data corruption.

+

An in-progress spare replacement can be cancelled by detaching the + hot spare. If the original faulted device is detached, then the hot spare + assumes its place in the configuration, and is removed from the spare list + of all active pools.

+

The draid vdev type provides distributed hot + spares. These hot spares are named after the dRAID vdev they're a part of + (draid1-2-3 + specifies spare 3 + of vdev 2, + which is a single parity dRAID) and may only be used + by that dRAID vdev. Otherwise, they behave the same as normal hot + spares.

+

Spares cannot replace log devices.

+
+
+

+

The ZFS Intent Log (ZIL) satisfies POSIX requirements for + synchronous transactions. For instance, databases often require their + transactions to be on stable storage devices when returning from a system + call. NFS and other applications can also use fsync(2) to + ensure data stability. By default, the intent log is allocated from blocks + within the main pool. However, it might be possible to get better + performance using separate intent log devices such as NVRAM or a dedicated + disk. For example:

+
# zpool create pool sda sdb log sdc
+

Multiple log devices can also be specified, and they can be + mirrored. See the EXAMPLES section for an + example of mirroring multiple log devices.

+

Log devices can be added, replaced, attached, detached, and + removed. In addition, log devices are imported and exported as part of the + pool that contains them. Mirrored devices can be removed by specifying the + top-level mirror vdev.
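As a sketch, assuming hypothetical disk names, a pool with a mirrored log could be created with:
# zpool create pool sda sdb log mirror sdc sdd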

+
+
+

+

Devices can be added to a storage pool as "cache + devices". These devices provide an additional layer of caching between + main memory and disk. For read-heavy workloads, where the working set size + is much larger than what can be cached in main memory, using cache devices + allows much more of this working set to be served from low latency media. + Using cache devices provides the greatest performance improvement for random + read-workloads of mostly static content.

+

To create a pool with cache devices, specify a + cache vdev with any number of devices. For example:

+
# zpool create pool sda sdb cache sdc sdd
+

Cache devices cannot be mirrored or part of a raidz configuration. + If a read error is encountered on a cache device, that read I/O is reissued + to the original storage pool device, which might be part of a mirrored or + raidz configuration.

+

The content of the cache devices is persistent across reboots and restored asynchronously when importing the pool in L2ARC (persistent L2ARC). This can be disabled by setting l2arc_rebuild_enabled=0. For cache devices smaller than 1 GiB, ZFS does not write the metadata structures required for rebuilding the L2ARC, to conserve space. This can be changed with l2arc_rebuild_blocks_min_l2size. The cache device header (512 bytes) is updated even if no metadata structures are written. Setting l2arc_headroom=0 will result in scanning the full-length ARC lists for cacheable content to be written in L2ARC (persistent ARC). If a cache device is added with zpool add, its label and header will be overwritten and its contents will not be restored in L2ARC, even if the device was previously part of the pool. If a cache device is onlined with zpool online, its contents will be restored in L2ARC. This is useful in case of memory pressure, where the contents of the cache device are not fully restored in L2ARC. The user can off- and online the cache device when there is less memory pressure, to fully restore its contents to L2ARC.
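As a sketch, assuming a hypothetical pool tank with cache device sdc, the off-/online cycle mentioned above could look like:
# zpool offline tank sdc
# zpool online tank sdc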

+
+
+

+

Before starting critical procedures that include destructive + actions (like zfs destroy), + an administrator can checkpoint the pool's state and, in the case of a + mistake or failure, rewind the entire pool back to the checkpoint. + Otherwise, the checkpoint can be discarded when the procedure has completed + successfully.

+

A pool checkpoint can be thought of as a pool-wide snapshot and + should be used with care as it contains every part of the pool's state, from + properties to vdev configuration. Thus, certain operations are not allowed + while a pool has a checkpoint. Specifically, vdev removal/attach/detach, + mirror splitting, and changing the pool's GUID. Adding a new vdev is + supported, but in the case of a rewind it will have to be added again. + Finally, users of this feature should keep in mind that scrubs in a pool + that has a checkpoint do not repair checkpointed data.

+

To create a checkpoint for a pool:

+
# zpool checkpoint pool
+

To later rewind to its checkpointed state, you need to first + export it and then rewind it during import:

+
# zpool export pool
+
# zpool import --rewind-to-checkpoint pool
+

To discard the checkpoint from a pool:

+
# zpool checkpoint -d pool
+

Dataset reservations (controlled by the reservation and refreservation properties) may be unenforceable while a checkpoint exists, because the checkpoint is allowed to consume the dataset's reservation. Finally, data that is part of the checkpoint but has been freed in the current state of the pool won't be scanned during a scrub.

+
+
+

+

Allocations in the special class are dedicated to specific block + types. By default, this includes all metadata, the indirect blocks of user + data, and any deduplication tables. The class can also be provisioned to + accept small file blocks.

+

A pool must always have at least one normal + (non-dedup/-special) vdev before other + devices can be assigned to the special class. If the + special class becomes full, then allocations intended for + it will spill back into the normal class.

+

Deduplication tables can be excluded from the special class by unsetting the zfs_ddt_data_is_special ZFS module parameter.

+

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks property to nonzero. See zfsprops(7) for more info on this property.
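For example, assuming a hypothetical dataset tank/projects, blocks of 32 KiB or smaller could be directed to the special class with:
# zfs set special_small_blocks=32K tank/projects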

+
+
+
+ + + + + +
April 7, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/7/zpoolprops.7.html b/man/v2.2/7/zpoolprops.7.html new file mode 100644 index 000000000..76fe4c0cd --- /dev/null +++ b/man/v2.2/7/zpoolprops.7.html @@ -0,0 +1,511 @@ + + + + + + + zpoolprops.7 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpoolprops.7

+
+ + + + + +
ZPOOLPROPS(7)Miscellaneous Information ManualZPOOLPROPS(7)
+
+
+

+

zpoolprops — + properties of ZFS storage pools

+
+
+

+

Each pool has several properties associated with it. Some + properties are read-only statistics while others are configurable and change + the behavior of the pool.

+

User properties have no effect on ZFS behavior. Use them to + annotate pools in a way that is meaningful in your environment. For more + information about user properties, see the + User Properties section.

+

The following are read-only properties:

+
+
+
Amount of storage used within the pool. See + fragmentation and free for more + information.
+
+
The ratio of the total amount of storage that would be required to store + all the cloned blocks without cloning to the actual storage used. The + bcloneratio property is calculated as: +

((bclonesaved + bcloneused) / bcloneused)

+
+
+
The amount of additional storage that would be required if block cloning + was not used.
+
+
The amount of storage used by cloned blocks.
+
+
Percentage of pool space used. This property can also be referred to by its shortened column name, cap.
+
+
Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. On whole-disk vdevs, this is the space beyond the end of the GPT – typically occurring when a LUN is dynamically expanded or a disk replaced with a larger one. On partition vdevs, this is the space appended to the partition after it was added to the pool – most likely by resizing it in-place. The space can be claimed for the pool by bringing it online with autoexpand=on or using zpool online -e.
+
+
The amount of fragmentation in the pool. As the amount of space + allocated increases, it becomes more difficult to locate + free space. This may result in lower write performance + compared to pools with more unfragmented free space.
+
+
The amount of free space available in the pool. By contrast, the zfs(8) available property describes how much new data can be written to ZFS filesystems/volumes. The zpool free property is not generally useful for this purpose, and can be substantially more than the zfs available space. This discrepancy is due to several factors, including raidz parity; zfs reservation, quota, refreservation, and refquota properties; and space set aside by spa_slop_shift (see zfs(4) for more information).
+
+
After a file system or snapshot is destroyed, the space it was using is + returned to the pool asynchronously. freeing is the + amount of space remaining to be reclaimed. Over time + freeing will decrease while free + increases.
+
+
A unique identifier for the pool.
+
+
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
+
+
Space not released while freeing due to corruption, now + permanently leaked into the pool.
+
+
A unique identifier for the pool. Unlike the guid property, this identifier is generated every time we load the pool (i.e. does not persist across imports/exports) and never changes while the pool is loaded (even if a zpool reguid operation takes place).
+
+
Total size of the storage pool.
+
unsupported@feature_guid
+
Information about unsupported features that are enabled on the pool. See + zpool-features(7) for details.
+
+

The space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(8) command takes into account, but the zpool command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.

+

The following property can be set at creation time and import + time:

+
+
+
Alternate root directory. If set, this directory is prepended to any mount + points within the pool. This can be used when examining an unknown pool + where the mount points cannot be trusted, or in an alternate boot + environment, where the typical paths are not valid. + altroot is not a persistent property. It is valid only + while the system is up. Setting altroot defaults to + using cachefile=none, though this may + be overridden using an explicit setting.
+
+

The following property can be set only at import time:

+
+
=on|off
+
If set to on, the pool will be imported in read-only mode. This property can also be referred to by its shortened column name, rdonly.
+
+

The following properties can be set at creation time and import + time, and later changed with the zpool + set command:

+
+
=ashift
+
Pool sector size exponent, to the power of 2 (internally referred to as ashift). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space/performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
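As a sketch, assuming hypothetical disk names, the property is typically supplied at pool creation time:
# zpool create -o ashift=12 tank mirror sda sdb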
+
=on|off
+
Controls automatic pool expansion when the underlying LUN is grown. If set to on, the pool will be resized according to the size of the expanded device. If the device is part of a mirror or raidz then all devices within that mirror/raidz group must be expanded before the new space is made available to the pool. The default behavior is off. This property can also be referred to by its shortened column name, expand.
+
=on|off
+
Controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is off. This property can also be referred to by its shortened column name, replace. Autoreplace can also be used with virtual disks (like device mapper) provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. See the vdev_id(8) manual page for more details. Autoreplace and autoonline require the ZFS Event Daemon be configured and running. See the zed(8) manual page for more details.
+
=on|off
+
When set to on space which has been recently freed, and + is no longer allocated by the pool, will be periodically trimmed. This + allows block device vdevs which support BLKDISCARD, such as SSDs, or file + vdevs on which the underlying file system supports hole-punching, to + reclaim unused blocks. The default value for this property is + off. +

Automatic TRIM does not immediately reclaim blocks after a free. Instead, it will optimistically delay allowing smaller ranges to be aggregated into a few larger ones. These can then be issued more efficiently to the storage. TRIM on L2ARC devices is enabled by setting l2arc_trim_ahead > 0.

+

Be aware that automatic trimming of recently freed data blocks + can put significant stress on the underlying storage devices. This will + vary depending of how well the specific device handles these commands. + For lower-end devices it is often possible to achieve most of the + benefits of automatic trimming by running an on-demand (manual) TRIM + periodically using the zpool + trim command.
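For example, assuming a hypothetical pool tank, automatic trimming can be enabled, or a manual TRIM issued, with:
# zpool set autotrim=on tank
# zpool trim tank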

+
+
=|pool[/dataset]
+
Identifies the default bootable dataset for the root pool. This property + is expected to be set mainly by the installation and upgrade programs. Not + all Linux distribution boot processes use the bootfs property.
+
=path|none
+
Controls the location of where the pool configuration is cached. + Discovering all pools on system startup requires a cached copy of the + configuration data that is stored on the root file system. All pools in + this cache are automatically imported when the system boots. Some + environments, such as install and clustering, need to cache this + information in a different location so that pools are not automatically + imported. Setting this property caches the pool configuration in a + different location that can later be imported with + zpool import + -c. Setting it to the value none + creates a temporary pool that is never cached, and the "" (empty + string) uses the default location. +

Multiple pools can share the same cache file. Because the + kernel destroys and recreates this file when pools are added and + removed, care should be taken when attempting to access this file. When + the last pool using a cachefile is exported or + destroyed, the file will be empty.

+
+
=text
+
A text string consisting of printable ASCII characters that will be stored + such that it is available even if the pool becomes faulted. An + administrator can provide additional information about a pool using this + property.
+
=off|legacy|file[,file]…
+
Specifies that the pool maintain compatibility with specific feature sets. When set to off (or unset) compatibility is disabled (all features may be enabled); when set to legacy, no features may be enabled. When set to a comma-separated list of filenames (each filename may either be an absolute path, or relative to /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d) the lists of requested features are read from those files, separated by whitespace and/or commas. Only features present in all files may be enabled.

See zpool-features(7), + zpool-create(8) and zpool-upgrade(8) + for more information on the operation of compatibility feature sets.

+
+
=number
+
This property is deprecated and no longer has any effect.
+
=on|off
+
Controls whether a non-privileged user is granted access based on the + dataset permissions defined on the dataset. See zfs(8) + for more information on ZFS delegated administration.
+
=wait|continue|panic
+
Controls the system behavior in the event of catastrophic pool failure. + This condition is typically a result of a loss of connectivity to the + underlying storage device(s) or a failure of all devices within the pool. + The behavior of such an event is determined as follows: +
+
wait
Blocks all I/O access until the device connectivity is recovered and + the errors are cleared with zpool + clear. This is the default behavior.
+
continue
Returns EIO to any new write I/O requests but + allows reads to any of the remaining healthy devices. Any write + requests that have yet to be committed to disk would be blocked.
+
panic
Prints out a message to the console and generates a system crash + dump.
+
+
+
feature_name=enabled
+
The value of this property is the current state of + feature_name. The only valid value when setting this + property is enabled which moves + feature_name to the enabled state. See + zpool-features(7) for details on feature states.
+
=on|off
+
Controls whether information about snapshots associated with this pool is output when zfs list is run without the -t option. The default value is off. This property can also be referred to by its shortened name, listsnaps.
+
=on|off
+
Controls whether a pool activity check should be performed during + zpool import. When a pool + is determined to be active it cannot be imported, even with the + -f option. This property is intended to be used in + failover configurations where multiple hosts have access to a pool on + shared storage. +

Multihost provides protection on import only. It does not + protect against an individual device being used in multiple pools, + regardless of the type of vdev. See the discussion under + zpool create.

+

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs(4) manual page. In order to enable this property each host must set a unique hostid. See zgenhostid(8) spl(4) for additional details. The default value is off.
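As a sketch, on each host (tank is a hypothetical pool) this might look like:
# zgenhostid
# zpool set multihost=on tank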

+
+
=version
+
The current on-disk version of the pool. This can be increased, but never + decreased. The preferred method of updating pools is with the + zpool upgrade command, + though this property can be used when a specific version is needed for + backwards compatibility. Once feature flags are enabled on a pool this + property will no longer have a value.
+
+
+

+

In addition to the standard native properties, ZFS supports + arbitrary user properties. User properties have no effect on ZFS behavior, + but applications or administrators can use them to annotate pools.

+

User property names must contain a colon (":") character to distinguish them from native properties. They may contain lowercase letters, numbers, and the following punctuation characters: colon (":"), dash ("-"), period ("."), and underscore ("_"). The expected convention is that the property name is divided into two portions such as module:property, but this namespace is not enforced by ZFS. User property names can be at most 256 characters, and cannot begin with a dash ("-").

+

When making programmatic use of user properties, it is strongly + suggested to use a reversed DNS domain name for the + module component of property names to reduce the + chance that two independently-developed packages use the same property name + for different purposes.

+

The values of user properties are arbitrary strings and are never + validated. All of the commands that operate on properties + (zpool list, + zpool get, + zpool set, and so forth) can + be used to manipulate both native properties and user properties. Use + zpool set + name= to clear a user property. Property values are + limited to 8192 bytes.
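For example, assuming a hypothetical property com.example:location on a hypothetical pool tank:
# zpool set com.example:location=rack12 tank
# zpool get com.example:location tank
# zpool set com.example:location= tank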

+
+
+
+ + + + + +
April 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/fsck.zfs.8.html b/man/v2.2/8/fsck.zfs.8.html new file mode 100644 index 000000000..3e837fe49 --- /dev/null +++ b/man/v2.2/8/fsck.zfs.8.html @@ -0,0 +1,292 @@ + + + + + + + fsck.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

fsck.zfs.8

+
+ + + + + +
FSCK.ZFS(8)System Manager's ManualFSCK.ZFS(8)
+
+
+

+

fsck.zfsdummy + ZFS filesystem checker

+
+
+

+ + + + + +
fsck.zfs[options] + dataset
+
+
+

+

fsck.zfs is a thin shell wrapper that at + most checks the status of a dataset's container pool. It is installed by + OpenZFS because some Linux distributions expect a fsck helper for all + filesystems.

+

If more than one dataset is specified, each + is checked in turn and the results binary-ored.

+
+
+

+

Ignored.

+
+
+

+

ZFS datasets are checked by running zpool + scrub on the containing pool. An individual ZFS + dataset is never checked independently of its pool, which is unlike a + regular filesystem.

+

However, the fsck(8) interface still allows it to communicate some errors: if the dataset is in a degraded pool, then fsck.zfs will return exit code 4 to indicate an uncorrected filesystem error.

+

Similarly, if the dataset is in a faulted pool and has a legacy /etc/fstab record, then fsck.zfs will return exit code 8 to indicate a fatal operational error.

+
+
+

+

fstab(5), fsck(8), + zpool-scrub(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/index.html b/man/v2.2/8/index.html new file mode 100644 index 000000000..04763343b --- /dev/null +++ b/man/v2.2/8/index.html @@ -0,0 +1,313 @@ + + + + + + + System Administration Commands (8) — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/mount.zfs.8.html b/man/v2.2/8/mount.zfs.8.html new file mode 100644 index 000000000..4eee2627c --- /dev/null +++ b/man/v2.2/8/mount.zfs.8.html @@ -0,0 +1,299 @@ + + + + + + + mount.zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

mount.zfs.8

+
+ + + + + +
MOUNT.ZFS(8)System Manager's ManualMOUNT.ZFS(8)
+
+
+

+

mount.zfsmount + ZFS filesystem

+
+
+

+ + + + + +
mount.zfs[-sfnvh] [-o + options] dataset + mountpoint
+
+
+

+

The mount.zfs helper is used by mount(8) to mount filesystem snapshots and legacy ZFS filesystems, as well as by zfs(8) when the ZFS_MOUNT_HELPER environment variable is not set. Users should invoke zfs(8) directly in most cases.

+

options are handled according to the Temporary Mount Point Properties section in zfsprops(7), except for those described below.

+

If /etc/mtab is a regular file and + -n was not specified, it will be updated via + libmount.

+
+
+

+
+
+
Ignore unknown (sloppy) mount options.
+
+
Do everything except actually executing the system call.
+
+
Never update /etc/mtab.
+
+
Print resolved mount options and parser state.
+
+
Print the usage message.
+
+ zfsutil
+
This private flag indicates that mount(8) is being + called by the zfs(8) command.
+
+
+
+

+

fstab(5), mount(8), + zfs-mount(8)

+
+
+ + + + + +
May 24, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/vdev_id.8.html b/man/v2.2/8/vdev_id.8.html new file mode 100644 index 000000000..d48d89598 --- /dev/null +++ b/man/v2.2/8/vdev_id.8.html @@ -0,0 +1,324 @@ + + + + + + + vdev_id.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

vdev_id.8

+
+ + + + + +
VDEV_ID(8)System Manager's ManualVDEV_ID(8)
+
+
+

+

vdev_idgenerate + user-friendly names for JBOD disks

+
+
+

+ + + + + +
vdev_id-d dev + -c config_file + -g + sas_direct|sas_switch|scsi + -m -p + phys_per_port
+
+
+

+

vdev_id is an udev helper which parses + vdev_id.conf(5) to map a physical path in a storage + topology to a channel name. The channel name is combined with a disk + enclosure slot number to create an alias that reflects the physical location + of the drive. This is particularly helpful when it comes to tasks like + replacing failed drives. Slot numbers may also be remapped in case the + default numbering is unsatisfactory. The drive aliases will be created as + symbolic links in /dev/disk/by-vdev.

+

The currently supported topologies are + sas_direct, sas_switch, and + scsi. A multipath mode is supported in which dm-mpath + devices are handled by examining the first running component disk as + reported by the driver. In multipath mode the configuration file should + contain a channel definition with the same name for each path to a given + enclosure.
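As a sketch, a minimal sas_direct configuration in /etc/zfs/vdev_id.conf might look like the following (the PCI slot address and channel names are hypothetical):
topology      sas_direct
phys_per_port 4
channel 85:00.0 1 A
channel 85:00.0 0 B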

+

vdev_id also supports creating + aliases based on existing udev links in the /dev hierarchy using the + configuration + file keyword. See vdev_id.conf(5) for details.

+
+
+

+
+
+ device
+
The device node to classify, like /dev/sda.
+
+ config_file
+
Specifies the path to an alternate configuration file. The default is + /etc/zfs/vdev_id.conf.
+
+ sas_direct|sas_switch|scsi
+
Identifies a physical topology that governs how physical paths are mapped + to channels: +
+
+ and scsi
+
channels are uniquely identified by a PCI slot and HBA port + number
+
+
channels are uniquely identified by a SAS switch port number
+
+
+
+
Only handle dm-multipath devices. If specified, examine the first running + component disk of a dm-multipath device as provided by the driver to + determine the physical path.
+
+ phys_per_port
+
Specifies the number of PHY devices associated with a SAS HBA port or SAS switch port. vdev_id internally uses this value to determine which HBA or switch port a device is connected to. The default is 4.
+
+
Print a usage summary.
+
+
+
+

+

vdev_id.conf(5)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zdb.8.html b/man/v2.2/8/zdb.8.html new file mode 100644 index 000000000..95d992628 --- /dev/null +++ b/man/v2.2/8/zdb.8.html @@ -0,0 +1,806 @@ + + + + + + + zdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zdb.8

+
+ + + + + +
ZDB(8)System Manager's ManualZDB(8)
+
+
+

+

zdbdisplay ZFS + storage pool debugging and consistency information

+
+
+

+ + + + + +
zdb[-AbcdDFGhikLMNPsTvXYy] + [-e [-V] + [-p path]…] + [-I inflight-I/O-ops] + [-o + var=value]… + [-t txg] + [-U cache] + [-x dumpdir] + [-K key] + [poolname[/dataset|objset-ID]] + [object|range…]
+
+ + + + + +
zdb[-AdiPv] [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname[/dataset|objset-ID] + [object|range…]
+
+ + + + + +
zdb-B [-e + [-V] [-p + path]…] [-U + cache] [-K + key] + poolname/objset-ID + [backup-flags]
+
+ + + + + +
zdb-C [-A] + [-U cache] + [poolname]
+
+ + + + + +
zdb-E [-A] + word0:word1:…:word15
+
+ + + + + +
zdb-l [-Aqu] + device
+
+ + + + + +
zdb-m [-AFLPXY] + [-e [-V] + [-p path]…] + [-t txg] + [-U cache] + poolname [vdev + [metaslab]…]
+
+ + + + + +
zdb-O [-K + key] dataset path
+
+ + + + + +
zdb-r [-K + key] dataset path + destination
+
+ + + + + +
zdb-R [-A] + [-e [-V] + [-p path]…] + [-U cache] + poolname + vdev:offset:[lsize/]psize[:flags]
+
+ + + + + +
zdb-S [-AP] + [-e [-V] + [-p path]…] + [-U cache] + poolname
+
+
+

+

The zdb utility displays information about a ZFS pool useful for debugging and performs some amount of consistency checking. It is not a general purpose tool and options (and facilities) may change. It is not a fsck(8) utility.

+

The output of this command in general reflects the on-disk structure of a ZFS pool, and is inherently unstable. The precise output of most invocations is not documented; a knowledge of ZFS internals is assumed.

+

If the dataset argument does not contain any "/" or "@" characters, it is interpreted as a pool name. The root dataset can be specified as "pool/".

+

zdb is an "offline" tool; it + accesses the block devices underneath the pools directly from userspace and + does not care if the pool is imported or datasets are mounted (or even if + the system understands ZFS at all). When operating on an imported and active + pool it is possible, though unlikely, that zdb may interpret inconsistent + pool data and behave erratically.

+
+
+

+

Display options:

+
+
, + --block-stats
+
Display statistics regarding the number, size (logical, physical and + allocated) and deduplication of blocks.
+
, + --backup
+
Generate a backup stream, similar to zfs send, but for the numeric objset ID, and without opening the dataset. This can be useful in recovery scenarios if dataset metadata has become corrupted but the dataset itself is readable. The optional flags argument is a string of one or more of the letters e, L, c, and w, which correspond to the same flags in zfs-send(8).
+
, + --checksum
+
Verify the checksum of all metadata blocks while printing block statistics + (see -b). +

If specified multiple times, verify the checksums of all + blocks.

+
+
, + --config
+
Display information about the configuration. If specified with no other + options, instead display information about the cache file + (/etc/zfs/zpool.cache). To specify the cache file + to display, see -U. +

If specified multiple times, and a pool name is also specified + display both the cached configuration and the on-disk configuration. If + specified multiple times with -e also display + the configuration that would be used were the pool to be imported.

+
+
, + --datasets
+
Display information about datasets. Specified once, displays basic dataset + information: ID, create transaction, size, and object count. See + -N for determining if + poolname[/dataset|objset-ID] + is to use the specified + dataset|objset-ID as a string + (dataset name) or a number (objset ID) when datasets have numeric names. +

If specified multiple times provides greater and greater + verbosity.

+

If object IDs or object ID ranges are specified, display + information about those specific objects or ranges only.

+

An object ID range is specified in terms of a colon-separated + tuple of the form + ⟨start⟩:⟨end⟩[:⟨flags⟩]. The + fields start and end are + integer object identifiers that denote the upper and lower bounds of the + range. An end value of -1 specifies a range with + no upper bound. The flags field optionally + specifies a set of flags, described below, that control which object + types are dumped. By default, all object types are dumped. A minus sign + (-) negates the effect of the flag that follows it and has no effect + unless preceded by the A flag. For example, the + range 0:-1:A-d will dump all object types except for directories.
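For instance, assuming a hypothetical pool rpool, the range from the example above could be dumped with:
# zdb -dd rpool 0:-1:A-d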

+

+
+
+
Dump all objects (this is the default)
+
+
Dump ZFS directory objects
+
+
Dump ZFS plain file objects
+
+
Dump SPA space map objects
+
+
Dump ZAP objects
+
-
+
Negate the effect of next flag
+
+
+
, + --dedup-stats
+
Display deduplication statistics, including the deduplication ratio + (dedup), compression ratio (compress), + inflation due to the zfs copies property (copies), and + an overall effective ratio (dedup + × compress + / copies).
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the statistics independently for each deduplication table.
+
+
Dump the contents of the deduplication tables describing duplicate + blocks.
+
+
Also dump the contents of the deduplication tables describing unique + blocks.
+
, + --embedded-block-pointer=word0:word1:…:word15
+
Decode and display block from an embedded block pointer specified by the + word arguments.
+
, + --history
+
Display pool history similar to zpool + history, but include internal changes, + transaction, and dataset information.
+
, + --intent-logs
+
Display information about intent log (ZIL) entries relating to each + dataset. If specified multiple times, display counts of each intent log + transaction type.
+
, + --checkpointed-state
+
Examine the checkpointed state of the pool. Note, the on disk format of + the pool is not reverted to the checkpointed state.
+
, + --label=device
+
Read the vdev labels and L2ARC header from the specified device. + zdb -l will return 0 if + valid label was found, 1 if error occurred, and 2 if no valid labels were + found. The presence of L2ARC header is indicated by a specific sequence + (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size or the + number of L2ARC log blocks zdb + -l will return 1. Each unique configuration is + displayed only once.
+
+ device
+
In addition display label space usage stats. If a valid L2ARC header was + found also display the properties of log blocks used for restoring L2ARC + contents (persistent L2ARC).
+
+ device
+
Display every configuration, unique or not. If a valid L2ARC header was + found also display the properties of log entries in log blocks used for + restoring L2ARC contents (persistent L2ARC). +

If the -q option is also specified, + don't print the labels or the L2ARC header.

+

If the -u option is also specified, + also display the uberblocks on this device. Specify multiple times to + increase verbosity.

+
+
, + --disable-leak-tracking
+
Disable leak detection and the loading of space maps. By default, + zdb verifies that all non-free blocks are + referenced, which can be very expensive.
+
, + --metaslabs
+
Display the offset, spacemap, free space of each metaslab, all the log + spacemaps and their obsolete entry statistics.
+
+
Also display information about the on-disk free space histogram associated + with each metaslab.
+
+
Display the maximum contiguous free space, the in-core free space + histogram, and the percentage of free space in each space map.
+
+
Display every spacemap record.
+
, + --metaslab-groups
+
Display all "normal" vdev metaslab group information - per-vdev + metaslab count, fragmentation, and free space histogram, as well as + overall pool fragmentation and histogram.
+
+
"Special" vdevs are added to -M's normal output.
+
, + --object-lookups=dataset + path
+
Also display information about the maximum contiguous free space and the + percentage of free space in each space map.
+
+
Display every spacemap record.
+
+
Same as -d but force zdb to interpret the + [dataset|objset-ID] in + [poolname[/dataset|objset-ID]] + as a numeric objset ID.
+
+ dataset path
+
Look up the specified path inside of the + dataset and display its metadata and indirect + blocks. Specified path must be relative to the root + of dataset. This option can be combined with + -v for increasing verbosity.
+
, + --copy-object=dataset path + destination
+
Copy the specified path inside of the + dataset to the specified destination. Specified + path must be relative to the root of + dataset. This option can be combined with + -v for increasing verbosity.
+
, + --read-block=poolname + vdev:offset:[lsize/]psize[:flags]
+
Read and display a block from the specified device. By default the block + is displayed as a hex dump, but see the description of the + r flag, below. +

The block is specified in terms of a colon-separated tuple + vdev (an integer vdev identifier) + offset (the offset within the vdev) + size (the physical size, or logical size / + physical size) of the block to read and, optionally, + flags (a set of flags, described below).

+

+
+
+ offset
+
Print block pointer at hex offset
+
+
Calculate and display checksums
+
+
Decompress the block. Set environment variable + ZDB_NO_ZLE to skip zle when guessing.
+
+
Byte swap the block
+
+
Dump gang block header
+
+
Dump indirect block
+
+
Dump raw uninterpreted block data
+
+
Verbose output for guessing compression algorithm
+
+
+
, + --io-stats
+
Report statistics on zdb I/O. Display operation + counts, bandwidth, and error counts of I/O to the pool from + zdb.
+
, + --simulate-dedup
+
Simulate the effects of deduplication, constructing a DDT and then display + that DDT as with -DD.
+
, + --brt-stats
+
Display block reference table (BRT) statistics, including the size of + uniques blocks cloned, the space saving as a result of cloning, and the + saving ratio.
+
+
Display the per-vdev BRT statistics, including total references.
+
+
Dump the contents of the block reference tables.
+
, + --uberblock
+
Display the current uberblock.
+
+

Other options:

+
+
, + --ignore-assertions
+
Do not abort should any assertion fail.
+
+
Enable panic recovery, certain errors which would otherwise be fatal are + demoted to warnings.
+
+
Do not abort if asserts fail and also enable panic recovery.
+
, + --exported=[-p + path]…
+
Operate on an exported pool, not present in + /etc/zfs/zpool.cache. The + -p flag specifies the path under which devices are + to be searched.
+
, + --dump-blocks=dumpdir
+
All blocks accessed will be copied to files in the specified directory. + The blocks will be placed in sparse files whose name is the same as that + of the file or device read. zdb can be then run on + the generated files. Note that the -bbc flags are + sufficient to access (and thus copy) all metadata on the pool.
+
, + --automatic-rewind
+
Attempt to make an unreadable pool readable by trying progressively older + transactions.
+
, + --dump-debug-msg
+
Dump the contents of the zfs_dbgmsg buffer before exiting + zdb. zfs_dbgmsg is a buffer used by ZFS to dump + advanced debug information.
+
, + --inflight=inflight-I/O-ops
+
Limit the number of outstanding checksum I/O operations to the specified + value. The default value is 200. This option affects the performance of + the -c option.
+
, + --key=key
+
Decryption key needed to access an encrypted dataset. This will cause + zdb to attempt to unlock the dataset using the + encryption root, key format and other encryption parameters on the given + dataset. zdb can still inspect pool and dataset + structures on encrypted datasets without unlocking them, but will not be + able to access file names and attributes and object contents. + WARNING: The raw decryption key and any decrypted data will be in + user memory while zdb is running. Other user + programs may be able to extract it by inspecting + zdb as it runs. Exercise extreme caution when + using this option in shared or uncontrolled environments.
+
, + --option=var=value
+
Set the given global libzpool variable to the provided value. The value + must be an unsigned 32-bit integer. Currently only little-endian systems + are supported to avoid accidentally setting the high 32 bits of 64-bit + variables.
+
, + --parseable
+
Print numbers in an unscaled form more amenable to parsing, e.g. 1000000 rather than 1M.
+
, + --txg=transaction
+
Specify the highest transaction to use when searching for uberblocks. See + also the -u and -l options + for a means to see the available uberblocks and their associated + transaction numbers.
+
, + --cachefile=cachefile
+
Use a cache file other than + /etc/zfs/zpool.cache.
+
, + --verbose
+
Enable verbosity. Specify multiple times for increased verbosity.
+
, + --verbatim
+
Attempt verbatim import. This mimics the behavior of the kernel when + loading a pool from a cachefile. Only usable with + -e.
+
, + --extreme-rewind
+
Attempt "extreme" transaction rewind, that is attempt the same + recovery as -F but read transactions otherwise + deemed too old.
+
, + --all-reconstruction
+
Attempt all possible combinations when reconstructing indirect split + blocks. This flag disables the individual I/O deadman timer in order to + allow as much time as required for the attempted reconstruction.
+
, + --livelist
+
Perform validation for livelists that are being deleted. Scans through the + livelist and metaslabs, checking for duplicate entries and compares the + two, checking for potential double frees. If it encounters issues, + warnings will be printed, but the command will not necessarily fail.
+
+

Specifying a display option more than once enables verbosity for + only that option, with more occurrences enabling more verbosity.

+

If no options are specified, all information about the named pool + will be displayed at default verbosity.

+
+
+

+
+

+
+
# zdb -C rpool
+MOS Configuration:
+        version: 28
+        name: 'rpool'
+ …
+
+
+
+

+
+
# zdb -d rpool
+Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
+Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
+ …
+
+
+
+

+
+
# zdb -d rpool/export/home 0
+Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
+
+    Object  lvl   iblk   dblk  dsize  lsize   %full  type
+         0    7    16K    16K  15.0K    16K   25.00  DMU dnode
+
+
+
+

+
+
# zdb -S rpool
+Simulated DDT histogram:
+
+bucket              allocated                       referenced
+______   ______________________________   ______________________________
+refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
+------   ------   -----   -----   -----   ------   -----   -----   -----
+     1     694K   27.1G   15.0G   15.0G     694K   27.1G   15.0G   15.0G
+     2    35.0K   1.33G    699M    699M    74.7K   2.79G   1.45G   1.45G
+ …
+dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
+
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
November 18, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zed.8.html b/man/v2.2/8/zed.8.html new file mode 100644 index 000000000..57ec0678d --- /dev/null +++ b/man/v2.2/8/zed.8.html @@ -0,0 +1,474 @@ + + + + + + + zed.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zed.8

+
+ + + + + +
ZED(8)System Manager's ManualZED(8)
+
+
+

+

ZEDZFS Event + Daemon

+
+
+

+ + + + + +
ZED[-fFhILMvVZ] [-d + zedletdir] [-p + pidfile] [-P + path] [-s + statefile] [-j + jobs] [-b + buflen]
+
+
+

+

The ZED (ZFS Event Daemon) monitors events + generated by the ZFS kernel module. When a zevent (ZFS Event) is posted, the + ZED will run any ZEDLETs (ZFS Event Daemon Linkage + for Executable Tasks) that have been enabled for the corresponding zevent + class.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Display license information.
+
+
Display version information.
+
+
Be verbose.
+
+
Force the daemon to run if at all possible, disabling security checks and + throwing caution to the wind. Not recommended for use in production.
+
+
Don't daemonise: remain attached to the controlling terminal, log to the + standard I/O streams.
+
+
Lock all current and future pages in the virtual memory address space. + This may help the daemon remain responsive when the system is under heavy + memory pressure.
+
+
Request that the daemon idle rather than exit when the kernel modules are + not loaded. Processing of events will start, or resume, when the kernel + modules are (re)loaded. Under Linux the kernel modules cannot be unloaded + while the daemon is running.
+
+
Zero the daemon's state, thereby allowing zevents still within the kernel + to be reprocessed.
+
+ zedletdir
+
Read the enabled ZEDLETs from the specified directory.
+
+ pidfile
+
Write the daemon's process ID to the specified file.
+
+ path
+
Custom $PATH for zedlets to use. Normally zedlets + run in a locked-down environment, with hardcoded paths to the ZFS commands + ($ZFS, $ZPOOL, + $ZED, ), and a + hard-coded $PATH. This is done for security + reasons. However, the ZFS test suite uses a custom PATH for its ZFS + commands, and passes it to ZED with + -P. In short, -P is only + to be used by the ZFS test suite; never use it in production!
+
+ statefile
+
Write the daemon's state to the specified file.
+
+ jobs
+
Allow at most jobs ZEDLETs to run concurrently, delaying execution of new ones until they finish. Defaults to 16.
+
+ buflen
+
Cap kernel event buffer growth to buflen entries. This buffer is grown when the daemon misses an event, but results in unreclaimable memory use in the kernel. A value of 0 removes the cap. Defaults to 1,048,576.
+
+
+
+

+

A zevent is comprised of a list of nvpairs (name/value pairs). + Each zevent contains an EID (Event IDentifier) that uniquely identifies it + throughout the lifetime of the loaded ZFS kernel module; this EID is a + monotonically increasing integer that resets to 1 each time the kernel + module is loaded. Each zevent also contains a class string that identifies + the type of event. For brevity, a subclass string is defined that omits the + leading components of the class string. Additional nvpairs exist to provide + event details.

+

The kernel maintains a list of recent zevents that can be viewed + (along with their associated lists of nvpairs) using the + zpool events + -v command.

+
+
+

+

ZEDLETs to be invoked in response to zevents are located in the + enabled-zedlets directory + (zedletdir). These can be symlinked or copied from the + + directory; symlinks allow for automatic updates from the installed ZEDLETs, + whereas copies preserve local modifications. As a security measure, since + ownership change is a privileged operation, ZEDLETs must be owned by root. + They must have execute permissions for the user, but they must not have + write permissions for group or other. Dotfiles are ignored.

+

ZEDLETs are named after the zevent class for which they + should be invoked. In particular, a ZEDLET will be invoked for a given + zevent if either its class or subclass string is a prefix of its filename + (and is followed by a non-alphabetic character). As a special case, the + prefix matches + all zevents. Multiple ZEDLETs may be invoked for a given zevent.
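For illustration only (the directory paths and service-management commands here are hypothetical examples; the actual installed and enabled-zedlets directories are listed under FILES below), an installed ZEDLET such as the stock all-syslog.sh can be enabled by symlinking it into the enabled-zedlets directory and then asking the daemon to rescan:
# ln -s /usr/libexec/zfs/zed.d/all-syslog.sh /etc/zfs/zed.d/
# pkill -HUP zed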

+
+
+

+

ZEDLETs are executables invoked by the ZED in response to a given + zevent. They should be written under the presumption they can be invoked + concurrently, and they should use appropriate locking to access any shared + resources. Common variables used by ZEDLETs can be stored in the default rc + file which is sourced by scripts; these variables should be prefixed with + .

+

The zevent nvpairs are passed to ZEDLETs as environment variables. + Each nvpair name is converted to an environment variable in the following + manner:

+
  1. it is prefixed with ZEVENT_,
  2. it is converted to uppercase, and
  3. each non-alphanumeric character is converted to an underscore.
+
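As an illustrative sketch only (not a ZEDLET shipped with ZFS), a minimal script following these conversion rules could append one log line per event it is invoked for; it assumes the ZEVENT_ prefix described above and a log path of the administrator's choosing:
#!/bin/sh
# hypothetical ZEDLET, e.g. enabled as all-log-custom.sh
echo "$(date -u): eid=${ZEVENT_EID} class=${ZEVENT_CLASS} subclass=${ZEVENT_SUBCLASS}" >> /var/log/zed-custom.log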

Some additional environment variables have been defined to present + certain nvpair values in a more convenient form. An incomplete list of + zevent environment variables is as follows:

+
+
+
The Event IDentifier.
+
+
The zevent class string.
+
+
The zevent subclass string.
+
+
The time at which the zevent was posted as “seconds + nanoseconds” since the Epoch.
+
+
The seconds component of + ZEVENT_TIME.
+
+
The nanoseconds component of ZEVENT_TIME.
+
+
An almost-RFC3339-compliant string for ZEVENT_TIME.
+
+

Additionally, the following ZED & ZFS variables are + defined:

+
+
+
The daemon's process ID.
+
+
The daemon's current enabled-zedlets directory.
+
+
The alias + (“--”) + string of the ZFS distribution the daemon is part of.
+
+
The ZFS version the daemon is part of.
+
+
The ZFS release the daemon is part of.
+
+

ZEDLETs may need to call other ZFS commands. The + installation paths of the following executables are defined as environment + variables: , + , + , + , + and + . + These variables may be overridden in the rc file.

+
+
+

+
+
@sysconfdir@/zfs/zed.d
+
The default directory for enabled ZEDLETs.
+
@sysconfdir@/zfs/zed.d/zed.rc
+
The default rc file for common variables used by ZEDLETs.
+
@zfsexecdir@/zed.d
+
The default directory for installed ZEDLETs.
+
@runstatedir@/zed.pid
+
The default file containing the daemon's process ID.
+
@runstatedir@/zed.state
+
The default file containing the daemon's state.
+
+
+
+

+
+
+
Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+
, +
+
Terminate the daemon.
+
+
+
+

+

zfs(8), zpool(8), + zpool-events(8)

+
+
+

+

The ZED requires root privileges.

+

Do not taunt the ZED.

+
+
+

+

ZEDLETs are unable to return state/status information to the + kernel.

+

Internationalization support via gettext has not been added.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-allow.8.html b/man/v2.2/8/zfs-allow.8.html new file mode 100644 index 000000000..9eb4b295f --- /dev/null +++ b/man/v2.2/8/zfs-allow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-allow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-allow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfsallow [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsallow -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
+ + + + + +
zfsunallow [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+ + + + + +
zfsunallow [-r] + -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the + exception of mount, + , + , + , + , + and + . + These permissions cannot be delegated because the Linux + mount(8) command restricts modifications of the global + namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
|everyone
+
Specifies that the permissions be delegated to everyone.
+
+ group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
+ user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list. If neither of the -gu options is specified, then the argument is interpreted preferentially as the keyword everyone, then as a user name, and lastly as a group name. To specify a user or group named "everyone", use the -g or -u options. To specify a group with the same name as a user, use the -g option.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect (for example, if the permission is also granted by an ancestor). If no permissions are specified, then all permissions for the specified user, group, or everyone are removed. Specifying everyone (or using the -e option) only removes the permissions that were granted to everyone, not all permissions for every user and group. See the zfs allow command for a description of the -ldugec options.
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group staff permission to create file systems in tank/users. This syntax also allows staff members to destroy their own file systems, but not to destroy anyone else's file system. The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and reservations on the users/home file system. The permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-bookmark.8.html b/man/v2.2/8/zfs-bookmark.8.html new file mode 100644 index 000000000..01d460809 --- /dev/null +++ b/man/v2.2/8/zfs-bookmark.8.html @@ -0,0 +1,291 @@ + + + + + + + zfs-bookmark.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-bookmark.8

+
+ + + + + +
ZFS-BOOKMARK(8)System Manager's ManualZFS-BOOKMARK(8)
+
+
+

+

zfs-bookmark — + create bookmark of ZFS snapshot

+
+
+

+ + + + + +
zfsbookmark + snapshot|bookmark + newbookmark
+
+
+

+

Creates a new bookmark of the given snapshot or bookmark. + Bookmarks mark the point in time when the snapshot was created, and can be + used as the incremental source for a zfs + send.

+

When creating a bookmark from an existing redaction bookmark, the resulting bookmark is not a redaction bookmark.

+

This feature must be enabled to be used. See + zpool-features(7) for details on ZFS feature flags and the + + feature.

+
+
+

+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
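As a further sketch (the dataset and snapshot names are hypothetical), the bookmark can later serve as the incremental source when sending a newer snapshot:
# zfs send -i rpool#bookmark rpool@nextsnap | zfs receive backuppool/rpool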
+
+
+

+

zfs-destroy(8), zfs-send(8), + zfs-snapshot(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-change-key.8.html b/man/v2.2/8/zfs-change-key.8.html new file mode 100644 index 000000000..017f2e341 --- /dev/null +++ b/man/v2.2/8/zfs-change-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-change-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-change-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
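A combined usage sketch for the subcommands above, assuming a hypothetical passphrase-encrypted dataset tank/secret:
# zfs load-key tank/secret
# zfs mount tank/secret
# zfs change-key -o keyformat=passphrase tank/secret
# zfs unmount tank/secret
# zfs unload-key tank/secret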
+

+

Enabling the encryption feature allows for the + creation of encrypted filesystems and volumes. ZFS will encrypt file and + volume data, file attributes, ACLs, permission bits, directory listings, + FUID mappings, and + / + data. ZFS will not encrypt metadata related to the pool structure, including + dataset and snapshot names, dataset hierarchy, properties, file size, file + holes, and deduplication tables (though the deduplicated data itself is + encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires + specifying the encryption and + keyformat properties at creation time, along with an + optional keylocation and + pbkdf2iters. After entering an encryption key, the created + dataset will become an encryption root. Any descendant datasets will inherit + their encryption key from the encryption root by default, meaning that + loading, unloading, or changing the key for the encryption root will + implicitly do the same for all inheriting datasets. If this inheritance is + not desired, simply supply a keyformat when creating the + child dataset or use zfs + change-key to break an existing relationship, + creating a new encryption root on the child. Note that the child's + keyformat may match that of the parent while still + creating a new encryption root, and that changing the + encryption property alone does not create a new encryption + root; this would simply use a different cipher suite with the same key as + its encryption root. The one exception is that clones will always use their + origin's encryption key. As a result of this exception, some + encryption-related properties (namely keystatus, + keyformat, keylocation, + and pbkdf2iters) do not inherit + like other ZFS properties and instead use the value determined by their + encryption root. Encryption root inheritance can be tracked via the + read-only + + property.
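For example (a sketch with hypothetical dataset names), an encryption root is created by supplying the relevant properties at creation time, and a child can later be split off into its own encryption root with zfs change-key:
# zfs create -o encryption=on -o keyformat=passphrase tank/secret
# zfs create tank/secret/project
# zfs change-key -o keyformat=passphrase tank/secret/project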

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3, since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-clone.8.html b/man/v2.2/8/zfs-clone.8.html new file mode 100644 index 000000000..0027836c2 --- /dev/null +++ b/man/v2.2/8/zfs-clone.8.html @@ -0,0 +1,315 @@ + + + + + + + zfs-clone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-clone.8

+
+ + + + + +
ZFS-CLONE(8)System Manager's ManualZFS-CLONE(8)
+
+
+

+

zfs-cloneclone + snapshot of ZFS dataset

+
+
+

+ + + + + +
zfsclone [-p] + [-o + property=value]… + snapshot + filesystem|volume
+
+
+

+

See the Clones section of + zfsconcepts(7) for details. The target dataset can be + located anywhere in the ZFS hierarchy, and is created as the same type as + the original.

+
+
+ property=value
+
Sets the specified property; see zfs + create for details.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + + property inherited from their parent. If the target filesystem or volume + already exists, the operation completes successfully.
+
+
+
+

+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-promote(8), + zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-create.8.html b/man/v2.2/8/zfs-create.8.html new file mode 100644 index 000000000..6c5ed0d44 --- /dev/null +++ b/man/v2.2/8/zfs-create.8.html @@ -0,0 +1,452 @@ + + + + + + + zfs-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-create.8

+
+ + + + + +
ZFS-CREATE(8)System Manager's ManualZFS-CREATE(8)
+
+
+

+

zfs-create — + create ZFS dataset

+
+
+

+ + + + + +
zfscreate [-Pnpuv] + [-o + property=value]… + filesystem
+
+ + + + + +
zfscreate [-ps] + [-b blocksize] + [-o + property=value]… + -V size + volume
+
+
+

+
+
zfs create + [-Pnpuv] [-o + property=value]… + filesystem
+
Creates a new ZFS file system. The file system is automatically mounted + according to the mountpoint property inherited from the + parent, unless the -u option is used. +
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + at the same time the dataset was created. Any editable ZFS property + can also be set at creation time. Multiple -o + options can be specified. An error results if the same property is + specified in multiple -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have filesystem as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to filesystem due to the use of the -o option.
+
+
Do not mount the newly created file system.
+
+
Print verbose information about the created dataset.
+
+
+
zfs create + [-ps] [-b + blocksize] [-o + property=value]… + -V size + volume
+
Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace. The size represents the logical size as exported by the device. By default, a reservation of equal size is created.

size is automatically + rounded up to the nearest multiple of the + .

+
+
+ blocksize
+
Equivalent to -o + volblocksize=blocksize. If + this option is specified in conjunction with + -o volblocksize, the + resulting behavior is undefined.
+
+ property=value
+
Sets the specified property as if the zfs + set + property=value command was + invoked at the same time the dataset was created. Any editable ZFS + property can also be set at creation time. Multiple + -o options can be specified. An error results + if the same property is specified in multiple + -o options.
+
+
Creates all the non-existing parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their parent. Any + property specified on the command line using the + -o option is ignored. If the target filesystem + already exists, the operation completes successfully.
+
+
Creates a sparse volume with no reservation. See + + in the + section of zfsprops(7) for more + information about sparse volumes.
+
+
Do a dry-run ("No-op") creation. No datasets will be + created. This is useful in conjunction with the + -v or -P flags to + validate properties that are passed via -o + options and those implied by other options. The actual dataset + creation can still fail due to insufficient privileges or available + capacity.
+
+
Print machine-parsable verbose information about the created dataset. Each line of output contains a key and one or two values, all separated by tabs. The create_ancestors and create keys have volume as their only value. The create_ancestors key only appears if the -p option is used. The property key has two values, a property name and that property's value. The property key may appear zero or more times, once for each property that will be set local to volume due to the use of the -b or -o options, as well as refreservation if the volume is not sparse.
+
+
Print verbose information about the created dataset.
+
+
+
+
+

+

Swapping to a ZFS volume is prone to deadlock and not recommended. + See OpenZFS FAQ.

+

Swapping to a file on a ZFS filesystem is not supported.

+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
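The following sketch (hypothetical pool, volume name, and sizes) creates a sparse 100 GiB volume with an 8 KiB block size:
# zfs create -s -b 8K -V 100G pool/vol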
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-destroy(8), zfs-list(8), + zpool-create(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-destroy.8.html b/man/v2.2/8/zfs-destroy.8.html new file mode 100644 index 000000000..8219837ad --- /dev/null +++ b/man/v2.2/8/zfs-destroy.8.html @@ -0,0 +1,424 @@ + + + + + + + zfs-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-destroy.8

+
+ + + + + +
ZFS-DESTROY(8)System Manager's ManualZFS-DESTROY(8)
+
+
+

+

zfs-destroy — + destroy ZFS dataset, snapshots, or bookmark

+
+
+

+ + + + + +
zfsdestroy [-Rfnprv] + filesystem|volume
+
+ + + + + +
zfsdestroy [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
+ + + + + +
zfsdestroy + filesystem|volume#bookmark
+
+
+

+
+
zfs destroy + [-Rfnprv] + filesystem|volume
+
Destroys the given dataset. By default, the command unshares any file + systems that are currently shared, unmounts any file systems that are + currently mounted, and refuses to destroy a dataset that has active + dependents (children or clones). +
+
+
Recursively destroy all dependents, including cloned file systems + outside the target hierarchy.
+
+
Forcibly unmount file systems. This option has no effect on non-file + systems or unmounted file systems.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -v or + -p flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Recursively destroy all children.
+
+
Print verbose information about the deleted data.
+
+

Extreme care should be taken when applying either the + -r or the -R options, as + they can destroy large portions of a pool and cause unexpected behavior + for mounted file systems in use.

+
+
zfs destroy + [-Rdnprv] + filesystem|volume@snap[%snap[,snap[%snap]]]…
+
The given snapshots are destroyed immediately if and only if the + zfs destroy command + without the -d option would have destroyed it. + Such immediate destruction would occur, for example, if the snapshot had + no clones and the user-initiated reference count were zero. +

If a snapshot does not qualify for immediate destruction, it + is marked for deferred deletion. In this state, it exists as a usable, + visible snapshot until both of the preconditions listed above are met, + at which point it is destroyed.

+

An inclusive range of snapshots may be specified by separating + the first and last snapshots with a percent sign. The first and/or last + snapshots may be left blank, in which case the filesystem's oldest or + newest snapshot will be implied.

+

Multiple snapshots (or ranges of snapshots) of the same + filesystem or volume may be specified in a comma-separated list of + snapshots. Only the snapshot's short name (the part after the + ) should be + specified when using a range or comma-separated list to identify + multiple snapshots.

+
+
+
Recursively destroy all clones of these snapshots, including the + clones, snapshots, and children. If this flag is specified, the + -d flag will have no effect.
+
+
Destroy immediately. If a snapshot cannot be destroyed now, mark it + for deferred destruction.
+
+
Do a dry-run ("No-op") deletion. No data will be deleted. + This is useful in conjunction with the -p or + -v flags to determine what data would be + deleted.
+
+
Print machine-parsable verbose information about the deleted + data.
+
+
Destroy (or mark for deferred deletion) all snapshots with this name + in descendent file systems.
+
+
Print verbose information about the deleted data. +

Extreme care should be taken when applying either the + -r or the -R + options, as they can destroy large portions of a pool and cause + unexpected behavior for mounted file systems in use.

+
+
+
+
zfs destroy + filesystem|volume#bookmark
+
The given bookmark is destroyed.
+
+
+
+

+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
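As an additional sketch (hypothetical snapshot names), an inclusive range of snapshots of the same file system can be destroyed using the percent-sign syntax described above:
# zfs destroy pool/home@monday%friday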
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-create(8), zfs-hold(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-diff.8.html b/man/v2.2/8/zfs-diff.8.html new file mode 100644 index 000000000..86d98602f --- /dev/null +++ b/man/v2.2/8/zfs-diff.8.html @@ -0,0 +1,341 @@ + + + + + + + zfs-diff.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-diff.8

+
+ + + + + +
ZFS-DIFF(8)System Manager's ManualZFS-DIFF(8)
+
+
+

+

zfs-diffshow + difference between ZFS snapshots

+
+
+

+ + + + + +
zfsdiff [-FHth] + snapshot + snapshot|filesystem
+
+
+

+

Display the difference between a snapshot of a given filesystem + and another snapshot of that filesystem from a later time or the current + contents of the filesystem. The first column is a character indicating the + type of change, the other columns indicate pathname, new pathname (in case + of rename), change in link count, and optionally file type and/or change + time. The types of change are:

+
+
+
-    The path has been removed
+    The path has been created
M    The path has been modified
R    The path has been renamed
+
+
+
+
+
Display an indication of the type of file, in a manner similar to the + -F option of ls(1). +
+
+
+
B    Block device
C    Character device
/    Directory
>    Door
|    Named pipe
@    Symbolic link
P    Event port
=    Socket
F    Regular file
+
+
+
+
+
Give more parsable tab-separated output, without header lines and without + arrows.
+
+
Display the path's inode change time as the first column of output.
+
+
Do not \0ooo-escape non-ASCII paths.
+
+
+
+

+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
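For script-friendly output, the same comparison can be run with -H for tab-separated fields and -t to prepend the inode change time (a sketch using the same hypothetical dataset as above):
# zfs diff -H -t tank/test@before tank/test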
+
+
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-get.8.html b/man/v2.2/8/zfs-get.8.html new file mode 100644 index 000000000..081cfd5a2 --- /dev/null +++ b/man/v2.2/8/zfs-get.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-get.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming + from a source other than those in this list are ignored. Each source + must be one of the following: local, + default, inherited, + temporary, received, + or + . + The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + =/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression= + pool/home
+
# zfs + set + compression= + pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set + =50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
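A sketch of the -u flag described above (hypothetical dataset): the mountpoint property is updated without the file system being remounted, so the change takes effect on the next mount:
# zfs set -u mountpoint=/export/home2 pool/home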
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-groupspace.8.html b/man/v2.2/8/zfs-groupspace.8.html new file mode 100644 index 000000000..c9e4fd984 --- /dev/null +++ b/man/v2.2/8/zfs-groupspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-groupspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-groupspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name; therefore it needs neither the -i option for SID-to-POSIX-ID translation, nor -n for numeric IDs, nor -t for types.
+
+
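A usage sketch (hypothetical dataset name): show per-user consumption and quotas with selected fields, and per-group consumption restricted to POSIX groups:
# zfs userspace -o name,used,quota tank/home
# zfs groupspace -t posixgroup tank/home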
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-hold.8.html b/man/v2.2/8/zfs-hold.8.html new file mode 100644 index 000000000..f11ccdefa --- /dev/null +++ b/man/v2.2/8/zfs-hold.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-hold.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-hold.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
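A usage sketch of the hold, holds, and release subcommands (hypothetical snapshot and tag names): place a recursive hold, list the holds, and release the hold once the snapshots may be destroyed again:
# zfs hold -r keep tank/home@backup
# zfs holds -r tank/home@backup
# zfs release -r keep tank/home@backup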
+
+

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-inherit.8.html b/man/v2.2/8/zfs-inherit.8.html new file mode 100644 index 000000000..5117c9c52 --- /dev/null +++ b/man/v2.2/8/zfs-inherit.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-inherit.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-inherit.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfsset [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
+ + + + + +
zfsget + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfsinherit [-rS] + property + filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source: local, default, inherited, temporary, received, or - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
+ source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
+ type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+
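As a further hedged sketch, a property value received via a replication stream can be restored with the -S flag (the dataset name here is hypothetical):
# zfs inherit -S compression tank/backup/data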

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-jail.8.html b/man/v2.2/8/zfs-jail.8.html new file mode 100644 index 000000000..64694950f --- /dev/null +++ b/man/v2.2/8/zfs-jail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-jail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-jail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. Nor can you attach the root file system of the jail, or any dataset which needs to be mounted before the zfs rc script is run inside the jail, as it would be attached unmounted until it is mounted from the rc script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
+
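For illustration only, assuming a hypothetical jail with JID 23 and a hypothetical dataset prepared for delegation:
# zfs set jailed=on tank/jails/j1
# zfs jail 23 tank/jails/j1
# zfs unjail 23 tank/jails/j1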

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-list.8.html b/man/v2.2/8/zfs-list.8.html new file mode 100644 index 000000000..2750df9f1 --- /dev/null +++ b/man/v2.2/8/zfs-list.8.html @@ -0,0 +1,376 @@ + + + + + + + zfs-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-list.8

+
+ + + + + +
ZFS-LIST(8)System Manager's ManualZFS-LIST(8)
+
+
+

+

zfs-listlist + properties of ZFS datasets

+
+
+

+ + + + + +
zfslist + [-r|-d + depth] [-Hp] + [-o + property[,property]…] + [-s property]… + [-S property]… + [-t + type[,type]…] + [filesystem|volume|snapshot]…
+
+
+

+

If specified, you can list property information by the absolute pathname or the relative pathname. By default, all file systems and volumes are displayed. Snapshots are displayed if the listsnapshots pool property is on (the default is off), or if the -t snapshot or -t all options are specified. The following fields are displayed: name, used, available, referenced, mountpoint.

+
+
+
Used for scripting mode. Do not print headers and separate fields by a + single tab instead of arbitrary white space.
+
+ depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
+ property
+
A comma-separated list of properties to display. The property must be: + +
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display any children of the dataset on the command line.
+
+ property
+
A property for sorting the output by column in ascending order based on + the value of the property. The property must be one of the properties + described in the Properties section + of zfsprops(7) or the value name to + sort by the dataset name. Multiple properties can be specified at one time + using multiple -s property options. Multiple + -s options are evaluated from left to right in + decreasing order of importance. The following is a list of sorting + criteria: +
    +
  • Numeric types sort in numeric order.
  • String types sort in alphabetical order.
  • Types inappropriate for a row sort that row to the literal bottom, regardless of the specified ordering.
+

If no sorting options are specified the existing behavior of + zfs list is + preserved.

+
+
+ property
+
Same as -s, but sorts by property in descending + order.
+
+ type
+
A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all. For example, specifying -t snapshot displays only snapshots. fs, snap, or vol can be used as aliases for filesystem, snapshot, or volume.
+
+
+
+

+
+

+

The following command lists all active file systems and volumes in the system. Snapshots are displayed if listsnapshots=on. The default is off. See zpoolprops(7) for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
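A further hedged example, listing snapshots under a hypothetical dataset sorted by descending space used:
# zfs list -r -t snapshot -o name,used -S used pool/home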
+
+

+

zfsprops(7), zfs-get(8)

+
+
+ + + + + +
February 8, 2024Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-load-key.8.html b/man/v2.2/8/zfs-load-key.8.html new file mode 100644 index 000000000..193d82331 --- /dev/null +++ b/man/v2.2/8/zfs-load-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-load-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-load-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfsload-key [-nr] + [-L keylocation] + -a|filesystem
+
+ + + + + +
zfsunload-key [-r] + -a|filesystem
+
+ + + + + +
zfschange-key [-l] + [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
+ + + + + +
zfschange-key -i + [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt, the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded, the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
+ keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded, the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
+ property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
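A minimal hedged sketch of the key-management workflow above, assuming a hypothetical encrypted dataset pool/secure with keyformat=passphrase:
# zfs unload-key pool/secure
# zfs load-key -n pool/secure
# zfs load-key pool/secure
# zfs change-key -o pbkdf2iters=500000 pool/secure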
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-mount-generator.8.html b/man/v2.2/8/zfs-mount-generator.8.html new file mode 100644 index 000000000..3d22dbc2d --- /dev/null +++ b/man/v2.2/8/zfs-mount-generator.8.html @@ -0,0 +1,439 @@ + + + + + + + zfs-mount-generator.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-mount-generator.8

+
+ + + + + +
ZFS-MOUNT-GENERATOR(8)System Manager's ManualZFS-MOUNT-GENERATOR(8)
+
+
+

+

zfs-mount-generator — + generate systemd mount units for ZFS filesystems

+
+
+

+

@systemdgeneratordir@/zfs-mount-generator

+
+
+

+

zfs-mount-generator is a + systemd.generator(7) that generates native + systemd.mount(5) units for configured ZFS datasets.

+
+

+
+
=
+
+ + or none.
+
=
+
off. Skipped if + only noauto datasets exist for a given mountpoint + and there's more than one. Datasets with + + take precedence over ones with + noauto for the same mountpoint. + Sets logical noauto + flag if noauto. Encryption roots + always generate + zfs-load-key@root.service, + even if off.
+
=, + relatime=, + =, + =, + =, + =, + =
+
Used to generate mount options equivalent to zfs + mount.
+
=, + keylocation=
+
If the dataset is an encryption root, its mount unit will bind to + zfs-load-key@root.service, + with additional dependencies as follows: +
+
+
=
+
None, uses systemd-ask-password(1)
+
=URL + (et al.)
+
=, + After=: + network-online.target
+
=<path>
+
=path
+
+
+ The service also uses the same Wants=, + After=, Requires=, + and RequiresMountsFor=, as the + mount unit.
+
=path[ + path]…
+
+ Requires= for the mount- and key-loading unit.
+
=path[ + path]…
+
+ RequiresMountsFor= for the mount- and key-loading + unit.
+
=unit[ + unit]…
+
+ Before= for the mount unit.
+
=unit[ + unit]…
+
+ After= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + WantedBy= for the mount unit.
+
=unit[ + unit]…
+
Sets logical noauto + flag (see below). If not + none, sets + RequiredBy= for the mount unit.
+
=(unset)|on|off
+
Waxes or wanes strength of default reverse dependencies of the mount unit, + see below.
+
=on|off
+
on. Defaults to + off.
+
+
+
+

+

Additionally, unless the pool the dataset resides on is imported + at generation time, both units gain + Wants=zfs-import.target and + After=zfs-import.target.

+

Additionally, unless the logical noauto flag is + set, the mount unit gains a reverse-dependency for + local-fs.target of strength

+
+
+
(unset)
+
= + + Before=
+
+
=
+
+
= + + Before=
+
+
+
+
+

+

Because ZFS pools may not be available very early in the boot + process, information on ZFS mountpoints must be stored separately. The + output of

+
zfs + list -Ho + name,⟨every property above in + order⟩
+for datasets that should be mounted by systemd should be kept at + @sysconfdir@/zfs/zfs-list.cache/poolname, + and, if writeable, will be kept synchronized for the entire pool by the + history_event-zfs-list-cacher.sh ZEDLET, if enabled + (see zed(8)). +
+
+
+

+

If the + + environment variable is nonzero (or unset and + /proc/cmdline contains + ""), + print summary accounting information at the end.

+
+
+

+

To begin, enable tracking for the pool:

+
# touch + @sysconfdir@/zfs/zfs-list.cache/poolname
+Then enable the tracking ZEDLET: +
# ln + -s + @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh + @sysconfdir@/zfs/zed.d
+
# systemctl + enable + zfs-zed.service
+
# systemctl + restart + zfs-zed.service
+

If no history event is in the queue, inject one to ensure the + ZEDLET runs to refresh the cache file by setting a monitored property + somewhere on the pool:

+
# zfs + set relatime=off + poolname/dset
+
# zfs + inherit relatime + poolname/dset
+

To test the generator output:

+
$ mkdir + /tmp/zfs-mount-generator
+
$ + @systemdgeneratordir@/zfs-mount-generator + /tmp/zfs-mount-generator
+If the generated units are satisfactory, instruct + systemd to re-run all generators: +
# systemctl + daemon-reload
+
+
+

+

systemd.mount(5), + zfs(5), + systemd.generator(7), + zed(8), + zpool-events(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-mount.8.html b/man/v2.2/8/zfs-mount.8.html new file mode 100644 index 000000000..80fe5597d --- /dev/null +++ b/man/v2.2/8/zfs-mount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-mount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-mount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its mountpoint property, if the path exists and is empty. If mountpoint is set to legacy, the filesystem should instead be mounted using mount(8).
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
+
+
+
+
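For illustration, a hedged sequence using the options described above (the dataset name is hypothetical):
# zfs mount
# zfs mount -v -a
# zfs mount -l pool/secure
# zfs unmount -u pool/secure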
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-program.8.html b/man/v2.2/8/zfs-program.8.html new file mode 100644 index 000000000..60b0cb61e --- /dev/null +++ b/man/v2.2/8/zfs-program.8.html @@ -0,0 +1,1007 @@ + + + + + + + zfs-program.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-program.8

+
+ + + + + +
ZFS-PROGRAM(8)System Manager's ManualZFS-PROGRAM(8)
+
+
+

+

zfs-program — + execute ZFS channel programs

+
+
+

+ + + + + +
zfsprogram [-jn] + [-t instruction-limit] + [-m memory-limit] + pool script + [script arguments]
+
+
+

+

The ZFS channel program interface allows ZFS administrative + operations to be run programmatically as a Lua script. The entire script is + executed atomically, with no other administrative operations taking effect + concurrently. A library of ZFS calls is made available to channel program + scripts. Channel programs may only be run with root privileges.

+

A modified version of the Lua 5.2 interpreter is used to run + channel program scripts. The Lua 5.2 manual can be found at + http://www.lua.org/manual/5.2/

+

The channel program given by script will be + run on pool, and any attempts to access or modify + other pools will cause an error.

+
+
+

+
+
+
Display channel program output in JSON format. When this flag is specified and standard output is empty, the channel program encountered an error. The details of such an error will be printed to standard error in plain text.
+
+
Executes a read-only channel program, which runs faster. The program + cannot change on-disk state by calling functions from the zfs.sync + submodule. The program can be used to gather information such as + properties and determining if changes would succeed (zfs.check.*). Without + this flag, all pending changes must be synced to disk before a channel + program can complete.
+
+ instruction-limit
+
Limit the number of Lua instructions to execute. If a channel program + executes more than the specified number of instructions, it will be + stopped and an error will be returned. The default limit is 10 million + instructions, and it can be set to a maximum of 100 million + instructions.
+
+ memory-limit
+
Memory limit, in bytes. If a channel program attempts to allocate more + memory than the given limit, it will be stopped and an error returned. The + default memory limit is 10 MiB, and can be set to a maximum of 100 + MiB.
+
+

All remaining argument strings will be passed directly to the Lua + script as described in the LUA + INTERFACE section below.

+
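A hedged invocation sketch; the pool name, script path, and limits below are only examples:
# zfs program -n -t 2000000 -m 20971520 rpool ./cleanup.zcp rpool/tmp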
+
+

+

A channel program can be invoked either from the command line, or via a library call to lzc_channel_program().

+
+

+

Arguments passed to the channel program are converted to a Lua + table. If invoked from the command line, extra arguments to the Lua script + will be accessible as an array stored in the argument table with the key + 'argv':

+
+
args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+
+

If invoked from the libzfs interface, an arbitrary argument list + can be passed to the channel program, which is accessible via the same + "..." syntax in Lua:

+
+
args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+
+

Note that because Lua arrays are 1-indexed, arrays passed to Lua + from the libzfs interface will have their indices incremented by 1. That is, + the element in arr[0] in a C array passed to a channel + program will be stored in arr[1] when accessed from + Lua.

+
+
+

+

Lua return statements take the form:

+
return ret0, ret1, ret2, + ...
+

Return statements returning multiple values are permitted + internally in a channel program script, but attempting to return more than + one value from the top level of the channel program is not permitted and + will throw an error. However, tables containing multiple values can still be + returned. If invoked from the command line, a return statement:

+
+
a = {foo="bar", baz=2}
+return a
+
+

Will be output formatted as:

+
+
Channel program fully executed with return value:
+    return:
+        baz: 2
+        foo: 'bar'
+
+
+
+

+

If the channel program encounters a fatal error while running, a + non-zero exit status will be returned. If more information about the error + is available, a singleton list will be returned detailing the error:

+
error: "error string, including + Lua stack trace"
+

If a fatal error is returned, the channel program may have not + executed at all, may have partially executed, or may have fully executed but + failed to pass a return value back to userland.

+

If the channel program exhausts an instruction or memory limit, a + fatal error will be generated and the program will be stopped, leaving the + program partially executed. No attempt is made to reverse or undo any + operations already performed. Note that because both the instruction count + and amount of memory used by a channel program are deterministic when run + against the same inputs and filesystem state, as long as a channel program + has run successfully once, you can guarantee that it will finish + successfully against a similar size system.

+

If a channel program attempts to return too large a value, the + program will fully execute but exit with a nonzero status code and no return + value.

+

Note: ZFS API functions do not generate Fatal Errors when correctly invoked; they return an error code and the channel program continues executing. See the ZFS API section below for function-specific details on error return codes.

+
+
+

+

When invoking a channel program via the libzfs interface, it is + necessary to translate arguments and return values from Lua values to their + C equivalents, and vice-versa.

+

There is a correspondence between nvlist values in C and Lua + tables. A Lua table which is returned from the channel program will be + recursively converted to an nvlist, with table values converted to their + natural equivalents:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
string->string
number->int64
boolean->boolean_value
nil->boolean (no value)
table->nvlist
+

Likewise, table keys are replaced by string equivalents as + follows:

+ + + + + + + + + + + + + + + + + + + +
string->no change
number->signed decimal string ("%lld")
boolean->"true" | "false"
+

Any collision of table key strings (for example, the string + "true" and a true boolean value) will cause a fatal error.

+

Lua numbers are represented internally as signed 64-bit + integers.

+
+
+
+

+

The following Lua built-in base library functions are + available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
assertrawlencollectgarbagerawget
errorrawsetgetmetatableselect
ipairssetmetatablenexttonumber
pairstostringrawequaltype
+

All functions in the + , + , + and + + built-in submodules are also available. A complete list and documentation of + these modules is available in the Lua manual.

+

The following base library functions have been disabled and are not available for use in channel programs:

+ + + + + + + + + + +
dofileloadfileloadpcallprintxpcall
+
+
+

+
+

+

Each API function takes a fixed set of required positional + arguments and optional keyword arguments. For example, the destroy function + takes a single positional string argument (the name of the dataset to + destroy) and an optional "defer" keyword boolean argument. When + using parentheses to specify the arguments to a Lua function, only + positional arguments can be used:

+
zfs.sync.destroy("rpool@snap")
+

To use keyword arguments, functions must be called with a single + argument that is a Lua table containing entries mapping integers to + positional arguments and strings to keyword arguments:

+
zfs.sync.destroy({1="rpool@snap", + defer=true})
+

The Lua language allows curly braces to be used in place of + parenthesis as syntactic sugar for this calling convention:

+
zfs.sync.snapshot{"rpool@snap", + defer=true}
+
+
+

+

If an API function succeeds, it returns 0. If it fails, it returns + an error code and the channel program continues executing. API functions do + not generate Fatal Errors except in the case of an unrecoverable internal + file system error.

+

In addition to returning an error code, some functions also return + extra details describing what caused the error. This extra description is + given as a second return value, and will always be a Lua table, or Nil if no + error details were returned. Different keys will exist in the error details + table depending on the function and error case. Any such function may be + called expecting a single return value:

+
errno = + zfs.sync.promote(dataset)
+

Or, the error details can be retrieved:

+
+
errno, details = zfs.sync.promote(dataset)
+if (errno == EEXIST) then
+    assert(details ~= Nil)
+    list_of_conflicting_snapshots = details
+end
+
+

The following global aliases for API function error return codes + are defined for use in channel programs:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
EPERMECHILDENODEVENOSPCENOENTEAGAINENOTDIR
ESPIPEESRCHENOMEMEISDIREROFSEINTREACCES
EINVALEMLINKEIOEFAULTENFILEEPIPEENXIO
ENOTBLKEMFILEEDOME2BIGEBUSYENOTTYERANGE
ENOEXECEEXISTETXTBSYEDQUOTEBADFEXDEVEFBIG
+
+
+

+

For detailed descriptions of the exact behavior of any ZFS + administrative operations, see the main zfs(8) manual + page.

+
+
(msg)
+
Record a debug message in the zfs_dbgmsg log. A log of these messages can + be printed via mdb's "::zfs_dbgmsg" command, or can be monitored + live by running +
dtrace -n + 'zfs-dbgmsg{trace(stringof(arg0))}'
+

+
+
msg (string)
+
Debug message to be printed.
+
+
+
(dataset)
+
Returns true if the given dataset exists, or false if it doesn't. A fatal + error will be thrown if the dataset is not in the target pool. That is, in + a channel program running on rpool, + zfs.exists("rpool/nonexistent_fs") returns + false, but + zfs.exists("somepool/fs_that_may_exist") will + error. +

+
+
dataset (string)
+
Dataset to check for existence. Must be in the target pool.
+
+
+
(dataset, + property)
+
Returns two values. First, a string, number or table containing the + property value for the given dataset. Second, a string containing the + source of the property (i.e. the name of the dataset in which it was set + or nil if it is readonly). Throws a Lua error if the dataset is invalid or + the property doesn't exist. Note that Lua only supports int64 number types + whereas ZFS number properties are uint64. This means very large values + (like GUIDs) may wrap around and appear negative. +

+
+
dataset (string)
+
Filesystem or snapshot path to retrieve properties from.
+
property (string)
+
Name of property to retrieve. All filesystem, snapshot and volume + properties are supported except for + and + . + Also supports the + snap + and + bookmark + properties and the + ⟨|⟩⟨|id + properties, though the id must be in numeric form.
+
+
+
+
+
+
The sync submodule contains functions that modify the on-disk state. They + are executed in "syncing context". +

The available sync submodule functions are as follows:

+
+
(dataset, + [defer=true|false])
+
Destroy the given dataset. Returns 0 on successful destroy, or a + nonzero error code if the dataset could not be destroyed (for example, + if the dataset has any active children or clones). +

+
+
dataset (string)
+
Filesystem or snapshot to be destroyed.
+
[defer (boolean)]
+
Valid only for destroying snapshots. If set to true, and the + snapshot has holds or clones, allows the snapshot to be marked for + deferred deletion rather than failing.
+
+
+
(dataset, + property)
+
Clears the specified property in the given dataset, causing it to be + inherited from an ancestor, or restored to the default if no ancestor + property is set. The zfs + inherit -S option has + not been implemented. Returns 0 on success, or a nonzero error code if + the property could not be cleared. +

+
+
dataset (string)
+
Filesystem or snapshot containing the property to clear.
+
property (string)
+
The property to clear. Allowed properties are the same as those + for the zfs + inherit command.
+
+
+
(dataset)
+
Promote the given clone to a filesystem. Returns 0 on successful + promotion, or a nonzero error code otherwise. If EEXIST is returned, + the second return value will be an array of the clone's snapshots + whose names collide with snapshots of the parent filesystem. +

+
+
dataset (string)
+
Clone to be promoted.
+
+
+
(filesystem)
+
Rollback to the previous snapshot for a dataset. Returns 0 on + successful rollback, or a nonzero error code otherwise. Rollbacks can + be performed on filesystems or zvols, but not on snapshots or mounted + datasets. EBUSY is returned in the case where the filesystem is + mounted. +

+
+
filesystem (string)
+
Filesystem to rollback.
+
+
+
(dataset, + property, value)
+
Sets the given property on a dataset. Currently only user properties + are supported. Returns 0 if the property was set, or a nonzero error + code otherwise. +

+
+
dataset (string)
+
The dataset where the property will be set.
+
property (string)
+
The property to set.
+
value (string)
+
The value of the property to be set.
+
+
+
(dataset)
+
Create a snapshot of a filesystem. Returns 0 if the snapshot was + successfully created, and a nonzero error code otherwise. +

Note: Taking a snapshot will fail on any pool older than + legacy version 27. To enable taking snapshots from ZCP scripts, the + pool must be upgraded.

+

+
+
dataset (string)
+
Name of snapshot to create.
+
+
+
(dataset, + oldsnapname, + newsnapname)
+
Rename a snapshot of a filesystem or a volume. Returns 0 if the + snapshot was successfully renamed, and a nonzero error code otherwise. +

+
+
dataset (string)
+
Name of the snapshot's parent dataset.
+
oldsnapname (string)
+
Original name of the snapshot.
+
newsnapname (string)
+
New name of the snapshot.
+
+
+
(source, + newbookmark)
+
Create a bookmark of an existing source snapshot or bookmark. Returns + 0 if the new bookmark was successfully created, and a nonzero error + code otherwise. +

Note: Bookmarking requires the corresponding pool feature + to be enabled.

+

+
+
source (string)
+
Full name of the existing snapshot or bookmark.
+
newbookmark (string)
+
Full name of the new bookmark.
+
+
+
+
+
+
For each function in the zfs.sync submodule, there is a + corresponding zfs.check function which performs a + "dry run" of the same operation. Each takes the same arguments + as its zfs.sync counterpart and returns 0 if the + operation would succeed, or a non-zero error code if it would fail, along + with any other error details. That is, each has the same behavior as the + corresponding sync function except for actually executing the requested + change. For example, + ("fs") + returns 0 if + zfs.sync.destroy("fs") + would successfully destroy the dataset. +

The available zfs.check functions are:

+
+
(dataset, + [defer=true|false])
+
 
+
(dataset)
+
 
+
(filesystem)
+
 
+
(dataset, + property, value)
+
 
+
(dataset)
+
 
+
+
+
+
The zfs.list submodule provides functions for iterating over datasets and + properties. Rather than returning tables, these functions act as Lua + iterators, and are generally used as follows: +
+
for child in zfs.list.children("rpool") do
+    ...
+end
+
+

The available zfs.list functions are:

+
+
(snapshot)
+
Iterate through all clones of the given snapshot. +

+
+
snapshot (string)
+
Must be a valid snapshot path in the current pool.
+
+
+
(dataset)
+
Iterate through all snapshots of the given dataset. Each snapshot is + returned as a string containing the full dataset name, e.g. + "pool/fs@snap". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all direct children of the given dataset. Each child + is returned as a string containing the full dataset name, e.g. + "pool/fs/child". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(dataset)
+
Iterate through all bookmarks of the given dataset. Each bookmark is + returned as a string containing the full dataset name, e.g. + "pool/fs#bookmark". +

+
+
dataset (string)
+
Must be a valid filesystem or volume.
+
+
+
(snapshot)
+
Iterate through all user holds on the given snapshot. Each hold is + returned as a pair of the hold's tag and the timestamp (in seconds + since the epoch) at which it was created. +

+
+
snapshot (string)
+
Must be a valid snapshot.
+
+
+
(dataset)
+
An alias for zfs.list.user_properties (see relevant entry). +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Iterate through all user properties for the given dataset. For each + step of the iteration, output the property name, its value, and its + source. Throws a Lua error if the dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot, or volume.
+
+
+
(dataset)
+
Returns an array of strings, the names of the valid system (non-user + defined) properties for the given dataset. Throws a Lua error if the + dataset is invalid. +

+
+
dataset (string)
+
Must be a valid filesystem, snapshot or volume.
+
+
+
+
+
+
+
+
+

+
+

+

The following channel program recursively destroys a filesystem + and all its snapshots and children in a naive manner. Note that this does + not involve any error handling or reporting.

+
+
function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        zfs.sync.destroy(snap)
+    end
+    zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+
+
+
+

+

A more verbose and robust version of the same channel program, + which properly detects and reports errors, and also takes the dataset to + destroy as a command line argument, would be as follows:

+
+
succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+    for child in zfs.list.children(root) do
+        destroy_recursive(child)
+    end
+    for snap in zfs.list.snapshots(root) do
+        err = zfs.sync.destroy(snap)
+        if (err ~= 0) then
+            failed[snap] = err
+        else
+            succeeded[snap] = err
+        end
+    end
+    err = zfs.sync.destroy(root)
+    if (err ~= 0) then
+        failed[root] = err
+    else
+        succeeded[root] = err
+    end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+
+
+
+

+

The following function performs a forced promote operation by + attempting to promote the given clone and destroying any conflicting + snapshots.

+
+
function force_promote(ds)
+   errno, details = zfs.check.promote(ds)
+   if (errno == EEXIST) then
+       assert(details ~= Nil)
+       for i, snap in ipairs(details) do
+           zfs.sync.destroy(ds .. "@" .. snap)
+       end
+   elseif (errno ~= 0) then
+       return errno
+   end
+   return zfs.sync.promote(ds)
+end
+
+
+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-project.8.html b/man/v2.2/8/zfs-project.8.html new file mode 100644 index 000000000..14a172e19 --- /dev/null +++ b/man/v2.2/8/zfs-project.8.html @@ -0,0 +1,362 @@ + + + + + + + zfs-project.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-project.8

+
+ + + + + +
ZFS-PROJECT(8)System Manager's ManualZFS-PROJECT(8)
+
+
+

+

zfs-project — + manage projects in ZFS filesystem

+
+
+

+ + + + + +
zfsproject + [-d|-r] + file|directory
+
+ + + + + +
zfsproject -C + [-kr] + file|directory
+
+ + + + + +
zfsproject -c + [-0] + [-d|-r] + [-p id] + file|directory
+
+ + + + + +
zfsproject [-p + id] [-rs] + file|directory
+
+
+

+
+
zfs project + [-d|-r] + file|directory
+
List project identifier (ID) and inherit flag of files and directories. +
+
+
Show the directory project ID and inherit flag, not its children.
+
+
List subdirectories recursively.
+
+
+
zfs project + -C [-kr] + file|directory
+
Clear project inherit flag and/or ID on the files and directories. +
+
+
Keep the project ID unchanged. If not specified, the project ID will + be reset to zero.
+
+
Clear subdirectories' flags recursively.
+
+
+
zfs project + -c [-0] + [-d|-r] + [-p id] + file|directory
+
Check project ID and inherit flag on the files and directories: report + entries without the project inherit flag, or with project IDs different + from the target directory's project ID or the one specified with + -p. +
+
+
Delimit filenames with a NUL byte instead of a newline; don't output diagnoses.
+
+
Check the directory project ID and inherit flag, not its + children.
+
+ id
+
Compare to id instead of the target files and + directories' project IDs.
+
+
Check subdirectories recursively.
+
+
+
zfs project + -p id + [-rs] + file|directory
+
Set project ID and/or inherit flag on the files and directories. +
+
+ id
+
Set the project ID to the given value.
+
+
Set on subdirectories recursively.
+
+
Set project inherit flag on the given files and directories. This is + usually used for setting up tree quotas with + -r. In that case, the directory's project ID + will be set for all its descendants, unless specified explicitly with + -p.
+
+
+
+
+
+
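For illustration, a hedged sequence using a hypothetical project ID and directory:
# zfs project -p 1001 -r -s /tank/projects/alpha
# zfs project -c -r /tank/projects/alpha
# zfs project -r /tank/projects/alpha
# zfs project -C -r -k /tank/projects/alpha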

+

zfs-projectspace(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-projectspace.8.html b/man/v2.2/8/zfs-projectspace.8.html new file mode 100644 index 000000000..5b3816775 --- /dev/null +++ b/man/v2.2/8/zfs-projectspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-projectspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-projectspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified filesystem, snapshot, or path. If a path is given, the filesystem that contains that path will be used. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
+
+
Do not print headers; use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before a SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified filesystem or snapshot. This subcommand is identical to userspace, except that the project identifier is a numeral, not a name. Therefore, neither the -i option (SID to POSIX ID translation) nor the -n option (numeric ID) nor the -t option (types) is needed.
+
+
+
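A hedged usage sketch with hypothetical dataset names:
# zfs userspace -o name,used,quota -S used tank/home
# zfs groupspace -t posixgroup tank/home
# zfs projectspace tank/projects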
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-promote.8.html b/man/v2.2/8/zfs-promote.8.html new file mode 100644 index 000000000..4c93a7ed1 --- /dev/null +++ b/man/v2.2/8/zfs-promote.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-promote.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-promote.8

+
+ + + + + +
ZFS-PROMOTE(8)System Manager's ManualZFS-PROMOTE(8)
+
+
+

+

zfs-promote — + promote clone dataset to no longer depend on origin + snapshot

+
+
+

+ + + + + +
zfspromote clone
+
+
+

+

The zfs promote + command makes it possible to destroy the dataset that the clone was created + from. The clone parent-child dependency relationship is reversed, so that + the origin dataset becomes a clone of the specified dataset.

+

The snapshot that was cloned, and any snapshots previous to this + snapshot, are now owned by the promoted clone. The space they use moves from + the origin dataset to the promoted clone, so enough space must be available + to accommodate these snapshots. No new space is consumed by this operation, + but the space accounting is adjusted. The promoted clone must not have any + conflicting snapshot names of its own. The zfs + rename subcommand can be used to rename any + conflicting snapshots.

+
+
+

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+
+

+

zfs-clone(8), + zfs-rename(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-receive.8.html b/man/v2.2/8/zfs-receive.8.html new file mode 100644 index 000000000..99b68fc9d --- /dev/null +++ b/man/v2.2/8/zfs-receive.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-receive.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-receive.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values will be set (-o) or inherited (-x) on the topmost file system in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the topmost file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S. Specifying -o origin=snapshot is a special case because, even if origin is a read-only property and cannot be set, it's allowed to receive the send stream as a clone of the given snapshot.

+
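For example, a sketch with hypothetical dataset names: a replicated subtree can be received with an overridden compression property and an inherited mountpoint, and the received compression value can later be restored:

# zfs send -R poolA/data@snap | ssh host zfs receive -o compression=lz4 -x mountpoint poolB/received/data
# ssh host zfs inherit -S compression poolB/received/data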

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+
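A sketch of keeping an encrypted replication chain consistently raw (dataset names are hypothetical), so the IV sets on both sides stay in sync:

# zfs send -w tank/enc@snap1 | ssh host zfs receive poolB/enc
# zfs send -w -i tank/enc@snap1 tank/enc@snap2 | ssh host zfs receive poolB/enc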

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
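For instance, a sketch of the -e behaviour with hypothetical names: sending poolA/fsA/fsB@snap and receiving it with -e appends only the last element, creating poolB/received/fsB@snap:

# zfs send poolA/fsA/fsB@snap | ssh host zfs receive -e poolB/received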
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at receive time. Set-once properties bound to the received data, such as normalization and casesensitivity, cannot be set at receive time even when the datasets are newly created by zfs receive. Additionally both settable properties version and volsize cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with a stream generated by zfs send -t token, where the token is the value of the receive_resume_token property of the filesystem or volume which is received into (a short resumption sketch is included after the examples below).

+

To use this flag, the storage pool must have the extensible_dataset feature enabled. See zpool-features(7) for details on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset, by using the + provided stream as the source of healthy data. This method of healing can + only heal data blocks present in the stream. Metadata can not be healed by + corrective receive. Running a scrub is recommended post-healing to ensure + all data corruption was repaired. +

It's important to consider why the corruption happened in the first place. If the underlying hardware is slowly failing, periodically repairing the data will not prevent data loss later on, when the hardware fails completely.

+
+
+
+
+
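A sketch of a corrective receive, assuming a healthy full stream of the affected snapshot was previously saved to a file (names are hypothetical):

# zfs receive -c tank/data@monday < /backup/tank-data-monday.zstream
# zpool scrub tank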

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+
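A sketch of resuming a receive that was interrupted after being started with -s (host and dataset names are hypothetical): read the receive_resume_token property on the destination, then restart the stream from that token on the sender:

# TOKEN=$(ssh host zfs get -H -o value receive_resume_token poolB/received/fs)
# zfs send -t "$TOKEN" | ssh host zfs receive -s poolB/received/fs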

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-recv.8.html b/man/v2.2/8/zfs-recv.8.html new file mode 100644 index 000000000..55c6c9364 --- /dev/null +++ b/man/v2.2/8/zfs-recv.8.html @@ -0,0 +1,628 @@ + + + + + + + zfs-recv.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-recv.8

+
+ + + + + +
ZFS-RECEIVE(8)System Manager's ManualZFS-RECEIVE(8)
+
+
+

+

zfs-receive — + create snapshot from backup stream

+
+
+

+ + + + + +
zfsreceive [-FhMnsuv] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
+ + + + + +
zfsreceive [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
+ + + + + +
zfsreceive -A + filesystem|volume
+
+ + + + + +
zfsreceive -c + [-vn] + filesystem|snapshot
+
+
+

+
+
zfs receive + [-FhMnsuv] [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem|volume|snapshot
+
 
+
zfs receive + [-FhMnsuv] + [-d|-e] + [-o + origin=snapshot] + [-o + property=value] + [-x property] + filesystem
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the zfs + send subcommand, which by default creates a full + stream. zfs recv can be + used as an alias for zfs + receive. +

If an incremental stream is received, then the + destination file system must already exist, and its most recent snapshot + must match the incremental stream's source. For + , the + destination device link is destroyed and recreated, which means the + + cannot be accessed during the receive + operation.

+

When a snapshot replication package stream that is generated + by using the zfs send + -R command is received, any snapshots that do + not exist on the sending location are destroyed by using the + zfs destroy + -d command.

+

The ability to send and receive deduplicated send streams has + been removed. However, a deduplicated send stream created with older + software can be converted to a regular (non-deduplicated) stream by + using the zstream redup + command.

+

If -o + property=value or + -x property is specified, it + applies to the effective value of the property throughout the entire + subtree of replicated datasets. Effective property values will be set + (-o) or inherited (-x) + on the topmost in the replicated subtree. In descendant datasets, if the + property is set by the send stream, it will be overridden by forcing the + property to be inherited from the top‐most file system. Received + properties are retained in spite of being overridden and may be restored + with zfs inherit + -S. Specifying -o + origin= + is a special case because, even if origin is a + read-only property and cannot be set, it's allowed to receive the send + stream as a clone of the given snapshot.

+

Raw encrypted send streams (created with + zfs send + -w) may only be received as is, and cannot be + re-encrypted, decrypted, or recompressed by the receive process. + Unencrypted streams can be received as encrypted datasets, either + through inheritance or by specifying encryption parameters with the + -o options. Note that the + keylocation property cannot be overridden to + prompt during a receive. This is because the receive + process itself is already using the standard input for the send stream. + Instead, the property can be overridden after the receive completes.

+

The added security provided by raw sends adds some + restrictions to the send and receive process. ZFS will not allow a mix + of raw receives and non-raw receives. Specifically, any raw incremental + receives that are attempted after a non-raw receive will fail. Non-raw + receives do not have this restriction and, therefore, are always + possible. Because of this, it is best practice to always use either raw + sends for their security benefits or non-raw sends for their flexibility + when working with encrypted datasets, but not a combination.

+

The reason for this restriction stems from the inherent + restrictions of the AEAD ciphers that ZFS uses to encrypt data. When + using ZFS native encryption, each block of data is encrypted against a + randomly generated number known as the "initialization vector" + (IV), which is stored in the filesystem metadata. This number is + required by the encryption algorithms whenever the data is to be + decrypted. Together, all of the IVs provided for all of the blocks in a + given snapshot are collectively called an "IV set". When ZFS + performs a raw send, the IV set is transferred from the source to the + destination in the send stream. When ZFS performs a non-raw send, the + data is decrypted by the source system and re-encrypted by the + destination system, creating a snapshot with effectively the same data, + but a different IV set. In order for decryption to work after a raw + send, ZFS must ensure that the IV set used on both the source and + destination side match. When an incremental raw receive is performed on + top of an existing snapshot, ZFS will check to confirm that the + "from" snapshot on both the source and destination were using + the same IV set, ensuring the new IV set is consistent.

+

The name of the snapshot (and file system, if a full stream is + received) that this subcommand creates depends on the argument type and + the use of the -d or -e + options.

+

If the argument is a snapshot name, the specified + snapshot is created. If the argument is a file + system or volume name, a snapshot with the same name as the sent + snapshot is created within the specified + filesystem or volume. If + neither of the -d or -e + options are specified, the provided target snapshot name is used exactly + as provided.

+

The -d and -e + options cause the file system name of the target snapshot to be + determined by appending a portion of the sent snapshot's name to the + specified target filesystem. If the + -d option is specified, all but the first + element of the sent snapshot's file system path (usually the pool name) + is used and any required intermediate file systems within the specified + one are created. If the -e option is specified, + then only the last element of the sent snapshot's file system name (i.e. + the name of the source file system itself) is used as the target file + system name.

+
+
+
Force a rollback of the file system to the most recent snapshot before + performing the receive operation. If receiving an incremental + replication stream (for example, one generated by + zfs send + -R + [-i|-I]), destroy + snapshots and file systems that do not exist on the sending side.
+
+
Discard the first element of the sent snapshot's file system name, + using the remaining elements to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Discard all but the last element of the sent snapshot's file system + name, using that element to determine the name of the target file + system for the new snapshot as described in the paragraph above.
+
+
Skip the receive of holds. There is no effect if holds are not + sent.
+
+
Force an unmount of the file system while receiving a snapshot. This + option is not supported on Linux.
+
+
Do not actually receive the stream. This can be useful in conjunction + with the -v option to verify the name the + receive operation would use.
+
+ origin=snapshot
+
Forces the stream to be received as a clone of the given snapshot. If + the stream is a full send stream, this will create the filesystem + described by the stream as a clone of the specified snapshot. Which + snapshot was specified will not affect the success or failure of the + receive, as long as the snapshot does exist. If the stream is an + incremental send stream, all the normal verification will be + performed.
+
+ property=value
+
Sets the specified property as if the command + zfs set + property=value was invoked + immediately before the receive. When receiving a stream from + zfs send + -R, causes the property to be inherited by all + descendant datasets, as through zfs + inherit property was run on + any descendant datasets that have this property set on the sending + system. +

If the send stream was sent with + -c then overriding the + compression property will have no effect on + received data but the compression property will be + set. To have the data recompressed on receive remove the + -c flag from the send stream.

+

Any editable property can be set at + receive time. Set-once properties bound to the received data, such + as + + and + , + cannot be set at receive time even when the datasets are newly + created by zfs + receive. Additionally both settable + properties + + and + + cannot be set at receive time.

+

The -o option may be specified + multiple times, for different properties. An error results if the + same property is specified in multiple -o or + -x options.

+

The -o option may also be used to + override encryption properties upon initial receive. This allows + unencrypted streams to be received as encrypted datasets. To cause + the received dataset (or root dataset of a recursive stream) to be + received as an encryption root, specify encryption properties in the + same manner as is required for zfs + create. For instance:

+
# zfs + send tank/test@snap1 | + zfs recv + -o + encryption= + -o + = + -o + keylocation=file:///path/to/keyfile
+

Note that -o + keylocation=prompt may not be + specified here, since the standard input is already being utilized + for the send stream. Once the receive has completed, you can use + zfs set to change + this setting after the fact. Similarly, you can receive a dataset as + an encrypted child by specifying -x + encryption to force the property to be inherited. + Overriding encryption properties (except for + keylocation) is not possible with raw send + streams.

+
+
+
If the receive is interrupted, save the partially received state, + rather than deleting it. Interruption may be due to premature + termination of the stream (e.g. due to network failure or failure of + the remote system if the stream is being read over a network + connection), a checksum error in the stream, termination of the + zfs receive process, + or unclean shutdown of the system. +

The receive can be resumed with + a stream generated by zfs + send -t + token, where the token + is the value of the + + property of the filesystem or volume which is received into.

+

To use this flag, the storage pool + must have the + + feature enabled. See zpool-features(7) for details + on ZFS feature flags.

+
+
+
File system that is associated with the received stream is not + mounted.
+
+
Print verbose information about the stream and the time required to + perform the receive operation.
+
+ property
+
Ensures that the effective value of the specified property after the + receive is unaffected by the value of that property in the send stream + (if any), as if the property had been excluded from the send stream. +

If the specified property is not present in the send + stream, this option does nothing.

+

If a received property needs to be overridden, the + effective value will be set or inherited, depending on whether the + property is inheritable or not.

+

In the case of an incremental update, + -x leaves any existing local setting or + explicit inheritance unchanged.

+

All -o restrictions (e.g. + set-once) apply equally to -x.

+
+
+
+
zfs receive + -A + filesystem|volume
+
Abort an interrupted zfs + receive -s, deleting its + saved partially received state.
+
zfs receive + -c [-vn] + filesystem|snapshot
+
Attempt to repair data corruption in the specified dataset, by using the + provided stream as the source of healthy data. This method of healing can + only heal data blocks present in the stream. Metadata can not be healed by + corrective receive. Running a scrub is recommended post-healing to ensure + all data corruption was repaired. +

It's important to consider why corruption has happened in the + first place. If you have slowly failing hardware - periodically + repairing the data is not going to save you from data loss later on when + the hardware fails completely.

+
+
+
+
+

+
+

+

The following commands send a full stream and then an incremental + stream to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + .

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-send(8), zstream(8)

+
+
+ + + + + +
March 12, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-redact.8.html b/man/v2.2/8/zfs-redact.8.html new file mode 100644 index 000000000..0d4beaa83 --- /dev/null +++ b/man/v2.2/8/zfs-redact.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-redact.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-redact.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVbcehnpsvw] + [-R [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
, + --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
, + --exclude + dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
, + --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compress=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data. Note that uncompressed data from the sender will still attempt to compress on the receiver, unless you specify -o compress=off.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
+ snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
, + --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
, + --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
, + --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
, + --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
+ snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
, + --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
, + --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
  2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted; redacting with respect to additional snapshots causes less data to be redacted, because the snapshots define what is permitted and everything else is redacted. This use case will produce a new redacted snapshot.
  3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
, + --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.

+
+
+
+
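A condensed sketch of that workflow, with hypothetical pool, dataset, bookmark, and host names (the clones are sanitized by hand before their snapshots are taken):

# zfs snapshot tank/shop@parent
# zfs clone tank/shop@parent tank/shop-research
# zfs clone tank/shop@parent tank/shop-dev
  sanitize the sensitive files in each clone, then snapshot both clones
# zfs snapshot tank/shop-research@clean
# zfs snapshot tank/shop-dev@clean
# zfs redact tank/shop@parent book1 tank/shop-research@clean tank/shop-dev@clean
# zfs send --redact book1 tank/shop@parent | ssh host zfs receive targetpool/shop
# zfs send -i tank/shop@parent tank/shop-research@clean | ssh host zfs receive targetpool/shop-research
# zfs send -i tank/shop@parent tank/shop-dev@clean | ssh host zfs receive targetpool/shop-dev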

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-release.8.html b/man/v2.2/8/zfs-release.8.html new file mode 100644 index 000000000..049a6caf3 --- /dev/null +++ b/man/v2.2/8/zfs-release.8.html @@ -0,0 +1,325 @@ + + + + + + + zfs-release.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-release.8

+
+ + + + + +
ZFS-HOLD(8)System Manager's ManualZFS-HOLD(8)
+
+
+

+

zfs-holdhold + ZFS snapshots to prevent their removal

+
+
+

+ + + + + +
zfshold [-r] + tag snapshot
+
+ + + + + +
zfsholds [-rHp] + snapshot
+
+ + + + + +
zfsrelease [-r] + tag snapshot
+
+
+

+
+
zfs hold + [-r] tag + snapshot
+
Adds a single reference, named with the tag + argument, to the specified snapshots. Each snapshot has its own tag + namespace, and tags must be unique within that space. +

If a hold exists on a snapshot, attempts to destroy that + snapshot by using the zfs + destroy command return + EBUSY.

+
+
+
Specifies that a hold with the given tag is applied recursively to the + snapshots of all descendent file systems.
+
+
+
zfs holds + [-rHp] snapshot
+
Lists all existing user references for the given snapshot or snapshots. +
+
+
Lists the holds that are set on the named descendent snapshots, in + addition to listing the holds on the named snapshot.
+
+
Do not print headers, use tab-delimited output.
+
+
Prints holds timestamps as unix epoch timestamps.
+
+
+
zfs release + [-r] tag + snapshot
+
Removes a single reference, named with the tag + argument, from the specified snapshot or snapshots. The tag must already + exist for each snapshot. If a hold exists on a snapshot, attempts to + destroy that snapshot by using the zfs + destroy command return EBUSY. +
+
+
Recursively releases a hold with the given tag on the snapshots of all + descendent file systems.
+
+
+
+
+
+
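A brief sketch of the hold lifecycle with hypothetical dataset and tag names:

# zfs snapshot -r tank/home@backup-2024
# zfs hold -r keep tank/home@backup-2024
# zfs holds -r tank/home@backup-2024
# zfs destroy -r tank/home@backup-2024   (returns EBUSY while the hold exists)
# zfs release -r keep tank/home@backup-2024
# zfs destroy -r tank/home@backup-2024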

+

zfs-destroy(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-rename.8.html b/man/v2.2/8/zfs-rename.8.html new file mode 100644 index 000000000..980756586 --- /dev/null +++ b/man/v2.2/8/zfs-rename.8.html @@ -0,0 +1,375 @@ + + + + + + + zfs-rename.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rename.8

+
+ + + + + +
ZFS-RENAME(8)System Manager's ManualZFS-RENAME(8)
+
+
+

+

zfs-rename — + rename ZFS dataset

+
+
+

+ + + + + +
zfsrename [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
+ + + + + +
zfsrename -p + [-f] + filesystem|volume + filesystem|volume
+
+ + + + + +
zfsrename -u + [-f] filesystem + filesystem
+
+ + + + + +
zfsrename -r + snapshot snapshot
+
+
+

+
+
zfs rename + [-f] + filesystem|volume|snapshot + filesystem|volume|snapshot
+
 
+
zfs rename + -p [-f] + filesystem|volume + filesystem|volume
+
 
+
zfs rename + -u [-f] + filesystem filesystem
+
Renames the given dataset. The new target can be located anywhere in the + ZFS hierarchy, with the exception of snapshots. Snapshots can only be + renamed within the parent file system or volume. When renaming a snapshot, + the parent file system of the snapshot does not need to be specified as + part of the second argument. Renamed file systems can inherit new mount + points, in which case they are unmounted and remounted at the new mount + point. +
+
+
Force unmount any file systems that need to be unmounted in the + process. This flag has no effect if used together with the + -u flag.
+
+
Creates all the nonexistent parent datasets. Datasets created in this + manner are automatically mounted according to the + mountpoint property inherited from their + parent.
+
+
Do not remount file systems during rename. If a file system's + mountpoint property is set to + + or + , + the file system is not unmounted even if this option is not + given.
+
+
+
zfs rename + -r snapshot + snapshot
+
Recursively rename the snapshots of all descendent datasets. Snapshots are + the only dataset that can be renamed recursively.
+
+
+
+
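For instance, a sketch of -p and -u with hypothetical dataset names: the first command creates tank/archive and tank/archive/2024 as needed, while the second renames without remounting:

# zfs rename -p tank/data tank/archive/2024/data
# zfs rename -u tank/web tank/www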

+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-rollback.8.html b/man/v2.2/8/zfs-rollback.8.html new file mode 100644 index 000000000..9d86e8915 --- /dev/null +++ b/man/v2.2/8/zfs-rollback.8.html @@ -0,0 +1,299 @@ + + + + + + + zfs-rollback.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-rollback.8

+
+ + + + + +
ZFS-ROLLBACK(8)System Manager's ManualZFS-ROLLBACK(8)
+
+
+

+

zfs-rollback — + roll ZFS dataset back to snapshot

+
+
+

+ + + + + +
zfsrollback [-Rfr] + snapshot
+
+
+

+

When a dataset is rolled back, all data that has changed since the + snapshot is discarded, and the dataset reverts to the state at the time of + the snapshot. By default, the command refuses to roll back to a snapshot + other than the most recent one. In order to do so, all intermediate + snapshots and bookmarks must be destroyed by specifying the + -r option.

+

The -rR options do not recursively destroy + the child snapshots of a recursive snapshot. Only direct snapshots of the + specified filesystem are destroyed by either of these options. To completely + roll back a recursive snapshot, you must roll back the individual child + snapshots.

+
+
+
Destroy any more recent snapshots and bookmarks, as well as any clones of + those snapshots.
+
+
Used with the -R option to force an unmount of any + clone file systems that are to be destroyed.
+
+
Destroy any snapshots and bookmarks more recent than the one + specified.
+
+
+
+

+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+
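If any of the more recent snapshots have clones, -R (optionally with -f to force unmounting those clones) must be used instead; a sketch with the same hypothetical dataset:

# zfs rollback -Rf pool/home/anne@yesterday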
+

+

zfs-snapshot(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-send.8.html b/man/v2.2/8/zfs-send.8.html new file mode 100644 index 000000000..a8611c8ef --- /dev/null +++ b/man/v2.2/8/zfs-send.8.html @@ -0,0 +1,836 @@ + + + + + + + zfs-send.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-send.8

+
+ + + + + +
ZFS-SEND(8)System Manager's ManualZFS-SEND(8)
+
+
+

+

zfs-send — + generate backup stream of ZFS dataset

+
+
+

+ + + + + +
zfssend [-DLPVbcehnpsvw] + [-R [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
+ + + + + +
zfssend [-DLPVcensvw] + [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
+ + + + + +
zfssend --redact + redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
+ + + + + +
zfssend [-PVenv] + -t receive_resume_token
+
+ + + + + +
zfssend [-PVnv] + -S filesystem
+
+ + + + + +
zfsredact snapshot + redaction_bookmark + redaction_snapshot
+
+
+

+
+
zfs send + [-DLPVbcehnpsvw] [-R + [-X + dataset[,dataset]…]] + [[-I|-i] + snapshot] snapshot
+
Creates a stream representation of the second + snapshot, which is written to standard output. The + output can be redirected to a file or to a different system (for example, + using ssh(1)). By default, a full stream is generated. +
+
, + --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
+ snapshot
+
Generate a stream package that sends all intermediary snapshots from + the first snapshot to the second snapshot. For example, + -I @a fs@d + is similar to -i @a + ; + -i + + ; + -i + + fs@d. The incremental source may be specified as + with the -i option.
+
, + --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
, + --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
-R, --replicate
+
Generate a replication stream package, which will replicate the + specified file system, and all descendent file systems, up to the + named snapshot. When received, all properties, snapshots, descendent + file systems, and clones are preserved. +

If the -i or + -I flags are used in conjunction with the + -R flag, an incremental replication stream + is generated. The current values of properties, and current snapshot + and file system names are set when the stream is received. If the + -F flag is specified when this stream is + received, snapshots and file systems that do not exist on the + sending side are destroyed. If the -R flag + is used to send encrypted datasets, then -w + must also be specified.

+
+
-V, --proctitle
+
Set the process title to a per-second report of how much data has been + sent.
+
-X, --exclude dataset[,dataset]…
+
With -R, -X specifies + a set of datasets (and, hence, their descendants), to be excluded from + the send stream. The root dataset may not be excluded. + -X a + -X b is equivalent to + -X + a,b.
+
-e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
-b, --backup
+
Sends only received property values whether or not they are overridden + by local settings, but only if the dataset has ever been received. Use + this option when you want zfs + receive to restore received properties backed + up on the sent dataset and to avoid sending local settings that may + have nothing to do with the source dataset, but only with how the data + is backed up.
+
-c, --compressed
+
Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory (see the compression property for details). If the lz4_compress feature is active on the sending system, then the receiving system must have that feature enabled as well. If the large_blocks feature is enabled on the sending system but the -L option is not supplied in conjunction with -c, then the data will be decompressed before sending so it can be split into smaller block sizes. Streams sent with -c will not have their data recompressed on the receiver side using -o compress=value. The data will stay compressed as it was from the sender. The new compression property will be set for future data. Note that uncompressed data from the sender will still attempt to compress on the receiver, unless you specify -o compress=off.
+
-w, --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
-h, --holds
+
Generate a stream package that includes any snapshot holds (created + with the zfs hold + command), and indicating to zfs + receive that the holds be applied to the + dataset on the receiving system.
+
-i snapshot
+
Generate an incremental stream from the first + snapshot (the incremental source) to the second + snapshot (the incremental target). The + incremental source can be specified as the last component of the + snapshot name (the @ character and following) and it + is assumed to be from the same file system as the incremental target. +

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

+
+
-n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
-p, --props
+
Include the dataset's properties in the stream. This flag is implicit + when -R is specified. The receiving system + must also support this feature. Sends of encrypted datasets must use + -w when using this flag.
+
-s, --skip-missing
+
Allows sending a replication stream even when there are snapshots + missing in the hierarchy. When a snapshot is missing, instead of + throwing an error and aborting the send, a warning is printed to the + standard error stream and the dataset to which it belongs and its + descendents are skipped. This flag can only be used in conjunction + with -R.
+
-v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v. +

The format of the stream is committed. You will be able to + receive your streams on future versions of ZFS.

+
+
+
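As a hedged illustration of the replication flags above (every pool, dataset, and host name here is hypothetical), a full replication stream that excludes one child dataset, followed by an incremental replication that carries all intermediate snapshots:
# zfs send -R -X pool/home/scratch pool/home@monday | ssh host zfs receive -u poolB/backup
# zfs send -R -I @monday pool/home@friday | ssh host zfs receive -u poolB/backup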
+
zfs send + [-DLPVcenvw] [-i + snapshot|bookmark] + filesystem|volume|snapshot
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark. If the destination is a filesystem or volume, + the pool must be read-only, or the filesystem must not be mounted. When + the stream generated from a filesystem or volume is received, the default + snapshot name will be "--head--". +
+
-D, --dedup
+
Deduplicated send is no longer supported. This flag is accepted for + backwards compatibility, but a regular, non-deduplicated stream will + be generated.
+
-L, --large-block
+
Generate a stream which may contain blocks larger than 128 KiB. This + flag has no effect if the large_blocks pool feature + is disabled, or if the recordsize property of this + filesystem has never been set above 128 KiB. The receiving system must + have the large_blocks pool feature enabled as well. + See zpool-features(7) for details on ZFS feature + flags and the large_blocks feature.
+
-P, --parsable
+
Print machine-parsable verbose information about the stream package + generated.
+
-c, --compressed
+
Generate a more compact stream by using compressed WRITE records for + blocks which are compressed on disk and in memory (see the + compression property for details). If the + lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. If the large_blocks feature is enabled on the + sending system but the -L option is not + supplied in conjunction with -c, then the data + will be decompressed before sending so it can be split into smaller + block sizes.
+
-w, --raw
+
For encrypted datasets, send data exactly as it exists on disk. This + allows backups to be taken even if encryption keys are not currently + loaded. The backup may then be received on an untrusted machine since + that machine will not have the encryption keys to read the protected + data or alter it without being detected. Upon being received, the + dataset will have the same encryption keys as it did on the send side, + although the keylocation property will be defaulted + to prompt if not otherwise provided. For unencrypted + datasets, this flag will be equivalent to + -Lec. Note that if you do not use this flag + for sending encrypted datasets, data will be sent unencrypted and may + be re-encrypted with a different encryption key on the receiving + system, which will disable the ability to do a raw send to that system + for incrementals.
+
-e, --embed
+
Generate a more compact stream by using + WRITE_EMBEDDED records for blocks which are stored + more compactly on disk by the embedded_data pool + feature. This flag has no effect if the + embedded_data feature is disabled. The receiving + system must have the embedded_data feature enabled. + If the lz4_compress feature is active on the sending + system, then the receiving system must have that feature enabled as + well. Datasets that are sent with this flag may not be received as an + encrypted dataset, since encrypted datasets cannot use the + embedded_data feature. See + zpool-features(7) for details on ZFS feature flags + and the embedded_data feature.
+
-i snapshot|bookmark
+
Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following).

If the incremental target is a clone, the incremental + source can be the origin snapshot, or an earlier snapshot in the + origin's filesystem, or the origin's origin, etc.

+
+
-n, --dryrun
+
Do a dry-run ("No-op") send. Do not generate any actual send + data. This is useful in conjunction with the + -v or -P flags to + determine what data will be sent. In this case, the verbose output + will be written to standard output (contrast with a non-dry-run, where + the stream is written to standard output and the verbose output goes + to standard error).
+
-v, --verbose
+
Print verbose information about the stream package generated. This + information includes a per-second report of how much data has been + sent. The same report can be requested by sending + SIGINFO or SIGUSR1, + regardless of -v.
+
+
+
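A brief, hedged sketch of an incremental send whose source is a bookmark rather than a snapshot (names are hypothetical, and the destination is assumed to already hold the monday snapshot); the bookmark lets the sender destroy the source snapshot while keeping an incremental reference point:
# zfs bookmark pool/fs@monday pool/fs#monday
# zfs destroy pool/fs@monday
# zfs send -i pool/fs#monday pool/fs@tuesday | ssh host zfs receive poolB/fs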
zfs send + --redact redaction_bookmark + [-DLPVcenpv] [-i + snapshot|bookmark] + snapshot
+
Generate a redacted send stream. This send stream contains all blocks from + the snapshot being sent that aren't included in the redaction list + contained in the bookmark specified by the + --redact (or -d) flag. The + resulting send stream is said to be redacted with respect to the snapshots + the bookmark specified by the --redact + flag was created with. The bookmark must have been + created by running zfs + redact on the snapshot being sent. +

This feature can be used to allow clones of a filesystem to be + made available on a remote system, in the case where their parent need + not (or needs to not) be usable. For example, if a filesystem contains + sensitive data, and it has clones where that sensitive data has been + secured or replaced with dummy data, redacted sends can be used to + replicate the secured data without replicating the original sensitive + data, while still sharing all possible blocks. A snapshot that has been + redacted with respect to a set of snapshots will contain all blocks + referenced by at least one snapshot in the set, but will contain none of + the blocks referenced by none of the snapshots in the set. In other + words, if all snapshots in the set have modified a given block in the + parent, that block will not be sent; but if one or more snapshots have + not modified a block in the parent, they will still reference the + parent's block, so that block will be sent. Note that only user data + will be redacted.

+

When the redacted send stream is received, we will generate a + redacted snapshot. Due to the nature of redaction, a redacted dataset + can only be used in the following ways:

+
    +
  1. To receive, as a clone, an incremental send from the original snapshot to one of the snapshots it was redacted with respect to. In this case, the stream will produce a valid dataset when received because all blocks that were redacted in the parent are guaranteed to be present in the child's send stream. This use case will produce a normal snapshot, which can be used just like other snapshots.
  2. To receive an incremental send from the original snapshot to something redacted with respect to a subset of the set of snapshots the initial snapshot was redacted with respect to. In this case, each block that was redacted in the original is still redacted (redacting with respect to additional snapshots causes less data to be redacted (because the snapshots define what is permitted, and everything else is redacted)). This use case will produce a new redacted snapshot.
  3. To receive an incremental send from a redaction bookmark of the original snapshot that was created when redacting with respect to a subset of the set of snapshots the initial snapshot was created with respect to anything else. A send stream from such a redaction bookmark will contain all of the blocks necessary to fill in any redacted data, should it be needed, because the sending system is aware of what blocks were originally redacted. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  4. To receive an incremental send from a redacted version of the initial snapshot that is redacted with respect to a subset of the set of snapshots the initial snapshot was created with respect to. A send stream from a compatible redacted dataset will contain all of the blocks necessary to fill in any redacted data. This will either produce a normal snapshot or a redacted one, depending on whether the new send stream is redacted.
  5. To receive a full send as a clone of the redacted snapshot. Since the stream is a full send, it definitionally contains all the data needed to create a new dataset. This use case will either produce a normal snapshot or a redacted one, depending on whether the full send stream was redacted.
+

These restrictions are detected and enforced by + zfs receive; a redacted + send stream will contain the list of snapshots that the stream is + redacted with respect to. These are stored with the redacted snapshot, + and are used to detect and correctly handle the cases above. Note that + for technical reasons, raw sends and redacted sends cannot be combined + at this time.

+
+
zfs send + [-PVenv] -t + receive_resume_token
+
Creates a send stream which resumes an interrupted receive. The + receive_resume_token is the value of this property + on the filesystem or volume that was being received into. See the + documentation for zfs + receive -s for more + details.
+
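A hedged sketch of resuming an interrupted transfer (host and dataset names are hypothetical, and the original receive is assumed to have been started with zfs receive -s so a resume token exists): the token is read from the partially received dataset on the destination and passed back to zfs send -t on the source.
# zfs send -t "$(ssh host zfs get -H -o value receive_resume_token poolB/received/fs)" | ssh host zfs receive -s poolB/received/fs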
zfs send + [-PVnv] [-i + snapshot|bookmark] + -S filesystem
+
Generate a send stream from a dataset that has been partially received. +
+
-S, --saved
+
This flag requires that the specified filesystem previously received a + resumable send that did not finish and was interrupted. In such + scenarios this flag enables the user to send this partially received + state. Using this flag will always use the last fully received + snapshot as the incremental source if it exists.
+
+
+
zfs redact + snapshot redaction_bookmark + redaction_snapshot
+
Generate a new redaction bookmark. In addition to the typical bookmark + information, a redaction bookmark contains the list of redacted blocks and + the list of redaction snapshots specified. The redacted blocks are blocks + in the snapshot which are not referenced by any of the redaction + snapshots. These blocks are found by iterating over the metadata in each + redaction snapshot to determine what has been changed since the target + snapshot. Redaction is designed to support redacted zfs sends; see the + entry for zfs send for + more information on the purpose of this operation. If a redact operation + fails partway through (due to an error or a system failure), the redaction + can be resumed by rerunning the same command.
+
+
+

+

ZFS has support for a limited version of data subsetting, in the form of redaction. Using the zfs redact command, a redaction bookmark can be created that stores a list of blocks containing sensitive information. When provided to zfs send, this causes a redacted send to occur. Redacted sends omit the blocks containing sensitive information, replacing them with REDACT records. When these send streams are received, a redacted dataset is created. A redacted dataset cannot be mounted by default, since it is incomplete. It can be used to receive other send streams. In this way datasets can be used for data backup and replication, with all the benefits that zfs send and receive have to offer, while protecting sensitive information from being stored on less-trusted machines or services.

+

For the purposes of redaction, there are two steps to the process. + A redact step, and a send/receive step. First, a redaction bookmark is + created. This is done by providing the zfs + redact command with a parent snapshot, a bookmark to + be created, and a number of redaction snapshots. These redaction snapshots + must be descendants of the parent snapshot, and they should modify data that + is considered sensitive in some way. Any blocks of data modified by all of + the redaction snapshots will be listed in the redaction bookmark, because it + represents the truly sensitive information. When it comes to the send step, + the send process will not send the blocks listed in the redaction bookmark, + instead replacing them with REDACT records. When received on the target + system, this will create a redacted dataset, missing the data that + corresponds to the blocks in the redaction bookmark on the sending system. + The incremental send streams from the original parent to the redaction + snapshots can then also be received on the target system, and this will + produce a complete snapshot that can be used normally. Incrementals from one + snapshot on the parent filesystem and another can also be done by sending + from the redaction bookmark, rather than the snapshots themselves.

+

In order to make the purpose of the feature more clear, an example + is provided. Consider a zfs filesystem containing four files. These files + represent information for an online shopping service. One file contains a + list of usernames and passwords, another contains purchase histories, a + third contains click tracking data, and a fourth contains user preferences. + The owner of this data wants to make it available for their development + teams to test against, and their market research teams to do analysis on. + The development teams need information about user preferences and the click + tracking data, while the market research teams need information about + purchase histories and user preferences. Neither needs access to the + usernames and passwords. However, because all of this data is stored in one + ZFS filesystem, it must all be sent and received together. In addition, the + owner of the data wants to take advantage of features like compression, + checksumming, and snapshots, so they do want to continue to use ZFS to store + and transmit their data. Redaction can help them do so. First, they would + make two clones of a snapshot of the data on the source. In one clone, they + create the setup they want their market research team to see; they delete + the usernames and passwords file, and overwrite the click tracking data with + dummy information. In another, they create the setup they want the + development teams to see, by replacing the passwords with fake information + and replacing the purchase histories with randomly generated ones. They + would then create a redaction bookmark on the parent snapshot, using + snapshots on the two clones as redaction snapshots. The parent can then be + sent, redacted, to the target server where the research and development + teams have access. Finally, incremental sends from the parent snapshot to + each of the clones can be sent to and received on the target server; these + snapshots are identical to the ones on the source, and are ready to be used, + while the parent snapshot on the target contains none of the username and + password data present on the source, because it was removed by the redacted + send operation.
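The scenario above might look roughly like the following on the command line; every pool, dataset, and bookmark name here is hypothetical, and the sketch omits error handling:
# zfs snapshot pool/shop@base
# zfs clone pool/shop@base pool/shop-research
  scrub the data the research team must not see in /pool/shop-research
# zfs clone pool/shop@base pool/shop-dev
  scrub the data the development team must not see in /pool/shop-dev
# zfs snapshot pool/shop-research@clean
# zfs snapshot pool/shop-dev@clean
# zfs redact pool/shop@base book1 pool/shop-research@clean pool/shop-dev@clean
# zfs send --redact book1 pool/shop@base | ssh host zfs receive targetpool/shop
# zfs send -i pool/shop@base pool/shop-research@clean | ssh host zfs receive targetpool/shop-research
# zfs send -i pool/shop@base pool/shop-dev@clean | ssh host zfs receive targetpool/shop-dev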

+
+
+
+

+

See -v.

+
+
+

+
+

+

The following commands send a full stream and then an incremental stream to a remote machine, restoring them into poolB/received/fs@a and poolB/received/fs@b, respectively. poolB must contain the file system poolB/received, and must not initially contain poolB/received/fs.

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+
+

+

zfs-bookmark(8), + zfs-receive(8), zfs-redact(8), + zfs-snapshot(8)

+
+
+ + + + + +
July 27, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-set.8.html b/man/v2.2/8/zfs-set.8.html new file mode 100644 index 000000000..56034d825 --- /dev/null +++ b/man/v2.2/8/zfs-set.8.html @@ -0,0 +1,566 @@ + + + + + + + zfs-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-set.8

+
+ + + + + +
ZFS-SET(8)System Manager's ManualZFS-SET(8)
+
+
+

+

zfs-setset + properties on ZFS datasets

+
+
+

+ + + + + +
zfs set [-u] property=value [property=value]… filesystem|volume|snapshot
+
+ + + + + +
zfs get [-r|-d depth] [-Hp] [-o field[,field]…] [-s source[,source]…] [-t type[,type]…] all|property[,property]… [filesystem|volume|snapshot|bookmark]…
+
+ + + + + +
zfs inherit [-rS] property filesystem|volume|snapshot
+
+
+

+
+
zfs set + [-u] + property=value + [property=value]… + filesystem|volume|snapshot
+
Only some properties can be edited. See zfsprops(7) for more information on what properties can be set and acceptable values. Numeric values can be specified as exact values, or in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively). User properties can be set on snapshots. For more information, see the User Properties section of zfsprops(7).
+
+
Update mountpoint, sharenfs, sharesmb property but do not mount or + share the dataset.
+
+
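As a small, hedged illustration (dataset names are hypothetical): a property set with a human-readable suffix, and a mountpoint updated with -u so the dataset is not remounted immediately.
# zfs set quota=100G pool/projects
# zfs set -u mountpoint=/export/projects pool/projects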
+
zfs get + [-r|-d + depth] [-Hp] + [-o + field[,field]…] + [-s + source[,source]…] + [-t + type[,type]…] + all|property[,property]… + [filesystem|volume|snapshot|bookmark]…
+
Displays properties for the given datasets. If no datasets are specified, + then the command displays properties for all datasets on the system. For + each property, the following columns are displayed: +
+
+
+
Dataset name
+
+
Property name
+
+
Property value
+
+
Property source local, default, + inherited, temporary, + received, or + - (none).
+
+
+

All columns are displayed by default, though this can be + controlled by using the -o option. This command + takes a comma-separated list of properties as described in the + Native Properties and + User Properties sections of + zfsprops(7).

+

The value all can be used to display all + properties that apply to the given dataset's type + (filesystem, volume, + snapshot, or + bookmark).

+
+
+
Display output in a form more easily parsed by scripts. Any headers + are omitted, and fields are explicitly separated by a single tab + instead of an arbitrary amount of space.
+
-d depth
+
Recursively display any children of the dataset, limiting the recursion to depth. A depth of 1 will display only the dataset and its direct children.
+
-o field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
Recursively display properties for any children.
+
-s source
+
A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local, default, inherited, temporary, received, or none. The default value is all sources.
+
-t type
+
A comma-separated list of types to display, where + type is one of filesystem, + snapshot, volume, + bookmark, or + all.
+
+
+
zfs inherit + [-rS] property + filesystem|volume|snapshot
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists. See zfsprops(7) for a listing of default + values, and details on which properties can be inherited. +
+
+
Recursively inherit the given property for all children.
+
+
Revert the property to the received value, if one exists; otherwise, + for non-inheritable properties, to the default; otherwise, operate as + if the -S option was not specified.
+
+
+
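A hedged example of reverting to a received value with -S (dataset name hypothetical): if compression was received from a source system and later overridden locally, -S restores the received setting.
# zfs inherit -S compression poolB/received/fs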
+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs set mountpoint=/export/home pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs set compression=off pool/home
+
# zfs set compression=on pool/home/anne
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs set quota=50G pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+
+

+

zfsprops(7), zfs-list(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-share.8.html b/man/v2.2/8/zfs-share.8.html new file mode 100644 index 000000000..f4989c545 --- /dev/null +++ b/man/v2.2/8/zfs-share.8.html @@ -0,0 +1,310 @@ + + + + + + + zfs-share.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-share.8

+
+ + + + + +
ZFS-SHARE(8)System Manager's ManualZFS-SHARE(8)
+
+
+

+

zfs-shareshare + and unshare ZFS filesystems

+
+
+

+ + + + + +
zfs share [-l] -a|filesystem
+
+ + + + + +
zfs unshare -a|filesystem|mountpoint
+
+
+

+
+
zfs share + [-l] + -a|filesystem
+
Shares available ZFS file systems. +
+
+
Load keys for encrypted filesystems as they are being mounted. This is equivalent to executing zfs load-key on each encryption root before mounting it. Note that if a filesystem has keylocation=prompt, this will cause the terminal to interactively block after asking for the key.
+
+
Share all available ZFS file systems. Invoked automatically as part of + the boot process.
+
filesystem
+
Share the specified filesystem according to the + sharenfs and sharesmb properties. + File systems are shared when the sharenfs or + sharesmb property is set.
+
+
+
zfs unshare + -a|filesystem|mountpoint
+
Unshares currently shared ZFS file systems. +
+
+
Unshare all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
filesystem|mountpoint
+
Unshare the specified filesystem. The command can also be given a path + to a ZFS file system shared on the system.
+
+
+
+
+
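As a brief, hedged illustration (dataset name hypothetical): sharing is driven by the sharenfs and sharesmb properties, while the share and unshare subcommands publish or withdraw the export without changing the property.
# zfs set sharenfs=on pool/export/home
# zfs unshare pool/export/home
  the property is still on, so the export can be re-published
# zfs share pool/export/home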
+

+

exports(5), smb.conf(5), + zfsprops(7)

+
+
+ + + + + +
May 17, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-snapshot.8.html b/man/v2.2/8/zfs-snapshot.8.html new file mode 100644 index 000000000..4c762b9f3 --- /dev/null +++ b/man/v2.2/8/zfs-snapshot.8.html @@ -0,0 +1,352 @@ + + + + + + + zfs-snapshot.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-snapshot.8

+
+ + + + + +
ZFS-SNAPSHOT(8)System Manager's ManualZFS-SNAPSHOT(8)
+
+
+

+

zfs-snapshot — + create snapshots of ZFS datasets

+
+
+

+ + + + + +
zfs snapshot [-r] [-o property=value]… dataset@snapname
+
+
+

+

All previous modifications by successful system calls to the file + system are part of the snapshots. Snapshots are taken atomically, so that + all snapshots correspond to the same moment in time. + zfs snap can be used as an + alias for zfs snapshot. See + the Snapshots section of + zfsconcepts(7) for details.

+
+
-o property=value
+
Set the specified property; see zfs + create for details.
+
+
Recursively create snapshots of all descendent datasets
+
+
+
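For instance, a hedged sketch (dataset and property names are hypothetical) that combines -r and -o to tag a recursive snapshot with a user property at creation time:
# zfs snapshot -r -o com.example:reason=pre-upgrade pool/home@before-upgrade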
+

+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+
+

+

zfs-bookmark(8), zfs-clone(8), + zfs-destroy(8), zfs-diff(8), + zfs-hold(8), zfs-rename(8), + zfs-rollback(8), zfs-send(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unallow.8.html b/man/v2.2/8/zfs-unallow.8.html new file mode 100644 index 000000000..668edae33 --- /dev/null +++ b/man/v2.2/8/zfs-unallow.8.html @@ -0,0 +1,956 @@ + + + + + + + zfs-unallow.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unallow.8

+
+ + + + + +
ZFS-ALLOW(8)System Manager's ManualZFS-ALLOW(8)
+
+
+

+

zfs-allow — + delegate ZFS administration permissions to unprivileged + users

+
+
+

+ + + + + +
zfs allow [-dglu] user|group[,user|group]… perm|@setname[,perm|@setname]… filesystem|volume
+
+ + + + + +
zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]… filesystem|volume
+
+ + + + + +
zfs allow -c perm|@setname[,perm|@setname]… filesystem|volume
+
+ + + + + +
zfs allow -s @setname perm|@setname[,perm|@setname]… filesystem|volume
+
+ + + + + +
zfs unallow [-dglru] user|group[,user|group]… [perm|@setname[,perm|@setname]…] filesystem|volume
+
+ + + + + +
zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]…] filesystem|volume
+
+ + + + + +
zfs unallow [-r] -c [perm|@setname[,perm|@setname]…] filesystem|volume
+
+ + + + + +
zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]…] filesystem|volume
+
+
+

+
+
zfs allow + filesystem|volume
+
Displays permissions that have been delegated on the specified filesystem + or volume. See the other forms of zfs + allow for more information. +

Delegations are supported under Linux with the exception of mount, unmount, mountpoint, canmount, rename, and share. These permissions cannot be delegated because the Linux mount(8) command restricts modifications of the global namespace to the root user.

+
+
zfs allow + [-dglu] + user|group[,user|group]… + perm|@setname[,perm|@setname]… + filesystem|volume
+
 
+
zfs allow + [-dl] + -e|everyone + perm|@setname[,perm|@setname]… + filesystem|volume
+
Delegates ZFS administration permission for the file systems to + non-privileged users. +
+
+
Allow only for the descendent file systems.
+
-e|everyone
+
Specifies that the permissions be delegated to everyone.
+
-g group[,group]…
+
Explicitly specify that permissions are delegated to the group.
+
+
Allow "locally" only for the specified file system.
+
-u user[,user]…
+
Explicitly specify that permissions are delegated to the user.
+
user|group[,user|group]…
+
Specifies to whom the permissions are delegated. Multiple entities can + be specified as a comma-separated list. If neither of the + -gu options are specified, then the argument + is interpreted preferentially as the keyword + everyone, then as a user name, and lastly as a group + name. To specify a user or group named "everyone", use the + -g or -u options. To + specify a group with the same name as a user, use the + -g options.
+
perm|@setname[,perm|@setname]…
+
The permissions to delegate. Multiple permissions may be specified as + a comma-separated list. Permission names are the same as ZFS + subcommand and property names. See the property list below. Property + set names, which begin with @, may be specified. See + the -s form below for details.
+
+

If neither of the -dl options are + specified, or both are, then the permissions are allowed for the file + system or volume, and all of its descendents.

+

Permissions are generally the ability to use a ZFS subcommand + or change a ZFS property. The following permissions are available:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NAMETYPENOTES



allowsubcommandMust also have the permission that is being allowed
bookmarksubcommand
clonesubcommandMust also have the create ability and mount ability in + the origin file system
createsubcommandMust also have the mount ability. Must also have the + refreservation ability to create a non-sparse volume.
destroysubcommandMust also have the mount ability
diffsubcommandAllows lookup of paths within a dataset given an object number, and + the ability to create snapshots necessary to zfs diff.
holdsubcommandAllows adding a user hold to a snapshot
load-keysubcommandAllows loading and unloading of encryption key (see zfs + load-key and zfs unload-key).
change-keysubcommandAllows changing an encryption key via zfs change-key.
mountsubcommandAllows mounting/umounting ZFS datasets
promotesubcommandMust also have the mount and promote ability in the + origin file system
receivesubcommandMust also have the mount and create ability
releasesubcommandAllows releasing a user hold which might destroy the snapshot
renamesubcommandMust also have the mount and create ability in the new + parent
rollbacksubcommandMust also have the mount ability
sendsubcommand
sharesubcommandAllows sharing file systems over NFS or SMB protocols
snapshotsubcommandMust also have the mount ability
groupquotaotherAllows accessing any groupquota@ property
groupobjquotaotherAllows accessing any groupobjquota@ + property
groupusedotherAllows reading any groupused@ property
groupobjusedotherAllows reading any groupobjused@ property
userpropotherAllows changing any user property
userquotaotherAllows accessing any userquota@ property
userobjquotaotherAllows accessing any userobjquota@ + property
userusedotherAllows reading any userused@ property
userobjusedotherAllows reading any userobjused@ property
projectobjquotaotherAllows accessing any projectobjquota@ + property
projectquotaotherAllows accessing any projectquota@ + property
projectobjusedotherAllows reading any projectobjused@ + property
projectusedotherAllows reading any projectused@ property
aclinheritproperty
aclmodeproperty
acltypeproperty
atimeproperty
canmountproperty
casesensitivityproperty
checksumproperty
compressionproperty
contextproperty
copiesproperty
dedupproperty
defcontextproperty
devicesproperty
dnodesizeproperty
encryptionproperty
execproperty
filesystem_limitproperty
fscontextproperty
keyformatproperty
keylocationproperty
logbiasproperty
mlslabelproperty
mountpointproperty
nbmandproperty
normalizationproperty
overlayproperty
pbkdf2itersproperty
primarycacheproperty
quotaproperty
readonlyproperty
recordsizeproperty
redundant_metadataproperty
refquotaproperty
refreservationproperty
relatimeproperty
reservationproperty
rootcontextproperty
secondarycacheproperty
setuidproperty
sharenfsproperty
sharesmbproperty
snapdevproperty
snapdirproperty
snapshot_limitproperty
special_small_blocksproperty
syncproperty
utf8onlyproperty
versionproperty
volblocksizeproperty
volmodeproperty
volsizeproperty
vscanproperty
xattrproperty
zonedproperty
+
+
zfs allow + -c + perm|@setname[,perm|@setname]… + filesystem|volume
+
Sets "create time" permissions. These permissions are granted + (locally) to the creator of any newly-created descendent file system.
+
zfs allow + -s + @setname + perm|@setname[,perm|@setname]… + filesystem|volume
+
Defines or adds permissions to a permission set. The set can be used by + other zfs allow commands + for the specified file system and its descendents. Sets are evaluated + dynamically, so changes to a set are immediately reflected. Permission + sets follow the same naming restrictions as ZFS file systems, but the name + must begin with @, and can be no more than 64 characters + long.
+
zfs unallow + [-dglru] + user|group[,user|group]… + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-dlr] + -e|everyone + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
 
+
zfs unallow + [-r] -c + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions that were granted with the zfs + allow command. No permissions are explicitly + denied, so other permissions granted are still in effect. For example, if + the permission is granted by an ancestor. If no permissions are specified, + then all permissions for the specified user, + group, or everyone are removed. + Specifying everyone (or using the + -e option) only removes the permissions that were + granted to everyone, not all permissions for every user and group. See the + zfs allow command for a + description of the -ldugec options. +
+
+
Recursively remove the permissions from this file system and all + descendents.
+
+
+
zfs unallow + [-r] -s + @setname + [perm|@setname[,perm|@setname]…] + filesystem|volume
+
Removes permissions from a permission set. If no permissions are + specified, then all permissions are removed, thus removing the set + entirely.
+
+
+
+

+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group + staff to create file systems in + tank/users. This syntax also allows staff members to + destroy their own file systems, but not destroy anyone else's file system. + The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows to grant the ability to set quotas and + reservations on the users/home file system. The + permissions on users/home are also displayed.

+
+
# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unjail.8.html b/man/v2.2/8/zfs-unjail.8.html new file mode 100644 index 000000000..6fcf817b2 --- /dev/null +++ b/man/v2.2/8/zfs-unjail.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unjail.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unjail.8

+
+ + + + + +
ZFS-JAIL(8)System Manager's ManualZFS-JAIL(8)
+
+
+

+

zfs-jailattach + or detach ZFS filesystem from FreeBSD jail

+
+
+

+ + + + + +
zfs jailjailid|jailname + filesystem
+
+ + + + + +
zfs unjailjailid|jailname + filesystem
+
+
+

+
+
zfs jail + jailid|jailname + filesystem
+
Attach the specified filesystem to the jail identified by JID jailid or name jailname. From now on this file system tree can be managed from within a jail if the jailed property has been set. To use this functionality, the jail needs the allow.mount and allow.mount.zfs parameters set to 1 and the enforce_statfs parameter set to a value lower than 2.

You cannot attach a jailed dataset's children to another jail. + You can also not attach the root file system of the jail or any dataset + which needs to be mounted before the zfs rc script is run inside the + jail, as it would be attached unmounted until it is mounted from the rc + script inside the jail.

+

To allow management of the dataset from within a jail, the jailed property has to be set and the jail needs access to the /dev/zfs device. The quota property cannot be changed from within a jail.

+

After a dataset is attached to a jail and the + jailed property is set, a jailed file system cannot be + mounted outside the jail, since the jail administrator might have set + the mount point to an unacceptable value.

+

See jail(8) for more information on managing + jails. Jails are a FreeBSD feature and are not + relevant on other platforms.

+
+
zfs unjail + jailid|jailname + filesystem
+
Detaches the specified filesystem from the jail + identified by JID jailid or name + jailname.
+
+
+
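A hedged sketch of handing a dataset to a jail and taking it back (jail and dataset names are hypothetical); the jailed property must be set so the dataset can be managed from inside the jail:
# zfs set jailed=on pool/jails/data
# zfs jail webjail pool/jails/data
  the dataset can now be mounted and managed from inside the jail
# zfs unjail webjail pool/jails/data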
+

+

zfsprops(7), jail(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unload-key.8.html b/man/v2.2/8/zfs-unload-key.8.html new file mode 100644 index 000000000..bc6a9203a --- /dev/null +++ b/man/v2.2/8/zfs-unload-key.8.html @@ -0,0 +1,476 @@ + + + + + + + zfs-unload-key.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unload-key.8

+
+ + + + + +
ZFS-LOAD-KEY(8)System Manager's ManualZFS-LOAD-KEY(8)
+
+
+

+

zfs-load-key — + load, unload, or change encryption key of ZFS + dataset

+
+
+

+ + + + + +
zfs load-key [-nr] [-L keylocation] -a|filesystem
+
+ + + + + +
zfs unload-key [-r] -a|filesystem
+
+ + + + + +
zfs change-key [-l] [-o keylocation=value] [-o keyformat=value] [-o pbkdf2iters=value] filesystem
+
+ + + + + +
zfs change-key -i [-l] filesystem
+
+
+

+
+
zfs + load-key [-nr] + [-L keylocation] + -a|filesystem
+
Load the key for filesystem, allowing it and all children that inherit the keylocation property to be accessed. The key will be expected in the format specified by the keyformat and location specified by the keylocation property. Note that if the keylocation is set to prompt the terminal will interactively wait for the key to be entered. Loading a key will not automatically mount the dataset. If that functionality is desired, zfs mount -l will ask for the key and mount the dataset (see zfs-mount(8)). Once the key is loaded the keystatus property will become available.
+
+
Recursively loads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Loads the keys for all encryption roots in all imported pools.
+
+
Do a dry-run ("No-op") load-key. + This will cause zfs to simply check that the + provided key is correct. This command may be run even if the key is + already loaded.
+
-L keylocation
+
Use keylocation instead of the + keylocation property. This will not change the value + of the property on the dataset. Note that if used with either + -r or -a, + keylocation may only be given as + prompt.
+
+
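As a brief, hedged illustration (dataset name hypothetical): loading a passphrase key and then mounting, or doing both in one step with zfs mount -l.
# zfs load-key pool/home/secure
# zfs mount pool/home/secure
  or equivalently, in a single step
# zfs mount -l pool/home/secure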
+
zfs + unload-key [-r] + -a|filesystem
+
Unloads a key from ZFS, removing the ability to access the dataset and all of its children that inherit the keylocation property. This requires that the dataset is not currently open or mounted. Once the key is unloaded the keystatus property will become unavailable.
+
+
Recursively unloads the keys for the specified filesystem and all + descendent encryption roots.
+
+
Unloads the keys for all encryption roots in all imported pools.
+
+
+
zfs change-key + [-l] [-o + keylocation=value] + [-o + keyformat=value] + [-o + pbkdf2iters=value] + filesystem
+
 
+
zfs change-key + -i [-l] + filesystem
+
Changes the user's key (e.g. a passphrase) used to access a dataset. This + command requires that the existing key for the dataset is already loaded. + This command may also be used to change the keylocation, + keyformat, and pbkdf2iters properties + as needed. If the dataset was not previously an encryption root it will + become one. Alternatively, the -i flag may be + provided to cause an encryption root to inherit the parent's key instead. +

If the user's key is compromised, zfs + change-key does not necessarily protect existing + or newly-written data from attack. Newly-written data will continue to + be encrypted with the same master key as the existing data. The master + key is compromised if an attacker obtains a user key and the + corresponding wrapped master key. Currently, zfs + change-key does not overwrite the previous + wrapped master key on disk, so it is accessible via forensic analysis + for an indeterminate length of time.

+

In the event of a master key compromise, ideally the drives + should be securely erased to remove all the old data (which is readable + using the compromised master key), a new pool created, and the data + copied back. This can be approximated in place by creating new datasets, + copying the data (e.g. using zfs + send | zfs + recv), and then clearing the free space with + zpool trim + --secure if supported by your hardware, + otherwise zpool + initialize.

+
+
+
Ensures the key is loaded before attempting to change the key. This is + effectively equivalent to running zfs + load-key filesystem; + zfs change-key + filesystem
+
-o property=value
+
Allows the user to set encryption key properties + (keyformat, keylocation, + and pbkdf2iters) while + changing the key. This is the only way to alter + keyformat and pbkdf2iters after + the dataset has been created.
+
+
Indicates that zfs should make filesystem + inherit the key of its parent. Note that this command can only be run + on an encryption root that has an encrypted parent.
+
+
+
+
+

+

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused/groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

+

Key rotation is managed by ZFS. Changing the user's key (e.g. a + passphrase) does not require re-encrypting the entire dataset. Datasets can + be scrubbed, resilvered, renamed, and deleted without the encryption keys + being loaded (see the load-key subcommand for more + info on key loading).

+

Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root. Any descendant datasets will inherit their encryption key from the encryption root by default, meaning that loading, unloading, or changing the key for the encryption root will implicitly do the same for all inheriting datasets. If this inheritance is not desired, simply supply a keyformat when creating the child dataset or use zfs change-key to break an existing relationship, creating a new encryption root on the child. Note that the child's keyformat may match that of the parent while still creating a new encryption root, and that changing the encryption property alone does not create a new encryption root; this would simply use a different cipher suite with the same key as its encryption root. The one exception is that clones will always use their origin's encryption key. As a result of this exception, some encryption-related properties (namely keystatus, keyformat, keylocation, and pbkdf2iters) do not inherit like other ZFS properties and instead use the value determined by their encryption root. Encryption root inheritance can be tracked via the read-only encryptionroot property.

+
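A hedged sketch of the inheritance behaviour described above (dataset names are hypothetical): the child initially inherits the parent's key, and zfs change-key turns it into its own encryption root.
# zfs create -o encryption=on -o keyformat=passphrase pool/secure
# zfs create pool/secure/child
  pool/secure/child inherits its key from pool/secure
# zfs change-key -o keyformat=passphrase pool/secure/child
  pool/secure/child is now its own encryption root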

Encryption changes the behavior of a few ZFS operations. + Encryption is applied after compression so compression ratios are preserved. + Normally checksums in ZFS are 256 bits long, but for encrypted data the + checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from + the encryption suite, which provides additional protection against + maliciously altered data. Deduplication is still possible with encryption + enabled but for security, datasets will only deduplicate against themselves, + their snapshots, and their clones.

+

There are a few limitations on encrypted datasets. Encrypted data cannot be embedded via the embedded_data feature. Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be. Since compression is applied before encryption, datasets may be vulnerable to a CRIME-like attack if applications accessing the data allow for it. Deduplication with encryption will leak information about which blocks are equivalent in a dataset and will incur an extra CPU cost for each block written.

+
+
+
+

+

zfsprops(7), zfs-create(8), + zfs-set(8)

+
+
+ + + + + +
January 13, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unmount.8.html b/man/v2.2/8/zfs-unmount.8.html new file mode 100644 index 000000000..48bd77731 --- /dev/null +++ b/man/v2.2/8/zfs-unmount.8.html @@ -0,0 +1,338 @@ + + + + + + + zfs-unmount.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unmount.8

+
+ + + + + +
ZFS-MOUNT(8)System Manager's ManualZFS-MOUNT(8)
+
+
+

+

zfs-mountmanage + mount state of ZFS filesystems

+
+
+

+ + + + + +
zfsmount
+
+ + + + + +
zfsmount [-Oflv] + [-o options] + -a|filesystem
+
+ + + + + +
zfsunmount [-fu] + -a|filesystem|mountpoint
+
+
+

+
+
zfs mount
+
Displays all ZFS file systems currently mounted.
+
zfs mount + [-Oflv] [-o + options] + -a|filesystem
+
Mount ZFS filesystem on a path described by its + mountpoint property, if the path exists and is empty. If + mountpoint is set to + , the + filesystem should instead be mounted using mount(8). +
+
+
Perform an overlay mount. Allows mounting in non-empty + mountpoint. See mount(8) for more + information.
+
+
Mount all available ZFS file systems. Invoked automatically as part of + the boot process if configured.
+
filesystem
+
Mount the specified filesystem.
+
+ options
+
An optional, comma-separated list of mount options to use temporarily + for the duration of the mount. See the + section of + zfsprops(7) for details.
+
+
Load keys for encrypted filesystems as they are being mounted. This is + equivalent to executing zfs + load-key on each encryption root before + mounting it. Note that if a filesystem has + =, + this will cause the terminal to interactively block after asking for + the key.
+
+
Report mount progress.
+
+
Attempt to force mounting of all filesystems, even those that couldn't + normally be mounted (e.g. redacted datasets).
+
+
+
zfs unmount + [-fu] + -a|filesystem|mountpoint
+
Unmounts currently mounted ZFS file systems. +
+
+
Unmount all available ZFS file systems. Invoked automatically as part + of the shutdown process.
+
+
Forcefully unmount the file system, even if it is currently in use. + This option is not supported on Linux.
+
+
Unload keys for any encryption roots unmounted by this command.
+
filesystem|mountpoint
+
Unmount the specified filesystem. The command can also be given a path + to a ZFS file system mount point on the system.
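For instance (an illustrative sketch; the dataset name is a placeholder), an encrypted filesystem can have its key loaded and be mounted in one step, and its key unloaded again when it is unmounted:
# zfs mount -l pool/secure
# zfs unmount -u pool/secure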
+
+
+
+
+
+ + + + + +
February 16, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-unzone.8.html b/man/v2.2/8/zfs-unzone.8.html new file mode 100644 index 000000000..a3dec8e62 --- /dev/null +++ b/man/v2.2/8/zfs-unzone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-unzone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-unzone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You also cannot attach the root file system of the user + namespace, or any dataset that needs to be mounted before the zfs + service is run inside the user namespace, as it would remain attached but + unmounted until it is mounted by the service inside the user + namespace.

+

To allow management of the dataset from within a + user namespace, the zoned property has to be set and + the user namespace needs access to the /dev/zfs + device. The + property + cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-upgrade.8.html b/man/v2.2/8/zfs-upgrade.8.html new file mode 100644 index 000000000..d4f610786 --- /dev/null +++ b/man/v2.2/8/zfs-upgrade.8.html @@ -0,0 +1,317 @@ + + + + + + + zfs-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-upgrade.8

+
+ + + + + +
ZFS-UPGRADE(8)System Manager's ManualZFS-UPGRADE(8)
+
+
+

+

zfs-upgrade — + manage on-disk version of ZFS filesystems

+
+
+

+ + + + + +
zfsupgrade
+
+ + + + + +
zfsupgrade -v
+
+ + + + + +
zfsupgrade [-r] + [-V version] + -a|filesystem
+
+
+

+
+
zfs upgrade
+
Displays a list of file systems that are not the most recent version.
+
zfs upgrade + -v
+
Displays a list of currently supported file system versions.
+
zfs upgrade + [-r] [-V + version] + -a|filesystem
+
Upgrades file systems to a new on-disk version. Once this is done, the + file systems will no longer be accessible on systems running older + versions of ZFS. zfs send + streams generated from new snapshots of these file systems cannot be + accessed on systems running older versions of ZFS. +

In general, the file system version is independent of the pool + version. See zpool-features(7) for information on + features of ZFS storage pools.

+

In some cases, the file system version and the pool version + are interrelated and the pool version must be upgraded before the file + system version can be upgraded.
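As a hedged illustration, one would typically list the supported file system versions first and then upgrade (the -a form upgrades all file systems on all imported pools):
# zfs upgrade -v
# zfs upgrade -a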

+
+
+ version
+
Upgrade to version. If not specified, upgrade to + the most recent version. This option can only be used to increase the + version number, and only up to the most recent version supported by + this version of ZFS.
+
+
Upgrade all file systems on all imported pools.
+
filesystem
+
Upgrade the specified file system.
+
+
Upgrade the specified file system and all descendent file + systems.
+
+
+
+
+
+

+

zpool-upgrade(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-userspace.8.html b/man/v2.2/8/zfs-userspace.8.html new file mode 100644 index 000000000..bcfd6debf --- /dev/null +++ b/man/v2.2/8/zfs-userspace.8.html @@ -0,0 +1,390 @@ + + + + + + + zfs-userspace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs-userspace.8

+
+ + + + + +
ZFS-USERSPACE(8)System Manager's ManualZFS-USERSPACE(8)
+
+
+

+

zfs-userspace — + display space and quotas of ZFS dataset

+
+
+

+ + + + + +
zfsuserspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsgroupspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
+ + + + + +
zfsprojectspace [-Hp] + [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
+
+

+
+
zfs + userspace [-Hinp] + [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each user in the specified + filesystem, snapshot, or path. If a path is given, the filesystem that + contains that path will be used. This corresponds to the + user, + user, + user, + and + user + properties. +
+
+
Do not print headers, use tab-delimited output.
+
+ field
+
Sort by this field in reverse order. See + -s.
+
+
Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping + exists. Normal POSIX interfaces (like stat(2), + ls -l) perform this + translation, so the -i option allows the + output from zfs + userspace to be compared directly with those + utilities. However, -i may lead to confusion + if some files were created by an SMB user before an SMB-to-POSIX name + mapping was established. In such a case, some files will be owned by + the SMB entity and some by the POSIX entity. However, the + -i option will report that the POSIX entity + has the total usage and quota for both.
+
+
Print numeric ID instead of user/group name.
+
+ field[,field]…
+
Display only the specified fields from the following set: + type, name, + , + . + The default is to display all fields.
+
+
Use exact (parsable) numeric output.
+
+ field
+
Sort output by this field. The -s and + -S flags may be specified multiple times to + sort first by one field, then by another. The default is + -s type + -s name.
+
+ type[,type]…
+
Print only the specified types from the following set: + , + posixuser, smbuser, + posixgroup, smbgroup. The default + is -t + posixuser,smbuser. The default can + be changed to include group types.
+
+
+
zfs groupspace + [-Hinp] [-o + field[,field]…] + [-s field]… + [-S field]… + [-t + type[,type]…] + filesystem|snapshot
+
Displays space consumed by, and quotas on, each group in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the default types to + display are -t + posixgroup,smbgroup.
+
zfs projectspace + [-Hp] [-o + field[,field]…] + [-s field]… + [-S field]… + filesystem|snapshot|path
+
Displays space consumed by, and quotas on, each project in the specified + filesystem or snapshot. This subcommand is identical to + userspace, except that the project identifier is a + numeral, not a name; it therefore needs neither the -i + option for SID-to-POSIX-ID translation, nor -n for numeric IDs, nor + -t for types.
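For example (the dataset name is a placeholder, and the field names are assumed from the default output columns):
# zfs userspace -o type,name,used,quota tank/home
# zfs groupspace tank/home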
+
+
+
+

+

zfsprops(7), zfs-set(8)

+
+
+ + + + + +
June 30, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-wait.8.html b/man/v2.2/8/zfs-wait.8.html new file mode 100644 index 000000000..c2ba3e1dc --- /dev/null +++ b/man/v2.2/8/zfs-wait.8.html @@ -0,0 +1,282 @@ + + + + + + + zfs-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-wait.8

+
+ + + + + +
ZFS-WAIT(8)System Manager's ManualZFS-WAIT(8)
+
+
+

+

zfs-waitwait + for activity in ZFS filesystem to stop

+
+
+

+ + + + + +
zfswait [-t + activity[,activity]…] + filesystem
+
+
+

+

Waits until all background activity of the given types has ceased + in the given filesystem. The activity could cease because it has completed + or because the filesystem has been destroyed or unmounted. If no activities + are specified, the command waits until background activity of every type + listed below has ceased. If there is no activity of the given types in + progress, the command returns immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
The filesystem's internal delete queue to empty
+
+
+

Note that the internal delete queue does not finish draining until + all large files have had time to be fully destroyed and all open file + handles to unlinked files are closed.
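For example (the dataset name is a placeholder; deleteq is the delete-queue activity name), the following blocks until the filesystem's internal delete queue has emptied:
# zfs wait -t deleteq tank/fs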

+
+
+

+

lsof(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs-zone.8.html b/man/v2.2/8/zfs-zone.8.html new file mode 100644 index 000000000..c692a67ad --- /dev/null +++ b/man/v2.2/8/zfs-zone.8.html @@ -0,0 +1,314 @@ + + + + + + + zfs-zone.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs-zone.8

+
+ + + + + +
ZFS-ZONE(8)System Manager's ManualZFS-ZONE(8)
+
+
+

+

zfs-zone, + zfs-unzoneattach and + detach ZFS filesystems to user namespaces

+
+
+

+ + + + + +
zfs zonensfile filesystem
+
+ + + + + +
zfs unzonensfile filesystem
+
+
+

+
+
zfs zone + nsfile filesystem
+
Attach the specified filesystem to the user + namespace identified by nsfile. From now on this + file system tree can be managed from within a user namespace if the + zoned property has been set. +

You cannot attach a zoned dataset's children to another user + namespace. You also cannot attach the root file system of the user + namespace, or any dataset that needs to be mounted before the zfs + service is run inside the user namespace, as it would remain attached but + unmounted until it is mounted by the service inside the user + namespace.

+

To allow management of the dataset from within a + user namespace, the zoned property has to be set and + the user namespace needs access to the /dev/zfs + device. The + property + cannot be changed from within a user namespace.

+

After a dataset is attached to a user namespace and the + zoned property is set, a zoned file system cannot be + mounted outside the user namespace, since the user namespace + administrator might have set the mount point to an unacceptable + value.

+
+
zfs unzone + nsfile filesystem
+
Detach the specified filesystem from the user + namespace identified by nsfile.
+
+
+
+

+
+

+

The following example delegates the + tank/users dataset to a user namespace identified by + user namespace file /proc/1234/ns/user.

+
# zfs + zone /proc/1234/ns/user + tank/users
+
+
+
+

+

zfsprops(7)

+
+
+ + + + + +
June 3, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs.8.html b/man/v2.2/8/zfs.8.html new file mode 100644 index 000000000..a44ac284c --- /dev/null +++ b/man/v2.2/8/zfs.8.html @@ -0,0 +1,1033 @@ + + + + + + + zfs.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zfs.8

+
+ + + + + +
ZFS(8)System Manager's ManualZFS(8)
+
+
+

+

zfsconfigure + ZFS datasets

+
+
+

+ + + + + +
zfs-?V
+
+ + + + + +
zfsversion
+
+ + + + + +
zfssubcommand + [arguments]
+
+
+

+

The zfs command configures ZFS datasets + within a ZFS storage pool, as described in zpool(8). A + dataset is identified by a unique path within the ZFS namespace:

+

+
pool[/component]/component
+

for example:

+

+
rpool/var/log
+

The maximum length of a dataset name + is + + - 1 ASCII characters (currently 255) satisfying + . Additionally snapshots are allowed to contain a single + character, + while bookmarks are allowed to contain a single + character. + / is used as separator between components. The maximum + amount of nesting allowed in a path is + + levels deep. ZFS tunables + () + are explained in zfs(4).

+

A dataset can be one of the following:

+
+
+
+
Can be mounted within the standard system namespace and behaves like other + file systems. While ZFS file systems are designed to be POSIX-compliant, + known issues exist that prevent compliance in some cases. Applications + that depend on standards conformance might fail due to non-standard + behavior when checking file system free space.
+
+
A logical volume exported as a raw or block device. This type of dataset + should only be used when a block device is required. File systems are + typically used in most environments.
+
+
A read-only version of a file system or volume at a given point in time. + It is specified as + filesystem@name or + volume@name.
+
+
Much like a snapshot, but without the hold on on-disk + data. It can be used as the source of a send (but not for a receive). It + is specified as + filesystem#name or + volume#name.
+
+
+

See zfsconcepts(7) for details.

+
+

+

Properties are divided into two types: native properties and + user-defined (or "user") properties. Native properties either + export internal statistics or control ZFS behavior. In addition, native + properties are either editable or read-only. User properties have no effect + on ZFS behavior, but you can use them to annotate datasets in a way that is + meaningful in your environment. For more information about properties, see + zfsprops(7).

+
+
+

+

Enabling the + + feature allows for the creation of encrypted filesystems and volumes. ZFS + will encrypt file and zvol data, file attributes, ACLs, permission bits, + directory listings, FUID mappings, and + // + data. For an overview of encryption, see + zfs-load-key(8).

+
+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.

+
+
zfs -?
+
Displays a help message.
+
zfs -V, + --version
+
 
+
zfs version
+
Displays the software version of the zfs userland + utility and the zfs kernel module.
+
+
+

+
+
zfs-list(8)
+
Lists the property information for the given datasets in tabular + form.
+
zfs-create(8)
+
Creates a new ZFS file system or volume.
+
zfs-destroy(8)
+
Destroys the given dataset(s), snapshot(s), or bookmark.
+
zfs-rename(8)
+
Renames the given dataset (filesystem or snapshot).
+
zfs-upgrade(8)
+
Manage upgrading the on-disk version of filesystems.
+
+
+
+

+
+
zfs-snapshot(8)
+
Creates snapshots with the given names.
+
zfs-rollback(8)
+
Roll back the given dataset to a previous snapshot.
+
zfs-hold(8)/zfs-release(8)
+
Add or remove a hold reference to the specified snapshot or snapshots. If + a hold exists on a snapshot, attempts to destroy that snapshot by using + the zfs destroy command + return + .
+
zfs-diff(8)
+
Display the difference between a snapshot of a given filesystem and + another snapshot of that filesystem from a later time or the current + contents of the filesystem.
+
+
+
+

+
+
zfs-clone(8)
+
Creates a clone of the given snapshot.
+
zfs-promote(8)
+
Promotes a clone file system to no longer be dependent on its + "origin" snapshot.
+
+
+
+

+
+
zfs-send(8)
+
Generate a send stream, which may be of a filesystem, and may be + incremental from a bookmark.
+
zfs-receive(8)
+
Creates a snapshot whose contents are as specified in the stream provided + on standard input. If a full stream is received, then a new file system is + created as well. Streams are created using the + zfs-send(8) subcommand, which by default creates a full + stream.
+
zfs-bookmark(8)
+
Creates a new bookmark of the given snapshot or bookmark. Bookmarks mark + the point in time when the snapshot was created, and can be used as the + incremental source for a zfs + send command.
+
zfs-redact(8)
+
Generate a new redaction bookmark. This feature can be used to allow + clones of a filesystem to be made available on a remote system, in the + case where their parent need not (or needs to not) be usable.
+
+
+
+

+
+
zfs-get(8)
+
Displays properties for the given datasets.
+
zfs-set(8)
+
Sets the property or list of properties to the given value(s) for each + dataset.
+
zfs-inherit(8)
+
Clears the specified property, causing it to be inherited from an + ancestor, restored to default if no ancestor has the property set, or with + the -S option reverted to the received value if + one exists.
+
+
+
+

+
+
zfs-userspace(8)/zfs-groupspace(8)/zfs-projectspace(8)
+
Displays space consumed by, and quotas on, each user, group, or project in + the specified filesystem or snapshot.
+
zfs-project(8)
+
List, set, or clear project ID and/or inherit flag on the files or + directories.
+
+
+
+

+
+
zfs-mount(8)
+
Displays all ZFS file systems currently mounted, or mount ZFS filesystem + on a path described by its mountpoint property.
+
zfs-unmount(8)
+
Unmounts currently mounted ZFS file systems.
+
+
+
+

+
+
zfs-share(8)
+
Shares available ZFS file systems.
+
zfs-unshare(8)
+
Unshares currently shared ZFS file systems.
+
+
+
+

+
+
zfs-allow(8)
+
Delegate permissions on the specified filesystem or volume.
+
zfs-unallow(8)
+
Remove delegated permissions on the specified filesystem or volume.
+
+
+
+

+
+
zfs-change-key(8)
+
Add or change an encryption key on the specified dataset.
+
zfs-load-key(8)
+
Load the key for the specified encrypted dataset, enabling access.
+
zfs-unload-key(8)
+
Unload a key for the specified dataset, removing the ability to access the + dataset.
+
+
+
+

+
+
zfs-program(8)
+
Execute ZFS administrative operations programmatically via a Lua + script-language channel program.
+
+
+
+

+
+
zfs-jail(8)
+
Attaches a filesystem to a jail.
+
zfs-unjail(8)
+
Detaches a filesystem from a jail.
+
+
+
+

+
+
zfs-wait(8)
+
Wait for background activity in a filesystem to complete.
+
+
+
+
+

+

The zfs utility exits 0 + on success, 1 if + an error occurs, and 2 + if invalid + command line options were specified.

+
+
+

+
+

+

The following commands create a file system named + pool/home and a file system named + pool/home/bob. The mount point + /export/home is set for the parent file system, and + is automatically inherited by the child file system.

+
# zfs + create pool/home
+
# zfs + set + mountpoint=/export/home + pool/home
+
# zfs + create + pool/home/bob
+
+
+

+

The following command creates a snapshot named + yesterday. This snapshot is mounted on demand in the + .zfs/snapshot directory at the root of the + pool/home/bob file system.

+
# zfs + snapshot + pool/home/bob@yesterday
+
+
+

+

The following command creates snapshots named + yesterday of + pool/home and all of its descendent file systems. Each + snapshot is mounted on demand in the .zfs/snapshot + directory at the root of its file system. The second command destroys the + newly created snapshots.

+
# zfs + snapshot -r + pool/home@yesterday
+
# zfs + destroy -r + pool/home@yesterday
+
+
+

+

The following command disables the compression + property for all file systems under pool/home. The + next command explicitly enables compression for + pool/home/anne.

+
# zfs + set + compression=off + pool/home
+
# zfs + set compression=on + pool/home/anne
+
+
+

+

The following command lists all active file systems and volumes in + the system. Snapshots are displayed if + =on. + The default is off. See zpoolprops(7) + for more information on pool properties.

+
+
# zfs list
+NAME                      USED  AVAIL  REFER  MOUNTPOINT
+pool                      450K   457G    18K  /pool
+pool/home                 315K   457G    21K  /export/home
+pool/home/anne             18K   457G    18K  /export/home/anne
+pool/home/bob             276K   457G   276K  /export/home/bob
+
+
+
+

+

The following command sets a quota of 50 Gbytes for + pool/home/bob:

+
# zfs + set quota=50G + pool/home/bob
+
+
+

+

The following command lists all properties for + pool/home/bob:

+
+
+# zfs get all pool/home/bob
+NAME           PROPERTY              VALUE                  SOURCE
+pool/home/bob  type                  filesystem             -
+pool/home/bob  creation              Tue Jul 21 15:53 2009  -
+pool/home/bob  used                  21K                    -
+pool/home/bob  available             20.0G                  -
+pool/home/bob  referenced            21K                    -
+pool/home/bob  compressratio         1.00x                  -
+pool/home/bob  mounted               yes                    -
+pool/home/bob  quota                 20G                    local
+pool/home/bob  reservation           none                   default
+pool/home/bob  recordsize            128K                   default
+pool/home/bob  mountpoint            /pool/home/bob         default
+pool/home/bob  sharenfs              off                    default
+pool/home/bob  checksum              on                     default
+pool/home/bob  compression           on                     local
+pool/home/bob  atime                 on                     default
+pool/home/bob  devices               on                     default
+pool/home/bob  exec                  on                     default
+pool/home/bob  setuid                on                     default
+pool/home/bob  readonly              off                    default
+pool/home/bob  zoned                 off                    default
+pool/home/bob  snapdir               hidden                 default
+pool/home/bob  acltype               off                    default
+pool/home/bob  aclmode               discard                default
+pool/home/bob  aclinherit            restricted             default
+pool/home/bob  canmount              on                     default
+pool/home/bob  xattr                 on                     default
+pool/home/bob  copies                1                      default
+pool/home/bob  version               4                      -
+pool/home/bob  utf8only              off                    -
+pool/home/bob  normalization         none                   -
+pool/home/bob  casesensitivity       sensitive              -
+pool/home/bob  vscan                 off                    default
+pool/home/bob  nbmand                off                    default
+pool/home/bob  sharesmb              off                    default
+pool/home/bob  refquota              none                   default
+pool/home/bob  refreservation        none                   default
+pool/home/bob  primarycache          all                    default
+pool/home/bob  secondarycache        all                    default
+pool/home/bob  usedbysnapshots       0                      -
+pool/home/bob  usedbydataset         21K                    -
+pool/home/bob  usedbychildren        0                      -
+pool/home/bob  usedbyrefreservation  0                      -
+
+

The following command gets a single property value:

+
+
# zfs get -H -o value compression pool/home/bob
+on
+
+

The following command lists all properties with local settings for + pool/home/bob:

+
+
+# zfs get -r -s local -o name,property,value all pool/home/bob
+NAME           PROPERTY              VALUE
+pool/home/bob  quota                 20G
+pool/home/bob  compression           on
+
+
+
+

+

The following command reverts the contents of + pool/home/anne to the snapshot named + yesterday, deleting all intermediate snapshots:

+
# zfs + rollback -r + pool/home/anne@yesterday
+
+
+

+

The following command creates a writable file system whose initial + contents are the same as pool/home/bob@yesterday.

+
# zfs + clone pool/home/bob@yesterday + pool/clone
+
+
+

+

The following commands illustrate how to test out changes to a + file system, and then replace the original file system with the changed one, + using clones, clone promotion, and renaming:

+
+
# zfs create pool/project/production
+  populate /pool/project/production with data
+# zfs snapshot pool/project/production@today
+# zfs clone pool/project/production@today pool/project/beta
+  make changes to /pool/project/beta and test them
+# zfs promote pool/project/beta
+# zfs rename pool/project/production pool/project/legacy
+# zfs rename pool/project/beta pool/project/production
+  once the legacy version is no longer needed, it can be destroyed
+# zfs destroy pool/project/legacy
+
+
+
+

+

The following command causes pool/home/bob + and pool/home/anne to inherit + the checksum property from their parent.

+
# zfs + inherit checksum + pool/home/bob pool/home/anne
+
+
+

+

The following commands send a full stream and then an incremental + stream to a remote machine, restoring them into + + and + , + respectively. + + must contain the file system + , + and must not initially contain + .

+
+
# zfs send pool/fs@a |
+    ssh host zfs receive poolB/received/fs@a
+# zfs send -i a pool/fs@b |
+    ssh host zfs receive poolB/received/fs
+
+
+
+

+

The following command sends a full stream of + poolA/fsA/fsB@snap to a remote machine, receiving it + into poolB/received/fsA/fsB@snap. The + fsA/fsB@snap portion of the received snapshot's name + is determined from the name of the sent snapshot. + poolB must contain the file system + poolB/received. If + poolB/received/fsA does not exist, it is created as an + empty file system.

+
+
# zfs send poolA/fsA/fsB@snap |
+    ssh host zfs receive -d poolB/received
+
+
+
+

+

The following example sets the user-defined + com.example:department property + for a dataset:

+
# zfs + set + com.example:department=12345 + tank/accounting
+
+
+

+

The following example shows how to maintain a history of snapshots + with a consistent naming scheme. To keep a week's worth of snapshots, the + user destroys the oldest snapshot, renames the remaining snapshots, and then + creates a new snapshot, as follows:

+
+
# zfs destroy -r pool/users@7daysago
+# zfs rename -r pool/users@6daysago @7daysago
+# zfs rename -r pool/users@5daysago @6daysago
+# zfs rename -r pool/users@4daysago @5daysago
+# zfs rename -r pool/users@3daysago @4daysago
+# zfs rename -r pool/users@2daysago @3daysago
+# zfs rename -r pool/users@yesterday @2daysago
+# zfs rename -r pool/users@today @yesterday
+# zfs snapshot -r pool/users@today
+
+
+
+

+

The following commands show how to set sharenfs + property options to enable read-write access for a set of IP addresses and + to enable root access for system "neo" on the + tank/home file system:

+
# zfs + set + sharenfs='rw=@123.123.0.0/16:[::1],root=neo' + tank/home
+

If you are using DNS for host name resolution, specify the + fully-qualified hostname.

+
+
+

+

The following example shows how to set permissions so that user + cindys can create, destroy, mount, and take snapshots + on tank/cindys. The permissions on + tank/cindys are also displayed.

+
+
+# zfs allow cindys create,destroy,mount,snapshot tank/cindys
+# zfs allow tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+        user cindys create,destroy,mount,snapshot
+
+

Because the tank/cindys mount point + permission is set to 755 by default, user cindys will + be unable to mount file systems under tank/cindys. Add + an ACE similar to the following syntax to provide mount point access:

+
# chmod + A+user:cindys:add_subdirectory:allow + /tank/cindys
+
+
+

+

The following example shows how to grant anyone in the group + staff to create file systems in + tank/users. This syntax also allows staff members to + destroy their own file systems, but not destroy anyone else's file system. + The permissions on tank/users are also displayed.

+
+
# zfs allow staff create,mount tank/users
+# zfs allow -c destroy tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        destroy
+Local+Descendent permissions:
+        group staff create,mount
+
+
+
+

+

The following example shows how to define and grant a permission + set on the tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
+# zfs allow staff @pset tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to grant the ability to set quotas and + reservations on the users/home file system. The + permissions on users/home are also displayed.

+
+
+# zfs allow cindys quota,reservation users/home
+# zfs allow users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+        user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+
+
+
+

+

The following example shows how to remove the snapshot permission + from the staff group on the + tank/users file system. The permissions on + tank/users are also displayed.

+
+
# zfs unallow staff snapshot tank/users
+# zfs allow tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+        @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+        group staff @pset
+
+
+
+

+

The following example shows how to see what has changed between a + prior snapshot of a ZFS dataset and its current state. The + -F option is used to indicate type information for + the files affected.

+
+
# zfs diff -F tank/test@before tank/test
+M       /       /tank/test/
+M       F       /tank/test/linked      (+1)
+R       F       /tank/test/oldname -> /tank/test/newname
+-       F       /tank/test/deleted
++       F       /tank/test/created
+M       F       /tank/test/modified
+
+
+
+

+

The following example creates a bookmark to a snapshot. This + bookmark can then be used instead of a snapshot in send streams.

+
# zfs + bookmark + rpool@snapshot + rpool#bookmark
+
+
+

+ Property Options on a ZFS File System

+

The following example shows how to share an SMB filesystem through + ZFS. Note that a user and their password must be given.

+
# smbmount + //127.0.0.1/share_tmp /mnt/tmp + -o + user=workgroup/turbo,password=obrut,uid=1000
+

Minimal /etc/samba/smb.conf configuration + is required, as follows.

+

Samba will need to bind to the loopback interface for the ZFS + utilities to communicate with Samba. This is the default behavior for most + Linux distributions.

+

Samba must be able to authenticate a user. This can be done in a + number of ways (passwd(5), LDAP, + smbpasswd(5), &c.). How to do this is outside the + scope of this document – refer to smb.conf(5) for + more information.

+

See the USERSHARES section + for all configuration options, in case you need to modify any options of the + share afterwards. Do note that any changes done with the + net(8) command will be undone if the share is ever + unshared (like via a reboot).

+
+
+
+

+
+
+
Use ANSI color in zfs diff + and zfs list output.
+
+
Cause zfs mount to use + mount(8) to mount ZFS datasets. This option is provided + for backwards compatibility with older ZFS versions.
+
+
Tells zfs to set the maximum pipe size for + sends/receives. Disabled by default on Linux due to an unfixed deadlock in + Linux's pipe size handling code.
+
+
Time, in seconds, to wait for /dev/zfs to appear. + Defaults to + , max + (10 + minutes). If <0, wait forever; if + 0, don't wait.
+
+
+
+

+

.

+
+
+

+

attr(1), gzip(1), + ssh(1), chmod(2), + fsync(2), stat(2), + write(2), acl(5), + attributes(5), exports(5), + zfsconcepts(7), zfsprops(7), + exportfs(8), mount(8), + net(8), selinux(8), + zfs-allow(8), zfs-bookmark(8), + zfs-change-key(8), zfs-clone(8), + zfs-create(8), zfs-destroy(8), + zfs-diff(8), zfs-get(8), + zfs-groupspace(8), zfs-hold(8), + zfs-inherit(8), zfs-jail(8), + zfs-list(8), zfs-load-key(8), + zfs-mount(8), zfs-program(8), + zfs-project(8), zfs-projectspace(8), + zfs-promote(8), zfs-receive(8), + zfs-redact(8), zfs-release(8), + zfs-rename(8), zfs-rollback(8), + zfs-send(8), zfs-set(8), + zfs-share(8), zfs-snapshot(8), + zfs-unallow(8), zfs-unjail(8), + zfs-unload-key(8), zfs-unmount(8), + zfs-upgrade(8), + zfs-userspace(8), zfs-wait(8), + zpool(8)

+
+
+ + + + + +
May 12, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs_ids_to_path.8.html b/man/v2.2/8/zfs_ids_to_path.8.html new file mode 100644 index 000000000..aa7cf4dd9 --- /dev/null +++ b/man/v2.2/8/zfs_ids_to_path.8.html @@ -0,0 +1,274 @@ + + + + + + + zfs_ids_to_path.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_ids_to_path.8

+
+ + + + + +
ZFS_IDS_TO_PATH(8)System Manager's ManualZFS_IDS_TO_PATH(8)
+
+
+

+

zfs_ids_to_path — + convert objset and object ids to names and paths

+
+
+

+ + + + + +
zfs_ids_to_path[-v] pool + objset-id object-id
+
+
+

+

The + + utility converts a provided objset and object ids into a path to the file + they refer to.

+
+
+
Verbose. Print the dataset name and the file path within the dataset + separately. This will work correctly even if the dataset is not + mounted.
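A hedged usage sketch (the pool name and the objset/object ids are placeholders; in practice they come from a zpool status -v error report):
# zfs_ids_to_path -v tank 54 7234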
+
+
+
+

+

zdb(8), zfs(8)

+
+
+ + + + + +
April 17, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zfs_prepare_disk.8.html b/man/v2.2/8/zfs_prepare_disk.8.html new file mode 100644 index 000000000..45d618808 --- /dev/null +++ b/man/v2.2/8/zfs_prepare_disk.8.html @@ -0,0 +1,302 @@ + + + + + + + zfs_prepare_disk.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zfs_prepare_disk.8

+
+ + + + + +
ZFS_PREPARE_DISK(8)System Manager's ManualZFS_PREPARE_DISK(8)
+
+
+

+

zfs_prepare_disk — + special script that gets run before bringing a disk into a + pool

+
+
+

+

zfs_prepare_disk is an optional script + that gets called by libzfs before bringing a disk into a pool. It can be + modified by the user to run whatever commands are necessary to prepare a + disk for inclusion into the pool. For example, users can add lines to + zfs_prepare_disk to do things like update the + drive's firmware or check the drive's health. + zfs_prepare_disk is optional and can be removed if + not needed. libzfs will look for the script at + @zfsexecdir@/zfs_prepare_disk.

+
+

+

zfs_prepare_disk will be passed the + following environment variables:

+

+
+
POOL_NAME
+
+
VDEV_PATH
+
+
VDEV_PREPARE
+
The reason the disk is being prepared for inclusion + ('create', 'add', 'replace', or + 'autoreplace'). This can be useful if you only want the script to be run + under certain actions.
+
VDEV_UPATH
+
The underlying path to the disk. For multipath, this would + return one of the /dev/sd* paths to the disk. If the device is not a + device mapper device, then VDEV_UPATH just returns + the same value as VDEV_PATH.
+
VDEV_ENC_SYSFS_PATH
+
+
+

Note that some of these variables may have a blank value. + POOL_NAME is blank at pool creation time, for + example.
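Below is a minimal, hypothetical sketch of such a script, assuming smartctl from smartmontools is available; it only runs a health check when a device is being replaced:
#!/bin/sh
# Hypothetical example: only check drive health when replacing a device.
case "$VDEV_PREPARE" in
    replace|autoreplace)
        # Keep the disk out of the pool if the SMART health check fails.
        smartctl -H "$VDEV_UPATH" > /dev/null || exit 1
        ;;
esac
exit 0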

+
+
+
+

+

zfs_prepare_disk runs with a limited + $PATH.

+
+
+

+

zfs_prepare_disk should return 0 on + success, non-zero otherwise. If non-zero is returned, the disk will not be + included in the pool.

+
+
+ + + + + +
August 30, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zgenhostid.8.html b/man/v2.2/8/zgenhostid.8.html new file mode 100644 index 000000000..26ea407a8 --- /dev/null +++ b/man/v2.2/8/zgenhostid.8.html @@ -0,0 +1,332 @@ + + + + + + + zgenhostid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zgenhostid.8

+
+ + + + + +
ZGENHOSTID(8)System Manager's ManualZGENHOSTID(8)
+
+
+

+

zgenhostid — + generate host ID into /etc/hostid

+
+
+

+ + + + + +
zgenhostid[-f] [-o + filename] [hostid]
+
+
+

+

Creates /etc/hostid file and stores the + host ID in it. If hostid was provided, validate and + store that value. Otherwise, randomly generate an ID.

+
+
+

+
+
+
Display a summary of the command-line options.
+
+
Allow output overwrite.
+
+ filename
+
Write to filename instead of the default + /etc/hostid.
+
hostid
+
Specifies the value to be placed in /etc/hostid. + It should be a number with a value between 1 and 2^32-1. If + , generate a random + ID. This value must be unique among your systems. It + must be an 8-digit-long hexadecimal number, optionally + prefixed by "0x".
+
+
+
+

+

/etc/hostid

+
+
+

+
+
Generate a random hostid and store it
+
+
# + zgenhostid
+
+
Record the libc-generated hostid in + /etc/hostid
+
+
# + zgenhostid + "$(hostid)"
+
+
Record a custom hostid (0xdeadbeef) in + /etc/hostid
+
+
# + zgenhostid + deadbeef
+
+
Record a custom hostid (0x01234567) in + /tmp/hostid and overwrite the file + if it exists
+
+
# + zgenhostid -f + -o /tmp/hostid + 0x01234567
+
+
+
+
+

+

genhostid(1), hostid(1), + spl(4)

+
+
+

+

zgenhostid emulates the + genhostid(1) utility and is provided for use on systems + which do not include the utility or do not provide the + sethostid(3) function.

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zinject.8.html b/man/v2.2/8/zinject.8.html new file mode 100644 index 000000000..2b77579d4 --- /dev/null +++ b/man/v2.2/8/zinject.8.html @@ -0,0 +1,550 @@ + + + + + + + zinject.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zinject.8

+
+ + + + + +
ZINJECT(8)System Manager's ManualZINJECT(8)
+
+
+

+

zinjectZFS + Fault Injector

+
+
+

+

zinject creates artificial problems in a + ZFS pool by simulating data corruption or device failures. This program is + dangerous.

+
+
+

+
+
+ + + + + +
zinject
+
+
List injection records.
+
+ + + + + +
zinject-b + objset:object:level:start:end + [-f frequency] + -amu [pool]
+
+
Force an error into the pool at a bookmark.
+
+ + + + + +
zinject-c + id|all
+
+
Cancel injection records.
+
+ + + + + +
zinject-d vdev + -A + | + pool
+
+
Force a vdev into the DEGRADED or FAULTED state.
+
+ + + + + +
zinject-d vdev + -D + latency:lanes + pool
+
+
Add an artificial delay to I/O requests on a particular device, such that + the requests take a minimum of latency milliseconds + to complete. Each delay has an associated number of + lanes which defines the number of concurrent I/O + requests that can be processed. +

For example, with a single lane delay of 10 ms + (-D + 10:1), the device will only + be able to service a single I/O request at a time with each request + taking 10 ms to complete. So, if only a single request is submitted + every 10 ms, the average latency will be 10 ms; but if more than one + request is submitted every 10 ms, the average latency will be more than + 10 ms.

+

Similarly, if a delay of 10 ms is specified to have two lanes + (-D + 10:2), then the device will + be able to service two requests at a time, each with a minimum latency + of 10 ms. So, if two requests are submitted every 10 ms, then the + average latency will be 10 ms; but if more than two requests are + submitted every 10 ms, the average latency will be more than 10 ms.

+

Also note, these delays are additive. So two invocations of + -D + 10:1 are roughly equivalent + to a single invocation of -D + 10:2. This also means, that + one can specify multiple lanes with differing target latencies. For + example, an invocation of -D + 10:1 followed by + -D + 25:2 will create 3 lanes on + the device: one lane with a latency of 10 ms and two lanes with a 25 ms + latency.
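For example (pool and device names are placeholders), the following adds two 25 ms lanes to a device, and the injection can later be cancelled:
# zinject -d sda -D 25:2 tank
# zinject -c all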

+
+
+ + + + + +
zinject-d vdev + [-e device_error] + [-L label_error] + [-T failure] + [-f frequency] + [-F] pool
+
+
Force a vdev error.
+
+ + + + + +
zinject-I [-s + seconds|-g + txgs] pool
+
+
Simulate a hardware failure that fails to honor a cache flush.
+
+ + + + + +
zinject-p function + pool
+
+
Panic inside the specified function.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amq] path
+
+
Force an error into the contents of a file.
+
+ + + + + +
zinject-t + + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-amq] path
+
+
Force an error into the metadnode for a file or directory.
+
+ + + + + +
zinject-t mos_type + -C dvas + [-e device_error] + [-f frequency] + [-l level] + [-r range] + [-amqu] pool
+
+
Force an error into the MOS of a pool.
+
+
+
+

+
+
+
Flush the ARC before injection.
+
+ objset:object:level:start:end
+
Force an error into the pool at this bookmark tuple. Each number is in + hexadecimal, and only one block can be specified.
+
+ dvas
+
Inject the given error only into specific DVAs. The mask should be + specified as a list of 0-indexed DVAs separated by commas + (e.g. + 0,2). This option is not + applicable to logical data errors such as decompress and + decrypt.
+
+ vdev
+
A vdev specified by path or GUID.
+
+ device_error
+
Specify +
+
+
for an ECKSUM error,
+
+
for a data decompression error,
+
+
for a data decryption error,
+
+
to flip a bit in the data after a read,
+
+
for an ECHILD error,
+
+
for an EIO error where reopening the device will succeed, or
+
+
for an ENXIO error where reopening the device will fail.
+
+

For EIO and ENXIO, the "failed" reads or writes + still occur. The probe simply sets the error value reported by the I/O + pipeline so it appears the read or write failed. Decryption errors only + currently work with file data.

+
+
+ frequency
+
Only inject errors a fraction of the time. Expressed as a real number + percentage between + + and + .
+
+
Fail faster. Do fewer checks.
+
+ txgs
+
Run for this many transaction groups before reporting failure.
+
+
Print the usage message.
+
+ level
+
Inject an error at a particular block level. The default is + .
+
+ label_error
+
Set the label error region to one of + , + , + , or + .
+
+
Automatically remount the underlying filesystem.
+
+
Quiet mode. Only print the handler number added.
+
+ range
+
Inject an error over a particular logical range of an object, which will + be translated to the appropriate blkid range according to the object's + properties.
+
+ seconds
+
Run for this many seconds before reporting failure.
+
+ failure
+
Set the failure type to one of all, + , + , + , or + .
+
+ mos_type
+
Set this to +
+
+
for any data in the MOS,
+
+
for an object directory,
+
+
for the pool configuration,
+
+
for the block pointer list,
+
+
for the space map,
+
+
for the metaslab, or
+
+
for the persistent error log.
+
+
+
+
Unload the pool after injection.
+
+
+
+

+
+
+
Run zinject in debug mode.
+
+
+
+

+

zfs(8), zpool(8)

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-add.8.html b/man/v2.2/8/zpool-add.8.html new file mode 100644 index 000000000..c761316ae --- /dev/null +++ b/man/v2.2/8/zpool-add.8.html @@ -0,0 +1,336 @@ + + + + + + + zpool-add.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-add.8

+
+ + + + + +
ZPOOL-ADD(8)System Manager's ManualZPOOL-ADD(8)
+
+
+

+

zpool-addadd + vdevs to ZFS storage pool

+
+
+

+ + + + + +
zpooladd [-fgLnP] + [-o + property=value] + pool vdev
+
+
+

+

Adds the specified virtual devices to the given pool. The + vdev specification is described in the + section of zpoolconcepts(7). The behavior + of the -f option, and the device checks performed + are described in the zpool + create subcommand.

+
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+
Display vdev GUIDs instead of the normal device + names. These GUIDs can be used in place of device names for the zpool + detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic + links. This can be used to look up the current block device name + regardless of the /dev/disk path used to open + it.
+
+
Displays the configuration that would be used without actually adding the + vdevs. The actual pool creation can still fail due + to insufficient privileges or device sharing.
+
+
Display real paths for vdevs instead of only the + last component of the path. This can be used in conjunction with the + -L flag.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
+
+

+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool + add tank + + sda sdb
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+
+

+

zpool-attach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-remove(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-attach.8.html b/man/v2.2/8/zpool-attach.8.html new file mode 100644 index 000000000..b260bcd0b --- /dev/null +++ b/man/v2.2/8/zpool-attach.8.html @@ -0,0 +1,299 @@ + + + + + + + zpool-attach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-attach.8

+
+ + + + + +
ZPOOL-ATTACH(8)System Manager's ManualZPOOL-ATTACH(8)
+
+
+

+

zpool-attach — + attach new device to existing ZFS vdev

+
+
+

+ + + + + +
zpoolattach [-fsw] + [-o + property=value] + pool device new_device
+
+
+

+

Attaches new_device to the existing + device. The existing device cannot be part of a raidz + configuration. If device is not currently part of a + mirrored configuration, device automatically + transforms into a two-way mirror of device and + new_device. If device is part of + a two-way mirror, attaching new_device creates a + three-way mirror, and so on. In either case, + new_device begins to resilver immediately and any + running scrub is cancelled.
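For example (pool and device names are placeholders), assuming tank currently consists of the single disk sda, the following converts it into a two-way mirror:
# zpool attach tank sda sdb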

+
+
+
Forces use of new_device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new_device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until new_device has finished resilvering + before returning.
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-import(8), zpool-initialize(8), + zpool-online(8), zpool-replace(8), + zpool-resilver(8)

+
+
+ + + + + +
May 15, 2020Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-checkpoint.8.html b/man/v2.2/8/zpool-checkpoint.8.html new file mode 100644 index 000000000..83d978c46 --- /dev/null +++ b/man/v2.2/8/zpool-checkpoint.8.html @@ -0,0 +1,290 @@ + + + + + + + zpool-checkpoint.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-checkpoint.8

+
+ + + + + +
ZPOOL-CHECKPOINT(8)System Manager's ManualZPOOL-CHECKPOINT(8)
+
+
+

+

zpool-checkpoint — + check-point current ZFS storage pool state

+
+
+

+ + + + + +
zpoolcheckpoint [-d + [-w]] pool
+
+
+

+

Checkpoints the current state of pool, + which can later be restored by zpool + import --rewind-to-checkpoint. The existence of a + checkpoint in a pool prohibits the following zpool + subcommands: remove, attach, + detach, split, + and reguid. In addition, it + may break reservation boundaries if the pool lacks free space. The + zpool status command + indicates the existence of a checkpoint or the progress of discarding a + checkpoint from a pool. zpool + list can be used to check how much space the + checkpoint takes from the pool.
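A hedged example (the pool name is a placeholder): take a checkpoint, then either rewind to it on a later import or discard it when it is no longer needed:
# zpool checkpoint tank
# zpool export tank
# zpool import --rewind-to-checkpoint tank
# zpool checkpoint -d tank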

+
+
+

+
+
, + --discard
+
Discards an existing checkpoint from pool.
+
, + --wait
+
Waits until the checkpoint has finished being discarded before + returning.
+
+
+
+

+

zfs-snapshot(8), + zpool-import(8), zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-clear.8.html b/man/v2.2/8/zpool-clear.8.html new file mode 100644 index 000000000..8b796d5e8 --- /dev/null +++ b/man/v2.2/8/zpool-clear.8.html @@ -0,0 +1,284 @@ + + + + + + + zpool-clear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-clear.8

+
+ + + + + +
ZPOOL-CLEAR(8)System Manager's ManualZPOOL-CLEAR(8)
+
+
+

+

zpool-clear — + clear device errors in ZFS storage pool

+
+
+

+ + + + + +
zpoolclear [--power] + pool [device]…
+
+
+

+

Clears device errors in a pool. If no arguments are specified, all + device errors within the pool are cleared. If one or more devices is + specified, only those errors associated with the specified device or devices + are cleared.

+

If the pool was suspended it will be brought back + online provided the devices can be accessed. Pools with + + enabled which have been suspended cannot be resumed. While the pool was + suspended, it may have been imported on another host, and resuming I/O could + result in pool damage.
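For example (pool and device names are placeholders), errors can be cleared pool-wide or for a single device:
# zpool clear tank
# zpool clear tank sda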

+
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to clear errors. This is done on all + the devices specified. Alternatively, you can set the + + environment variable to always enable this behavior. Note: This flag + currently works on Linux only.
+
+
+
+

+

zdb(8), zpool-reopen(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-create.8.html b/man/v2.2/8/zpool-create.8.html new file mode 100644 index 000000000..cb85539e5 --- /dev/null +++ b/man/v2.2/8/zpool-create.8.html @@ -0,0 +1,449 @@ + + + + + + + zpool-create.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-create.8

+
+ + + + + +
ZPOOL-CREATE(8)System Manager's ManualZPOOL-CREATE(8)
+
+
+

+

zpool-create — + create ZFS storage pool

+
+
+

+ + + + + +
zpoolcreate [-dfn] + [-m mountpoint] + [-o + property=value]… + [-o + feature@feature=value] + [-o + compatibility=off|legacy|file[,file]…] + [-O + file-system-property=value]… + [-R root] + [-t tname] + pool vdev
+
+
+

+

Creates a new storage pool containing the virtual devices + specified on the command line. The pool name must begin with a letter, and + can only contain alphanumeric characters as well as the underscore + (""), + dash + (""), + colon + (""), + space (" "), and period + (""). + The pool names mirror, raidz, + draid, spare and log + are reserved, as are names beginning with mirror, + raidz, draid, and + spare. The vdev specification is + described in the Virtual Devices + section of zpoolconcepts(7).

+

The command attempts to verify that each device + specified is accessible and not currently in use by another subsystem. + However this check is not robust enough to detect simultaneous attempts to + use a new device in different pools, even if + = + enabled. The administrator must ensure that simultaneous + invocations of any combination of zpool + replace, zpool + create, zpool + add, or zpool + labelclear do not refer to the same device. Using + the same device in two pools will result in pool corruption.

+

There are some uses, such as being currently mounted, or specified + as the dedicated dump device, that prevents a device from ever being used by + ZFS. Other uses, such as having a preexisting UFS file system, can be + overridden with -f.

+

The command also checks that the replication strategy for the pool + is consistent. An attempt to combine redundant and non-redundant storage in + a single pool, or to mix disks and files, results in an error unless + -f is specified. The use of differently-sized + devices within a single raidz or mirror group is also flagged as an error + unless -f is specified.

+

Unless the -R option is specified, the + default mount point is /pool. + The mount point must not exist or must be empty, or else the root dataset + will not be able to be mounted. This can be overridden with the + -m option.

+

By default all supported features are enabled + on the new pool. The -d option and the + -o compatibility property (e.g + -o + =2020) + can be used to restrict the features that are enabled, so that the pool can + be imported on other releases of ZFS.

+
+
+
Do not enable any features on the new pool. Individual features can be + enabled by setting their corresponding properties to + enabled with -o. See + zpool-features(7) for details about feature + properties.
+
+
Forces use of vdevs, even if they appear in use or + specify a conflicting replication level. Not all devices can be overridden + in this manner.
+
+ mountpoint
+
Sets the mount point for the root dataset. The default mount point is + /pool or altroot/pool if + altroot is specified. The mount point must be an + absolute path, legacy, or none. For + more information on dataset mount points, see + zfsprops(7).
+
+
Displays the configuration that would be used without actually creating + the pool. The actual pool creation can still fail due to insufficient + privileges or device sharing.
+
+ property=value
+
Sets the given pool properties. See zpoolprops(7) for a + list of valid properties that can be set.
+
+ compatibility=off|legacy|file[,file]…
+
Specifies compatibility feature sets. See + zpool-features(7) for more information about + compatibility feature sets.
+
+ feature@feature=value
+
Sets the given pool feature. See the zpool-features(7) + section for a list of valid features that can be set. Value can be either + disabled or enabled.
+
+ file-system-property=value
+
Sets the given file system properties in the root file system of the pool. + See zfsprops(7) for a list of valid properties that can + be set.
+
+ -R root
+
Equivalent to -o + cachefile=none + -o + altroot=root
+
+ -t tname
+
Sets the in-core pool name to tname while the + on-disk name will be the name specified as pool. + This will set the default of the cachefile property to + none. This is intended to handle name space collisions + when creating pools for other systems, such as virtual machines or + physical machines whose pools live on network block devices.
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + raidz sda sdb sdc sdd sde + sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + spare sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd log + mirror sde sdf
+
+
+
+

+

zpool-destroy(8), + zpool-export(8), zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-destroy.8.html b/man/v2.2/8/zpool-destroy.8.html new file mode 100644 index 000000000..37b7f6698 --- /dev/null +++ b/man/v2.2/8/zpool-destroy.8.html @@ -0,0 +1,278 @@ + + + + + + + zpool-destroy.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-destroy.8

+
+ + + + + +
ZPOOL-DESTROY(8)System Manager's ManualZPOOL-DESTROY(8)
+
+
+

+

zpool-destroy — + destroy ZFS storage pool

+
+
+

+ + + + + +
zpooldestroy [-f] + pool
+
+
+

+

Destroys the given pool, freeing up any devices for other use. + This command tries to unmount any active datasets before destroying the + pool.

+
+
+
Forcefully unmount all active datasets.
+
+
+
+

+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-detach.8.html b/man/v2.2/8/zpool-detach.8.html new file mode 100644 index 000000000..27414ec2e --- /dev/null +++ b/man/v2.2/8/zpool-detach.8.html @@ -0,0 +1,271 @@ + + + + + + + zpool-detach.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-detach.8

+
+ + + + + +
ZPOOL-DETACH(8)System Manager's ManualZPOOL-DETACH(8)
+
+
+

+

zpool-detach — + detach device from ZFS mirror

+
+
+

+ + + + + +
zpooldetach pool device
+
+
+

+

Detaches device from a mirror. The operation + is refused if there are no other valid replicas of the data. If + device may be re-added to the pool later on then + consider the zpool offline + command instead.

+
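For example, an illustrative invocation, assuming a pool named tank with a mirror that includes sdb (names are placeholders):
# zpool detach tank sdb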
+
+

+

zpool-attach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-remove(8), zpool-replace(8), + zpool-split(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-events.8.html b/man/v2.2/8/zpool-events.8.html new file mode 100644 index 000000000..54c80b0eb --- /dev/null +++ b/man/v2.2/8/zpool-events.8.html @@ -0,0 +1,872 @@ + + + + + + + zpool-events.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-events.8

+
+ + + + + +
ZPOOL-EVENTS(8)System Manager's ManualZPOOL-EVENTS(8)
+
+
+

+

zpool-events — + list recent events generated by kernel

+
+
+

+ + + + + +
zpoolevents [-vHf] + [pool]
+
+ + + + + +
zpoolevents -c
+
+
+

+

Lists all recent events generated by the ZFS kernel modules. These + events are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. For + more information about the subclasses and event payloads that can be + generated see EVENTS and the following + sections.

+
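For example, illustrative invocations (the pool name is a placeholder): print recent events with their full payloads, or follow new events as they are generated:
# zpool events -v tank
# zpool events -f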
+
+

+
+
+
Clear all previous events.
+
+
Follow mode.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Print the entire payload for each event.
+
+
+
+

+

These are the different event subclasses. The full event name would be ereport.fs.zfs. followed by the subclass, but only the last part is listed here.

+

+
+
+
Issued when a checksum error has been detected.
+
+
Issued when there is an I/O error in a vdev in the pool.
+
+
Issued when there have been data errors in the pool.
+
+
Issued when an I/O request is determined to be "hung"; this can be caused by lost completion events due to flaky hardware or drivers. See zfs(4) for additional information regarding "hung" I/O detection and configuration.
+
+
Issued when a completed I/O request exceeds the maximum allowed time + specified by the + + module parameter. This can be an indicator of problems with the underlying + storage device. The number of delay events is ratelimited by the + + module parameter.
+
+
Issued every time a vdev change has been made to the pool.
+
+
Issued when a pool cannot be imported.
+
+
Issued when a pool is destroyed.
+
+
Issued when a pool is exported.
+
+
Issued when a pool is imported.
+
+
Issued when a REGUID (a new unique identifier for the pool has been regenerated) has been detected.
+
+
Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been removed from the system/pool and is no longer available.
+
+
Issued when a vdev could not be opened (because it didn't exist for + example).
+
+
Issued when corrupt data has been detected on a vdev.
+
+
Issued when there are no more replicas to sustain the pool. This would + lead to the pool being + .
+
+
Issued when a missing device in the pool has been detected.
+
+
Issued when the system (kernel) has removed a device, and ZFS notices that the device isn't there anymore. This is usually followed by a probe_failure event.
+
+
Issued when the label is OK but invalid.
+
+
Issued when the ashift alignment requirement has increased.
+
+
Issued when a vdev is detached from a mirror (or a spare detached from a vdev where it has been used to replace a failed drive; this only works if the original drive has been re-added).
+
+
Issued when clearing device errors in a pool. Such as running + zpool clear on a device in + the pool.
+
+
Issued when a check to see if a given vdev could be opened is + started.
+
+
Issued when a spare has kicked in to replace a failed device.
+
+
Issued when a vdev can be automatically expanded.
+
+
Issued when there is an I/O failure in a vdev in the pool.
+
+
Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (for example, if the kernel has removed the device).
+
+
Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
+
+
Issued when a resilver is started.
+
+
Issued when the running resilver has finished.
+
+
Issued when a scrub is started on a pool.
+
+
Issued when a pool has finished scrubbing.
+
+
Issued when a scrub is aborted on a pool.
+
+
Issued when a scrub is resumed on a pool.
+
+
Issued when a scrub is paused on a pool.
+
+
 
+
+
+
+

+

This is the payload (data, information) that accompanies an + event.

+

For zed(8), these are set to uppercase and prefixed with ZEVENT_.

+

+
+
+
Pool name.
+
+
Failmode - wait, continue, or panic. See the failmode property in zpoolprops(7) for more information.
+
+
The GUID of the pool.
+
+
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
+
+
The GUID of the vdev in question (the vdev failing or operated upon with + zpool clear, etc.).
+
+
Type of vdev - disk, file, mirror, etc. See the Virtual Devices section of zpoolconcepts(7) for more information on possible values.
+
+
Full path of the vdev, including any -partX.
+
+
ID of vdev (if any).
+
+
Physical FRU location.
+
+
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed + to open, 5=faulted, 6=degraded, 7=healthy).
+
+
The ashift value of the vdev.
+
+
The time the last I/O request completed for the specified vdev.
+
+
The time since the last I/O request completed for the specified vdev.
+
+
List of spares, including full path and any -partX.
+
+
GUID(s) of spares.
+
+
How many read errors that have been detected on the vdev.
+
+
How many write errors that have been detected on the vdev.
+
+
How many checksum errors that have been detected on the vdev.
+
+
GUID of the vdev parent.
+
+
Type of parent. See vdev_type.
+
+
Path of the vdev parent (if any).
+
+
ID of the vdev parent (if any).
+
+
The object set number for a given I/O request.
+
+
The object number for a given I/O request.
+
+
The indirect level for the block. Level 0 is the lowest level and includes + data blocks. Values > 0 indicate metadata blocks at the appropriate + level.
+
+
The block ID for a given I/O request.
+
+
The error number for a failure when handling a given I/O request, compatible with errno(3), with the value of ECKSUM used to indicate a ZFS checksum error.
+
+
The offset in bytes of where to write the I/O request for the specified + vdev.
+
+
The size in bytes of the I/O request.
+
+
The current flags describing how the I/O request should be handled. See + the I/O FLAGS section for the full list of I/O + flags.
+
+
The current stage of the I/O in the pipeline. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The valid pipeline stages for the I/O. See the I/O + STAGES section for a full list of all the I/O stages.
+
+
The time elapsed (in nanoseconds) waiting for the block layer to complete + the I/O request. Unlike zio_delta, this does not include + any vdev queuing time and is therefore solely a measure of the block layer + performance.
+
+
The time when a given I/O request was submitted.
+
+
The time required to service a given I/O request.
+
+
The previous state of the vdev.
+
+
Checksum algorithm used. See zfsprops(7) for more + information on the available checksum algorithms.
+
+
Whether or not the data is byteswapped.
+
+
[start, end) pairs of corruption offsets. Offsets are always aligned on a 64-bit boundary, and can include some gaps of non-corruption. (See bad_ranges_min_gap.)
+
+
In order to bound the size of the bad_ranges array, gaps + of non-corruption less than or equal to + bad_ranges_min_gap bytes have been merged with adjacent + corruption. Always at least 8 bytes, since corruption is detected on a + 64-bit word basis.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits in that range which were clear in the + good data and set in the bad data.
+
+
This array has one element per range in bad_ranges. Each + element contains the count of bits for that range which were set in the + good data and clear in the bad data.
+
+
If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
+
+
Like bad_set_bits, but contains (good + data & ~(bad + data)); that is, the bits set in the good data which are cleared in + the bad data.
+
+
+
+

+

The ZFS I/O pipeline is comprised of various stages which are + defined below. The individual stages are used to construct these basic I/O + operations: Read, Write, Free, Claim, and Ioctl. These stages may be set on + an event to describe the life cycle of a given I/O request.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StageBit MaskOperations



ZIO_STAGE_OPEN0x00000001RWFCI
ZIO_STAGE_READ_BP_INIT0x00000002R----
ZIO_STAGE_WRITE_BP_INIT0x00000004-W---
ZIO_STAGE_FREE_BP_INIT0x00000008--F--
ZIO_STAGE_ISSUE_ASYNC0x00000010RWF--
ZIO_STAGE_WRITE_COMPRESS0x00000020-W---
ZIO_STAGE_ENCRYPT0x00000040-W---
ZIO_STAGE_CHECKSUM_GENERATE0x00000080-W---
ZIO_STAGE_NOP_WRITE0x00000100-W---
ZIO_STAGE_BRT_FREE0x00000200--F--
ZIO_STAGE_DDT_READ_START0x00000400R----
ZIO_STAGE_DDT_READ_DONE0x00000800R----
ZIO_STAGE_DDT_WRITE0x00001000-W---
ZIO_STAGE_DDT_FREE0x00002000--F--
ZIO_STAGE_GANG_ASSEMBLE0x00004000RWFC-
ZIO_STAGE_GANG_ISSUE0x00008000RWFC-
ZIO_STAGE_DVA_THROTTLE0x00010000-W---
ZIO_STAGE_DVA_ALLOCATE0x00020000-W---
ZIO_STAGE_DVA_FREE0x00040000--F--
ZIO_STAGE_DVA_CLAIM0x00080000---C-
ZIO_STAGE_READY0x00100000RWFCI
ZIO_STAGE_VDEV_IO_START0x00200000RW--I
ZIO_STAGE_VDEV_IO_DONE0x00400000RW--I
ZIO_STAGE_VDEV_IO_ASSESS0x00800000RW--I
ZIO_STAGE_CHECKSUM_VERIFY0x01000000R----
ZIO_STAGE_DONE0x02000000RWFCI
+
+
+

+

Every I/O request in the pipeline contains a set of flags which + describe its function and are used to govern its behavior. These flags will + be set in an event as a zio_flags payload entry.

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FlagBit Mask


ZIO_FLAG_DONT_AGGREGATE0x00000001
ZIO_FLAG_IO_REPAIR0x00000002
ZIO_FLAG_SELF_HEAL0x00000004
ZIO_FLAG_RESILVER0x00000008
ZIO_FLAG_SCRUB0x00000010
ZIO_FLAG_SCAN_THREAD0x00000020
ZIO_FLAG_PHYSICAL0x00000040
ZIO_FLAG_CANFAIL0x00000080
ZIO_FLAG_SPECULATIVE0x00000100
ZIO_FLAG_CONFIG_WRITER0x00000200
ZIO_FLAG_DONT_RETRY0x00000400
ZIO_FLAG_NODATA0x00001000
ZIO_FLAG_INDUCE_DAMAGE0x00002000
ZIO_FLAG_IO_ALLOCATING0x00004000
ZIO_FLAG_IO_RETRY0x00008000
ZIO_FLAG_PROBE0x00010000
ZIO_FLAG_TRYHARD0x00020000
ZIO_FLAG_OPTIONAL0x00040000
ZIO_FLAG_DONT_QUEUE0x00080000
ZIO_FLAG_DONT_PROPAGATE0x00100000
ZIO_FLAG_IO_BYPASS0x00200000
ZIO_FLAG_IO_REWRITE0x00400000
ZIO_FLAG_RAW_COMPRESS0x00800000
ZIO_FLAG_RAW_ENCRYPT0x01000000
ZIO_FLAG_GANG_CHILD0x02000000
ZIO_FLAG_DDT_CHILD0x04000000
ZIO_FLAG_GODFATHER0x08000000
ZIO_FLAG_NOPWRITE0x10000000
ZIO_FLAG_REEXECUTED0x20000000
ZIO_FLAG_DELEGATED0x40000000
ZIO_FLAG_FASTWRITE0x80000000
+
+
+

+

zfs(4), zed(8), + zpool-wait(8)

+
+
+ + + + + +
July 11, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-export.8.html b/man/v2.2/8/zpool-export.8.html new file mode 100644 index 000000000..4ce9cf68f --- /dev/null +++ b/man/v2.2/8/zpool-export.8.html @@ -0,0 +1,299 @@ + + + + + + + zpool-export.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-export.8

+
+ + + + + +
ZPOOL-EXPORT(8)System Manager's ManualZPOOL-EXPORT(8)
+
+
+

+

zpool-export — + export ZFS storage pools

+
+
+

+ + + + + +
zpoolexport [-f] + -a|pool
+
+
+

+

Exports the given pools from the system. All devices are marked as + exported, but are still considered in use by other subsystems. The devices + can be moved between systems (even those of different endianness) and + imported as long as a sufficient number of devices are present.

+

Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.

+

For pools to be portable, you must give the + zpool command whole disks, not just partitions, so + that ZFS can label the disks with portable EFI labels. Otherwise, disk + drivers on platforms of different endianness will not recognize the + disks.

+
+
+
Exports all pools imported on the system.
+
+
Forcefully unmount all datasets, and allow export of pools with active + shared spares. +

This command will forcefully export the pool even if it has a + shared spare that is currently being used. This may lead to potential + data corruption.

+
+
+
+
+

+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+
+

+

zpool-import(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-get.8.html b/man/v2.2/8/zpool-get.8.html new file mode 100644 index 000000000..411c9b026 --- /dev/null +++ b/man/v2.2/8/zpool-get.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-get.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-get.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ -o field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ -o field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
+
+
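As an illustrative sketch (the pool name and property choices are placeholders; see zpoolprops(7) for the full list of settable properties):
# zpool get all tank
# zpool get capacity,free,health tank
# zpool set autotrim=on tank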
+
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-history.8.html b/man/v2.2/8/zpool-history.8.html new file mode 100644 index 000000000..77ca3d5a8 --- /dev/null +++ b/man/v2.2/8/zpool-history.8.html @@ -0,0 +1,277 @@ + + + + + + + zpool-history.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-history.8

+
+ + + + + +
ZPOOL-HISTORY(8)System Manager's ManualZPOOL-HISTORY(8)
+
+
+

+

zpool-history — + inspect command history of ZFS storage pools

+
+
+

+ + + + + +
zpoolhistory [-il] + [pool]…
+
+
+

+

Displays the command history of the specified pool(s) or all pools + if no pool is specified.

+
+
+
Displays internally logged ZFS events in addition to user initiated + events.
+
+
Displays log records in long format, which, in addition to the standard format, includes the user name, the hostname, and the zone in which the operation was performed.
+
+
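For example, an illustrative invocation showing internally logged events in long format for a pool named tank (placeholder):
# zpool history -il tank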
+
+

+

zpool-checkpoint(8), + zpool-events(8), zpool-status(8), + zpool-wait(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-import.8.html b/man/v2.2/8/zpool-import.8.html new file mode 100644 index 000000000..a1e23ff55 --- /dev/null +++ b/man/v2.2/8/zpool-import.8.html @@ -0,0 +1,575 @@ + + + + + + + zpool-import.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-import.8

+
+ + + + + +
ZPOOL-IMPORT(8)System Manager's ManualZPOOL-IMPORT(8)
+
+
+

+

zpool-import — + import ZFS storage pools or list available pools

+
+
+

+ + + + + +
zpoolimport [-D] + [-d + dir|device]…
+
+ + + + + +
zpoolimport -a + [-DflmN] [-F + [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root]
+
+ + + + + +
zpoolimport [-Dflmt] + [-F [-nTX]] + [--rewind-to-checkpoint] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
+
+

+
+
zpool import + [-D] [-d + dir|device]…
+
Lists pools available to import. If the -d or + -c options are not specified, this command + searches for devices using libblkid on Linux and geom on + FreeBSD. The -d option can + be specified multiple times, and all directories are searched. If the + device appears to be part of an exported pool, this command displays a + summary of the pool with the name of the pool, a numeric identifier, as + well as the vdev layout and current health of the device for each device + or file. Destroyed pools, pools that were previously destroyed with the + zpool destroy command, are + not listed unless the -D option is specified. +

The numeric identifier is unique, and can be used instead of + the pool name when multiple exported pools of the same name are + available.

+
+
+ -c cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ -d dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times.
+
+
Lists destroyed pools only.
+
+
+
zpool import + -a [-DflmN] + [-F [-nTX]] + [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s]
+
Imports all pools found in the search directories. Identical to the + previous command, except that all pools with a sufficient number of + devices available are imported. Destroyed pools, pools that were + previously destroyed with the zpool + destroy command, will not be imported unless the + -D option is specified. +
+
+
Searches for and imports all pools found.
+
+ -c cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ -d dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pools only. The -f option is + also required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+
Import the pool without mounting any file systems.
+
+ -o mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ -o property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ -R root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Rewinds pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the readonly mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies + -FX. For more details about pool recovery + mode, see the -X option, above. WARNING: This + option can be extremely hazardous to the health of your pool and + should only be used as a last resort.
+
+
+
zpool import + [-Dflmt] [-F + [-nTX]] [-c + cachefile|-d + dir|device] + [-o mntopts] + [-o + property=value]… + [-R root] + [-s] + pool|id + [newpool]
+
Imports a specific pool. A pool can be identified by its name or the + numeric identifier. If newpool is specified, the + pool is imported using the name newpool. Otherwise, + it is imported with the same name as its exported name. +

If a device is removed from a system without running + zpool export first, the + device appears as potentially active. It cannot be determined if this + was a failed export, or whether the device is really in use from another + host. To import a pool in this state, the -f + option is required.

+
+
+ -c cachefile
+
Reads configuration from the given cachefile + that was created with the cachefile pool property. + This cachefile is used instead of searching for + devices.
+
+ -d dir|device
+
Uses device or searches for devices or files in + dir. The -d option can + be specified multiple times. This option is incompatible with the + -c option.
+
+
Imports destroyed pool. The -f option is also + required.
+
+
Forces import, even if the pool appears to be potentially active.
+
+
Recovery mode for a non-importable pool. Attempt to return the pool to + an importable state by discarding the last few transactions. Not all + damaged pools can be recovered by using this option. If successful, + the data from the discarded transactions is irretrievably lost. This + option is ignored if the pool is importable or already imported.
+
+
Indicates that this command will request encryption keys for all + encrypted datasets it attempts to mount as it is bringing the pool + online. Note that if any datasets have a keylocation + of prompt this command will block waiting for the + keys to be entered. Without this flag encrypted datasets will be left + unavailable until the keys are loaded.
+
+
Allows a pool to import when there is a missing log device. Recent + transactions can be lost because the log device will be + discarded.
+
+
Used with the -F recovery option. Determines + whether a non-importable pool can be made importable again, but does + not actually perform the pool recovery. For more details about pool + recovery mode, see the -F option, above.
+
+ -o mntopts
+
Comma-separated list of mount options to use when mounting datasets + within the pool. See zfs(8) for a description of + dataset properties and mount options.
+
+ -o property=value
+
Sets the specified property on the imported pool. See the + zpoolprops(7) manual page for more information on + the available pool properties.
+
+ -R root
+
Sets the cachefile property to + none and the altroot property to + root.
+
+
Scan using the default search path, the libblkid cache will not be + consulted. A custom search path may be specified by setting the + ZPOOL_IMPORT_PATH environment variable.
+
+
Used with the -F recovery option. Determines + whether extreme measures to find a valid txg should take place. This + allows the pool to be rolled back to a txg which is no longer + guaranteed to be consistent. Pools imported at an inconsistent txg may + contain uncorrectable checksum errors. For more details about pool + recovery mode, see the -F option, above. + WARNING: This option can be extremely hazardous to the health of your + pool and should only be used as a last resort.
+
+
Specify the txg to use for rollback. Implies -FX. For more details about pool recovery mode, see the -X option, above. WARNING: This option can be extremely hazardous to the health of your pool and should only be used as a last resort.
+
+
Used with newpool. Specifies that + newpool is temporary. Temporary pool names last + until export. Ensures that the original pool name will be used in all + label updates and therefore is retained upon export. Will also set + -o + cachefile=none when not explicitly + specified.
+
+
+
+
+
+

+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
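As a further hedged sketch, a pool can also be imported under an alternate root without mounting its datasets, or imported under a new name (names and paths are placeholders):
# zpool import -R /mnt -N tank
# zpool import tank newtank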
+
+

+

zpool-export(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-initialize.8.html b/man/v2.2/8/zpool-initialize.8.html new file mode 100644 index 000000000..cbf5fa2d8 --- /dev/null +++ b/man/v2.2/8/zpool-initialize.8.html @@ -0,0 +1,298 @@ + + + + + + + zpool-initialize.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-initialize.8

+
+ + + + + +
ZPOOL-INITIALIZE(8)System Manager's ManualZPOOL-INITIALIZE(8)
+
+
+

+

zpool-initialize — + write to unallocated regions of ZFS storage pool

+
+
+

+ + + + + +
zpoolinitialize + [-c|-s + |-u] [-w] + pool [device]…
+
+
+

+

Begins initializing by writing to all unallocated regions on the + specified devices, or all eligible devices in the pool if no individual + devices are specified. Only leaf data or log devices may be initialized.

+
+
-c, --cancel
+
Cancel initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no cancellation + will occur on any device.
+
-s, --suspend
+
Suspend initializing on the specified devices, or all eligible devices if + none are specified. If one or more target devices are invalid or are not + currently being initialized, the command will fail and no suspension will + occur on any device. Initializing can then be resumed by running + zpool initialize with no + flags on the relevant target devices.
+
-u, --uninit
+
Clears the initialization state on the specified devices, or all eligible devices if none are specified. If the devices are being actively initialized the command will fail. After being cleared, zpool initialize with no flags can be used to re-initialize all unallocated regions on the relevant target devices.
+
-w, --wait
+
Wait until the devices have finished initializing before returning.
+
+
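For example, illustrative invocations (pool and device names are placeholders): start initializing all eligible devices, wait for a single device to finish, or cancel an in-progress initialization:
# zpool initialize tank
# zpool initialize -w tank sda
# zpool initialize -c tank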
+
+

+

zpool-add(8), zpool-attach(8), + zpool-create(8), zpool-online(8), + zpool-replace(8), zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-iostat.8.html b/man/v2.2/8/zpool-iostat.8.html new file mode 100644 index 000000000..d60f5cb83 --- /dev/null +++ b/man/v2.2/8/zpool-iostat.8.html @@ -0,0 +1,490 @@ + + + + + + + zpool-iostat.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-iostat.8

+
+ + + + + +
ZPOOL-IOSTAT(8)System Manager's ManualZPOOL-IOSTAT(8)
+
+
+

+

zpool-iostat — + display logical I/O statistics for ZFS storage + pools

+
+
+

+ + + + + +
zpooliostat [[[-c + SCRIPT] + [-lq]]|-rw] + [-T u|d] + [-ghHLnpPvy] + [pool…|[pool + vdev…]|vdev…] + [interval [count]]
+
+
+

+

Displays logical I/O statistics for the given pools/vdevs. Physical I/O statistics may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every interval seconds until killed. If the -n flag is specified, the headers are displayed only once; otherwise they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the size units (such as K, M, and G) that are printed in the report are in base 1024. To get the raw values, use the -p flag.

+
+
+ -c [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool iostat + output. Users can run any script found in their + ~/.zpool.d directory or from the system + /etc/zfs/zpool.d directory. Script names + containing the slash + () character + are not allowed. The default search path can be overridden by setting the + + environment variable. A privileged user can only run + -c if they have the + + environment variable set. If a script requires the use of a privileged + command, like smartctl(8), then it's recommended you + allow the user access to it in /etc/sudoers or add + the user to the /etc/sudoers.d/zfs file. +

If -c is passed without a script name, + it prints a list of all scripts. -c also sets + verbose mode + (-v).

+

Script output should be in the form of "name=value". + The column name is set to "name" and the value is set to + "value". Multiple lines can be used to output multiple + columns. The first line of output not in the "name=value" + format is displayed without a column title, and no more output after + that is displayed. This can be useful for printing error messages. Blank + or NULL values are printed as a '-' to make output AWKable.

+

The following environment variables are set before running + each script:

+
+
+
Full path to the vdev
+
+
Underlying path to the vdev (/dev/sd*). For + use with device mapper, multipath, or partitioned vdevs.
+
+
The sysfs path to the enclosure for the vdev (if any).
+
+
+
+ -T u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Print headers only once when passed
+
+
Display numbers in parsable (exact) values. Time values are in + nanoseconds.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Print request size histograms for the leaf vdev's I/O. This includes + histograms of individual I/O (ind) and aggregate I/O (agg). These stats + can be useful for observing how well I/O aggregation is working. Note that + TRIM I/O may exceed 16M, but will be counted as 16M.
+
+
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
+
+
Normally the first line of output reports the statistics since boot: + suppress it.
+
+
Display latency histograms: +
+
+
Total I/O time (queuing + disk I/O time).
+
+
Disk I/O time (time reading/writing the disk).
+
+
Amount of time I/O spent in synchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in asynchronous priority queues. Does not + include disk time.
+
+
Amount of time I/O spent in scrub queue. Does not include disk + time.
+
+
Amount of time I/O spent in rebuild queue. Does not include disk + time.
+
+
+
+
Include average latency statistics: +
+
+
Average total I/O time (queuing + disk I/O time).
+
+
Average disk I/O time (time reading/writing the disk).
+
+
Average amount of time I/O spent in synchronous priority queues. Does + not include disk time.
+
+
Average amount of time I/O spent in asynchronous priority queues. Does + not include disk time.
+
+
Average queuing time in scrub queue. Does not include disk time.
+
+
Average queuing time in trim queue. Does not include disk time.
+
+
Average queuing time in rebuild queue. Does not include disk + time.
+
+
+
+
Include active queue statistics. Each priority queue has both pending + () + and active + () + I/O requests. Pending requests are waiting to be issued to the disk, and + active requests have been issued to disk and are waiting for completion. + These stats are broken out by priority queue: +
+
+
Current number of entries in synchronous priority queues.
+
+
Current number of entries in asynchronous priority queues.
+
+
Current number of entries in scrub queue.
+
+
Current number of entries in trim queue.
+
+
Current number of entries in rebuild queue.
+
+

All queue statistics are instantaneous measurements of the + number of entries in the queues. If you specify an interval, the + measurements will be sampled from the end of the interval.

+
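For example, hedged invocations that sample average latencies and queue depths every 5 seconds for a pool named tank (placeholder):
# zpool iostat -l tank 5
# zpool iostat -q tank 5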
+
+
+
+

+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+

iostat(1), smartctl(8), + zpool-list(8), zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-labelclear.8.html b/man/v2.2/8/zpool-labelclear.8.html new file mode 100644 index 000000000..f2648d310 --- /dev/null +++ b/man/v2.2/8/zpool-labelclear.8.html @@ -0,0 +1,275 @@ + + + + + + + zpool-labelclear.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-labelclear.8

+
+ + + + + +
ZPOOL-LABELCLEAR(8)System Manager's ManualZPOOL-LABELCLEAR(8)
+
+
+

+

zpool-labelclear — + remove ZFS label information from device

+
+
+

+ + + + + +
zpoollabelclear [-f] + device
+
+
+

+

Removes ZFS label information from the specified + device. If the device is a cache + device, it also removes the L2ARC header (persistent L2ARC). The + device must not be part of an active pool + configuration.

+
+
+
Treat exported or foreign devices as inactive.
+
+
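For example, an illustrative invocation (the device path is a placeholder); -f is only needed if the device appears to belong to an exported or foreign pool:
# zpool labelclear /dev/sdc
# zpool labelclear -f /dev/sdc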
+
+

+

zpool-destroy(8), + zpool-detach(8), zpool-remove(8), + zpool-replace(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-list.8.html b/man/v2.2/8/zpool-list.8.html new file mode 100644 index 000000000..051037396 --- /dev/null +++ b/man/v2.2/8/zpool-list.8.html @@ -0,0 +1,354 @@ + + + + + + + zpool-list.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-list.8

+
+ + + + + +
ZPOOL-LIST(8)System Manager's ManualZPOOL-LIST(8)
+
+
+

+

zpool-listlist + information about ZFS storage pools

+
+
+

+ + + + + +
zpoollist [-HgLpPv] + [-o + property[,property]…] + [-T u|d] + [pool]… [interval + [count]]
+
+
+

+

Lists the given pools along with a health status and space usage. + If no pools are specified, all pools in the system are + listed. When given an interval, the information is + printed every interval seconds until killed. If + count is specified, the command exits after + count reports are printed.

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+ -o property
+
Comma-separated list of properties to display. See the + zpoolprops(7) manual page for a list of valid + properties. The default list is + , + , + , + , + , + , + , + , + , + .
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ -T u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
Verbose statistics. Reports usage statistics for individual vdevs within + the pool, in addition to the pool-wide statistics.
+
+
+
+

+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command displays the detailed information for the + pool data. This pool is comprised of a single raidz + vdev where one of its devices increased its capacity by 10 GiB. In this + example, the pool will not be able to utilize this extra capacity until all + the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+
+

+

zpool-import(8), + zpool-status(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-offline.8.html b/man/v2.2/8/zpool-offline.8.html new file mode 100644 index 000000000..8fde8e37d --- /dev/null +++ b/man/v2.2/8/zpool-offline.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-offline.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-offline.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
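For example, illustrative invocations (pool and device names are placeholders): temporarily offline a disk, then bring it back online and expand it to use any newly available space:
# zpool offline -t tank sda
# zpool online -e tank sda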
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-online.8.html b/man/v2.2/8/zpool-online.8.html new file mode 100644 index 000000000..91464d74e --- /dev/null +++ b/man/v2.2/8/zpool-online.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-online.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-online.8

+
+ + + + + +
ZPOOL-OFFLINE(8)System Manager's ManualZPOOL-OFFLINE(8)
+
+
+

+

zpool-offline — + take physical devices offline in ZFS storage + pool

+
+
+

+ + + + + +
zpooloffline + [--power|[-ft]] + pool device
+
+ + + + + +
zpoolonline + [--power] + [-e] pool + device
+
+
+

+
+
zpool offline + [--power|[-ft]] + pool device
+
Takes the specified physical device offline. While the + device is offline, no attempt is made to read or + write to the device. This command is not applicable to spares. +
+
+
Power off the device's slot in the storage enclosure. This flag + currently works on Linux only
+
+
Force fault. Instead of offlining the disk, put it into a faulted + state. The fault will persist across imports unless the + -t flag was specified.
+
+
Temporary. Upon reboot, the specified physical device reverts to its + previous state.
+
+
+
zpool online + [--power] [-e] + pool device
+
Brings the specified physical device online. This command is not + applicable to spares. +
+
+
Power on the device's slot in the storage enclosure and wait for the + device to show up before attempting to online it. Alternatively, you + can set the + + environment variable to always enable this behavior. This flag + currently works on Linux only
+
+
Expand the device to use all available space. If the device is part of + a mirror or raidz then all devices must be expanded before the new + space will become available to the pool.
+
+
+
+
+
+

+

zpool-detach(8), + zpool-remove(8), zpool-reopen(8), + zpool-resilver(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-reguid.8.html b/man/v2.2/8/zpool-reguid.8.html new file mode 100644 index 000000000..b8a90c0cb --- /dev/null +++ b/man/v2.2/8/zpool-reguid.8.html @@ -0,0 +1,268 @@ + + + + + + + zpool-reguid.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reguid.8

+
+ + + + + +
ZPOOL-REGUID(8)System Manager's ManualZPOOL-REGUID(8)
+
+
+

+

zpool-reguid — + generate new unique identifier for ZFS storage + pool

+
+
+

+ + + + + +
zpoolreguid pool
+
+
+

+

Generates a new unique identifier for the pool. You must ensure + that all devices in this pool are online and healthy before performing this + action.

+
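For example, an illustrative invocation for a pool named tank (placeholder):
# zpool reguid tank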
+
+

+

zpool-export(8), + zpool-import(8)

+
+
+ + + + + +
May 31, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-remove.8.html b/man/v2.2/8/zpool-remove.8.html new file mode 100644 index 000000000..8b1b443fc --- /dev/null +++ b/man/v2.2/8/zpool-remove.8.html @@ -0,0 +1,363 @@ + + + + + + + zpool-remove.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-remove.8

+
+ + + + + +
ZPOOL-REMOVE(8)System Manager's ManualZPOOL-REMOVE(8)
+
+
+

+

zpool-remove — + remove devices from ZFS storage pool

+
+
+

+ + + + + +
zpoolremove [-npw] + pool device
+
+ + + + + +
zpoolremove -s + pool
+
+
+

+
+
zpool remove + [-npw] pool + device
+
Removes the specified device from the pool. This command supports removing + hot spare, cache, log, and both mirrored and non-redundant primary + top-level vdevs, including dedup and special vdevs. +

Top-level vdevs can only be removed if the primary pool + storage does not contain a top-level raidz vdev, all top-level vdevs + have the same sector size, and the keys for all encrypted datasets are + loaded.

+

Removing a top-level vdev reduces the + total amount of space in the storage pool. The specified device will be + evacuated by copying all allocated space from it to the other devices in + the pool. In this case, the zpool + remove command initiates the removal and + returns, while the evacuation continues in the background. The removal + progress can be monitored with zpool + status. If an I/O error is encountered during + the removal process it will be cancelled. The + + feature flag must be enabled to remove a top-level vdev, see + zpool-features(7).

+

A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the same. Non-log devices or data devices that are part of a mirrored configuration can be removed using the zpool detach command.

+
+
+
Do not actually perform the removal ("No-op"). Instead, + print the estimated amount of memory that will be used by the mapping + table after the removal completes. This is nonzero only for top-level + vdevs.
+
+
+
+
Used in conjunction with the -n flag, displays + numbers as parsable (exact) values.
+
+
Waits until the removal has completed before returning.
+
+
+
zpool remove + -s pool
+
Stops and cancels an in-progress removal of a top-level vdev.
+
+
+
+

+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+
+

+

zpool-add(8), zpool-detach(8), + zpool-labelclear(8), zpool-offline(8), + zpool-replace(8), zpool-split(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-reopen.8.html b/man/v2.2/8/zpool-reopen.8.html new file mode 100644 index 000000000..d9460c287 --- /dev/null +++ b/man/v2.2/8/zpool-reopen.8.html @@ -0,0 +1,270 @@ + + + + + + + zpool-reopen.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-reopen.8

+
+ + + + + +
ZPOOL-REOPEN(8)System Manager's ManualZPOOL-REOPEN(8)
+
+
+

+

zpool-reopen — + reopen vdevs associated with ZFS storage pools

+
+
+

+ + + + + +
zpoolreopen [-n] + [pool]…
+
+
+

+

Reopen all vdevs associated with the specified pools, or all pools + if none specified.

+
+
+

+
+
+
Do not restart an in-progress scrub operation. This is not recommended and + can result in partially resilvered devices unless a second scrub is + performed.
+
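For example, illustrative invocations (the pool name is a placeholder): reopen the vdevs of one pool, or do so without restarting an in-progress scrub:
# zpool reopen tank
# zpool reopen -n tank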
+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-replace.8.html b/man/v2.2/8/zpool-replace.8.html new file mode 100644 index 000000000..660653c55 --- /dev/null +++ b/man/v2.2/8/zpool-replace.8.html @@ -0,0 +1,304 @@ + + + + + + + zpool-replace.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-replace.8

+
+ + + + + +
ZPOOL-REPLACE(8)System Manager's ManualZPOOL-REPLACE(8)
+
+
+

+

zpool-replace — + replace one device with another in ZFS storage + pool

+
+
+

+ + + + + +
zpoolreplace [-fsw] + [-o + property=value] + pool device + [new-device]
+
+
+

+

Replaces device with + new-device. This is equivalent to attaching + new-device, waiting for it to resilver, and then + detaching device. Any in progress scrub will be + cancelled.

+

The size of new-device must be greater than + or equal to the minimum size of all the devices in a mirror or raidz + configuration.

+

new-device is required if the pool is not + redundant. If new-device is not specified, it defaults + to device. This form of replacement is useful after an + existing disk has failed and has been physically replaced. In this case, the + new disk may have the same /dev path as the old + device, even though it is actually a different disk. ZFS recognizes + this.

+
+
+
Forces use of new-device, even if it appears to be + in use. Not all devices can be overridden in this manner.
+
+ -o property=value
+
Sets the given pool properties. See the zpoolprops(7) + manual page for a list of valid properties that can be set. The only + property supported at the moment is + .
+
+
The new-device is reconstructed sequentially to + restore redundancy as quickly as possible. Checksums are not verified + during sequential reconstruction so a scrub is started when the resilver + completes. Sequential reconstruction is not supported for raidz + configurations.
+
+
Waits until the replacement has completed before returning.
+
+
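For example, illustrative invocations (pool and device names are placeholders): replace sda with sdb, or wait for an in-place replacement of a physically swapped disk to finish:
# zpool replace tank sda sdb
# zpool replace -w tank sda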
+
+

+

zpool-detach(8), + zpool-initialize(8), zpool-online(8), + zpool-resilver(8)

+
+
+ + + + + +
May 29, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-resilver.8.html b/man/v2.2/8/zpool-resilver.8.html new file mode 100644 index 000000000..a41857da0 --- /dev/null +++ b/man/v2.2/8/zpool-resilver.8.html @@ -0,0 +1,272 @@ + + + + + + + zpool-resilver.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-resilver.8

+
+ + + + + +
ZPOOL-RESILVER(8)System Manager's ManualZPOOL-RESILVER(8)
+
+
+

+

zpool-resilver — + resilver devices in ZFS storage pools

+
+
+

+ + + + + +
zpoolresilver pool
+
+
+

+

Starts a resilver of the specified pools. If an existing resilver + is already running it will be restarted from the beginning. Any drives that + were scheduled for a deferred resilver will be added to the new one. This + requires the + + pool feature.
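For example (the pool name tank is hypothetical), a deferred or interrupted resilver can be restarted with:
# zpool resilver tank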

+
+
+

+

zpool-iostat(8), + zpool-online(8), zpool-reopen(8), + zpool-replace(8), zpool-scrub(8), + zpool-status(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-scrub.8.html b/man/v2.2/8/zpool-scrub.8.html new file mode 100644 index 000000000..48a200ee3 --- /dev/null +++ b/man/v2.2/8/zpool-scrub.8.html @@ -0,0 +1,362 @@ + + + + + + + zpool-scrub.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-scrub.8

+
+ + + + + +
ZPOOL-SCRUB(8)System Manager's ManualZPOOL-SCRUB(8)
+
+
+

+

zpool-scrub — + begin or resume scrub of ZFS storage pools

+
+
+

+ + + + + +
zpoolscrub + [-s|-p] + [-w] [-e] + pool
+
+
+

+

Begins a scrub or resumes a paused scrub. The scrub examines all + data in the specified pools to verify that it checksums correctly. For + replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any + damage discovered during the scrub. The zpool + status command reports the progress of the scrub and + summarizes the results of the scrub upon completion.

+

Scrubbing and resilvering are very similar operations. The + difference is that resilvering only examines data that ZFS knows to be out + of date (for example, when attaching a new device to a mirror or replacing + an existing device), whereas scrubbing examines all data to discover silent + errors due to hardware faults or disk failure.

+

When scrubbing a pool with encrypted filesystems the keys do not + need to be loaded. However, if the keys are not loaded and an unrepairable + checksum error is detected the file name cannot be included in the + zpool status + -v verbose error report.

+

Because scrubbing and resilvering are I/O-intensive operations, + ZFS only allows one at a time.

+

A scrub is split into two parts: metadata scanning and block + scrubbing. The metadata scanning sorts blocks into large sequential ranges + which can then be read much more efficiently from disk when issuing the + scrub I/O.

+

If a scrub is paused, the zpool + scrub resumes it. If a resilver is in progress, ZFS + does not allow a scrub to be started until the resilver completes.

+

Note that, due to changes in pool data on a live system, it is + possible for scrubs to progress slightly beyond 100% completion. During this + period, no completion time estimate will be provided.
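As a sketch (the pool name tank is hypothetical), a scrub can be started, paused, and later resumed with the commands below; the -p flag is described in the option list that follows, and adding -w to the first command would make it block until the scrub completes:
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank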

+
+
+

+
+
+
Stop scrubbing.
+
+
Pause scrubbing. Scrub pause state and progress are periodically synced to disk. If the system is restarted or the pool is exported during a paused scrub, the scrub will remain paused until it is resumed, even after the pool is imported again. Once resumed, the scrub will pick up from the place where it was last checkpointed to disk. To resume a paused scrub, issue zpool scrub or zpool scrub -e again.
+
+
Wait until scrub has completed before returning.
+
+
Only scrub files with known data errors as reported by + zpool status + -v. The pool must have been scrubbed at least once + with the + + feature enabled to use this option. Error scrubbing cannot be run + simultaneously with regular scrubbing or resilvering, nor can it be run + when a regular scrub is paused.
+
+
+
+

+
+

+

Status of pool with ongoing scrub:

+

+
+
# zpool status
+  ...
+  scan: scrub in progress since Sun Jul 25 16:07:49 2021
+        403M / 405M scanned at 100M/s, 68.4M / 405M issued at 10.0M/s
+        0B repaired, 16.91% done, 00:00:04 to go
+  ...
+
+

Where metadata which references 403M of file data has been scanned + at 100M/s, and 68.4M of that file data has been scrubbed sequentially at + 10.0M/s.

+
+
+
+

+

On machines using systemd, scrub timers can be enabled on a per-pool basis. weekly and monthly timer units are provided.

+
+
+
systemctl enable + zfs-scrub-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-scrub-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpool-iostat(8), + zpool-resilver(8), + zpool-status(8)

+
+
+ + + + + +
June 22, 2023Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-set.8.html b/man/v2.2/8/zpool-set.8.html new file mode 100644 index 000000000..5e03d1409 --- /dev/null +++ b/man/v2.2/8/zpool-set.8.html @@ -0,0 +1,389 @@ + + + + + + + zpool-set.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool-set.8

+
+ + + + + +
ZPOOL-GET(8)System Manager's ManualZPOOL-GET(8)
+
+
+

+

zpool-get — + retrieve properties of ZFS storage pools

+
+
+

+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + [pool]…
+
+ + + + + +
zpoolget [-Hp] + [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
+ + + + + +
zpoolset + property=value + pool
+
+ + + + + +
zpoolset + property=value + pool vdev
+
+
+

+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + [pool]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified storage pool(s). These + properties are displayed with the following fields: +
+
+
+
Name of storage pool.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the zpoolprops(7) manual page for more + information on the available pool properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool get + [-Hp] [-o + field[,field]…] + all|property[,property]… + pool + [all-vdevs|vdev]…
+
Retrieves the given list of properties (or all properties if + all is used) for the specified vdevs (or all vdevs if + all-vdevs is used) in the specified pool. These + properties are displayed with the following fields: +
+
+
+
Name of vdev.
+
+
Property name.
+
+
Property value.
+
+
Property source, either default + or local.
+
+
+

See the vdevprops(7) manual page for more information on the available vdev properties.

+
+
+
+
Scripted mode. Do not display headers, and separate fields by a single + tab instead of arbitrary space.
+
+ field
+
A comma-separated list of columns to display, defaults to + name,property,value,source.
+
+
Display numbers in parsable (exact) values.
+
+
+
+
zpool set + property=value + pool
+
Sets the given property on the specified pool. See the + zpoolprops(7) manual page for more information on what + properties can be set and acceptable values.
+
zpool set + property=value + pool vdev
+
Sets the given property on the specified vdev in the specified pool. See + the vdevprops(7) manual page for more information on + what properties can be set and acceptable values.
+
+
+
+

+

vdevprops(7), + zpool-features(7), zpoolprops(7), + zpool-list(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-split.8.html b/man/v2.2/8/zpool-split.8.html new file mode 100644 index 000000000..b445550e6 --- /dev/null +++ b/man/v2.2/8/zpool-split.8.html @@ -0,0 +1,317 @@ + + + + + + + zpool-split.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-split.8

+
+ + + + + +
ZPOOL-SPLIT(8)System Manager's ManualZPOOL-SPLIT(8)
+
+
+

+

zpool-split — + split devices off ZFS storage pool, creating new + pool

+
+
+

+ + + + + +
zpoolsplit [-gLlnP] + [-o + property=value]… + [-R root] + pool newpool + [device]…
+
+
+

+

Splits devices off pool creating + newpool. All vdevs in pool must + be mirrors and the pool must not be in the process of resilvering. At the + time of the split, newpool will be a replica of + pool. By default, the last device in each mirror is + split from pool to create + newpool.

+

The optional device specification causes the specified device(s) to be included in the new pool and, should any devices remain unspecified, the last device in each mirror is used as it would be by default.
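A minimal sketch (the pool name tank and the new pool name tank2 are hypothetical): split the last device of each mirror out of tank, then import the resulting pool:
# zpool split tank tank2
# zpool import tank2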

+
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be + used in place of device names for the zpool detach/offline/remove/replace + commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Indicates that this command will request encryption keys for all encrypted + datasets it attempts to mount as it is bringing the new pool online. Note + that if any datasets have + =, + this command will block waiting for the keys to be entered. Without this + flag, encrypted datasets will be left unavailable until the keys are + loaded.
+
+
Do a dry-run ("No-op") split: do not actually perform it. Print + out the expected configuration of newpool.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+ property=value
+
Sets the specified property for newpool. See the + zpoolprops(7) manual page for more information on the + available pool properties.
+
+ root
+
Set + + for newpool to root and + automatically import it.
+
+
+
+

+

zpool-import(8), + zpool-list(8), zpool-remove(8)

+
+
+ + + + + +
June 2, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-status.8.html b/man/v2.2/8/zpool-status.8.html new file mode 100644 index 000000000..7eef2576d --- /dev/null +++ b/man/v2.2/8/zpool-status.8.html @@ -0,0 +1,373 @@ + + + + + + + zpool-status.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-status.8

+
+ + + + + +
ZPOOL-STATUS(8)System Manager's ManualZPOOL-STATUS(8)
+
+
+

+

zpool-status — + show detailed health status for ZFS storage + pools

+
+
+

+ + + + + +
zpoolstatus [-DeigLpPstvx] + [-T u|d] + [-c + [SCRIPT1[,SCRIPT2]…]] + [pool]… [interval + [count]]
+
+
+

+

Displays the detailed health status for the given pools. If no + pool is specified, then the status of each pool in the + system is displayed. For more information on pool and device health, see the + Device Failure and + Recovery section of zpoolconcepts(7).

+

If a scrub or resilver is in progress, this command reports the + percentage done and the estimated time to completion. Both of these are only + approximate, because the amount of data in the pool and the other workloads + on the system can change.
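For example (the pool name tank is hypothetical), detailed status including the verbose data error list for a single pool can be requested with:
# zpool status -v tank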

+
+
+
Display vdev enclosure slot power status (on or off).
+
+ [SCRIPT1[,SCRIPT2]…]
+
Run a script (or scripts) on each vdev and include the output as a new + column in the zpool status + output. See the -c option of + zpool iostat for complete + details.
+
+
Only show unhealthy vdevs (not-ONLINE or with errors).
+
+
Display vdev initialization status.
+
+
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
+
+
Display real paths for vdevs resolving all symbolic links. This can be + used to look up the current block device name regardless of the + /dev/disk/ path used to open it.
+
+
Display numbers in parsable (exact) values.
+
+
Display full paths for vdevs instead of only the last component of the + path. This can be used in conjunction with the -L + flag.
+
+
Display a histogram of deduplication statistics, showing the allocated + (physically present on disk) and referenced (logically referenced in the + pool) block counts and sizes by reference count.
+
+
Display the number of leaf vdev slow I/O operations. This is the number of + I/O operations that didn't complete in + + milliseconds + ( + by default). This does not necessarily mean the + I/O operations failed to complete, just took an unreasonably long amount + of time. This may indicate a problem with the underlying storage.
+
+
Display vdev TRIM status.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
Displays verbose data error information, printing out a complete list of + all data errors since the last complete pool scrub. If the head_errlog + feature is enabled and files containing errors have been removed then the + respective filenames will not be reported in subsequent runs of this + command.
+
+
Only display status for pools that are exhibiting errors or are otherwise + unavailable. Warnings about pools not using the latest on-disk format will + not be included.
+
+
+
+

+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+

zpool-events(8), + zpool-history(8), zpool-iostat(8), + zpool-list(8), zpool-resilver(8), + zpool-scrub(8), zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-sync.8.html b/man/v2.2/8/zpool-sync.8.html new file mode 100644 index 000000000..91cd97e0f --- /dev/null +++ b/man/v2.2/8/zpool-sync.8.html @@ -0,0 +1,269 @@ + + + + + + + zpool-sync.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-sync.8

+
+ + + + + +
ZPOOL-SYNC(8)System Manager's ManualZPOOL-SYNC(8)
+
+
+

+

zpool-syncflush + data to primary storage of ZFS storage pools

+
+
+

+ + + + + +
zpoolsync [pool]…
+
+
+

+

This command forces all in-core dirty data to be written to the + primary pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified pools.
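For example (the pool name tank is hypothetical):
# zpool sync
# zpool sync tank
The first form syncs every imported pool; the second syncs only tank.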

+
+
+

+

zpoolconcepts(7), + zpool-export(8), zpool-iostat(8)

+
+
+ + + + + +
August 9, 2019Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-trim.8.html b/man/v2.2/8/zpool-trim.8.html new file mode 100644 index 000000000..1b91a3b80 --- /dev/null +++ b/man/v2.2/8/zpool-trim.8.html @@ -0,0 +1,326 @@ + + + + + + + zpool-trim.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-trim.8

+
+ + + + + +
ZPOOL-TRIM(8)System Manager's ManualZPOOL-TRIM(8)
+
+
+

+

zpool-trim — + initiate TRIM of free space in ZFS storage pool

+
+
+

+ + + + + +
zpooltrim [-dw] + [-r rate] + [-c|-s] + pool [device]…
+
+
+

+

Initiates an immediate on-demand TRIM operation for all of the + free space in a pool. This operation informs the underlying storage devices + of all blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.

+

A manual on-demand TRIM operation can be initiated irrespective of + the autotrim pool property setting. See the documentation + for the autotrim property above for the types of vdev + devices which can be trimmed.
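As a sketch (the pool name tank is hypothetical), a TRIM of all free space in a pool can be started and waited on with:
# zpool trim -w tank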

+
+
, + --secure
+
Causes a secure TRIM to be initiated. When performing a secure TRIM, the + device guarantees that data stored on the trimmed blocks has been erased. + This requires support from the device and is not supported by all + SSDs.
+
, + --rate rate
+
Controls the rate at which the TRIM operation progresses. Without this + option TRIM is executed as quickly as possible. The rate, expressed in + bytes per second, is applied on a per-vdev basis and may be set + differently for each leaf vdev.
+
, + --cancel
+
Cancel trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no cancellation will + occur on any device.
+
, + --suspend
+
Suspend trimming on the specified devices, or all eligible devices if none + are specified. If one or more target devices are invalid or are not + currently being trimmed, the command will fail and no suspension will + occur on any device. Trimming can then be resumed by running + zpool trim with no flags + on the relevant target devices.
+
, + --wait
+
Wait until the devices are done being trimmed before returning.
+
+
+
+

+

On machines using systemd, trim timers can be enabled on a + per-pool basis. weekly and + monthly timer units are provided.

+
+
+
systemctl enable + zfs-trim-weekly@rpool.timer + --now
+
+
systemctl + enable + zfs-trim-monthly@otherpool.timer + --now
+
+
+
+

+

systemd.timer(5), + zpoolprops(7), + zpool-initialize(8), + zpool-wait(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-upgrade.8.html b/man/v2.2/8/zpool-upgrade.8.html new file mode 100644 index 000000000..37fcc6ed9 --- /dev/null +++ b/man/v2.2/8/zpool-upgrade.8.html @@ -0,0 +1,337 @@ + + + + + + + zpool-upgrade.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-upgrade.8

+
+ + + + + +
ZPOOL-UPGRADE(8)System Manager's ManualZPOOL-UPGRADE(8)
+
+
+

+

zpool-upgrade — + manage version and feature flags of ZFS storage + pools

+
+
+

+ + + + + +
zpoolupgrade
+
+ + + + + +
zpoolupgrade -v
+
+ + + + + +
zpoolupgrade [-V + version] + -a|pool
+
+
+

+
+
zpool upgrade
+
Displays pools which do not have all supported features enabled and pools + formatted using a legacy ZFS version number. These pools can continue to + be used, but some features may not be available. Use + zpool upgrade + -a to enable all features on all pools (subject to + the -o compatibility + property).
+
zpool upgrade + -v
+
Displays legacy ZFS versions supported by this version of ZFS. See zpool-features(7) for a description of the feature-flag features supported by this version of ZFS.
+
zpool upgrade + [-V version] + -a|pool
+
Enables all supported features on the given pool. +

If the pool has specified compatibility feature sets using the + -o compatibility property, + only the features present in all requested compatibility sets will be + enabled. If this property is set to legacy then no + upgrade will take place.

+

Once this is done, the pool will no longer be accessible on + systems that do not support feature flags. See + zpool-features(7) for details on compatibility with + systems that support feature flags, but do not support all features + enabled on the pool.

+
+
+
Enables all supported features (from specified compatibility sets, if + any) on all pools.
+
+ version
+
Upgrade to the specified legacy version. If specified, no features + will be enabled on the pool. This option can only be used to increase + the version number up to the last supported legacy version + number.
+
+
+
+
+
+

+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+
+

+

zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zpool-history(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool-wait.8.html b/man/v2.2/8/zpool-wait.8.html new file mode 100644 index 000000000..d1c367693 --- /dev/null +++ b/man/v2.2/8/zpool-wait.8.html @@ -0,0 +1,318 @@ + + + + + + + zpool-wait.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool-wait.8

+
+ + + + + +
ZPOOL-WAIT(8)System Manager's ManualZPOOL-WAIT(8)
+
+
+

+

zpool-waitwait + for activity to stop in a ZFS storage pool

+
+
+

+ + + + + +
zpoolwait [-Hp] + [-T u|d] + [-t + activity[,activity]…] + pool [interval]
+
+
+

+

Waits until all background activity of the given types has ceased + in the given pool. The activity could cease because it has completed, or + because it has been paused or canceled by a user, or because the pool has + been exported or destroyed. If no activities are specified, the command + waits until background activity of every type listed below has ceased. If + there is no activity of the given types in progress, the command returns + immediately.

+

These are the possible values for activity, + along with what each one waits for:

+
+
+
+
Checkpoint to be discarded
+
+
+ property to become +
+
+
All initializations to cease
+
+
All device replacements to cease
+
+
Device removal to cease
+
+
Resilver to cease
+
+
Scrub to cease
+
+
Manual trim to cease
+
+
+

If an interval is provided, the amount of + work remaining, in bytes, for each activity is printed every + interval seconds.
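For example (the pool name tank is hypothetical), the following waits for an in-progress scrub and prints the remaining work every 5 seconds:
# zpool wait -t scrub tank 5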

+
+
+
Scripted mode. Do not display headers, and separate fields by a single tab + instead of arbitrary space.
+
+
Display numbers in parsable (exact) values.
+
+ u|d
+
Display a time stamp. Specify u for a printed + representation of the internal representation of time. See + time(1). Specify d for standard date + format. See date(1).
+
+
+
+

+

zpool-checkpoint(8), + zpool-initialize(8), zpool-remove(8), + zpool-replace(8), zpool-resilver(8), + zpool-scrub(8), zpool-status(8), + zpool-trim(8)

+
+
+ + + + + +
May 27, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool.8.html b/man/v2.2/8/zpool.8.html new file mode 100644 index 000000000..3e94b4d62 --- /dev/null +++ b/man/v2.2/8/zpool.8.html @@ -0,0 +1,838 @@ + + + + + + + zpool.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zpool.8

+
+ + + + + +
ZPOOL(8)System Manager's ManualZPOOL(8)
+
+
+

+

zpoolconfigure + ZFS storage pools

+
+
+

+ + + + + +
zpool-?V
+
+ + + + + +
zpoolversion
+
+ + + + + +
zpoolsubcommand + [arguments]
+
+
+

+

The zpool command configures ZFS storage + pools. A storage pool is a collection of devices that provides physical + storage and data replication for ZFS datasets. All datasets within a storage + pool share the same space. See zfs(8) for information on + managing datasets.

+

For an overview of creating and managing ZFS storage pools see the + zpoolconcepts(7) manual page.

+
+
+

+

All subcommands that modify state are logged persistently to the + pool in their original form.
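This log can be inspected with zpool-history(8); for example (the pool name tank is hypothetical):
# zpool history tank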

+

The zpool command provides subcommands to + create and destroy storage pools, add capacity to storage pools, and provide + information about the storage pools. The following subcommands are + supported:

+
+
zpool -?
+
Displays a help message.
+
zpool -V, + --version
+
 
+
zpool version
+
Displays the software version of the zpool + userland utility and the ZFS kernel module.
+
+
+

+
+
zpool-create(8)
+
Creates a new storage pool containing the virtual devices specified on the + command line.
+
zpool-initialize(8)
+
Begins initializing by writing to all unallocated regions on the specified + devices, or all eligible devices in the pool if no individual devices are + specified.
+
+
+
+

+
+
zpool-destroy(8)
+
Destroys the given pool, freeing up any devices for other use.
+
zpool-labelclear(8)
+
Removes ZFS label information from the specified + device.
+
+
+
+

+
+
zpool-attach(8)/zpool-detach(8)
+
Converts a non-redundant disk into a mirror, or increases the redundancy + level of an existing mirror (attach), or performs + the inverse operation (detach).
+
zpool-add(8)/zpool-remove(8)
+
Adds the specified virtual devices to the given pool, or removes the + specified device from the pool.
+
zpool-replace(8)
+
Replaces an existing device (which may be faulted) with a new one.
+
zpool-split(8)
+
Creates a new pool by splitting all mirrors in an existing pool (which + decreases its redundancy).
+
+
+
+

+

Available pool properties are listed in the zpoolprops(7) manual page.

+
+
zpool-list(8)
+
Lists the given pools along with a health status and space usage.
+
zpool-get(8)/zpool-set(8)
+
Retrieves the given list of properties (or all properties if + is used) for + the specified storage pool(s).
+
+
+
+

+
+
zpool-status(8)
+
Displays the detailed health status for the given pools.
+
zpool-iostat(8)
+
Displays logical I/O statistics for the given pools/vdevs. Physical I/O + operations may be observed via iostat(1).
+
zpool-events(8)
+
Lists all recent events generated by the ZFS kernel modules. These events + are consumed by the zed(8) and used to automate + administrative tasks such as replacing a failed device with a hot spare. + That manual page also describes the subclasses and event payloads that can + be generated.
+
zpool-history(8)
+
Displays the command history of the specified pool(s) or all pools if no + pool is specified.
+
+
+
+

+
+
zpool-scrub(8)
+
Begins a scrub or resumes a paused scrub.
+
zpool-checkpoint(8)
+
Checkpoints the current state of pool, which can be + later restored by zpool + import + --rewind-to-checkpoint.
+
zpool-trim(8)
+
Initiates an immediate on-demand TRIM operation for all of the free space + in a pool. This operation informs the underlying storage devices of all + blocks in the pool which are no longer allocated and allows thinly + provisioned devices to reclaim the space.
+
zpool-sync(8)
+
This command forces all in-core dirty data to be written to the primary + pool storage and not the ZIL. It will also update administrative + information including quota reporting. Without arguments, + zpool sync will sync all + pools on the system. Otherwise, it will sync only the specified + pool(s).
+
zpool-upgrade(8)
+
Manage the on-disk format version of storage pools.
+
zpool-wait(8)
+
Waits until all background activity of the given types has ceased in the + given pool.
+
+
+
+

+
+
zpool-offline(8)/zpool-online(8)
+
Takes the specified physical device offline or brings it online.
+
zpool-resilver(8)
+
Starts a resilver. If an existing resilver is already running it will be + restarted from the beginning.
+
zpool-reopen(8)
+
Reopen all the vdevs associated with the pool.
+
zpool-clear(8)
+
Clears device errors in a pool.
+
+
+
+

+
+
zpool-import(8)
+
Make disks containing ZFS storage pools available for use on the + system.
+
zpool-export(8)
+
Exports the given pools from the system.
+
zpool-reguid(8)
+
Generates a new unique identifier for the pool.
+
+
+
+
+

+

The following exit values are returned:

+
+
+
+
Successful completion.
+
+
An error occurred.
+
+
Invalid command line options were specified.
+
+
+
+
+

+
+

+

The following command creates a pool with a single raidz root vdev + that consists of six disks:

+
# zpool + create tank + + sda sdb sdc sdd sde sdf
+
+
+

+

The following command creates a pool with two mirrors, where each + mirror contains two disks:

+
# zpool + create tank + mirror sda sdb + mirror sdc sdd
+
+
+

+

The following command creates a non-redundant pool using two disk + partitions:

+
# zpool + create tank + sda1 sdb2
+
+
+

+

The following command creates a non-redundant pool using files. + While not recommended, a pool based on files can be useful for experimental + purposes.

+
# zpool + create tank + /path/to/file/a /path/to/file/b
+
+
+

+

The following command converts an existing single device + sda into a mirror by attaching a second device to it, + sdb.

+
# zpool + attach tank sda + sdb
+
+
+

+

The following command adds two mirrored disks to the pool + tank, assuming the pool is already made up of two-way + mirrors. The additional space is immediately available to any datasets + within the pool.

+
# zpool + add tank + mirror sda sdb
+
+
+

+

The following command lists all available pools on the system. In + this case, the pool zion is faulted due to a missing + device. The results from this command are similar to the following:

+
+
# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+
+
+
+

+

The following command destroys the pool tank + and any datasets contained within:

+
# zpool + destroy -f + tank
+
+
+

+

The following command exports the devices in pool + tank so that they can be relocated or later + imported:

+
# zpool + export tank
+
+
+

+

The following command displays available pools, and then imports + the pool tank for use on the system. The results from + this command are similar to the following:

+
+
# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+
+
+
+

+

The following command upgrades all ZFS Storage pools to the + current version of the software:

+
+
# zpool upgrade -a
+This system is currently running ZFS version 2.
+
+
+
+

+

The following command creates a new pool with an available hot + spare:

+
# zpool + create tank + mirror sda sdb + + sdc
+

If one of the disks were to fail, the pool would be reduced to the + degraded state. The failed device can be replaced using the following + command:

+
# zpool + replace tank + sda sdd
+

Once the data has been resilvered, the spare is automatically + removed and is made available for use should another device fail. The hot + spare can be permanently removed from the pool using the following + command:

+
# zpool + remove tank + sdc
+
+
+

+

The following command creates a ZFS storage pool consisting of + two, two-way mirrors and mirrored log devices:

+
# zpool + create pool + mirror sda sdb + mirror sdc sdd + + sde sdf
+
+
+

+

The following command adds two disks for use as cache devices to a + ZFS storage pool:

+
# zpool + add pool + + sdc sdd
+

Once added, the cache devices gradually fill with content from + main memory. Depending on the size of your cache devices, it could take over + an hour for them to fill. Capacity and reads can be monitored using the + iostat subcommand as follows:

+
# zpool + iostat -v pool + 5
+
+
+

+

The following commands remove the mirrored log device + + and mirrored top-level data device + .

+

Given this configuration:

+
+
  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+         NAME        STATE     READ WRITE CKSUM
+         tank        ONLINE       0     0     0
+           mirror-0  ONLINE       0     0     0
+             sda     ONLINE       0     0     0
+             sdb     ONLINE       0     0     0
+           mirror-1  ONLINE       0     0     0
+             sdc     ONLINE       0     0     0
+             sdd     ONLINE       0     0     0
+         logs
+           mirror-2  ONLINE       0     0     0
+             sde     ONLINE       0     0     0
+             sdf     ONLINE       0     0     0
+
+

The command to remove the mirrored log + mirror-2 is:

+
# zpool + remove tank + mirror-2
+

The command to remove the mirrored data + mirror-1 is:

+
# zpool + remove tank + mirror-1
+
+
+

+

The following command displays the detailed information for the pool data. This pool is composed of a single raidz vdev where one of its devices increased its capacity by 10 GiB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the raidz vdev have been expanded.

+
+
# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+
+
+
+

+

Additional columns can be added to the + zpool status + and zpool + iostat output with + -c.

+
+
# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+
+
+
+
+

+
+
+
Cause zpool to dump core on exit for the purposes + of running + .
+
+
Use ANSI color in zpool + status and zpool + iostat output.
+
+
Automatically attempt to turn on a drive's enclosure slot power when running the zpool online or zpool clear commands. This has the same effect as passing the --power option to those commands.
+
+
The maximum time in milliseconds to wait for a slot power sysfs value to return the correct value after writing it. For example, after writing "on" to the sysfs enclosure slot power_control file, it can take some time for the enclosure to power on the slot and return "on" when you read back the power_control value. Defaults to 30 seconds (30000ms) if not set.
+
+
The search path for devices or files to use with the pool. This is a + colon-separated list of directories in which zpool + looks for device nodes and files. Similar to the + -d option in zpool + import.
+
+
The maximum time in milliseconds that zpool import + will wait for an expected device to be available.
+
+
If set, suppress warning about non-native vdev ashift in + zpool status. The value is + not used, only the presence or absence of the variable matters.
+
+
Cause zpool subcommands to output vdev guids by + default. This behavior is identical to the zpool + status -g command line + option.
+ +
Cause zpool subcommands to follow links for vdev + names by default. This behavior is identical to the + zpool status + -L command line option.
+
+
Cause zpool subcommands to output full vdev path + names by default. This behavior is identical to the + zpool status + -P command line option.
+
+
Older OpenZFS implementations had issues when attempting to display pool + config vdev names if a devid NVP value is present in the + pool's config. +

For example, a pool that originated on illumos platform would + have a devid value in the config and + zpool status would fail + when listing the config. This would also be true for future Linux-based + pools.

+

A pool can be stripped of any devid values + on import or prevented from adding them on zpool + create or zpool + add by setting + ZFS_VDEV_DEVID_OPT_OUT.

+

+
+
+
Allow a privileged user to run zpool + status/iostat + -c. Normally, only unprivileged users are allowed + to run -c.
+
+
The search path for scripts when running zpool + status/iostat + -c. This is a colon-separated list of directories + and overrides the default ~/.zpool.d and + /etc/zfs/zpool.d search paths.
+
+
Allow a user to run zpool + status/iostat + -c. If ZPOOL_SCRIPTS_ENABLED is + not set, it is assumed that the user is allowed to run + zpool + status/iostat + -c.
+
+
Time, in seconds, to wait for /dev/zfs to appear. + Defaults to + , max + (10 + minutes). If <0, wait forever; if + 0, don't wait.
+
+
+
+

+

+
+
+

+

zfs(4), zpool-features(7), + zpoolconcepts(7), zpoolprops(7), + zed(8), zfs(8), + zpool-add(8), zpool-attach(8), + zpool-checkpoint(8), zpool-clear(8), + zpool-create(8), zpool-destroy(8), + zpool-detach(8), zpool-events(8), + zpool-export(8), zpool-get(8), + zpool-history(8), zpool-import(8), + zpool-initialize(8), zpool-iostat(8), + zpool-labelclear(8), zpool-list(8), + zpool-offline(8), zpool-online(8), + zpool-reguid(8), zpool-remove(8), + zpool-reopen(8), zpool-replace(8), + zpool-resilver(8), zpool-scrub(8), + zpool-set(8), zpool-split(8), + zpool-status(8), zpool-sync(8), + zpool-trim(8), zpool-upgrade(8), + zpool-wait(8)

+
+
+ + + + + +
March 16, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zpool_influxdb.8.html b/man/v2.2/8/zpool_influxdb.8.html new file mode 100644 index 000000000..b14379a0f --- /dev/null +++ b/man/v2.2/8/zpool_influxdb.8.html @@ -0,0 +1,319 @@ + + + + + + + zpool_influxdb.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zpool_influxdb.8

+
+ + + + + +
ZPOOL_INFLUXDB(8)System Manager's ManualZPOOL_INFLUXDB(8)
+
+
+

+

zpool_influxdb — + collect ZFS pool statistics in InfluxDB line protocol + format

+
+
+

+ + + + + +
zpool_influxdb[-e|--execd] + [-n|--no-histogram] + [-s|--sum-histogram-buckets] + [-t|--tags + key=value[,key=value]…] + [pool]
+
+
+

+

zpool_influxdb produces + InfluxDB-line-protocol-compatible metrics from zpools. Like the + zpool command, + zpool_influxdb reads the current pool status and + statistics. Unlike the zpool command which is + intended for humans, zpool_influxdb formats the + output in the InfluxDB line protocol. The expected use is as a plugin to a + metrics collector or aggregator, such as Telegraf.

+

By default, zpool_influxdb prints pool + metrics and status in the InfluxDB line protocol format. All pools are + printed, similar to the zpool + status command. Providing a pool name restricts the + output to the named pool.
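For example (a sketch; the pool name rpool is hypothetical), metrics for a single pool can be printed without the latency and I/O size histograms:
# zpool_influxdb --no-histogram rpool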

+
+
+

+
+
, + --execd
+
Run in daemon mode compatible with Telegraf's + execd plugin. In this mode, the pools are sampled + every time a newline appears on the standard input.
+
, + --no-histogram
+
Do not print latency and I/O size histograms. This can reduce the total + amount of data, but one should consider the value brought by the insights + that latency and I/O size distributions provide. The resulting values are + suitable for graphing with Grafana's heatmap plugin.
+
, + --sum-histogram-buckets
+
Accumulates bucket values. By default, the values are not accumulated and + the raw data appears as shown by zpool + iostat. This works well for Grafana's heatmap + plugin. Summing the buckets produces output similar to Prometheus + histograms.
+
, + --tags + key=value[,key=value]…
+
Adds specified tags to the tag set. No sanity checking is performed. See + the InfluxDB Line Protocol format documentation for details on escaping + special characters used in tags.
+
, + --help
+
Print a usage summary.
+
+
+
+

+

zpool-iostat(8), + zpool-status(8), + InfluxDB, + Telegraf, + Grafana, + Prometheus

+
+
+ + + + + +
May 26, 2021Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zstream.8.html b/man/v2.2/8/zstream.8.html new file mode 100644 index 000000000..55eb45456 --- /dev/null +++ b/man/v2.2/8/zstream.8.html @@ -0,0 +1,406 @@ + + + + + + + zstream.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+ +
+
+ +
+

zstream.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value to the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. First, determine which records are corrupt. That cannot be done automatically; it requires information beyond ZFS's metadata. If object 128 is corrupted at offset 0 and is compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/8/zstreamdump.8.html b/man/v2.2/8/zstreamdump.8.html new file mode 100644 index 000000000..e545d1af5 --- /dev/null +++ b/man/v2.2/8/zstreamdump.8.html @@ -0,0 +1,406 @@ + + + + + + + zstreamdump.8 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

zstreamdump.8

+
+ + + + + +
ZSTREAM(8)System Manager's ManualZSTREAM(8)
+
+
+

+

zstream — + manipulate ZFS send streams

+
+
+

+ + + + + +
zstreamdump [-Cvd] + [file]
+
+ + + + + +
zstreamdecompress [-v] + [object,offset[,type...]]
+
+ + + + + +
zstreamredup [-v] + file
+
+ + + + + +
zstreamtoken resume_token
+
+ + + + + +
zstreamrecompress [-l + level] algorithm
+
+
+

+

The + + utility manipulates ZFS send streams output by the + + command.

+
+
zstream dump + [-Cvd] [file]
+
Print information about the specified send stream, including headers and + record counts. The send stream may either be in the specified + file, or provided on standard input. +
+
+
Suppress the validation of checksums.
+
+
Verbose. Print metadata for each record.
+
+
Dump data contained in each record. Implies verbose.
+
+

The zstreamdump alias is provided for + compatibility and is equivalent to running + zstream dump.

+
+
zstream token + resume_token
+
Dumps zfs resume token information
+
zstream + decompress [-v] + [object,offset[,type...]]
+
Decompress selected records in a ZFS send stream provided on standard + input, when the compression type recorded in ZFS metadata may be + incorrect. Specify the object number and byte offset of each record that + you wish to decompress. Optionally specify the compression type. Valid + compression types include off, + , + lz4, + , + , + and . + The default is lz4. Every record for that object + beginning at that offset will be decompressed, if possible. It may not be + possible, because the record may be corrupted in some but not all of the + stream's snapshots. Specifying a compression type of off + will change the stream's metadata accordingly, without attempting + decompression. This can be useful if the record is already uncompressed + but the metadata insists otherwise. The repaired stream will be written to + standard output. +
+
+
Verbose. Print summary of decompressed records.
+
+
+
zstream redup + [-v] file
+
Deduplicated send streams can be generated by using the + zfs send + -D command. The ability to send deduplicated send + streams is deprecated. In the future, the ability to receive a + deduplicated send stream with zfs + receive will be removed. However, deduplicated + send streams can still be received by utilizing + zstream redup. +

The zstream + redup command is provided a + file containing a deduplicated send stream, and + outputs an equivalent non-deduplicated send stream on standard output. + Therefore, a deduplicated send stream can be received by running:

+
# zstream + redup DEDUP_STREAM_FILE | + zfs receive +
+
+
+
Verbose. Print summary of converted records.
+
+
+
zstream recompress + [-l level] + algorithm
+
Recompresses a send stream, provided on standard input, using the provided + algorithm and optional level, and writes the modified stream to standard + output. All WRITE records in the send stream will be recompressed, unless + they fail to result in size reduction compared to being left uncompressed. + The provided algorithm can be any valid value to the + compress property. Note that encrypted send + streams cannot be recompressed. +
+
+ level
+
Specifies compression level. Only needed for algorithms where the + level is not implied as part of the name of the algorithm (e.g. gzip-3 + does not require it, while zstd does, if a non-default level is + desired).
+
+
+
+
+
+

+

Heal a dataset that was corrupted due to OpenZFS bug #12762. First, determine which records are corrupt. That cannot be done automatically; it requires information beyond ZFS's metadata. If object 128 is corrupted at offset 0 and is compressed using lz4, then run this command:

+
+
# zfs send -c  | zstream decompress 128,0,lz4 | zfs recv 
+
+
+
+

+

zfs(8), zfs-receive(8), + zfs-send(8), + https://github.com/openzfs/zfs/issues/12762

+
+
+ + + + + +
October 4, 2022Debian
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/man/v2.2/index.html b/man/v2.2/index.html new file mode 100644 index 000000000..19da1aa36 --- /dev/null +++ b/man/v2.2/index.html @@ -0,0 +1,147 @@ + + + + + + + v2.2 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/msg/ZFS-8000-14/index.html b/msg/ZFS-8000-14/index.html new file mode 100644 index 000000000..b2207cc29 --- /dev/null +++ b/msg/ZFS-8000-14/index.html @@ -0,0 +1,195 @@ + + + + + + + Message ID: ZFS-8000-14 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-14

+
+

Corrupt ZFS cache

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

The ZFS cache file is corrupted.

Automated Response:

No automated response will be taken.

Impact:

ZFS filesystems are not available.

+

Suggested Action for System Administrator

+

ZFS keeps a list of active pools on the filesystem to avoid having to +scan all devices when the system is booted. If this file is corrupted, +then normally active pools will not be automatically opened. The pools +can be recovered using the zpool import command:

+
# zpool import
+  pool: test
+    id: 12743384782310107047
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        test              ONLINE
+          sda9            ONLINE
+
+
+

This will automatically scan /dev for any devices that are part of a pool. If devices have been made available in an alternate location, use the -d option to zpool import to search for devices in a different directory.

+

Once you have determined which pools are available for import, you +can import the pool explicitly by specifying the name or numeric +identifier:

+
# zpool import test
+
+
+

Alternatively, you can import all available pools by specifying the -a option. Once a pool has been imported, the ZFS cache will be repaired so that the pool will appear normally in the future.
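For example:
# zpool import -a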

+

Details

+

The Message ID: ZFS-8000-14 indicates a corrupted ZFS cache file. +Take the documented action to resolve the problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-2Q/index.html b/msg/ZFS-8000-2Q/index.html new file mode 100644 index 000000000..55b01deb7 --- /dev/null +++ b/msg/ZFS-8000-2Q/index.html @@ -0,0 +1,238 @@ + + + + + + + Message ID: ZFS-8000-2Q — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-2Q

+
+

Missing device in replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

A device in a replicated configuration could not +be opened.

Automated Response:

A hot spare will be activated if available.

Impact:

The pool is no longer providing the configured +level of replication.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please +see the section below. Otherwise, run zpool status -x to determine +which pool has experienced a failure:

+
# zpool status -x
+  pool: test
+ state: DEGRADED
+status: One or more devices could not be opened.  Sufficient replicas exist for
+        the pool to continue functioning in a degraded state.
+action: Attach the missing device and online it using 'zpool online'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  DEGRADED     0     0     0
+          mirror              DEGRADED     0     0     0
+            c0t0d0            ONLINE       0     0     0
+            c0t0d1            FAULTED      0     0     0  cannot open
+
+errors: No known data errors
+
+
+

Determine which device failed to open by looking for a FAULTED device +with an additional ‘cannot open’ message. If this device has been +inadvertently removed from the system, attach the device and bring it +online with zpool online:

+
# zpool online test c0t0d1
+
+
+

If the device is no longer available, the device can be replaced +using the zpool replace command:

+
# zpool replace test c0t0d1 c0t0d2
+
+
+

If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +zpool replace command:

+
# zpool replace test c0t0d1
+
+
+

Existing data will be resilvered to the new device. Once the resilvering completes, the old device will be removed from the pool.

+

For an exported pool:

+

If this error is encountered during a zpool import, it means that +one of the devices is not attached to the system:

+
# zpool import
+  pool: test
+    id: 10121266328238932306
+ state: DEGRADED
+status: One or more devices are missing from the system.
+action: The pool can be imported despite missing or damaged devices.  The
+        fault tolerance of the pool may be compromised if imported.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
+config:
+
+        test              DEGRADED
+          mirror          DEGRADED
+            c0t0d0        ONLINE
+            c0t0d1        FAULTED   cannot open
+
+
+

Unlike when the pool is active on the system, the device cannot be +replaced while the pool is exported. If the device can be attached to +the system, attach the device and run zpool import again.

+

Alternatively, the pool can be imported as-is, though it will be +placed in the DEGRADED state due to a missing device. The device will +be marked as UNAVAIL. Once the pool has been imported, the missing +device can be replaced as described above.

+

Details

+

The Message ID: ZFS-8000-2Q indicates a device that could not be opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-3C/index.html b/msg/ZFS-8000-3C/index.html new file mode 100644 index 000000000..144ceccce --- /dev/null +++ b/msg/ZFS-8000-3C/index.html @@ -0,0 +1,220 @@ + + + + + + + Message ID: ZFS-8000-3C — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-3C

+
+

Missing device in non-replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

A device could not be opened and no replicas are +available.

Automated Response:

No automated response will be taken.

Impact:

The pool is no longer available.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please +see the section below. Otherwise, run zpool status -x to determine +which pool has experienced a failure:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: One or more devices could not be opened.  There are insufficient
+        replicas for the pool to continue functioning.
+action: Attach the missing device and online it using 'zpool online'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  FAULTED      0     0     0  insufficient replicas
+          c0t0d0              ONLINE       0     0     0
+          c0t0d1              FAULTED      0     0     0  cannot open
+
+errors: No known data errors
+
+
+

If the device has been temporarily detached from the system, attach +the device to the system and run zpool status again. The pool +should automatically detect the newly attached device and resume +functioning. You may have to mount the filesystems in the pool +explicitly using zfs mount -a.
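For example:
# zpool status
# zfs mount -a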

+

If the device is no longer available and cannot be reattached to the +system, then the pool must be destroyed and re-created from a backup +source.

+

For an exported pool:

+

If this error is encountered during a zpool import, it means that +one of the devices is not attached to the system:

+
# zpool import
+  pool: test
+    id: 10121266328238932306
+ state: FAULTED
+status: One or more devices are missing from the system.
+action: The pool cannot be imported.  Attach the missing devices and try again.
+        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
+config:
+
+        test              FAULTED   insufficient replicas
+          c0t0d0          ONLINE
+          c0t0d1          FAULTED   cannot open
+
+
+

The pool cannot be imported until the missing device is attached to +the system. If the device has been made available in an alternate +location, use the -d option to zpool import to search for devices +in a different directory. If the missing device is unavailable, then +the pool cannot be imported.
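As an illustration, if the missing device has reappeared under a different device directory (the directory shown here is an assumption; point -d at whichever directory actually holds the device):

# zpool import -d /dev/disk/by-id test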

+

Details

+

The Message ID: ZFS-8000-3C indicates a device which was unable +to be opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-4J/index.html b/msg/ZFS-8000-4J/index.html new file mode 100644 index 000000000..7a2a00c3c --- /dev/null +++ b/msg/ZFS-8000-4J/index.html @@ -0,0 +1,237 @@ + + + + + + + Message ID: ZFS-8000-4J — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-4J

+
+

Corrupted device label in a replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

A device could not be opened due to a missing or +invalid device label.

Automated Response:

A hot spare will be activated if available.

Impact:

The pool is no longer providing the configured +level of replication.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please +see the section below. Otherwise, run zpool status -x to determine +which pool has experienced a failure:

+
# zpool status -x
+  pool: test
+ state: DEGRADED
+status: One or more devices could not be used because the label is missing or
+        invalid.  Sufficient replicas exist for the pool to continue
+        functioning in a degraded state.
+action: Replace the device using 'zpool replace'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  DEGRADED     0     0     0
+          mirror              DEGRADED     0     0     0
+            c0t0d0            ONLINE       0     0     0
+            c0t0d1            FAULTED      0     0     0  corrupted data
+
+errors: No known data errors
+
+
+

If the device has been temporarily detached from the system, attach +the device to the system and run zpool status again. The pool +should automatically detect the newly attached device and resume +functioning.

+

If the device is no longer available, it can be replaced using zpool +replace:

+
# zpool replace test c0t0d1 c0t0d2
+
+
+

If the device has been replaced by another disk in the same physical +slot, then the device can be replaced using a single argument to the +zpool replace command:

+
# zpool replace test c0t0d1
+
+
+

ZFS will begin migrating data to the new device as soon as the +replace is issued. Once the resilvering completes, the original +device (if different from the replacement) will be removed, and the +pool will be restored to the ONLINE state.

+

For an exported pool:

+

If this error is encountered while running zpool import, the pool can still be imported despite the failure:

+
# zpool import
+  pool: test
+    id: 5187963178597328409
+ state: DEGRADED
+status: One or more devices contains corrupted data.  The fault tolerance of
+        the pool may be compromised if imported.
+action: The pool can be imported using its name or numeric identifier.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
+config:
+
+        test              DEGRADED
+          mirror          DEGRADED
+            c0t0d0        ONLINE
+            c0t0d1        FAULTED   corrupted data
+
+
+

To import the pool, run zpool import:

+
# zpool import test
+
+
+

Once the pool has been imported, the damaged device can be replaced +according to the above procedure.

+

Details

+

The Message ID: ZFS-8000-4J indicates a device which was unable +to be opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-5E/index.html b/msg/ZFS-8000-5E/index.html new file mode 100644 index 000000000..24dc4f4fd --- /dev/null +++ b/msg/ZFS-8000-5E/index.html @@ -0,0 +1,201 @@ + + + + + + + Message ID: ZFS-8000-5E — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-5E

+
+

Corrupted device label in non-replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

A device could not be opened due to a missing or +invalid device label and no replicas are +available.

Automated Response:

No automated response will be taken.

Impact:

The pool is no longer available.

+

Suggested Action for System Administrator

+

For an active pool:

+

If this error was encountered while running zpool import, please see the +section below. Otherwise, run zpool status -x to determine which pool has +experienced a failure:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: One or more devices could not be used because the label is missing
+        or invalid.  There are insufficient replicas for the pool to continue
+        functioning.
+action: Destroy and re-create the pool from a backup source.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        test        FAULTED      0     0     0  insufficient replicas
+          c0t0d0    FAULTED      0     0     0  corrupted data
+          c0t0d1    ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

The device listed as FAULTED with ‘corrupted data’ cannot be opened due to a +corrupt label. ZFS will be unable to use the pool, and all data within the +pool is irrevocably lost. The pool must be destroyed and recreated from an +appropriate backup source. Using replicated configurations will prevent this +from happening in the future.
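For illustration only (device names are placeholders): after destroying the old pool, a replacement pool could be created as a mirror and the data then restored from backup, so that a single corrupted label no longer makes the pool unusable:

# zpool create test mirror c0t0d0 c0t0d1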

+

For an exported pool:

+

If this error is encountered during zpool import, the action is the same. +The pool cannot be imported - all data is lost and must be restored from an +appropriate backup source.

+

Details

+

The Message ID: ZFS-8000-5E indicates a device which was unable to be +opened by the ZFS subsystem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-6X/index.html b/msg/ZFS-8000-6X/index.html new file mode 100644 index 000000000..2681abb65 --- /dev/null +++ b/msg/ZFS-8000-6X/index.html @@ -0,0 +1,195 @@ + + + + + + + Message ID: ZFS-8000-6X — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-6X

+
+

Missing top level device

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

One or more top level devices are missing.

Automated Response:

No automated response will be taken.

Impact:

The pool cannot be imported.

+

Suggested Action for System Administrator

+

Run zpool import to list which pool cannot be imported:

+
# zpool import
+  pool: test
+    id: 13783646421373024673
+ state: FAULTED
+status: One or more devices are missing from the system.
+action: The pool cannot be imported.  Attach the missing devices and try again.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-6X
+config:
+
+        test              FAULTED   missing device
+          c0t0d0          ONLINE
+
+Additional devices are known to be part of this pool, though their
+exact configuration cannot be determined.
+
+
+

ZFS attempts to store enough configuration data on the devices such that the configuration is recoverable from any subset of devices. In some cases, particularly when an entire top-level virtual device is not attached to the system, ZFS will be unable to determine the complete configuration. It will always detect that these devices are missing, even if it cannot identify all of the devices.

+

The pool cannot be imported until the unknown missing device is +attached to the system. If the device has been made available in an +alternate location, use the -d option to zpool import to search +for devices in a different directory. If the missing device is +unavailable, then the pool cannot be imported.
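For example (the directory is an assumption; point -d at wherever the missing top-level device can actually be found):

# zpool import -d /dev/disk/by-path test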

+

Details

+

The Message ID: ZFS-8000-6X indicates one or more top level +devices are missing from the configuration.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-72/index.html b/msg/ZFS-8000-72/index.html new file mode 100644 index 000000000..d6fdf2391 --- /dev/null +++ b/msg/ZFS-8000-72/index.html @@ -0,0 +1,223 @@ + + + + + + + Message ID: ZFS-8000-72 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-72

+
+

Corrupted pool metadata

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

The metadata required to open the pool is +corrupt.

Automated Response:

No automated response will be taken.

Impact:

The pool is no longer available.

+

Suggested Action for System Administrator

+

Even though all the devices are available, the on-disk data has been +corrupted such that the pool cannot be opened. If a recovery action +is presented, the pool can be returned to a usable state. Otherwise, +all data within the pool is lost, and the pool must be destroyed and +restored from an appropriate backup source. ZFS includes built-in +metadata replication to prevent this from happening even for +unreplicated pools, but running in a replicated configuration will +decrease the chances of this happening in the future.

+

If this error is encountered during zpool import, see the section +below. Otherwise, run zpool status -x to determine which pool is +faulted and if a recovery option is available:

+
# zpool status -x
+  pool: test
+    id: 13783646421373024673
+ state: FAULTED
+status: The pool metadata is corrupted and cannot be opened.
+action: Recovery is possible, but will result in some data loss.
+        Returning the pool to its state as of Mon Sep 28 10:24:39 2009
+        should correct the problem.  Approximately 59 seconds of data
+        will have to be discarded, irreversibly.  Recovery can be
+        attempted by executing 'zpool clear -F test'.  A scrub of the pool
+        is strongly recommended following a successful recovery.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  FAULTED      0     0     2  corrupted data
+            c0t0d0            ONLINE       0     0     2
+            c0t0d1            ONLINE       0     0     2
+
+
+

If recovery is unavailable, the recommended action will be:

+
action: Destroy the pool and restore from backup.
+
+
+

If this error is encountered during zpool import, and if no recovery option +is mentioned, the pool is unrecoverable and cannot be imported. The pool must +be restored from an appropriate backup source. If a recovery option is +available, the output from zpool import will look something like the +following:

+
# zpool import share
+cannot import 'share': I/O error
+        Recovery is possible, but will result in some data loss.
+        Returning the pool to its state as of Sun Sep 27 12:31:07 2009
+        should correct the problem.  Approximately 53 seconds of data
+        will have to be discarded, irreversibly.  Recovery can be
+        attempted by executing 'zpool import -F share'.  A scrub of the pool
+        is strongly recommended following a successful recovery.
+
+
+

Recovery actions are requested with the -F option to either zpool clear or zpool import. Recovery will result in some data loss, because it reverts the pool to an earlier state. A dry-run recovery check can be performed by adding the -n option, which reports whether recovery is possible without actually reverting the pool to its earlier state.
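A sketch of such a dry-run check, using the pool names from the examples above:

# zpool clear -F -n test
# zpool import -F -n share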

+

Details

+

The Message ID: ZFS-8000-72 indicates a pool was unable to be +opened due to a detected corruption in the pool metadata.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-8A/index.html b/msg/ZFS-8000-8A/index.html new file mode 100644 index 000000000..ed62fc13c --- /dev/null +++ b/msg/ZFS-8000-8A/index.html @@ -0,0 +1,224 @@ + + + + + + + Message ID: ZFS-8000-8A — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-8A

+
+

Corrupted data

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Critical

Description:

A file or directory could not be read due to +corrupt data.

Automated Response:

No automated response will be taken.

Impact:

The file or directory is unavailable.

+

Suggested Action for System Administrator

+

Run zpool status -x to determine which pool is damaged:

+
# zpool status -x
+  pool: test
+ state: ONLINE
+status: One or more devices has experienced an error and no valid replicas
+        are available.  Some filesystem data is corrupt, and applications
+        may have been affected.
+action: Destroy the pool and restore from backup.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     2
+          c0t0d0              ONLINE       0     0     2
+          c0t0d1              ONLINE       0     0     0
+
+errors: 1 data errors, use '-v' for a list
+
+
+

Unfortunately, the data cannot be repaired; the only option is to restore the affected data from backup. Applications attempting to access the corrupted data will receive an error (EIO), and data may be permanently lost.

+

The list of affected files can be retrieved by using the -v option to +zpool status:

+
# zpool status -xv
+  pool: test
+ state: ONLINE
+status: One or more devices has experienced an error and no valid replicas
+        are available.  Some filesystem data is corrupt, and applications
+        may have been affected.
+action: Destroy the pool and restore from backup.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     2
+          c0t0d0              ONLINE       0     0     2
+          c0t0d1              ONLINE       0     0     0
+
+errors: Permanent errors have been detected in the following files:
+
+        /export/example/foo
+
+
+

Whether a damaged file can be removed depends on the type of corruption. If the corruption is within the plain data, the file should be removable. If the corruption is in the file metadata, then the file cannot be removed, though it can be moved to an alternate location. In either case, the data should be restored from a backup source. It is also possible for the corruption to be within pool-wide metadata, resulting in entire datasets being unavailable. If this is the case, the only option is to destroy the pool and re-create the datasets from backup.
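For example, if the corruption in the file reported above is confined to its plain data, the file can simply be removed and then restored from backup by whatever mechanism is normally used:

# rm /export/example/foo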

+

Details

+

The Message ID: ZFS-8000-8A indicates corrupted data exists in +the current pool.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-9P/index.html b/msg/ZFS-8000-9P/index.html new file mode 100644 index 000000000..82f57c1c3 --- /dev/null +++ b/msg/ZFS-8000-9P/index.html @@ -0,0 +1,264 @@ + + + + + + + Message ID: ZFS-8000-9P — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-9P

+
+

Failing device in replicated configuration

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Minor

Description:

A device has experienced uncorrectable errors in a +replicated configuration.

Automated Response:

ZFS has attempted to repair the affected data.

Impact:

The system is unaffected, though errors may +indicate future failure. Future errors may cause +ZFS to automatically fault the device.

+

Suggested Action for System Administrator

+

Run zpool status -x to determine which pool has experienced errors:

+
# zpool status
+  pool: test
+ state: ONLINE
+status: One or more devices has experienced an unrecoverable error.  An
+        attempt was made to correct the error.  Applications are unaffected.
+action: Determine if the device needs to be replaced, and clear the errors
+        using 'zpool online' or replace the device with 'zpool replace'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     0
+          mirror              ONLINE       0     0     0
+            c0t0d0            ONLINE       0     0     2
+            c0t0d1            ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

Find the device with a non-zero error count for READ, WRITE, or +CKSUM. This indicates that the device has experienced a read I/O +error, write I/O error, or checksum validation error. Because the +device is part of a mirror or RAID-Z device, ZFS was able to recover +from the error and subsequently repair the damaged data.

+

If these errors persist over a period of time, ZFS may determine the +device is faulty and mark it as such. However, these error counts may +or may not indicate that the device is unusable. It depends on how +the errors were caused, which the administrator can determine in +advance of any ZFS diagnosis. For example, the following cases will +all produce errors that do not indicate potential device failure:

+
    +
  • A network attached device lost connectivity but has now +recovered

  • +
  • A device suffered from a bit flip, an expected event over long +periods of time

  • +
  • An administrator accidentally wrote over a portion of the disk +using another program

  • +
+

In these cases, the presence of errors does not indicate that the +device is likely to fail in the future, and therefore does not need +to be replaced. If this is the case, then the device errors should be +cleared using zpool clear:

+
# zpool clear test c0t0d0
+
+
+

On the other hand, errors may very well indicate that the device has +failed or is about to fail. If there are continual I/O errors to a +device that is otherwise attached and functioning on the system, it +most likely needs to be replaced. The administrator should check the +system log for any driver messages that may indicate hardware +failure. If it is determined that the device needs to be replaced, +then the zpool replace command should be used:

+
# zpool replace test c0t0d0 c0t0d2
+
+
+

This will attach the new device to the pool and begin resilvering +data to it. Once the resilvering process is complete, the old device +will automatically be removed from the pool, at which point it can +safely be removed from the system. If the device needs to be replaced +in-place (because there are no available spare devices), the original +device can be removed and replaced with a new device, at which point +a different form of zpool replace can be used:

+
# zpool replace test c0t0d0
+
+
+

This assumes that the original device at ‘c0t0d0’ has been replaced +with a new device under the same path, and will be replaced +appropriately.

+

You can monitor the progress of the resilvering operation by using +the zpool status -x command:

+
# zpool status -x
+  pool: test
+ state: DEGRADED
+status: One or more devices is currently being replaced.  The pool may not be
+        providing the necessary level of replication.
+action: Wait for the resilvering operation to complete
+ scrub: resilver in progress, 0.14% done, 0h0m to go
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  ONLINE       0     0     0
+          mirror              ONLINE       0     0     0
+            replacing         ONLINE       0     0     0
+              c0t0d0          ONLINE       0     0     3
+              c0t0d2          ONLINE       0     0     0  58.5K resilvered
+            c0t0d1            ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

Details

+

The Message ID: ZFS-8000-9P indicates a device has exceeded the +acceptable limit of errors allowed by the system. See document +203768 +for additional information.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-A5/index.html b/msg/ZFS-8000-A5/index.html new file mode 100644 index 000000000..ea4d77b4f --- /dev/null +++ b/msg/ZFS-8000-A5/index.html @@ -0,0 +1,197 @@ + + + + + + + Message ID: ZFS-8000-A5 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-A5

+
+

Incompatible version

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The on-disk version is not compatible with the +running system.

Automated Response:

No automated response will occur.

Impact:

The pool is unavailable.

+

Suggested Action for System Administrator

+

If this error is seen during zpool import, see the section below. +Otherwise, run zpool status -x to determine which pool is faulted:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: The ZFS version for the pool is incompatible with the software running
+        on this system.
+action: Destroy and re-create the pool.
+ scrub: none requested
+config:
+
+        NAME                  STATE     READ WRITE CKSUM
+        test                  FAULTED      0     0     0  incompatible version
+          mirror              ONLINE       0     0     0
+            sda9              ONLINE       0     0     0
+            sdb9              ONLINE       0     0     0
+
+errors: No known errors
+
+
+

The pool cannot be used on this system. Either move the storage to +the system where the pool was originally created, upgrade the current +system software to a more recent version, or destroy the pool and +re-create it from backup.

+

If this error is seen during import, the pool cannot be imported on +the current system. The disks must be attached to the system which +originally created the pool, and imported there.

+

The list of currently supported versions can be displayed using +zpool upgrade -v.
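For example:

# zpool upgrade -v

The output lists the pool versions supported by the software running on the system.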

+

Details

+

The Message ID: ZFS-8000-A5 indicates a version mismatch exists +between the running system and the on-disk data.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-ER/index.html b/msg/ZFS-8000-ER/index.html new file mode 100644 index 000000000..cf6f46a91 --- /dev/null +++ b/msg/ZFS-8000-ER/index.html @@ -0,0 +1,440 @@ + + + + + + + Message ID: ZFS-8000-ER — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-ER

+
+

ZFS Errata #1

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

The ZFS pool contains an on-disk format +incompatibility.

Automated Response:

No automated response will be taken.

Impact:

Until the pool is scrubbed using OpenZFS version +0.6.3 or newer the pool may not be imported by +older versions of OpenZFS or other ZFS +implementations.

+

Suggested Action for System Administrator

+

The pool contains an on-disk format incompatibility. Affected pools must be imported and scrubbed using the current version of ZFS. This will return the pool to a state in which it may be imported by other implementations. This erratum only affects compatibility between ZFS versions; no user data is at risk as a result of it.

+
# zpool status -x
+  pool: test
+ state: ONLINE
+status: Errata #1 detected.
+action: To correct the issue run 'zpool scrub'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+  scan: none requested
+config:
+
+    NAME            STATE     READ WRITE CKSUM
+    test            ONLINE    0    0     0
+      raidz1-0      ONLINE    0    0     0
+        vdev0       ONLINE    0    0     0
+        vdev1       ONLINE    0    0     0
+        vdev2       ONLINE    0    0     0
+        vdev3       ONLINE    0    0     0
+
+errors: No known data errors
+
+# zpool scrub test
+
+# zpool status -x
+all pools are healthy
+
+
+
+
+

ZFS Errata #2

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

The ZFS packages were updated while an +asynchronous destroy was in progress and the pool +contains an on-disk format incompatibility.

Automated Response:

No automated response will be taken.

Impact:

The pool cannot be imported until the issue is +corrected.

+

Suggested Action for System Administrator

+

The ZFS packages must be reverted to the previous version so that affected pools can be imported correctly. Once imported, all asynchronous destroy operations must be allowed to complete. The ZFS packages may then be updated, and the pool can be imported cleanly by the newer software.

+
# zpool import
+  pool: test
+    id: 1165955789558693437
+ state: ONLINE
+status: Errata #2 detected.
+action: The pool cannot be imported with this version of ZFS due to
+        an active asynchronous destroy.  Revert to an earlier version
+        and allow the destroy to complete before updating.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+config:
+
+    test           ONLINE
+      raidz1-0     ONLINE
+        vdev0      ONLINE
+        vdev1      ONLINE
+        vdev2      ONLINE
+        vdev3      ONLINE
+
+
+

Revert to previous ZFS version, import the pool, then wait for the +freeing property to drop to zero. This indicates that all +outstanding asynchronous destroys have completed.

+
# zpool get freeing
+NAME  PROPERTY  VALUE    SOURCE
+test  freeing   0        default
+
+
+

The ZFS packages may now be updated and the pool imported. The on-disk format incompatibility can now be corrected online as described in Errata #1.

+
+
+

ZFS Errata #3

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

An encrypted dataset contains an on-disk format +incompatibility.

Automated Response:

No automated response will be taken.

Impact:

Encrypted datasets created before the ZFS packages +were updated cannot be mounted or opened for +write. The errata impacts the ability of ZFS to +correctly perform raw sends, so this functionality +has been disabled for these datasets.

+

Suggested Action for System Administrator

+

System administrators with affected pools will need to recreate any +encrypted datasets created before the new version of ZFS was used. +This can be accomplished by using zfs send and zfs receive. +Note, however, that backups can NOT be done with a raw zfs send -w, +since this would preserve the on-disk incompatibility. +Alternatively, system administrators can use conventional tools to +back up data to new encrypted datasets. The new version of ZFS will +prevent new data from being written to the impacted datasets, but +they can still be mounted read-only.

+
# zpool status
+  pool: test
+    id: 1165955789558693437
+ state: ONLINE
+status: Errata #3 detected.
+action: To correct the issue backup existing encrypted datasets to new
+        encrypted datasets and destroy the old ones.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+config:
+
+    test           ONLINE
+      raidz1-0     ONLINE
+        vdev0      ONLINE
+        vdev1      ONLINE
+        vdev2      ONLINE
+        vdev3      ONLINE
+
+
+

Import the pool and backup any existing encrypted datasets to new +datasets. To ensure the new datasets are re-encrypted, be sure to +receive them below an encryption root or use zfs receive -o +encryption=on, then destroy the source dataset.

+
# zfs send test/crypt1@snap1 | zfs receive -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile test/newcrypt1
+# zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1
+# zfs destroy -R test/crypt1
+
+
+

New datasets can be mounted read-write and used normally. The errata will be cleared upon reimporting the pool, and the alert will only be shown again if another dataset is found with the errata. To ensure that all datasets are on the new version, reimport the pool, load all keys, mount all encrypted datasets, and check zpool status.

+
# zpool export test
+# zpool import test
+# zfs load-key -a
+Enter passphrase for 'test/crypt1':
+1 / 1 key(s) successfully loaded
+# zfs mount -a
+# zpool status -x
+all pools are healthy
+
+
+
+
+

ZFS Errata #4

+ + + + + + + + + + + + + + + + + + +

Type:

Compatibility

Severity:

Moderate

Description:

An encrypted dataset contains an on-disk format +incompatibility.

Automated Response:

No automated response will be taken.

Impact:

Encrypted datasets created before the ZFS packages +were updated cannot be backed up via a raw send to +an updated system. These datasets also cannot +receive additional snapshots. New encrypted +datasets cannot be created until the +bookmark_v2 feature has been enabled.

+

Suggested Action for System Administrator

+

First, system administrators with affected pools will need to enable the bookmark_v2 feature on their pools. Enabling this feature will prevent the pool from being imported by previous versions of the ZFS software after any new bookmarks are created (including read-only imports). If the pool contains no encrypted datasets, this is the only step required.

If there are existing encrypted datasets, administrators will then need to back these datasets up. This can be done in several ways: non-raw zfs send and zfs receive can be used as usual, as can traditional backup tools. Raw receives of existing encrypted datasets, and raw receives into existing encrypted datasets, are currently disabled because ZFS is not able to guarantee that the stream and the existing dataset came from a consistent source. This check can be disabled, which will allow ZFS to receive these streams anyway; note that this can result in datasets with data that cannot be accessed due to authentication errors if raw and non-raw receives are mixed over the course of several incremental backups. To disable this restriction, set the zfs_disable_ivset_guid_check module parameter to 1.

Streams received this way (as well as any received before the upgrade) will need to be manually checked by reading the data to ensure they are not corrupted. Note that zpool scrub cannot be used for this purpose because the scrub does not check the cryptographic authentication codes. For more information on this issue, please refer to the zfs man page section on zfs receive, which describes the restrictions on raw sends.

+
# zpool status
+  pool: test
+ state: ONLINE
+status: Errata #4 detected.
+        Existing encrypted datasets contain an on-disk incompatibility
+        which needs to be corrected.
+action: To correct the issue enable the bookmark_v2 feature and backup
+        any existing encrypted datasets to new encrypted datasets and
+        destroy the old ones. If this pool does not contain any
+        encrypted datasets, simply enable the bookmark_v2 feature.
+   see: http://openzfs.github.io/openzfs-docs/msg/ZFS-8000-ER
+  scan: none requested
+config:
+
+        NAME           STATE     READ WRITE CKSUM
+        test           ONLINE       0     0     0
+          /root/vdev0  ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

Import the pool and enable the bookmark_v2 feature. Then back up any existing encrypted datasets to new datasets. This can be done with traditional tools or via zfs send. Raw sends require that zfs_disable_ivset_guid_check be set to 1 on the receiving side. Once this is done, the original datasets should be destroyed.

+
# zpool set feature@bookmark_v2=enabled test
+# echo 1 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check
+# zfs send -Rw test/crypt1@snap1 | zfs receive test/newcrypt1
+# zfs send -I test/crypt1@snap1 test/crypt1@snap5 | zfs receive test/newcrypt1
+# zfs destroy -R test/crypt1
+# echo 0 > /sys/module/zfs/parameters/zfs_disable_ivset_guid_check
+
+
+

The errata will be cleared upon reimporting the pool, and the alert will only be shown again if another dataset is found with the errata. To check that all datasets are fixed, run zfs list -t all, then check zpool status once it has completed.

+
# zpool export test
+# zpool import test
+# zpool scrub # wait for completion
+# zpool status -x
+all pools are healthy
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-EY/index.html b/msg/ZFS-8000-EY/index.html new file mode 100644 index 000000000..8ada7bb67 --- /dev/null +++ b/msg/ZFS-8000-EY/index.html @@ -0,0 +1,195 @@ + + + + + + + Message ID: ZFS-8000-EY — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-EY

+
+

ZFS label hostid mismatch

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The ZFS pool was last accessed by another system.

Automated Response:

No automated response will be taken.

Impact:

ZFS filesystems are not available.

+

Suggested Action for System Administrator

+

The pool has been written to from another host, and was not cleanly +exported from the other system. Actively importing a pool on multiple +systems will corrupt the pool and leave it in an unrecoverable state. +To determine which system last accessed the pool, run the zpool +import command:

+
# zpool import
+  pool: test
+    id: 14702934086626715962
+ state: ONLINE
+status: The pool was last accessed by another system.
+action: The pool can be imported using its name or numeric identifier and
+        the '-f' flag.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
+config:
+
+        test              ONLINE
+          c0t0d0          ONLINE
+
+# zpool import test
+cannot import 'test': pool may be in use from other system, it was last
+accessed by 'tank' (hostid: 0x1435718c) on Fri Mar  9 15:42:47 2007
+use '-f' to import anyway
+
+
+

If you are certain that the pool is not being actively accessed by +another system, then you can use the -f option to zpool import to +forcibly import the pool.
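For example, with the pool name from the output above:

# zpool import -f test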

+

Details

+

The Message ID: ZFS-8000-EY indicates that the pool cannot be +imported as it was last accessed by another system. Take the +documented action to resolve the problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-HC/index.html b/msg/ZFS-8000-HC/index.html new file mode 100644 index 000000000..c1fa21eaa --- /dev/null +++ b/msg/ZFS-8000-HC/index.html @@ -0,0 +1,198 @@ + + + + + + + Message ID: ZFS-8000-HC — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-HC

+
+

ZFS pool I/O failures

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The ZFS pool has experienced currently +unrecoverable I/O failures.

Automated Response:

No automated response will be taken.

Impact:

Read and write I/Os cannot be serviced.

+

Suggested Action for System Administrator

+

The pool has experienced I/O failures. Since the ZFS pool property +failmode is set to ‘wait’, all I/Os (reads and writes) are blocked. +See the zpoolprops(8) manpage for more information on the failmode +property. Manual intervention is required for I/Os to be serviced.

+

You can see which devices are affected by running zpool status -x:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: There are I/O failures.
+action: Make sure the affected devices are connected, then run 'zpool clear'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        test        FAULTED      0    13     0  insufficient replicas
+          c0t0d0    FAULTED      0     7     0  experienced I/O failures
+          c0t1d0    ONLINE       0     0     0
+
+errors: 1 data errors, use '-v' for a list
+
+
+

After you have made sure the affected devices are connected, run zpool +clear to allow I/O to the pool again:

+
# zpool clear test
+
+
+

If I/O failures continue to happen, then applications and commands for the pool +may hang. At this point, a reboot may be necessary to allow I/O to the pool +again.

+

Details

+

The Message ID: ZFS-8000-HC indicates that the pool has experienced I/O +failures. Take the documented action to resolve the problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-JQ/index.html b/msg/ZFS-8000-JQ/index.html new file mode 100644 index 000000000..85ee56e8f --- /dev/null +++ b/msg/ZFS-8000-JQ/index.html @@ -0,0 +1,200 @@ + + + + + + + Message ID: ZFS-8000-JQ — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-JQ

+
+

ZFS pool I/O failures

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

The ZFS pool has experienced currently +unrecoverable I/O failures.

Automated Response:

No automated response will be taken.

Impact:

Write I/Os cannot be serviced.

+

Suggested Action for System Administrator

+

The pool has experienced I/O failures. Since the ZFS pool property +failmode is set to ‘continue’, read I/Os will continue to be +serviced, but write I/Os are blocked. See the zpoolprops(8) manpage for +more information on the failmode property. Manual intervention is +required for write I/Os to be serviced. You can see which devices are +affected by running zpool status -x:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: There are I/O failures.
+action: Make sure the affected devices are connected, then run 'zpool clear'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        test        FAULTED      0    13     0  insufficient replicas
+          sda9      FAULTED      0     7     0  experienced I/O failures
+          sdb9      ONLINE       0     0     0
+
+errors: 1 data errors, use '-v' for a list
+
+
+

After you have made sure the affected devices are connected, run +zpool clear to allow write I/O to the pool again:

+
# zpool clear test
+
+
+

If I/O failures continue to happen, then applications and commands +for the pool may hang. At this point, a reboot may be necessary to +allow I/O to the pool again.

+

Details

+

The Message ID: ZFS-8000-JQ indicates that the pool has +experienced I/O failures. Take the documented action to resolve the +problem.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/ZFS-8000-K4/index.html b/msg/ZFS-8000-K4/index.html new file mode 100644 index 000000000..3a303fc5a --- /dev/null +++ b/msg/ZFS-8000-K4/index.html @@ -0,0 +1,244 @@ + + + + + + + Message ID: ZFS-8000-K4 — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Message ID: ZFS-8000-K4

+
+

ZFS intent log read failure

+ + + + + + + + + + + + + + + + + + +

Type:

Error

Severity:

Major

Description:

A ZFS intent log device could not be read.

Automated Response:

No automated response will be taken.

Impact:

The intent log(s) cannot be replayed.

+

Suggested Action for System Administrator

+

A ZFS intent log record could not be read due to an error. This may be due to a missing or broken log device, or a device within the pool may be experiencing I/O errors. The pool itself is not corrupt, but it is missing some pool changes that happened shortly before a power loss or system failure. These are pool changes that applications had requested to be written synchronously but that had not yet been committed in the pool. This transaction group commit currently occurs every five seconds, so typically at most five seconds' worth of synchronous writes have been lost. ZFS itself cannot determine whether the lost pool changes are critical to the applications that were running at the time of the system failure. This is a decision the administrator must make. You may want to consider mirroring log devices. First determine which pool is in error:

+
# zpool status -x
+  pool: test
+ state: FAULTED
+status: One or more of the intent logs could not be read.
+        Waiting for administrator intervention to fix the faulted pool.
+action: Either restore the affected device(s) and run 'zpool online',
+        or ignore the intent log records by running 'zpool clear'.
+ scrub: none requested
+config:
+
+        NAME              STATE     READ WRITE CKSUM
+        test              FAULTED      0     0     0  bad intent log
+          c3t2d0          ONLINE       0     0     0
+        logs              FAULTED      0     0     0  bad intent log
+          c5t3d0          UNAVAIL      0     0     0  cannot open
+
+
+

There are two courses of action to resolve this problem. If the validity of the pool from an application perspective requires the pool changes, then the log devices must be recovered. Make sure power and cables are connected and that the affected device is online. Then run zpool online followed by zpool clear:

+
# zpool online test c5t3d0
+# zpool clear test
+# zpool status test
+  pool: test
+ state: ONLINE
+ scrub: none requested
+config:
+
+        NAME              STATE     READ WRITE CKSUM
+        test              ONLINE       0     0     0
+          c3t2d0          ONLINE       0     0     0
+        logs              ONLINE       0     0     0
+          c5t3d0          ONLINE       0     0     0
+
+errors: No known data errors
+
+
+

The second alternative action is to ignore the most recent pool +changes that could not be read. To do this run zpool clear:

+
# zpool clear test
+# zpool status test
+  pool: test
+ state: DEGRADED
+status: One or more devices could not be opened.  Sufficient replicas exist for
+        the pool to continue functioning in a degraded state.
+action: Attach the missing device and online it using 'zpool online'.
+   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
+ scrub: none requested
+config:
+
+        NAME              STATE     READ WRITE CKSUM
+        test              DEGRADED     0     0     0
+          c3t2d0          ONLINE       0     0     0
+        logs              DEGRADED     0     0     0
+          c5t3d0          UNAVAIL      0     0     0  cannot open
+
+errors: No known data errors
+
+
+

Future log records will not use a failed log device but will be +written to the main pool. You should fix or replace any failed log +devices.
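A sketch of replacing the failed log device, assuming a spare disk c5t4d0 is available (the replacement device name is hypothetical):

# zpool replace test c5t3d0 c5t4d0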

+

Details

+

The Message ID: ZFS-8000-K4 indicates that a log device is +missing or cannot be read.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/msg/index.html b/msg/index.html new file mode 100644 index 000000000..3c5411454 --- /dev/null +++ b/msg/index.html @@ -0,0 +1,205 @@ + + + + + + + ZFS Messages — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/objects.inv b/objects.inv new file mode 100644 index 000000000..fa248e0ad Binary files /dev/null and b/objects.inv differ diff --git a/search.html b/search.html new file mode 100644 index 000000000..bb62c78d2 --- /dev/null +++ b/search.html @@ -0,0 +1,131 @@ + + + + + + Search — OpenZFS documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + + + +
+ +
+ +
+
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/searchindex.js b/searchindex.js new file mode 100644 index 000000000..9d7edd94a --- /dev/null +++ b/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"docnames": ["404", "Basic Concepts/Checksums", "Basic Concepts/Feature Flags", "Basic Concepts/RAIDZ", "Basic Concepts/Troubleshooting", "Basic Concepts/dRAID Howto", "Basic Concepts/index", "Developer Resources/Buildbot Options", "Developer Resources/Building ZFS", "Developer Resources/Custom Packages", "Developer Resources/Git and GitHub for beginners", "Developer Resources/OpenZFS Exceptions", "Developer Resources/OpenZFS Patches", "Developer Resources/index", "Getting Started/Alpine Linux/Root on ZFS", "Getting Started/Alpine Linux/index", "Getting Started/Arch Linux/Root on ZFS", "Getting Started/Arch Linux/index", "Getting Started/Debian/Debian Bookworm Root on ZFS", "Getting Started/Debian/Debian Bullseye Root on ZFS", "Getting Started/Debian/Debian Buster Root on ZFS", "Getting Started/Debian/Debian GNU Linux initrd documentation", "Getting Started/Debian/Debian Stretch Root on ZFS", "Getting Started/Debian/index", "Getting Started/Fedora", "Getting Started/Fedora/Root on ZFS", "Getting Started/Fedora/index", "Getting Started/FreeBSD", "Getting Started/NixOS/Root on ZFS", "Getting Started/NixOS/index", "Getting Started/RHEL and CentOS", "Getting Started/RHEL-based distro/Root on ZFS", "Getting Started/RHEL-based distro/index", "Getting Started/Slackware/Root on ZFS", "Getting Started/Slackware/index", "Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi", "Getting Started/Ubuntu/index", "Getting Started/index", "Getting Started/openSUSE/index", "Getting Started/openSUSE/openSUSE Leap Root on ZFS", "Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS", "License", "Performance and Tuning/Async Write", "Performance and Tuning/Hardware", "Performance and Tuning/Module Parameters", "Performance and Tuning/Workload Tuning", "Performance and Tuning/ZFS Transaction Delay", "Performance and Tuning/ZIO Scheduler", "Performance and Tuning/index", "Project and Community/Admin Documentation", "Project and Community/FAQ", "Project and Community/FAQ hole birth", "Project and Community/Mailing Lists", "Project and Community/Signing Keys", "Project and Community/index", "_TableOfContents", "index", "man/index", "man/master/1/arcstat.1", "man/master/1/cstyle.1", "man/master/1/index", "man/master/1/raidz_test.1", "man/master/1/test-runner.1", "man/master/1/zhack.1", "man/master/1/ztest.1", "man/master/1/zvol_wait.1", "man/master/4/index", "man/master/4/spl.4", "man/master/4/zfs.4", "man/master/5/index", "man/master/5/vdev_id.conf.5", "man/master/7/dracut.zfs.7", "man/master/7/index", "man/master/7/vdevprops.7", "man/master/7/zfsconcepts.7", "man/master/7/zfsprops.7", "man/master/7/zpool-features.7", "man/master/7/zpoolconcepts.7", "man/master/7/zpoolprops.7", "man/master/8/fsck.zfs.8", "man/master/8/index", "man/master/8/mount.zfs.8", "man/master/8/vdev_id.8", "man/master/8/zdb.8", "man/master/8/zed.8", "man/master/8/zfs-allow.8", "man/master/8/zfs-bookmark.8", "man/master/8/zfs-change-key.8", "man/master/8/zfs-clone.8", "man/master/8/zfs-create.8", "man/master/8/zfs-destroy.8", "man/master/8/zfs-diff.8", "man/master/8/zfs-get.8", "man/master/8/zfs-groupspace.8", 
"man/master/8/zfs-hold.8", "man/master/8/zfs-inherit.8", "man/master/8/zfs-jail.8", "man/master/8/zfs-list.8", "man/master/8/zfs-load-key.8", "man/master/8/zfs-mount-generator.8", "man/master/8/zfs-mount.8", "man/master/8/zfs-program.8", "man/master/8/zfs-project.8", "man/master/8/zfs-projectspace.8", "man/master/8/zfs-promote.8", "man/master/8/zfs-receive.8", "man/master/8/zfs-recv.8", "man/master/8/zfs-redact.8", "man/master/8/zfs-release.8", "man/master/8/zfs-rename.8", "man/master/8/zfs-rollback.8", "man/master/8/zfs-send.8", "man/master/8/zfs-set.8", "man/master/8/zfs-share.8", "man/master/8/zfs-snapshot.8", "man/master/8/zfs-unallow.8", "man/master/8/zfs-unjail.8", "man/master/8/zfs-unload-key.8", "man/master/8/zfs-unmount.8", "man/master/8/zfs-unzone.8", "man/master/8/zfs-upgrade.8", "man/master/8/zfs-userspace.8", "man/master/8/zfs-wait.8", "man/master/8/zfs-zone.8", "man/master/8/zfs.8", "man/master/8/zfs_ids_to_path.8", "man/master/8/zfs_prepare_disk.8", "man/master/8/zgenhostid.8", "man/master/8/zinject.8", "man/master/8/zpool-add.8", "man/master/8/zpool-attach.8", "man/master/8/zpool-checkpoint.8", "man/master/8/zpool-clear.8", "man/master/8/zpool-create.8", "man/master/8/zpool-destroy.8", "man/master/8/zpool-detach.8", "man/master/8/zpool-events.8", "man/master/8/zpool-export.8", "man/master/8/zpool-get.8", "man/master/8/zpool-history.8", "man/master/8/zpool-import.8", "man/master/8/zpool-initialize.8", "man/master/8/zpool-iostat.8", "man/master/8/zpool-labelclear.8", "man/master/8/zpool-list.8", "man/master/8/zpool-offline.8", "man/master/8/zpool-online.8", "man/master/8/zpool-reguid.8", "man/master/8/zpool-remove.8", "man/master/8/zpool-reopen.8", "man/master/8/zpool-replace.8", "man/master/8/zpool-resilver.8", "man/master/8/zpool-scrub.8", "man/master/8/zpool-set.8", "man/master/8/zpool-split.8", "man/master/8/zpool-status.8", "man/master/8/zpool-sync.8", "man/master/8/zpool-trim.8", "man/master/8/zpool-upgrade.8", "man/master/8/zpool-wait.8", "man/master/8/zpool.8", "man/master/8/zpool_influxdb.8", "man/master/8/zstream.8", "man/master/8/zstreamdump.8", "man/master/index", "man/v0.6/1/cstyle.1", "man/v0.6/1/index", "man/v0.6/1/zhack.1", "man/v0.6/1/zpios.1", "man/v0.6/1/ztest.1", "man/v0.6/5/index", "man/v0.6/5/vdev_id.conf.5", "man/v0.6/5/zfs-events.5", "man/v0.6/5/zfs-module-parameters.5", "man/v0.6/5/zpool-features.5", "man/v0.6/8/fsck.zfs.8", "man/v0.6/8/index", "man/v0.6/8/mount.zfs.8", "man/v0.6/8/vdev_id.8", "man/v0.6/8/zdb.8", "man/v0.6/8/zed.8", "man/v0.6/8/zfs.8", "man/v0.6/8/zinject.8", "man/v0.6/8/zpool.8", "man/v0.6/8/zstreamdump.8", "man/v0.6/index", "man/v0.7/1/cstyle.1", "man/v0.7/1/index", "man/v0.7/1/raidz_test.1", "man/v0.7/1/zhack.1", "man/v0.7/1/zpios.1", "man/v0.7/1/ztest.1", "man/v0.7/5/index", "man/v0.7/5/vdev_id.conf.5", "man/v0.7/5/zfs-events.5", "man/v0.7/5/zfs-module-parameters.5", "man/v0.7/5/zpool-features.5", "man/v0.7/8/fsck.zfs.8", "man/v0.7/8/index", "man/v0.7/8/mount.zfs.8", "man/v0.7/8/vdev_id.8", "man/v0.7/8/zdb.8", "man/v0.7/8/zed.8", "man/v0.7/8/zfs.8", "man/v0.7/8/zgenhostid.8", "man/v0.7/8/zinject.8", "man/v0.7/8/zpool.8", "man/v0.7/8/zstreamdump.8", "man/v0.7/index", "man/v0.8/1/cstyle.1", "man/v0.8/1/index", "man/v0.8/1/raidz_test.1", "man/v0.8/1/zhack.1", "man/v0.8/1/ztest.1", "man/v0.8/1/zvol_wait.1", "man/v0.8/5/index", "man/v0.8/5/spl-module-parameters.5", "man/v0.8/5/vdev_id.conf.5", "man/v0.8/5/zfs-events.5", "man/v0.8/5/zfs-module-parameters.5", "man/v0.8/5/zpool-features.5", "man/v0.8/8/fsck.zfs.8", "man/v0.8/8/index", 
"man/v0.8/8/mount.zfs.8", "man/v0.8/8/vdev_id.8", "man/v0.8/8/zdb.8", "man/v0.8/8/zed.8", "man/v0.8/8/zfs-mount-generator.8", "man/v0.8/8/zfs-program.8", "man/v0.8/8/zfs.8", "man/v0.8/8/zfsprops.8", "man/v0.8/8/zgenhostid.8", "man/v0.8/8/zinject.8", "man/v0.8/8/zpool.8", "man/v0.8/8/zstreamdump.8", "man/v0.8/index", "man/v2.0/1/arcstat.1", "man/v2.0/1/cstyle.1", "man/v2.0/1/index", "man/v2.0/1/raidz_test.1", "man/v2.0/1/zhack.1", "man/v2.0/1/ztest.1", "man/v2.0/1/zvol_wait.1", "man/v2.0/5/index", "man/v2.0/5/spl-module-parameters.5", "man/v2.0/5/vdev_id.conf.5", "man/v2.0/5/zfs-events.5", "man/v2.0/5/zfs-module-parameters.5", "man/v2.0/5/zpool-features.5", "man/v2.0/8/fsck.zfs.8", "man/v2.0/8/index", "man/v2.0/8/mount.zfs.8", "man/v2.0/8/vdev_id.8", "man/v2.0/8/zdb.8", "man/v2.0/8/zed.8", "man/v2.0/8/zfs-allow.8", "man/v2.0/8/zfs-bookmark.8", "man/v2.0/8/zfs-change-key.8", "man/v2.0/8/zfs-clone.8", "man/v2.0/8/zfs-create.8", "man/v2.0/8/zfs-destroy.8", "man/v2.0/8/zfs-diff.8", "man/v2.0/8/zfs-get.8", "man/v2.0/8/zfs-groupspace.8", "man/v2.0/8/zfs-hold.8", "man/v2.0/8/zfs-inherit.8", "man/v2.0/8/zfs-jail.8", "man/v2.0/8/zfs-list.8", "man/v2.0/8/zfs-load-key.8", "man/v2.0/8/zfs-mount-generator.8", "man/v2.0/8/zfs-mount.8", "man/v2.0/8/zfs-program.8", "man/v2.0/8/zfs-project.8", "man/v2.0/8/zfs-projectspace.8", "man/v2.0/8/zfs-promote.8", "man/v2.0/8/zfs-receive.8", "man/v2.0/8/zfs-recv.8", "man/v2.0/8/zfs-redact.8", "man/v2.0/8/zfs-release.8", "man/v2.0/8/zfs-rename.8", "man/v2.0/8/zfs-rollback.8", "man/v2.0/8/zfs-send.8", "man/v2.0/8/zfs-set.8", "man/v2.0/8/zfs-share.8", "man/v2.0/8/zfs-snapshot.8", "man/v2.0/8/zfs-unallow.8", "man/v2.0/8/zfs-unjail.8", "man/v2.0/8/zfs-unload-key.8", "man/v2.0/8/zfs-unmount.8", "man/v2.0/8/zfs-upgrade.8", "man/v2.0/8/zfs-userspace.8", "man/v2.0/8/zfs-wait.8", "man/v2.0/8/zfs.8", "man/v2.0/8/zfs_ids_to_path.8", "man/v2.0/8/zfsconcepts.8", "man/v2.0/8/zfsprops.8", "man/v2.0/8/zgenhostid.8", "man/v2.0/8/zinject.8", "man/v2.0/8/zpool-add.8", "man/v2.0/8/zpool-attach.8", "man/v2.0/8/zpool-checkpoint.8", "man/v2.0/8/zpool-clear.8", "man/v2.0/8/zpool-create.8", "man/v2.0/8/zpool-destroy.8", "man/v2.0/8/zpool-detach.8", "man/v2.0/8/zpool-events.8", "man/v2.0/8/zpool-export.8", "man/v2.0/8/zpool-get.8", "man/v2.0/8/zpool-history.8", "man/v2.0/8/zpool-import.8", "man/v2.0/8/zpool-initialize.8", "man/v2.0/8/zpool-iostat.8", "man/v2.0/8/zpool-labelclear.8", "man/v2.0/8/zpool-list.8", "man/v2.0/8/zpool-offline.8", "man/v2.0/8/zpool-online.8", "man/v2.0/8/zpool-reguid.8", "man/v2.0/8/zpool-remove.8", "man/v2.0/8/zpool-reopen.8", "man/v2.0/8/zpool-replace.8", "man/v2.0/8/zpool-resilver.8", "man/v2.0/8/zpool-scrub.8", "man/v2.0/8/zpool-set.8", "man/v2.0/8/zpool-split.8", "man/v2.0/8/zpool-status.8", "man/v2.0/8/zpool-sync.8", "man/v2.0/8/zpool-trim.8", "man/v2.0/8/zpool-upgrade.8", "man/v2.0/8/zpool-wait.8", "man/v2.0/8/zpool.8", "man/v2.0/8/zpoolconcepts.8", "man/v2.0/8/zpoolprops.8", "man/v2.0/8/zstream.8", "man/v2.0/8/zstreamdump.8", "man/v2.0/index", "man/v2.1/1/arcstat.1", "man/v2.1/1/cstyle.1", "man/v2.1/1/index", "man/v2.1/1/raidz_test.1", "man/v2.1/1/zhack.1", "man/v2.1/1/ztest.1", "man/v2.1/1/zvol_wait.1", "man/v2.1/4/index", "man/v2.1/4/spl.4", "man/v2.1/4/zfs.4", "man/v2.1/5/index", "man/v2.1/5/vdev_id.conf.5", "man/v2.1/7/dracut.zfs.7", "man/v2.1/7/index", "man/v2.1/7/zfsconcepts.7", "man/v2.1/7/zfsprops.7", "man/v2.1/7/zpool-features.7", "man/v2.1/7/zpoolconcepts.7", "man/v2.1/7/zpoolprops.7", "man/v2.1/8/fsck.zfs.8", "man/v2.1/8/index", 
"man/v2.1/8/mount.zfs.8", "man/v2.1/8/vdev_id.8", "man/v2.1/8/zdb.8", "man/v2.1/8/zed.8", "man/v2.1/8/zfs-allow.8", "man/v2.1/8/zfs-bookmark.8", "man/v2.1/8/zfs-change-key.8", "man/v2.1/8/zfs-clone.8", "man/v2.1/8/zfs-create.8", "man/v2.1/8/zfs-destroy.8", "man/v2.1/8/zfs-diff.8", "man/v2.1/8/zfs-get.8", "man/v2.1/8/zfs-groupspace.8", "man/v2.1/8/zfs-hold.8", "man/v2.1/8/zfs-inherit.8", "man/v2.1/8/zfs-jail.8", "man/v2.1/8/zfs-list.8", "man/v2.1/8/zfs-load-key.8", "man/v2.1/8/zfs-mount-generator.8", "man/v2.1/8/zfs-mount.8", "man/v2.1/8/zfs-program.8", "man/v2.1/8/zfs-project.8", "man/v2.1/8/zfs-projectspace.8", "man/v2.1/8/zfs-promote.8", "man/v2.1/8/zfs-receive.8", "man/v2.1/8/zfs-recv.8", "man/v2.1/8/zfs-redact.8", "man/v2.1/8/zfs-release.8", "man/v2.1/8/zfs-rename.8", "man/v2.1/8/zfs-rollback.8", "man/v2.1/8/zfs-send.8", "man/v2.1/8/zfs-set.8", "man/v2.1/8/zfs-share.8", "man/v2.1/8/zfs-snapshot.8", "man/v2.1/8/zfs-unallow.8", "man/v2.1/8/zfs-unjail.8", "man/v2.1/8/zfs-unload-key.8", "man/v2.1/8/zfs-unmount.8", "man/v2.1/8/zfs-upgrade.8", "man/v2.1/8/zfs-userspace.8", "man/v2.1/8/zfs-wait.8", "man/v2.1/8/zfs.8", "man/v2.1/8/zfs_ids_to_path.8", "man/v2.1/8/zfs_prepare_disk.8", "man/v2.1/8/zgenhostid.8", "man/v2.1/8/zinject.8", "man/v2.1/8/zpool-add.8", "man/v2.1/8/zpool-attach.8", "man/v2.1/8/zpool-checkpoint.8", "man/v2.1/8/zpool-clear.8", "man/v2.1/8/zpool-create.8", "man/v2.1/8/zpool-destroy.8", "man/v2.1/8/zpool-detach.8", "man/v2.1/8/zpool-events.8", "man/v2.1/8/zpool-export.8", "man/v2.1/8/zpool-get.8", "man/v2.1/8/zpool-history.8", "man/v2.1/8/zpool-import.8", "man/v2.1/8/zpool-initialize.8", "man/v2.1/8/zpool-iostat.8", "man/v2.1/8/zpool-labelclear.8", "man/v2.1/8/zpool-list.8", "man/v2.1/8/zpool-offline.8", "man/v2.1/8/zpool-online.8", "man/v2.1/8/zpool-reguid.8", "man/v2.1/8/zpool-remove.8", "man/v2.1/8/zpool-reopen.8", "man/v2.1/8/zpool-replace.8", "man/v2.1/8/zpool-resilver.8", "man/v2.1/8/zpool-scrub.8", "man/v2.1/8/zpool-set.8", "man/v2.1/8/zpool-split.8", "man/v2.1/8/zpool-status.8", "man/v2.1/8/zpool-sync.8", "man/v2.1/8/zpool-trim.8", "man/v2.1/8/zpool-upgrade.8", "man/v2.1/8/zpool-wait.8", "man/v2.1/8/zpool.8", "man/v2.1/8/zpool_influxdb.8", "man/v2.1/8/zstream.8", "man/v2.1/8/zstreamdump.8", "man/v2.1/index", "man/v2.2/1/arcstat.1", "man/v2.2/1/cstyle.1", "man/v2.2/1/index", "man/v2.2/1/raidz_test.1", "man/v2.2/1/test-runner.1", "man/v2.2/1/zhack.1", "man/v2.2/1/ztest.1", "man/v2.2/1/zvol_wait.1", "man/v2.2/4/index", "man/v2.2/4/spl.4", "man/v2.2/4/zfs.4", "man/v2.2/5/index", "man/v2.2/5/vdev_id.conf.5", "man/v2.2/7/dracut.zfs.7", "man/v2.2/7/index", "man/v2.2/7/vdevprops.7", "man/v2.2/7/zfsconcepts.7", "man/v2.2/7/zfsprops.7", "man/v2.2/7/zpool-features.7", "man/v2.2/7/zpoolconcepts.7", "man/v2.2/7/zpoolprops.7", "man/v2.2/8/fsck.zfs.8", "man/v2.2/8/index", "man/v2.2/8/mount.zfs.8", "man/v2.2/8/vdev_id.8", "man/v2.2/8/zdb.8", "man/v2.2/8/zed.8", "man/v2.2/8/zfs-allow.8", "man/v2.2/8/zfs-bookmark.8", "man/v2.2/8/zfs-change-key.8", "man/v2.2/8/zfs-clone.8", "man/v2.2/8/zfs-create.8", "man/v2.2/8/zfs-destroy.8", "man/v2.2/8/zfs-diff.8", "man/v2.2/8/zfs-get.8", "man/v2.2/8/zfs-groupspace.8", "man/v2.2/8/zfs-hold.8", "man/v2.2/8/zfs-inherit.8", "man/v2.2/8/zfs-jail.8", "man/v2.2/8/zfs-list.8", "man/v2.2/8/zfs-load-key.8", "man/v2.2/8/zfs-mount-generator.8", "man/v2.2/8/zfs-mount.8", "man/v2.2/8/zfs-program.8", "man/v2.2/8/zfs-project.8", "man/v2.2/8/zfs-projectspace.8", "man/v2.2/8/zfs-promote.8", "man/v2.2/8/zfs-receive.8", "man/v2.2/8/zfs-recv.8", 
"man/v2.2/8/zfs-redact.8", "man/v2.2/8/zfs-release.8", "man/v2.2/8/zfs-rename.8", "man/v2.2/8/zfs-rollback.8", "man/v2.2/8/zfs-send.8", "man/v2.2/8/zfs-set.8", "man/v2.2/8/zfs-share.8", "man/v2.2/8/zfs-snapshot.8", "man/v2.2/8/zfs-unallow.8", "man/v2.2/8/zfs-unjail.8", "man/v2.2/8/zfs-unload-key.8", "man/v2.2/8/zfs-unmount.8", "man/v2.2/8/zfs-unzone.8", "man/v2.2/8/zfs-upgrade.8", "man/v2.2/8/zfs-userspace.8", "man/v2.2/8/zfs-wait.8", "man/v2.2/8/zfs-zone.8", "man/v2.2/8/zfs.8", "man/v2.2/8/zfs_ids_to_path.8", "man/v2.2/8/zfs_prepare_disk.8", "man/v2.2/8/zgenhostid.8", "man/v2.2/8/zinject.8", "man/v2.2/8/zpool-add.8", "man/v2.2/8/zpool-attach.8", "man/v2.2/8/zpool-checkpoint.8", "man/v2.2/8/zpool-clear.8", "man/v2.2/8/zpool-create.8", "man/v2.2/8/zpool-destroy.8", "man/v2.2/8/zpool-detach.8", "man/v2.2/8/zpool-events.8", "man/v2.2/8/zpool-export.8", "man/v2.2/8/zpool-get.8", "man/v2.2/8/zpool-history.8", "man/v2.2/8/zpool-import.8", "man/v2.2/8/zpool-initialize.8", "man/v2.2/8/zpool-iostat.8", "man/v2.2/8/zpool-labelclear.8", "man/v2.2/8/zpool-list.8", "man/v2.2/8/zpool-offline.8", "man/v2.2/8/zpool-online.8", "man/v2.2/8/zpool-reguid.8", "man/v2.2/8/zpool-remove.8", "man/v2.2/8/zpool-reopen.8", "man/v2.2/8/zpool-replace.8", "man/v2.2/8/zpool-resilver.8", "man/v2.2/8/zpool-scrub.8", "man/v2.2/8/zpool-set.8", "man/v2.2/8/zpool-split.8", "man/v2.2/8/zpool-status.8", "man/v2.2/8/zpool-sync.8", "man/v2.2/8/zpool-trim.8", "man/v2.2/8/zpool-upgrade.8", "man/v2.2/8/zpool-wait.8", "man/v2.2/8/zpool.8", "man/v2.2/8/zpool_influxdb.8", "man/v2.2/8/zstream.8", "man/v2.2/8/zstreamdump.8", "man/v2.2/index", "msg/ZFS-8000-14/index", "msg/ZFS-8000-2Q/index", "msg/ZFS-8000-3C/index", "msg/ZFS-8000-4J/index", "msg/ZFS-8000-5E/index", "msg/ZFS-8000-6X/index", "msg/ZFS-8000-72/index", "msg/ZFS-8000-8A/index", "msg/ZFS-8000-9P/index", "msg/ZFS-8000-A5/index", "msg/ZFS-8000-ER/index", "msg/ZFS-8000-EY/index", "msg/ZFS-8000-HC/index", "msg/ZFS-8000-JQ/index", "msg/ZFS-8000-K4/index", "msg/index"], "filenames": ["404.rst", "Basic Concepts/Checksums.rst", "Basic Concepts/Feature Flags.rst", "Basic Concepts/RAIDZ.rst", "Basic Concepts/Troubleshooting.rst", "Basic Concepts/dRAID Howto.rst", "Basic Concepts/index.rst", "Developer Resources/Buildbot Options.rst", "Developer Resources/Building ZFS.rst", "Developer Resources/Custom Packages.rst", "Developer Resources/Git and GitHub for beginners.rst", "Developer Resources/OpenZFS Exceptions.rst", "Developer Resources/OpenZFS Patches.rst", "Developer Resources/index.rst", "Getting Started/Alpine Linux/Root on ZFS.rst", "Getting Started/Alpine Linux/index.rst", "Getting Started/Arch Linux/Root on ZFS.rst", "Getting Started/Arch Linux/index.rst", "Getting Started/Debian/Debian Bookworm Root on ZFS.rst", "Getting Started/Debian/Debian Bullseye Root on ZFS.rst", "Getting Started/Debian/Debian Buster Root on ZFS.rst", "Getting Started/Debian/Debian GNU Linux initrd documentation.rst", "Getting Started/Debian/Debian Stretch Root on ZFS.rst", "Getting Started/Debian/index.rst", "Getting Started/Fedora.rst", "Getting Started/Fedora/Root on ZFS.rst", "Getting Started/Fedora/index.rst", "Getting Started/FreeBSD.rst", "Getting Started/NixOS/Root on ZFS.rst", "Getting Started/NixOS/index.rst", "Getting Started/RHEL and CentOS.rst", "Getting Started/RHEL-based distro/Root on ZFS.rst", "Getting Started/RHEL-based distro/index.rst", "Getting Started/Slackware/Root on ZFS.rst", "Getting Started/Slackware/index.rst", "Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst", "Getting 
Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst", "Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS for Raspberry Pi.rst", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS.rst", "Getting Started/Ubuntu/Ubuntu 22.04 Root on ZFS for Raspberry Pi.rst", "Getting Started/Ubuntu/index.rst", "Getting Started/index.rst", "Getting Started/openSUSE/index.rst", "Getting Started/openSUSE/openSUSE Leap Root on ZFS.rst", "Getting Started/openSUSE/openSUSE Tumbleweed Root on ZFS.rst", "License.rst", "Performance and Tuning/Async Write.rst", "Performance and Tuning/Hardware.rst", "Performance and Tuning/Module Parameters.rst", "Performance and Tuning/Workload Tuning.rst", "Performance and Tuning/ZFS Transaction Delay.rst", "Performance and Tuning/ZIO Scheduler.rst", "Performance and Tuning/index.rst", "Project and Community/Admin Documentation.rst", "Project and Community/FAQ.rst", "Project and Community/FAQ hole birth.rst", "Project and Community/Mailing Lists.rst", "Project and Community/Signing Keys.rst", "Project and Community/index.rst", "_TableOfContents.rst", "index.rst", "man/index.rst", "man/master/1/arcstat.1.rst", "man/master/1/cstyle.1.rst", "man/master/1/index.rst", "man/master/1/raidz_test.1.rst", "man/master/1/test-runner.1.rst", "man/master/1/zhack.1.rst", "man/master/1/ztest.1.rst", "man/master/1/zvol_wait.1.rst", "man/master/4/index.rst", "man/master/4/spl.4.rst", "man/master/4/zfs.4.rst", "man/master/5/index.rst", "man/master/5/vdev_id.conf.5.rst", "man/master/7/dracut.zfs.7.rst", "man/master/7/index.rst", "man/master/7/vdevprops.7.rst", "man/master/7/zfsconcepts.7.rst", "man/master/7/zfsprops.7.rst", "man/master/7/zpool-features.7.rst", "man/master/7/zpoolconcepts.7.rst", "man/master/7/zpoolprops.7.rst", "man/master/8/fsck.zfs.8.rst", "man/master/8/index.rst", "man/master/8/mount.zfs.8.rst", "man/master/8/vdev_id.8.rst", "man/master/8/zdb.8.rst", "man/master/8/zed.8.rst", "man/master/8/zfs-allow.8.rst", "man/master/8/zfs-bookmark.8.rst", "man/master/8/zfs-change-key.8.rst", "man/master/8/zfs-clone.8.rst", "man/master/8/zfs-create.8.rst", "man/master/8/zfs-destroy.8.rst", "man/master/8/zfs-diff.8.rst", "man/master/8/zfs-get.8.rst", "man/master/8/zfs-groupspace.8.rst", "man/master/8/zfs-hold.8.rst", "man/master/8/zfs-inherit.8.rst", "man/master/8/zfs-jail.8.rst", "man/master/8/zfs-list.8.rst", "man/master/8/zfs-load-key.8.rst", "man/master/8/zfs-mount-generator.8.rst", "man/master/8/zfs-mount.8.rst", "man/master/8/zfs-program.8.rst", "man/master/8/zfs-project.8.rst", "man/master/8/zfs-projectspace.8.rst", "man/master/8/zfs-promote.8.rst", "man/master/8/zfs-receive.8.rst", "man/master/8/zfs-recv.8.rst", "man/master/8/zfs-redact.8.rst", "man/master/8/zfs-release.8.rst", "man/master/8/zfs-rename.8.rst", "man/master/8/zfs-rollback.8.rst", "man/master/8/zfs-send.8.rst", "man/master/8/zfs-set.8.rst", "man/master/8/zfs-share.8.rst", "man/master/8/zfs-snapshot.8.rst", "man/master/8/zfs-unallow.8.rst", "man/master/8/zfs-unjail.8.rst", "man/master/8/zfs-unload-key.8.rst", "man/master/8/zfs-unmount.8.rst", "man/master/8/zfs-unzone.8.rst", "man/master/8/zfs-upgrade.8.rst", "man/master/8/zfs-userspace.8.rst", "man/master/8/zfs-wait.8.rst", "man/master/8/zfs-zone.8.rst", "man/master/8/zfs.8.rst", "man/master/8/zfs_ids_to_path.8.rst", "man/master/8/zfs_prepare_disk.8.rst", "man/master/8/zgenhostid.8.rst", "man/master/8/zinject.8.rst", "man/master/8/zpool-add.8.rst", "man/master/8/zpool-attach.8.rst", "man/master/8/zpool-checkpoint.8.rst", "man/master/8/zpool-clear.8.rst", "man/master/8/zpool-create.8.rst", 
"man/master/8/zpool-destroy.8.rst", "man/master/8/zpool-detach.8.rst", "man/master/8/zpool-events.8.rst", "man/master/8/zpool-export.8.rst", "man/master/8/zpool-get.8.rst", "man/master/8/zpool-history.8.rst", "man/master/8/zpool-import.8.rst", "man/master/8/zpool-initialize.8.rst", "man/master/8/zpool-iostat.8.rst", "man/master/8/zpool-labelclear.8.rst", "man/master/8/zpool-list.8.rst", "man/master/8/zpool-offline.8.rst", "man/master/8/zpool-online.8.rst", "man/master/8/zpool-reguid.8.rst", "man/master/8/zpool-remove.8.rst", "man/master/8/zpool-reopen.8.rst", "man/master/8/zpool-replace.8.rst", "man/master/8/zpool-resilver.8.rst", "man/master/8/zpool-scrub.8.rst", "man/master/8/zpool-set.8.rst", "man/master/8/zpool-split.8.rst", "man/master/8/zpool-status.8.rst", "man/master/8/zpool-sync.8.rst", "man/master/8/zpool-trim.8.rst", "man/master/8/zpool-upgrade.8.rst", "man/master/8/zpool-wait.8.rst", "man/master/8/zpool.8.rst", "man/master/8/zpool_influxdb.8.rst", "man/master/8/zstream.8.rst", "man/master/8/zstreamdump.8.rst", "man/master/index.rst", "man/v0.6/1/cstyle.1.rst", "man/v0.6/1/index.rst", "man/v0.6/1/zhack.1.rst", "man/v0.6/1/zpios.1.rst", "man/v0.6/1/ztest.1.rst", "man/v0.6/5/index.rst", "man/v0.6/5/vdev_id.conf.5.rst", "man/v0.6/5/zfs-events.5.rst", "man/v0.6/5/zfs-module-parameters.5.rst", "man/v0.6/5/zpool-features.5.rst", "man/v0.6/8/fsck.zfs.8.rst", "man/v0.6/8/index.rst", "man/v0.6/8/mount.zfs.8.rst", "man/v0.6/8/vdev_id.8.rst", "man/v0.6/8/zdb.8.rst", "man/v0.6/8/zed.8.rst", "man/v0.6/8/zfs.8.rst", "man/v0.6/8/zinject.8.rst", "man/v0.6/8/zpool.8.rst", "man/v0.6/8/zstreamdump.8.rst", "man/v0.6/index.rst", "man/v0.7/1/cstyle.1.rst", "man/v0.7/1/index.rst", "man/v0.7/1/raidz_test.1.rst", "man/v0.7/1/zhack.1.rst", "man/v0.7/1/zpios.1.rst", "man/v0.7/1/ztest.1.rst", "man/v0.7/5/index.rst", "man/v0.7/5/vdev_id.conf.5.rst", "man/v0.7/5/zfs-events.5.rst", "man/v0.7/5/zfs-module-parameters.5.rst", "man/v0.7/5/zpool-features.5.rst", "man/v0.7/8/fsck.zfs.8.rst", "man/v0.7/8/index.rst", "man/v0.7/8/mount.zfs.8.rst", "man/v0.7/8/vdev_id.8.rst", "man/v0.7/8/zdb.8.rst", "man/v0.7/8/zed.8.rst", "man/v0.7/8/zfs.8.rst", "man/v0.7/8/zgenhostid.8.rst", "man/v0.7/8/zinject.8.rst", "man/v0.7/8/zpool.8.rst", "man/v0.7/8/zstreamdump.8.rst", "man/v0.7/index.rst", "man/v0.8/1/cstyle.1.rst", "man/v0.8/1/index.rst", "man/v0.8/1/raidz_test.1.rst", "man/v0.8/1/zhack.1.rst", "man/v0.8/1/ztest.1.rst", "man/v0.8/1/zvol_wait.1.rst", "man/v0.8/5/index.rst", "man/v0.8/5/spl-module-parameters.5.rst", "man/v0.8/5/vdev_id.conf.5.rst", "man/v0.8/5/zfs-events.5.rst", "man/v0.8/5/zfs-module-parameters.5.rst", "man/v0.8/5/zpool-features.5.rst", "man/v0.8/8/fsck.zfs.8.rst", "man/v0.8/8/index.rst", "man/v0.8/8/mount.zfs.8.rst", "man/v0.8/8/vdev_id.8.rst", "man/v0.8/8/zdb.8.rst", "man/v0.8/8/zed.8.rst", "man/v0.8/8/zfs-mount-generator.8.rst", "man/v0.8/8/zfs-program.8.rst", "man/v0.8/8/zfs.8.rst", "man/v0.8/8/zfsprops.8.rst", "man/v0.8/8/zgenhostid.8.rst", "man/v0.8/8/zinject.8.rst", "man/v0.8/8/zpool.8.rst", "man/v0.8/8/zstreamdump.8.rst", "man/v0.8/index.rst", "man/v2.0/1/arcstat.1.rst", "man/v2.0/1/cstyle.1.rst", "man/v2.0/1/index.rst", "man/v2.0/1/raidz_test.1.rst", "man/v2.0/1/zhack.1.rst", "man/v2.0/1/ztest.1.rst", "man/v2.0/1/zvol_wait.1.rst", "man/v2.0/5/index.rst", "man/v2.0/5/spl-module-parameters.5.rst", "man/v2.0/5/vdev_id.conf.5.rst", "man/v2.0/5/zfs-events.5.rst", "man/v2.0/5/zfs-module-parameters.5.rst", "man/v2.0/5/zpool-features.5.rst", "man/v2.0/8/fsck.zfs.8.rst", "man/v2.0/8/index.rst", 
"man/v2.0/8/mount.zfs.8.rst", "man/v2.0/8/vdev_id.8.rst", "man/v2.0/8/zdb.8.rst", "man/v2.0/8/zed.8.rst", "man/v2.0/8/zfs-allow.8.rst", "man/v2.0/8/zfs-bookmark.8.rst", "man/v2.0/8/zfs-change-key.8.rst", "man/v2.0/8/zfs-clone.8.rst", "man/v2.0/8/zfs-create.8.rst", "man/v2.0/8/zfs-destroy.8.rst", "man/v2.0/8/zfs-diff.8.rst", "man/v2.0/8/zfs-get.8.rst", "man/v2.0/8/zfs-groupspace.8.rst", "man/v2.0/8/zfs-hold.8.rst", "man/v2.0/8/zfs-inherit.8.rst", "man/v2.0/8/zfs-jail.8.rst", "man/v2.0/8/zfs-list.8.rst", "man/v2.0/8/zfs-load-key.8.rst", "man/v2.0/8/zfs-mount-generator.8.rst", "man/v2.0/8/zfs-mount.8.rst", "man/v2.0/8/zfs-program.8.rst", "man/v2.0/8/zfs-project.8.rst", "man/v2.0/8/zfs-projectspace.8.rst", "man/v2.0/8/zfs-promote.8.rst", "man/v2.0/8/zfs-receive.8.rst", "man/v2.0/8/zfs-recv.8.rst", "man/v2.0/8/zfs-redact.8.rst", "man/v2.0/8/zfs-release.8.rst", "man/v2.0/8/zfs-rename.8.rst", "man/v2.0/8/zfs-rollback.8.rst", "man/v2.0/8/zfs-send.8.rst", "man/v2.0/8/zfs-set.8.rst", "man/v2.0/8/zfs-share.8.rst", "man/v2.0/8/zfs-snapshot.8.rst", "man/v2.0/8/zfs-unallow.8.rst", "man/v2.0/8/zfs-unjail.8.rst", "man/v2.0/8/zfs-unload-key.8.rst", "man/v2.0/8/zfs-unmount.8.rst", "man/v2.0/8/zfs-upgrade.8.rst", "man/v2.0/8/zfs-userspace.8.rst", "man/v2.0/8/zfs-wait.8.rst", "man/v2.0/8/zfs.8.rst", "man/v2.0/8/zfs_ids_to_path.8.rst", "man/v2.0/8/zfsconcepts.8.rst", "man/v2.0/8/zfsprops.8.rst", "man/v2.0/8/zgenhostid.8.rst", "man/v2.0/8/zinject.8.rst", "man/v2.0/8/zpool-add.8.rst", "man/v2.0/8/zpool-attach.8.rst", "man/v2.0/8/zpool-checkpoint.8.rst", "man/v2.0/8/zpool-clear.8.rst", "man/v2.0/8/zpool-create.8.rst", "man/v2.0/8/zpool-destroy.8.rst", "man/v2.0/8/zpool-detach.8.rst", "man/v2.0/8/zpool-events.8.rst", "man/v2.0/8/zpool-export.8.rst", "man/v2.0/8/zpool-get.8.rst", "man/v2.0/8/zpool-history.8.rst", "man/v2.0/8/zpool-import.8.rst", "man/v2.0/8/zpool-initialize.8.rst", "man/v2.0/8/zpool-iostat.8.rst", "man/v2.0/8/zpool-labelclear.8.rst", "man/v2.0/8/zpool-list.8.rst", "man/v2.0/8/zpool-offline.8.rst", "man/v2.0/8/zpool-online.8.rst", "man/v2.0/8/zpool-reguid.8.rst", "man/v2.0/8/zpool-remove.8.rst", "man/v2.0/8/zpool-reopen.8.rst", "man/v2.0/8/zpool-replace.8.rst", "man/v2.0/8/zpool-resilver.8.rst", "man/v2.0/8/zpool-scrub.8.rst", "man/v2.0/8/zpool-set.8.rst", "man/v2.0/8/zpool-split.8.rst", "man/v2.0/8/zpool-status.8.rst", "man/v2.0/8/zpool-sync.8.rst", "man/v2.0/8/zpool-trim.8.rst", "man/v2.0/8/zpool-upgrade.8.rst", "man/v2.0/8/zpool-wait.8.rst", "man/v2.0/8/zpool.8.rst", "man/v2.0/8/zpoolconcepts.8.rst", "man/v2.0/8/zpoolprops.8.rst", "man/v2.0/8/zstream.8.rst", "man/v2.0/8/zstreamdump.8.rst", "man/v2.0/index.rst", "man/v2.1/1/arcstat.1.rst", "man/v2.1/1/cstyle.1.rst", "man/v2.1/1/index.rst", "man/v2.1/1/raidz_test.1.rst", "man/v2.1/1/zhack.1.rst", "man/v2.1/1/ztest.1.rst", "man/v2.1/1/zvol_wait.1.rst", "man/v2.1/4/index.rst", "man/v2.1/4/spl.4.rst", "man/v2.1/4/zfs.4.rst", "man/v2.1/5/index.rst", "man/v2.1/5/vdev_id.conf.5.rst", "man/v2.1/7/dracut.zfs.7.rst", "man/v2.1/7/index.rst", "man/v2.1/7/zfsconcepts.7.rst", "man/v2.1/7/zfsprops.7.rst", "man/v2.1/7/zpool-features.7.rst", "man/v2.1/7/zpoolconcepts.7.rst", "man/v2.1/7/zpoolprops.7.rst", "man/v2.1/8/fsck.zfs.8.rst", "man/v2.1/8/index.rst", "man/v2.1/8/mount.zfs.8.rst", "man/v2.1/8/vdev_id.8.rst", "man/v2.1/8/zdb.8.rst", "man/v2.1/8/zed.8.rst", "man/v2.1/8/zfs-allow.8.rst", "man/v2.1/8/zfs-bookmark.8.rst", "man/v2.1/8/zfs-change-key.8.rst", "man/v2.1/8/zfs-clone.8.rst", "man/v2.1/8/zfs-create.8.rst", "man/v2.1/8/zfs-destroy.8.rst", 
"man/v2.1/8/zfs-diff.8.rst", "man/v2.1/8/zfs-get.8.rst", "man/v2.1/8/zfs-groupspace.8.rst", "man/v2.1/8/zfs-hold.8.rst", "man/v2.1/8/zfs-inherit.8.rst", "man/v2.1/8/zfs-jail.8.rst", "man/v2.1/8/zfs-list.8.rst", "man/v2.1/8/zfs-load-key.8.rst", "man/v2.1/8/zfs-mount-generator.8.rst", "man/v2.1/8/zfs-mount.8.rst", "man/v2.1/8/zfs-program.8.rst", "man/v2.1/8/zfs-project.8.rst", "man/v2.1/8/zfs-projectspace.8.rst", "man/v2.1/8/zfs-promote.8.rst", "man/v2.1/8/zfs-receive.8.rst", "man/v2.1/8/zfs-recv.8.rst", "man/v2.1/8/zfs-redact.8.rst", "man/v2.1/8/zfs-release.8.rst", "man/v2.1/8/zfs-rename.8.rst", "man/v2.1/8/zfs-rollback.8.rst", "man/v2.1/8/zfs-send.8.rst", "man/v2.1/8/zfs-set.8.rst", "man/v2.1/8/zfs-share.8.rst", "man/v2.1/8/zfs-snapshot.8.rst", "man/v2.1/8/zfs-unallow.8.rst", "man/v2.1/8/zfs-unjail.8.rst", "man/v2.1/8/zfs-unload-key.8.rst", "man/v2.1/8/zfs-unmount.8.rst", "man/v2.1/8/zfs-upgrade.8.rst", "man/v2.1/8/zfs-userspace.8.rst", "man/v2.1/8/zfs-wait.8.rst", "man/v2.1/8/zfs.8.rst", "man/v2.1/8/zfs_ids_to_path.8.rst", "man/v2.1/8/zfs_prepare_disk.8.rst", "man/v2.1/8/zgenhostid.8.rst", "man/v2.1/8/zinject.8.rst", "man/v2.1/8/zpool-add.8.rst", "man/v2.1/8/zpool-attach.8.rst", "man/v2.1/8/zpool-checkpoint.8.rst", "man/v2.1/8/zpool-clear.8.rst", "man/v2.1/8/zpool-create.8.rst", "man/v2.1/8/zpool-destroy.8.rst", "man/v2.1/8/zpool-detach.8.rst", "man/v2.1/8/zpool-events.8.rst", "man/v2.1/8/zpool-export.8.rst", "man/v2.1/8/zpool-get.8.rst", "man/v2.1/8/zpool-history.8.rst", "man/v2.1/8/zpool-import.8.rst", "man/v2.1/8/zpool-initialize.8.rst", "man/v2.1/8/zpool-iostat.8.rst", "man/v2.1/8/zpool-labelclear.8.rst", "man/v2.1/8/zpool-list.8.rst", "man/v2.1/8/zpool-offline.8.rst", "man/v2.1/8/zpool-online.8.rst", "man/v2.1/8/zpool-reguid.8.rst", "man/v2.1/8/zpool-remove.8.rst", "man/v2.1/8/zpool-reopen.8.rst", "man/v2.1/8/zpool-replace.8.rst", "man/v2.1/8/zpool-resilver.8.rst", "man/v2.1/8/zpool-scrub.8.rst", "man/v2.1/8/zpool-set.8.rst", "man/v2.1/8/zpool-split.8.rst", "man/v2.1/8/zpool-status.8.rst", "man/v2.1/8/zpool-sync.8.rst", "man/v2.1/8/zpool-trim.8.rst", "man/v2.1/8/zpool-upgrade.8.rst", "man/v2.1/8/zpool-wait.8.rst", "man/v2.1/8/zpool.8.rst", "man/v2.1/8/zpool_influxdb.8.rst", "man/v2.1/8/zstream.8.rst", "man/v2.1/8/zstreamdump.8.rst", "man/v2.1/index.rst", "man/v2.2/1/arcstat.1.rst", "man/v2.2/1/cstyle.1.rst", "man/v2.2/1/index.rst", "man/v2.2/1/raidz_test.1.rst", "man/v2.2/1/test-runner.1.rst", "man/v2.2/1/zhack.1.rst", "man/v2.2/1/ztest.1.rst", "man/v2.2/1/zvol_wait.1.rst", "man/v2.2/4/index.rst", "man/v2.2/4/spl.4.rst", "man/v2.2/4/zfs.4.rst", "man/v2.2/5/index.rst", "man/v2.2/5/vdev_id.conf.5.rst", "man/v2.2/7/dracut.zfs.7.rst", "man/v2.2/7/index.rst", "man/v2.2/7/vdevprops.7.rst", "man/v2.2/7/zfsconcepts.7.rst", "man/v2.2/7/zfsprops.7.rst", "man/v2.2/7/zpool-features.7.rst", "man/v2.2/7/zpoolconcepts.7.rst", "man/v2.2/7/zpoolprops.7.rst", "man/v2.2/8/fsck.zfs.8.rst", "man/v2.2/8/index.rst", "man/v2.2/8/mount.zfs.8.rst", "man/v2.2/8/vdev_id.8.rst", "man/v2.2/8/zdb.8.rst", "man/v2.2/8/zed.8.rst", "man/v2.2/8/zfs-allow.8.rst", "man/v2.2/8/zfs-bookmark.8.rst", "man/v2.2/8/zfs-change-key.8.rst", "man/v2.2/8/zfs-clone.8.rst", "man/v2.2/8/zfs-create.8.rst", "man/v2.2/8/zfs-destroy.8.rst", "man/v2.2/8/zfs-diff.8.rst", "man/v2.2/8/zfs-get.8.rst", "man/v2.2/8/zfs-groupspace.8.rst", "man/v2.2/8/zfs-hold.8.rst", "man/v2.2/8/zfs-inherit.8.rst", "man/v2.2/8/zfs-jail.8.rst", "man/v2.2/8/zfs-list.8.rst", "man/v2.2/8/zfs-load-key.8.rst", "man/v2.2/8/zfs-mount-generator.8.rst", 
"man/v2.2/8/zfs-mount.8.rst", "man/v2.2/8/zfs-program.8.rst", "man/v2.2/8/zfs-project.8.rst", "man/v2.2/8/zfs-projectspace.8.rst", "man/v2.2/8/zfs-promote.8.rst", "man/v2.2/8/zfs-receive.8.rst", "man/v2.2/8/zfs-recv.8.rst", "man/v2.2/8/zfs-redact.8.rst", "man/v2.2/8/zfs-release.8.rst", "man/v2.2/8/zfs-rename.8.rst", "man/v2.2/8/zfs-rollback.8.rst", "man/v2.2/8/zfs-send.8.rst", "man/v2.2/8/zfs-set.8.rst", "man/v2.2/8/zfs-share.8.rst", "man/v2.2/8/zfs-snapshot.8.rst", "man/v2.2/8/zfs-unallow.8.rst", "man/v2.2/8/zfs-unjail.8.rst", "man/v2.2/8/zfs-unload-key.8.rst", "man/v2.2/8/zfs-unmount.8.rst", "man/v2.2/8/zfs-unzone.8.rst", "man/v2.2/8/zfs-upgrade.8.rst", "man/v2.2/8/zfs-userspace.8.rst", "man/v2.2/8/zfs-wait.8.rst", "man/v2.2/8/zfs-zone.8.rst", "man/v2.2/8/zfs.8.rst", "man/v2.2/8/zfs_ids_to_path.8.rst", "man/v2.2/8/zfs_prepare_disk.8.rst", "man/v2.2/8/zgenhostid.8.rst", "man/v2.2/8/zinject.8.rst", "man/v2.2/8/zpool-add.8.rst", "man/v2.2/8/zpool-attach.8.rst", "man/v2.2/8/zpool-checkpoint.8.rst", "man/v2.2/8/zpool-clear.8.rst", "man/v2.2/8/zpool-create.8.rst", "man/v2.2/8/zpool-destroy.8.rst", "man/v2.2/8/zpool-detach.8.rst", "man/v2.2/8/zpool-events.8.rst", "man/v2.2/8/zpool-export.8.rst", "man/v2.2/8/zpool-get.8.rst", "man/v2.2/8/zpool-history.8.rst", "man/v2.2/8/zpool-import.8.rst", "man/v2.2/8/zpool-initialize.8.rst", "man/v2.2/8/zpool-iostat.8.rst", "man/v2.2/8/zpool-labelclear.8.rst", "man/v2.2/8/zpool-list.8.rst", "man/v2.2/8/zpool-offline.8.rst", "man/v2.2/8/zpool-online.8.rst", "man/v2.2/8/zpool-reguid.8.rst", "man/v2.2/8/zpool-remove.8.rst", "man/v2.2/8/zpool-reopen.8.rst", "man/v2.2/8/zpool-replace.8.rst", "man/v2.2/8/zpool-resilver.8.rst", "man/v2.2/8/zpool-scrub.8.rst", "man/v2.2/8/zpool-set.8.rst", "man/v2.2/8/zpool-split.8.rst", "man/v2.2/8/zpool-status.8.rst", "man/v2.2/8/zpool-sync.8.rst", "man/v2.2/8/zpool-trim.8.rst", "man/v2.2/8/zpool-upgrade.8.rst", "man/v2.2/8/zpool-wait.8.rst", "man/v2.2/8/zpool.8.rst", "man/v2.2/8/zpool_influxdb.8.rst", "man/v2.2/8/zstream.8.rst", "man/v2.2/8/zstreamdump.8.rst", "man/v2.2/index.rst", "msg/ZFS-8000-14/index.rst", "msg/ZFS-8000-2Q/index.rst", "msg/ZFS-8000-3C/index.rst", "msg/ZFS-8000-4J/index.rst", "msg/ZFS-8000-5E/index.rst", "msg/ZFS-8000-6X/index.rst", "msg/ZFS-8000-72/index.rst", "msg/ZFS-8000-8A/index.rst", "msg/ZFS-8000-9P/index.rst", "msg/ZFS-8000-A5/index.rst", "msg/ZFS-8000-ER/index.rst", "msg/ZFS-8000-EY/index.rst", "msg/ZFS-8000-HC/index.rst", "msg/ZFS-8000-JQ/index.rst", "msg/ZFS-8000-K4/index.rst", "msg/index.rst"], "titles": ["", "Checksums and Their Use in ZFS", "Feature Flags", "RAIDZ", "Troubleshooting", "dRAID", "Basic Concepts", "Buildbot Options", "Building ZFS", "Custom Packages", "Git and GitHub for beginners (ZoL edition)", "OpenZFS Exceptions", "OpenZFS Patches", "Developer Resources", "Alpine Linux Root on ZFS", "Alpine Linux", "Arch Linux Root on ZFS", "Arch Linux", "Debian Bookworm Root on ZFS", "Debian Bullseye Root on ZFS", "Debian Buster Root on ZFS", "Debian GNU Linux initrd documentation", "Debian Stretch Root on ZFS", "Debian", "Fedora", "Fedora Root on ZFS", "Fedora", "FreeBSD", "NixOS Root on ZFS", "NixOS", "RHEL and CentOS", "Rocky Linux Root on ZFS", "RHEL-based distro", "Slackware Root on ZFS", "Slackware", "Ubuntu 18.04 Root on ZFS", "Ubuntu 20.04 Root on ZFS", "Ubuntu 20.04 Root on ZFS for Raspberry Pi", "Ubuntu 22.04 Root on ZFS", "Ubuntu 22.04 Root on ZFS for Raspberry Pi", "Ubuntu", "Getting Started", "openSUSE", "openSUSE Leap Root on ZFS", "openSUSE Tumbleweed Root on ZFS", "License", 
"Async Writes", "Hardware", "Module Parameters", "Workload Tuning", "ZFS Transaction Delay", "ZFS I/O (ZIO) Scheduler", "Performance and Tuning", "Admin Documentation", "FAQ", "FAQ Hole birth", "Mailing Lists", "Signing Keys", "Project and Community", "<no title>", "OpenZFS Documentation", "Man Pages", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "test-runner.1", "zhack.1", "ztest.1", "zvol_wait.1", "Devices and Special Files (4)", "spl.4", "zfs.4", "File Formats and Conventions (5)", "vdev_id.conf.5", "dracut.zfs.7", "Miscellaneous (7)", "vdevprops.7", "zfsconcepts.7", "zfsprops.7", "zpool-features.7", "zpoolconcepts.7", "zpoolprops.7", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-unzone.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs-zone.8", "zfs.8", "zfs_ids_to_path.8", "zfs_prepare_disk.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpool_influxdb.8", "zstream.8", "zstreamdump.8", "master", "cstyle.1", "User Commands (1)", "zhack.1", "zpios.1", "ztest.1", "File Formats and Conventions (5)", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs.8", "zinject.8", "zpool.8", "zstreamdump.8", "v0.6", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "zpios.1", "ztest.1", "File Formats and Conventions (5)", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs.8", "zgenhostid.8", "zinject.8", "zpool.8", "zstreamdump.8", "v0.7", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "ztest.1", "zvol_wait.1", "File Formats and Conventions (5)", "spl-module-parameters.5", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-mount-generator.8", "zfs-program.8", "zfs.8", "zfsprops.8", "zgenhostid.8", "zinject.8", "zpool.8", "zstreamdump.8", "v0.8", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "ztest.1", "zvol_wait.1", "File Formats and Conventions (5)", "spl-module-parameters.5", "vdev_id.conf.5", "zfs-events.5", "zfs-module-parameters.5", "zpool-features.5", "fsck.zfs.8", "System 
Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs.8", "zfs_ids_to_path.8", "zfsconcepts.8", "zfsprops.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpoolconcepts.8", "zpoolprops.8", "zstream.8", "zstreamdump.8", "v2.0", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "zhack.1", "ztest.1", "zvol_wait.1", "Devices and Special Files (4)", "spl.4", "zfs.4", "File Formats and Conventions (5)", "vdev_id.conf.5", "dracut.zfs.7", "Miscellaneous (7)", "zfsconcepts.7", "zfsprops.7", "zpool-features.7", "zpoolconcepts.7", "zpoolprops.7", "fsck.zfs.8", "System Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs.8", "zfs_ids_to_path.8", "zfs_prepare_disk.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpool_influxdb.8", "zstream.8", "zstreamdump.8", "v2.1", "arcstat.1", "cstyle.1", "User Commands (1)", "raidz_test.1", "test-runner.1", "zhack.1", "ztest.1", "zvol_wait.1", "Devices and Special Files (4)", "spl.4", "zfs.4", "File Formats and Conventions (5)", "vdev_id.conf.5", "dracut.zfs.7", "Miscellaneous (7)", "vdevprops.7", "zfsconcepts.7", "zfsprops.7", "zpool-features.7", "zpoolconcepts.7", "zpoolprops.7", "fsck.zfs.8", "System 
Administration Commands (8)", "mount.zfs.8", "vdev_id.8", "zdb.8", "zed.8", "zfs-allow.8", "zfs-bookmark.8", "zfs-change-key.8", "zfs-clone.8", "zfs-create.8", "zfs-destroy.8", "zfs-diff.8", "zfs-get.8", "zfs-groupspace.8", "zfs-hold.8", "zfs-inherit.8", "zfs-jail.8", "zfs-list.8", "zfs-load-key.8", "zfs-mount-generator.8", "zfs-mount.8", "zfs-program.8", "zfs-project.8", "zfs-projectspace.8", "zfs-promote.8", "zfs-receive.8", "zfs-recv.8", "zfs-redact.8", "zfs-release.8", "zfs-rename.8", "zfs-rollback.8", "zfs-send.8", "zfs-set.8", "zfs-share.8", "zfs-snapshot.8", "zfs-unallow.8", "zfs-unjail.8", "zfs-unload-key.8", "zfs-unmount.8", "zfs-unzone.8", "zfs-upgrade.8", "zfs-userspace.8", "zfs-wait.8", "zfs-zone.8", "zfs.8", "zfs_ids_to_path.8", "zfs_prepare_disk.8", "zgenhostid.8", "zinject.8", "zpool-add.8", "zpool-attach.8", "zpool-checkpoint.8", "zpool-clear.8", "zpool-create.8", "zpool-destroy.8", "zpool-detach.8", "zpool-events.8", "zpool-export.8", "zpool-get.8", "zpool-history.8", "zpool-import.8", "zpool-initialize.8", "zpool-iostat.8", "zpool-labelclear.8", "zpool-list.8", "zpool-offline.8", "zpool-online.8", "zpool-reguid.8", "zpool-remove.8", "zpool-reopen.8", "zpool-replace.8", "zpool-resilver.8", "zpool-scrub.8", "zpool-set.8", "zpool-split.8", "zpool-status.8", "zpool-sync.8", "zpool-trim.8", "zpool-upgrade.8", "zpool-wait.8", "zpool.8", "zpool_influxdb.8", "zstream.8", "zstreamdump.8", "v2.2", "Message ID:\u00a0ZFS-8000-14", "Message ID:\u00a0ZFS-8000-2Q", "Message ID:\u00a0ZFS-8000-3C", "Message ID: ZFS-8000-4J", "Message ID: ZFS-8000-5E", "Message ID: ZFS-8000-6X", "Message ID:\u00a0ZFS-8000-72", "Message ID:\u00a0ZFS-8000-8A", "Message ID:\u00a0ZFS-8000-9P", "Message ID:\u00a0ZFS-8000-A5", "Message ID:\u00a0ZFS-8000-ER", "Message ID:\u00a0ZFS-8000-EY", "Message ID: ZFS-8000-HC", "Message ID:\u00a0ZFS-8000-JQ", "Message ID:\u00a0ZFS-8000-K4", "ZFS Messages"], "terms": {"end": [1, 14, 16, 25, 28, 31, 47, 48, 54, 55, 66, 72, 80, 81, 82, 87, 103, 105, 132, 134, 140, 146, 177, 186, 187, 188, 199, 209, 210, 211, 222, 223, 232, 236, 237, 238, 250, 251, 257, 275, 301, 315, 334, 335, 337, 348, 355, 356, 357, 362, 378, 380, 405, 413, 419, 446, 452, 460, 461, 462, 467, 483, 485, 512, 520, 526], "ar": [1, 2, 4, 5, 7, 8, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 23, 25, 26, 27, 31, 32, 33, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 47, 48, 49, 50, 51, 54, 57, 62, 63, 65, 66, 68, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 124, 125, 126, 128, 130, 132, 133, 134, 136, 137, 139, 140, 141, 142, 144, 145, 146, 148, 151, 152, 154, 156, 157, 158, 159, 161, 163, 164, 165, 166, 167, 169, 172, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 187, 188, 190, 192, 194, 195, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 209, 210, 211, 213, 215, 217, 218, 220, 221, 222, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 236, 237, 238, 240, 241, 243, 245, 246, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 261, 262, 263, 264, 265, 266, 267, 268, 269, 271, 272, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 291, 292, 293, 294, 295, 296, 298, 299, 301, 302, 303, 305, 306, 308, 309, 310, 311, 313, 314, 315, 317, 320, 321, 323, 325, 326, 327, 328, 330, 332, 333, 334, 335, 336, 337, 339, 340, 342, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 366, 367, 368, 369, 
370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 403, 405, 406, 407, 409, 410, 412, 413, 414, 415, 417, 418, 419, 421, 424, 425, 427, 429, 430, 431, 432, 434, 436, 437, 438, 442, 443, 445, 446, 448, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 504, 505, 506, 508, 510, 512, 513, 514, 516, 517, 519, 520, 521, 522, 524, 525, 526, 528, 531, 532, 534, 536, 537, 538, 539, 541, 543, 544, 545, 546, 547, 549, 550, 551, 553, 554, 555, 556, 557, 559, 560, 561, 562, 563], "kei": [1, 5, 9, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 32, 35, 36, 37, 38, 39, 43, 44, 58, 59, 60, 72, 75, 79, 80, 84, 87, 89, 93, 103, 104, 105, 111, 115, 117, 119, 122, 128, 144, 152, 156, 158, 165, 185, 200, 207, 224, 231, 232, 233, 237, 251, 252, 254, 259, 263, 273, 274, 275, 281, 285, 289, 292, 296, 299, 313, 321, 327, 348, 351, 354, 355, 359, 364, 368, 378, 379, 380, 386, 390, 392, 394, 397, 401, 417, 425, 431, 438, 452, 455, 459, 460, 464, 467, 469, 473, 483, 484, 485, 491, 495, 497, 499, 502, 508, 524, 532, 536, 538, 545, 559], "featur": [1, 6, 11, 12, 14, 16, 17, 18, 19, 20, 22, 25, 29, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 55, 59, 60, 67, 68, 72, 76, 79, 81, 82, 90, 91, 100, 102, 105, 109, 110, 111, 115, 120, 121, 124, 128, 137, 142, 152, 155, 156, 157, 159, 162, 164, 171, 173, 174, 177, 185, 187, 193, 195, 196, 199, 207, 210, 216, 217, 219, 223, 233, 237, 244, 245, 247, 251, 260, 261, 272, 275, 279, 280, 281, 285, 291, 296, 299, 306, 311, 321, 324, 326, 331, 333, 334, 335, 343, 344, 348, 352, 354, 356, 357, 365, 366, 375, 377, 380, 384, 385, 386, 390, 395, 396, 398, 401, 410, 415, 425, 428, 430, 435, 437, 447, 448, 452, 456, 459, 461, 462, 470, 471, 480, 482, 485, 489, 490, 491, 495, 500, 501, 504, 508, 517, 522, 532, 535, 536, 537, 539, 542, 544, 559], "an": [1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 22, 25, 26, 27, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 51, 58, 63, 66, 68, 71, 72, 74, 75, 78, 79, 80, 81, 82, 83, 86, 87, 88, 89, 90, 91, 93, 94, 95, 96, 97, 99, 100, 102, 103, 104, 105, 107, 109, 110, 111, 114, 115, 116, 118, 119, 120, 121, 122, 123, 125, 127, 128, 130, 131, 132, 133, 135, 137, 140, 144, 146, 147, 148, 152, 153, 154, 155, 156, 159, 161, 163, 164, 166, 167, 169, 172, 173, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 190, 194, 195, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 209, 210, 213, 217, 220, 221, 222, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 236, 237, 241, 245, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 263, 264, 265, 266, 267, 269, 270, 272, 273, 274, 275, 277, 279, 280, 281, 284, 285, 286, 288, 289, 290, 291, 292, 294, 296, 298, 299, 301, 303, 304, 306, 313, 315, 316, 317, 321, 322, 323, 324, 325, 328, 330, 332, 333, 334, 335, 336, 340, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 361, 362, 363, 364, 365, 366, 368, 369, 370, 371, 372, 374, 375, 377, 378, 379, 380, 382, 384, 385, 386, 389, 390, 391, 393, 394, 395, 396, 397, 399, 401, 403, 404, 405, 408, 410, 413, 417, 419, 420, 421, 425, 426, 427, 428, 429, 432, 434, 436, 437, 439, 440, 443, 446, 448, 451, 452, 454, 455, 458, 459, 460, 461, 462, 463, 466, 467, 468, 469, 470, 471, 473, 474, 475, 476, 477, 479, 480, 482, 483, 484, 485, 487, 
489, 490, 491, 494, 495, 496, 498, 499, 500, 501, 502, 503, 505, 507, 508, 510, 511, 512, 513, 515, 517, 520, 524, 526, 527, 528, 532, 533, 534, 535, 536, 539, 541, 543, 544, 546, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 559, 560, 563], "import": [1, 4, 5, 8, 9, 12, 18, 19, 20, 22, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 49, 51, 54, 57, 68, 69, 72, 75, 78, 79, 80, 81, 82, 84, 87, 91, 101, 102, 103, 109, 110, 121, 124, 133, 134, 135, 136, 137, 140, 141, 148, 149, 150, 151, 156, 158, 164, 176, 177, 178, 183, 185, 187, 198, 199, 200, 205, 207, 210, 218, 222, 223, 224, 229, 233, 237, 246, 250, 251, 252, 254, 257, 261, 271, 272, 291, 293, 298, 299, 302, 303, 304, 305, 306, 310, 317, 318, 319, 320, 325, 327, 333, 334, 335, 344, 345, 348, 351, 353, 354, 355, 356, 357, 359, 362, 366, 376, 377, 378, 396, 398, 406, 407, 408, 409, 410, 413, 414, 421, 422, 423, 424, 429, 431, 437, 448, 449, 452, 455, 458, 459, 460, 461, 462, 464, 467, 471, 481, 482, 483, 489, 490, 501, 504, 513, 514, 515, 516, 517, 520, 521, 528, 529, 530, 531, 536, 538, 544, 549, 550, 551, 552, 553, 554, 555, 558, 559, 560], "differenti": 1, "over": [1, 5, 10, 12, 18, 19, 20, 35, 36, 37, 38, 43, 44, 47, 48, 54, 63, 72, 74, 79, 80, 81, 82, 89, 103, 105, 109, 110, 111, 115, 119, 132, 133, 146, 164, 169, 175, 177, 185, 186, 187, 190, 197, 199, 200, 207, 209, 210, 213, 220, 221, 223, 224, 232, 233, 236, 237, 240, 241, 248, 249, 251, 252, 259, 275, 279, 280, 281, 285, 289, 299, 301, 333, 335, 340, 348, 350, 354, 355, 356, 357, 364, 378, 380, 384, 385, 386, 390, 394, 405, 437, 443, 452, 454, 459, 460, 461, 462, 469, 483, 485, 489, 490, 491, 495, 499, 512, 513, 526, 544, 557, 559], "other": [1, 2, 4, 5, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 51, 55, 66, 68, 72, 74, 75, 79, 80, 81, 82, 87, 88, 89, 91, 93, 95, 96, 99, 100, 102, 105, 111, 114, 115, 116, 119, 120, 121, 128, 137, 138, 139, 141, 152, 159, 164, 175, 177, 178, 183, 184, 185, 187, 195, 197, 199, 200, 205, 206, 207, 210, 217, 220, 221, 223, 224, 229, 230, 232, 233, 237, 245, 248, 249, 251, 252, 257, 258, 259, 261, 263, 265, 266, 269, 272, 275, 281, 284, 285, 286, 289, 291, 296, 299, 306, 307, 308, 310, 321, 328, 333, 334, 335, 344, 347, 348, 350, 351, 354, 355, 356, 357, 362, 363, 364, 366, 368, 370, 371, 374, 375, 377, 380, 386, 389, 390, 391, 394, 395, 396, 401, 410, 411, 412, 414, 425, 432, 437, 446, 448, 452, 454, 455, 459, 460, 461, 462, 467, 468, 469, 471, 473, 475, 476, 479, 480, 482, 485, 491, 494, 495, 496, 499, 500, 501, 508, 517, 518, 519, 521, 532, 539, 544, 557, 559, 560], "raid": [1, 3, 5, 36, 38, 48, 58, 65, 68, 72, 79, 80, 81, 134, 137, 163, 164, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 299, 333, 334, 344, 348, 354, 356, 437, 445, 448, 452, 459, 460, 461, 517, 544, 557], "implement": [1, 6, 7, 8, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 58, 60, 65, 71, 72, 78, 79, 80, 91, 102, 105, 121, 164, 172, 178, 181, 184, 185, 192, 194, 199, 200, 203, 206, 207, 210, 215, 220, 223, 224, 227, 230, 231, 233, 237, 243, 248, 251, 252, 255, 258, 261, 272, 273, 275, 291, 298, 299, 333, 342, 347, 348, 353, 354, 355, 366, 377, 380, 396, 437, 445, 451, 452, 458, 459, 460, 471, 482, 485, 501, 544, 559], "filesystem": [1, 11, 14, 16, 18, 19, 20, 22, 23, 25, 28, 31, 33, 35, 36, 37, 38, 39, 40, 42, 47, 49, 54, 58, 67, 72, 75, 78, 79, 80, 82, 83, 85, 89, 91, 92, 93, 94, 95, 96, 97, 99, 100, 101, 102, 103, 104, 105, 106, 107, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 
128, 132, 156, 171, 176, 177, 178, 179, 181, 185, 186, 187, 188, 193, 198, 199, 200, 201, 203, 207, 209, 211, 216, 222, 223, 224, 225, 227, 232, 233, 236, 237, 238, 244, 250, 251, 252, 253, 255, 259, 261, 262, 263, 264, 265, 266, 267, 269, 270, 271, 272, 274, 275, 277, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 298, 299, 301, 335, 343, 348, 351, 353, 354, 355, 357, 358, 360, 364, 366, 367, 368, 369, 370, 371, 372, 374, 375, 376, 377, 378, 379, 380, 381, 382, 384, 385, 386, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 401, 405, 447, 452, 455, 458, 459, 460, 462, 463, 465, 469, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 482, 483, 484, 485, 486, 487, 489, 490, 491, 493, 494, 495, 496, 497, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 512, 536, 549, 551, 556, 560], "advantag": [1, 5, 47, 54, 79, 80, 81, 111, 115, 178, 185, 187, 200, 207, 210, 224, 233, 237, 252, 281, 285, 299, 334, 354, 355, 356, 386, 390, 459, 460, 461, 491, 495], "includ": [1, 2, 4, 9, 11, 12, 16, 18, 19, 20, 22, 23, 32, 35, 36, 37, 38, 39, 40, 42, 43, 44, 47, 48, 51, 54, 55, 58, 62, 63, 66, 68, 72, 75, 78, 79, 80, 81, 82, 87, 91, 94, 97, 102, 105, 107, 111, 115, 118, 121, 125, 130, 131, 134, 140, 143, 146, 152, 156, 158, 159, 160, 164, 166, 167, 169, 176, 177, 178, 183, 185, 187, 190, 195, 198, 199, 200, 205, 207, 208, 210, 213, 217, 222, 223, 224, 229, 232, 233, 235, 237, 240, 241, 245, 250, 251, 252, 257, 261, 264, 267, 272, 275, 277, 281, 285, 291, 294, 299, 300, 312, 315, 321, 327, 328, 329, 333, 334, 335, 336, 339, 340, 344, 348, 351, 354, 355, 356, 357, 362, 366, 369, 372, 377, 380, 382, 386, 390, 396, 399, 403, 404, 413, 416, 419, 425, 431, 432, 433, 437, 439, 440, 442, 443, 446, 448, 452, 455, 458, 459, 460, 461, 462, 467, 471, 474, 477, 482, 485, 487, 491, 495, 501, 505, 510, 511, 520, 523, 526, 532, 536, 538, 539, 540, 544, 546, 547, 555, 559], "detect": [1, 5, 8, 12, 21, 26, 32, 47, 48, 49, 54, 63, 68, 72, 79, 81, 82, 87, 105, 111, 115, 137, 140, 156, 169, 173, 176, 187, 190, 195, 198, 199, 210, 213, 217, 222, 223, 229, 232, 233, 237, 241, 245, 250, 251, 257, 275, 281, 285, 299, 306, 334, 335, 340, 344, 348, 354, 356, 357, 362, 380, 386, 390, 410, 413, 443, 448, 452, 459, 461, 462, 467, 485, 491, 495, 517, 520, 536, 551, 552, 554, 555, 556, 559], "data": [1, 3, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 50, 51, 54, 55, 58, 62, 63, 65, 67, 68, 72, 78, 79, 80, 81, 82, 87, 91, 92, 93, 94, 102, 108, 109, 110, 111, 113, 114, 115, 118, 121, 128, 132, 134, 139, 140, 141, 144, 145, 148, 152, 156, 159, 160, 161, 164, 165, 166, 167, 169, 171, 172, 176, 177, 178, 183, 185, 186, 187, 188, 190, 192, 193, 194, 198, 199, 200, 205, 207, 209, 210, 211, 213, 215, 216, 222, 223, 224, 229, 233, 236, 237, 238, 240, 241, 243, 244, 250, 251, 252, 257, 261, 264, 272, 279, 280, 281, 284, 285, 291, 296, 298, 299, 301, 308, 310, 313, 314, 321, 325, 328, 329, 330, 333, 334, 335, 336, 337, 339, 340, 342, 343, 344, 348, 353, 354, 355, 356, 357, 362, 366, 369, 377, 384, 385, 386, 389, 390, 396, 401, 405, 412, 413, 414, 417, 418, 425, 429, 432, 433, 434, 437, 438, 439, 440, 442, 443, 445, 447, 448, 452, 458, 459, 460, 461, 462, 467, 471, 472, 473, 474, 482, 488, 489, 490, 491, 493, 494, 495, 498, 501, 508, 512, 519, 520, 521, 524, 525, 528, 532, 536, 539, 540, 541, 544, 545, 546, 547, 550, 551, 552, 553, 554, 555, 557, 558, 559, 561, 562, 563, 564], "corrupt": [1, 47, 48, 49, 54, 58, 67, 72, 79, 81, 82, 87, 109, 110, 132, 137, 140, 
141, 166, 167, 171, 176, 177, 185, 186, 187, 193, 198, 199, 207, 209, 210, 216, 222, 223, 233, 236, 237, 244, 250, 251, 299, 301, 306, 310, 334, 343, 348, 354, 356, 357, 405, 410, 413, 414, 447, 452, 459, 461, 462, 467, 489, 490, 512, 517, 520, 521, 546, 547, 559, 560, 563, 564], "upon": [1, 26, 48, 54, 66, 72, 109, 110, 111, 115, 140, 144, 149, 150, 156, 176, 187, 198, 199, 210, 222, 223, 233, 237, 250, 251, 279, 280, 281, 285, 313, 318, 319, 325, 348, 384, 385, 386, 390, 413, 417, 422, 423, 429, 446, 452, 489, 490, 491, 495, 520, 524, 529, 530, 536, 559], "read": [1, 4, 5, 8, 10, 12, 18, 19, 20, 22, 33, 35, 36, 38, 43, 44, 45, 47, 48, 49, 50, 51, 54, 62, 67, 71, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 89, 91, 96, 99, 102, 105, 109, 110, 111, 115, 116, 119, 121, 128, 132, 133, 134, 140, 144, 146, 149, 150, 152, 156, 159, 164, 165, 171, 175, 176, 177, 178, 183, 184, 185, 186, 187, 188, 193, 197, 198, 199, 200, 205, 206, 207, 209, 210, 211, 216, 220, 221, 222, 223, 224, 229, 230, 232, 233, 236, 237, 238, 240, 244, 248, 249, 250, 251, 252, 257, 258, 259, 261, 272, 275, 279, 280, 281, 285, 289, 291, 296, 298, 299, 301, 313, 315, 318, 319, 333, 334, 335, 337, 339, 343, 347, 348, 350, 353, 354, 355, 356, 357, 362, 363, 364, 366, 377, 380, 384, 385, 386, 390, 394, 396, 401, 405, 413, 417, 419, 422, 423, 429, 437, 438, 442, 447, 451, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 469, 471, 476, 479, 482, 485, 489, 490, 491, 495, 496, 499, 501, 508, 512, 513, 520, 524, 526, 529, 530, 532, 536, 539, 544, 545, 550, 551, 552, 553, 555, 556, 557, 558, 559, 561, 562, 564], "from": [1, 4, 5, 9, 10, 11, 12, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 51, 55, 56, 57, 58, 62, 63, 66, 67, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 87, 88, 89, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 102, 105, 106, 107, 108, 109, 110, 111, 112, 113, 115, 116, 119, 120, 121, 123, 124, 125, 127, 128, 133, 134, 135, 137, 139, 140, 141, 144, 146, 147, 148, 152, 155, 156, 158, 161, 162, 164, 165, 169, 171, 172, 173, 175, 176, 177, 178, 183, 184, 185, 187, 188, 190, 193, 194, 195, 197, 198, 199, 200, 205, 206, 207, 210, 211, 213, 216, 217, 220, 221, 222, 223, 224, 229, 230, 231, 232, 233, 237, 238, 240, 241, 244, 245, 248, 249, 250, 251, 252, 257, 258, 259, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 275, 276, 277, 278, 279, 280, 281, 282, 283, 285, 286, 289, 290, 291, 293, 294, 296, 298, 299, 304, 306, 308, 310, 313, 315, 316, 321, 324, 325, 327, 330, 333, 334, 335, 337, 339, 340, 343, 347, 348, 350, 351, 353, 354, 355, 356, 357, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 373, 374, 375, 376, 377, 380, 381, 382, 383, 384, 385, 386, 387, 388, 390, 391, 394, 395, 396, 398, 399, 401, 408, 410, 412, 413, 414, 417, 419, 420, 425, 428, 429, 431, 434, 435, 437, 438, 442, 443, 446, 447, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 467, 468, 469, 470, 471, 472, 473, 475, 476, 477, 478, 479, 480, 481, 482, 485, 486, 487, 488, 489, 490, 491, 492, 493, 495, 496, 499, 500, 501, 503, 504, 505, 507, 508, 513, 515, 517, 519, 520, 521, 524, 526, 527, 528, 532, 535, 536, 538, 541, 542, 544, 545, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 563], "media": [1, 14, 16, 18, 19, 20, 25, 31, 36, 38, 43, 44, 48, 49, 72, 81, 177, 187, 199, 210, 223, 237, 251, 334, 348, 356, 452, 461], "block": [1, 3, 5, 11, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 49, 50, 63, 65, 68, 69, 72, 78, 79, 80, 81, 82, 87, 91, 93, 95, 102, 104, 109, 110, 111, 115, 117, 121, 
122, 128, 132, 133, 134, 137, 140, 144, 146, 148, 156, 158, 159, 161, 164, 169, 173, 176, 177, 178, 183, 185, 186, 187, 190, 192, 194, 195, 198, 199, 200, 205, 207, 209, 210, 213, 215, 217, 222, 223, 224, 229, 233, 236, 237, 238, 241, 243, 245, 250, 251, 252, 257, 261, 263, 265, 272, 274, 279, 280, 281, 285, 291, 292, 296, 298, 299, 301, 302, 306, 313, 315, 317, 327, 328, 330, 333, 334, 335, 337, 340, 342, 344, 345, 348, 353, 354, 355, 356, 357, 362, 366, 368, 370, 377, 379, 384, 385, 386, 390, 392, 396, 397, 401, 405, 406, 410, 413, 417, 419, 421, 429, 431, 432, 434, 437, 443, 445, 448, 449, 452, 458, 459, 460, 461, 462, 467, 471, 473, 475, 482, 484, 489, 490, 491, 495, 497, 501, 502, 508, 512, 513, 517, 520, 524, 526, 528, 536, 538, 539, 541, 544, 561, 562], "automat": [1, 9, 12, 18, 19, 20, 22, 32, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 57, 58, 71, 72, 75, 78, 79, 80, 81, 82, 87, 88, 91, 92, 93, 96, 99, 102, 104, 113, 116, 117, 121, 122, 128, 132, 134, 140, 156, 158, 164, 166, 167, 176, 178, 184, 185, 186, 187, 198, 200, 206, 207, 209, 210, 220, 222, 223, 224, 230, 231, 233, 236, 237, 248, 250, 251, 252, 258, 261, 262, 263, 272, 273, 274, 283, 287, 291, 292, 296, 298, 299, 301, 303, 325, 327, 333, 334, 335, 347, 348, 351, 353, 354, 355, 356, 357, 363, 366, 367, 368, 377, 379, 388, 392, 396, 397, 401, 405, 407, 413, 429, 431, 437, 451, 452, 455, 458, 459, 460, 461, 462, 467, 468, 471, 472, 473, 476, 479, 482, 484, 493, 496, 497, 501, 502, 508, 512, 514, 520, 536, 538, 544, 546, 547, 549, 551, 552, 557], "repair": [1, 5, 48, 58, 67, 72, 81, 109, 110, 156, 166, 167, 187, 199, 210, 223, 237, 251, 325, 334, 348, 356, 429, 447, 452, 461, 489, 490, 536, 546, 547, 549, 556, 557], "possibl": [1, 7, 8, 9, 11, 12, 21, 22, 33, 39, 43, 47, 48, 49, 51, 54, 55, 62, 72, 78, 79, 80, 81, 82, 87, 88, 91, 102, 108, 109, 110, 111, 115, 121, 126, 134, 140, 154, 156, 161, 163, 166, 167, 176, 177, 178, 183, 184, 185, 187, 198, 199, 200, 205, 206, 207, 210, 220, 222, 223, 224, 229, 230, 233, 237, 240, 248, 250, 251, 252, 257, 258, 261, 272, 278, 279, 280, 281, 285, 291, 295, 298, 299, 303, 323, 325, 330, 332, 334, 335, 339, 348, 353, 354, 355, 356, 357, 362, 363, 366, 377, 383, 384, 385, 386, 390, 396, 400, 407, 413, 427, 429, 434, 436, 442, 452, 458, 459, 460, 461, 462, 467, 468, 471, 482, 488, 489, 490, 491, 495, 501, 506, 514, 520, 534, 536, 541, 543, 546, 547, 555, 556], "protect": [1, 4, 18, 19, 20, 22, 26, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 58, 79, 82, 91, 102, 111, 115, 121, 233, 237, 261, 272, 281, 285, 291, 299, 335, 354, 357, 366, 377, 386, 390, 396, 459, 462, 471, 482, 491, 495, 501], "suitabl": [1, 2, 32, 48, 49, 79, 165, 207, 233, 299, 354, 438, 459, 545], "configur": [1, 4, 5, 9, 10, 12, 13, 23, 27, 29, 32, 47, 48, 49, 55, 65, 66, 67, 68, 71, 72, 74, 79, 80, 81, 82, 86, 87, 88, 103, 104, 122, 128, 132, 133, 134, 137, 140, 144, 147, 152, 154, 158, 164, 171, 173, 175, 177, 182, 183, 184, 185, 186, 187, 193, 195, 197, 199, 200, 204, 205, 206, 207, 209, 210, 216, 217, 220, 221, 222, 223, 224, 228, 229, 230, 233, 236, 237, 244, 245, 248, 249, 250, 251, 252, 256, 257, 258, 263, 270, 274, 290, 292, 296, 299, 301, 302, 303, 306, 313, 316, 321, 323, 327, 333, 334, 335, 342, 343, 344, 347, 348, 350, 354, 355, 356, 357, 361, 362, 363, 378, 379, 397, 401, 405, 406, 407, 410, 413, 417, 420, 425, 427, 431, 437, 445, 446, 447, 448, 451, 452, 454, 459, 460, 461, 462, 466, 467, 468, 483, 484, 502, 508, 512, 513, 514, 517, 520, 524, 527, 532, 534, 538, 544, 554, 555, 564], "pool": [1, 2, 4, 5, 7, 14, 16, 18, 
19, 20, 22, 25, 26, 27, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 51, 58, 67, 68, 69, 72, 75, 77, 78, 79, 80, 81, 82, 83, 87, 91, 92, 93, 94, 96, 99, 101, 102, 103, 105, 108, 109, 110, 111, 113, 114, 115, 116, 118, 121, 124, 128, 129, 130, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 171, 172, 173, 176, 177, 178, 179, 183, 185, 186, 187, 188, 193, 194, 195, 198, 199, 200, 201, 205, 207, 209, 210, 211, 216, 217, 218, 222, 223, 224, 225, 229, 231, 232, 233, 236, 237, 238, 244, 245, 246, 250, 251, 252, 253, 257, 261, 264, 271, 272, 273, 275, 279, 280, 281, 285, 291, 293, 296, 297, 298, 299, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 343, 344, 345, 348, 351, 353, 354, 355, 356, 357, 358, 362, 366, 369, 376, 377, 378, 380, 384, 385, 386, 390, 396, 398, 401, 402, 403, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 447, 448, 449, 452, 455, 457, 458, 459, 460, 461, 462, 463, 467, 471, 472, 473, 474, 476, 479, 481, 482, 483, 485, 488, 489, 490, 491, 493, 494, 495, 496, 498, 501, 504, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 549, 550, 551, 552, 553, 554, 556, 557, 558, 559, 560, 563, 564], "redund": [1, 3, 5, 47, 49, 54, 68, 78, 79, 80, 81, 134, 137, 146, 152, 154, 164, 185, 187, 207, 210, 233, 237, 252, 298, 299, 303, 306, 315, 321, 323, 333, 334, 344, 353, 354, 355, 356, 407, 410, 419, 425, 427, 437, 448, 458, 459, 460, 461, 514, 517, 526, 532, 534, 544], "copi": [1, 8, 10, 14, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 58, 68, 72, 78, 79, 80, 81, 82, 87, 88, 89, 91, 96, 99, 102, 116, 119, 121, 128, 134, 152, 173, 183, 184, 185, 187, 195, 205, 206, 207, 210, 217, 223, 229, 230, 231, 233, 237, 245, 251, 257, 258, 259, 261, 272, 273, 289, 291, 296, 298, 299, 321, 334, 335, 344, 348, 353, 354, 356, 357, 362, 363, 364, 366, 377, 394, 396, 401, 425, 448, 452, 458, 459, 460, 461, 462, 467, 468, 469, 471, 476, 479, 482, 496, 499, 501, 508, 532], "see": [1, 4, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 42, 43, 44, 48, 49, 54, 55, 62, 63, 65, 66, 67, 68, 69, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 132, 133, 134, 135, 136, 137, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 190, 192, 193, 194, 195, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 215, 216, 217, 218, 221, 222, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 238, 240, 241, 243, 244, 245, 246, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 
209, 210, 215, 217, 220, 221, 222, 223, 224, 228, 229, 232, 233, 235, 236, 237, 243, 245, 248, 249, 250, 251, 252, 256, 257, 259, 266, 269, 271, 275, 279, 280, 281, 285, 286, 289, 293, 296, 298, 299, 300, 301, 310, 311, 313, 315, 317, 321, 326, 328, 331, 332, 334, 335, 342, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 361, 362, 364, 371, 374, 376, 380, 384, 385, 386, 390, 391, 394, 398, 401, 404, 405, 413, 414, 415, 417, 419, 421, 425, 430, 432, 435, 436, 445, 448, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 466, 467, 469, 476, 479, 481, 485, 489, 490, 491, 495, 496, 499, 504, 508, 511, 512, 520, 521, 522, 524, 526, 528, 532, 537, 539, 542, 543, 546, 547], "which": [2, 3, 5, 7, 8, 9, 10, 11, 12, 16, 18, 19, 20, 22, 25, 26, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 50, 51, 54, 55, 58, 62, 63, 66, 67, 68, 71, 72, 74, 75, 78, 79, 80, 81, 82, 83, 86, 87, 88, 89, 91, 94, 96, 99, 100, 102, 105, 109, 110, 111, 113, 115, 116, 119, 120, 121, 123, 127, 128, 131, 132, 134, 135, 136, 140, 143, 144, 156, 161, 162, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 178, 179, 182, 183, 184, 185, 186, 187, 190, 193, 194, 195, 197, 198, 199, 200, 201, 204, 205, 206, 207, 208, 209, 210, 213, 216, 217, 220, 221, 222, 223, 224, 225, 228, 229, 230, 232, 233, 235, 236, 237, 240, 241, 244, 245, 248, 249, 250, 251, 252, 253, 256, 257, 258, 259, 261, 264, 266, 269, 270, 272, 275, 279, 280, 281, 283, 285, 286, 289, 290, 291, 296, 298, 299, 300, 301, 304, 312, 313, 330, 331, 333, 334, 335, 336, 339, 340, 343, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 361, 362, 363, 364, 366, 369, 371, 374, 375, 377, 380, 384, 385, 386, 388, 390, 391, 394, 395, 396, 401, 404, 405, 408, 409, 413, 416, 417, 429, 434, 435, 437, 438, 442, 443, 446, 447, 448, 451, 452, 454, 455, 458, 459, 460, 461, 462, 463, 466, 467, 468, 469, 471, 474, 476, 479, 480, 482, 485, 489, 490, 491, 493, 495, 496, 499, 500, 501, 503, 507, 508, 511, 512, 515, 516, 520, 523, 524, 536, 541, 542, 544, 545, 546, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "increas": [2, 3, 5, 8, 37, 39, 46, 47, 48, 49, 50, 51, 54, 65, 68, 71, 72, 79, 80, 81, 82, 87, 88, 124, 140, 148, 162, 164, 172, 173, 176, 177, 178, 181, 183, 184, 185, 187, 192, 194, 195, 198, 199, 200, 203, 205, 206, 207, 210, 215, 217, 220, 222, 223, 224, 227, 229, 230, 233, 237, 243, 245, 248, 250, 251, 252, 255, 257, 258, 293, 299, 331, 333, 334, 335, 342, 344, 347, 348, 354, 355, 356, 357, 362, 363, 398, 413, 435, 437, 445, 448, 451, 452, 459, 460, 461, 462, 467, 468, 504, 520, 528, 542, 544], "whenev": [2, 47, 49, 54, 79, 80, 109, 110, 185, 207, 224, 233, 252, 279, 280, 299, 354, 355, 384, 385, 459, 460, 489, 490], "approach": [2, 33, 48, 49, 50, 54, 72, 80, 177, 199, 223, 251, 252, 348, 355, 452, 460], "wa": [2, 8, 11, 12, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 49, 50, 54, 55, 68, 71, 72, 75, 78, 79, 80, 81, 82, 85, 87, 88, 90, 91, 93, 96, 97, 99, 102, 105, 107, 108, 109, 110, 111, 115, 116, 121, 125, 128, 131, 136, 140, 143, 144, 149, 150, 156, 166, 167, 171, 172, 173, 176, 177, 178, 181, 184, 185, 186, 187, 193, 194, 195, 198, 199, 200, 203, 205, 206, 207, 209, 210, 216, 217, 220, 222, 223, 224, 227, 229, 230, 232, 233, 236, 237, 240, 244, 245, 248, 250, 251, 252, 255, 257, 258, 260, 261, 263, 266, 267, 269, 272, 275, 277, 278, 279, 280, 281, 285, 286, 291, 294, 296, 298, 299, 301, 305, 312, 313, 318, 319, 325, 334, 335, 344, 347, 348, 351, 353, 354, 355, 356, 357, 360, 362, 363, 365, 366, 368, 371, 372, 374, 377, 380, 
382, 383, 384, 385, 386, 390, 391, 396, 399, 401, 404, 409, 413, 416, 417, 422, 423, 429, 448, 451, 452, 455, 458, 459, 460, 461, 462, 465, 467, 468, 470, 471, 473, 476, 477, 479, 482, 485, 487, 488, 489, 490, 491, 495, 496, 501, 505, 508, 511, 516, 520, 523, 524, 529, 530, 536, 546, 547, 550, 551, 552, 553, 555, 557, 558, 559, 560], "develop": [2, 4, 5, 9, 10, 12, 26, 29, 32, 41, 45, 47, 48, 54, 56, 58, 59, 60, 68, 71, 72, 77, 79, 82, 111, 115, 173, 184, 185, 195, 206, 207, 217, 220, 223, 230, 233, 245, 248, 251, 258, 281, 285, 299, 344, 347, 348, 354, 386, 390, 448, 451, 452, 457, 459, 462, 491, 495], "driven": 2, "organis": 2, "For": [2, 3, 4, 5, 8, 12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 51, 54, 63, 71, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 89, 96, 99, 101, 105, 109, 110, 111, 115, 116, 119, 128, 130, 132, 137, 140, 141, 144, 146, 156, 159, 164, 169, 173, 175, 176, 177, 178, 184, 185, 187, 190, 197, 198, 199, 200, 206, 207, 209, 210, 213, 220, 221, 222, 223, 224, 230, 232, 233, 236, 237, 241, 248, 249, 250, 251, 252, 257, 258, 259, 266, 269, 271, 275, 279, 280, 281, 285, 286, 289, 296, 298, 299, 301, 306, 309, 310, 313, 315, 325, 328, 333, 334, 335, 340, 347, 348, 350, 353, 354, 355, 356, 357, 362, 363, 364, 371, 374, 376, 380, 384, 385, 386, 390, 391, 394, 401, 403, 405, 410, 413, 414, 417, 419, 429, 432, 437, 443, 451, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 469, 476, 479, 481, 485, 489, 490, 491, 495, 496, 499, 508, 510, 512, 517, 520, 521, 524, 526, 536, 539, 544, 550, 551, 552, 553, 557, 559], "distribut": [2, 3, 6, 7, 8, 9, 12, 32, 41, 42, 43, 45, 48, 49, 54, 58, 68, 71, 72, 80, 81, 82, 83, 88, 128, 134, 165, 179, 184, 185, 187, 199, 201, 206, 207, 210, 220, 223, 225, 230, 233, 237, 248, 251, 253, 258, 296, 334, 335, 344, 347, 348, 355, 356, 357, 358, 363, 401, 438, 448, 451, 452, 460, 461, 462, 463, 468, 508, 545], "unsuit": [2, 80, 200, 224, 252, 355, 460], "would": [2, 5, 9, 19, 20, 32, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 55, 65, 66, 72, 78, 79, 80, 82, 87, 91, 94, 100, 102, 105, 109, 110, 111, 115, 120, 121, 123, 127, 130, 133, 137, 140, 144, 158, 164, 176, 177, 178, 183, 185, 187, 192, 198, 199, 200, 205, 207, 210, 215, 222, 223, 224, 229, 232, 233, 237, 243, 250, 251, 252, 257, 261, 264, 270, 272, 275, 279, 280, 281, 285, 290, 291, 298, 299, 302, 306, 313, 327, 333, 335, 342, 348, 353, 354, 355, 357, 362, 366, 369, 375, 377, 380, 384, 385, 386, 390, 395, 396, 403, 406, 410, 413, 417, 431, 437, 445, 446, 451, 452, 458, 459, 460, 462, 467, 471, 474, 480, 482, 485, 489, 490, 491, 495, 500, 501, 503, 507, 510, 513, 517, 520, 524, 538, 544, 559], "have": [2, 3, 8, 10, 11, 12, 17, 18, 19, 20, 22, 25, 26, 27, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 58, 63, 66, 68, 71, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 89, 91, 93, 94, 100, 102, 105, 108, 109, 110, 111, 115, 118, 119, 120, 121, 123, 126, 127, 128, 130, 132, 134, 136, 137, 140, 144, 145, 146, 148, 152, 154, 156, 158, 159, 162, 164, 169, 171, 173, 175, 176, 177, 178, 184, 185, 187, 190, 193, 195, 197, 198, 199, 200, 206, 207, 209, 210, 213, 216, 217, 220, 221, 222, 223, 224, 230, 232, 233, 236, 237, 241, 244, 245, 248, 249, 250, 251, 252, 258, 259, 261, 263, 264, 270, 272, 275, 278, 279, 280, 281, 285, 289, 290, 291, 295, 296, 298, 299, 301, 305, 306, 313, 314, 315, 323, 327, 331, 333, 334, 335, 340, 344, 347, 348, 350, 353, 354, 355, 356, 357, 362, 363, 364, 366, 368, 369, 375, 377, 380, 383, 384, 385, 386, 390, 394, 395, 396, 400, 401, 
403, 405, 409, 410, 413, 417, 418, 419, 425, 427, 431, 435, 437, 443, 446, 448, 451, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 469, 471, 473, 474, 480, 482, 485, 488, 489, 490, 491, 495, 499, 500, 501, 503, 506, 507, 508, 510, 512, 516, 517, 520, 524, 525, 526, 528, 532, 534, 536, 538, 539, 542, 544, 549, 551, 555, 556, 559, 561, 562, 563], "agreement": 2, "across": [2, 3, 8, 33, 47, 48, 49, 54, 71, 72, 74, 77, 78, 79, 80, 81, 82, 149, 150, 178, 187, 197, 200, 207, 210, 221, 223, 224, 233, 237, 249, 251, 252, 299, 318, 319, 334, 335, 347, 348, 350, 354, 355, 356, 357, 422, 423, 451, 452, 454, 457, 458, 459, 460, 461, 462, 529, 530], "altern": [2, 11, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 49, 68, 72, 74, 78, 82, 86, 91, 102, 121, 136, 149, 150, 175, 182, 185, 187, 197, 204, 207, 210, 221, 228, 233, 237, 249, 256, 261, 272, 291, 298, 335, 344, 350, 353, 357, 361, 366, 377, 396, 409, 422, 423, 448, 452, 454, 458, 462, 466, 471, 482, 501, 516, 529, 530, 549, 550, 551, 554, 556, 559, 563], "tradit": [2, 5, 8, 33, 47, 48, 54, 58, 72, 78, 80, 177, 185, 199, 207, 223, 233, 251, 252, 298, 348, 353, 355, 452, 458, 460, 559], "allow": [2, 3, 5, 7, 8, 9, 12, 14, 16, 18, 19, 20, 22, 25, 27, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 54, 66, 68, 71, 72, 74, 78, 79, 80, 81, 82, 83, 84, 87, 88, 91, 97, 100, 102, 104, 105, 107, 109, 110, 111, 115, 119, 120, 121, 122, 123, 125, 127, 128, 131, 140, 141, 144, 146, 156, 161, 164, 169, 173, 176, 177, 178, 184, 185, 187, 190, 195, 197, 198, 199, 200, 206, 207, 210, 213, 217, 220, 221, 222, 223, 224, 229, 230, 232, 233, 237, 241, 245, 248, 249, 250, 251, 252, 254, 257, 258, 261, 267, 270, 272, 274, 275, 277, 279, 280, 281, 285, 289, 290, 291, 292, 294, 296, 299, 313, 315, 325, 330, 333, 334, 335, 340, 344, 347, 348, 350, 354, 355, 356, 357, 358, 359, 362, 363, 366, 372, 375, 377, 379, 380, 382, 384, 385, 386, 390, 394, 395, 396, 397, 399, 401, 404, 413, 414, 417, 419, 429, 434, 437, 446, 448, 451, 452, 454, 458, 459, 460, 461, 462, 463, 464, 467, 468, 471, 477, 480, 482, 484, 485, 487, 489, 490, 491, 495, 499, 500, 501, 502, 503, 505, 507, 508, 511, 520, 521, 524, 526, 536, 541, 544, 557, 559, 561, 562], "uniqu": [2, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 49, 54, 67, 71, 72, 74, 77, 78, 79, 80, 81, 82, 86, 87, 88, 98, 112, 128, 131, 140, 144, 151, 164, 171, 175, 176, 178, 182, 183, 184, 185, 187, 193, 197, 198, 200, 204, 205, 206, 207, 208, 210, 216, 220, 221, 222, 223, 224, 228, 229, 230, 233, 235, 237, 244, 248, 249, 250, 251, 252, 256, 257, 258, 268, 282, 296, 298, 299, 300, 313, 320, 333, 334, 335, 343, 347, 348, 350, 353, 354, 355, 356, 357, 361, 362, 363, 373, 387, 401, 404, 413, 417, 424, 437, 447, 451, 452, 454, 457, 458, 459, 460, 461, 462, 466, 467, 468, 478, 492, 508, 511, 520, 524, 531, 544], "name": [2, 5, 10, 12, 14, 16, 18, 19, 20, 21, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 55, 58, 62, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 190, 192, 193, 194, 195, 197, 198, 199, 200, 201, 203, 
204, 205, 206, 207, 208, 209, 210, 211, 213, 215, 216, 217, 218, 220, 221, 222, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 238, 240, 241, 243, 244, 245, 246, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 339, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 549, 550, 551, 552, 553, 555, 556, 557, 558, 559, 560, 561, 562, 563], "thi": [2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 45, 46, 47, 48, 49, 50, 51, 54, 60, 63, 65, 66, 67, 68, 71, 72, 74, 77, 78, 79, 80, 81, 82, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 96, 97, 99, 100, 101, 102, 104, 105, 106, 107, 108, 109, 110, 111, 113, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 136, 137, 138, 140, 141, 144, 146, 148, 149, 150, 151, 152, 153, 154, 155, 156, 158, 159, 160, 161, 162, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 178, 181, 182, 183, 184, 185, 186, 187, 190, 192, 193, 194, 195, 197, 198, 199, 200, 203, 204, 205, 206, 207, 208, 209, 210, 213, 215, 216, 217, 220, 221, 222, 223, 224, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 241, 243, 244, 245, 248, 249, 250, 251, 252, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 266, 267, 269, 270, 271, 272, 273, 274, 275, 277, 278, 279, 280, 281, 283, 285, 286, 289, 290, 291, 292, 293, 294, 296, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 310, 313, 315, 317, 318, 319, 320, 321, 322, 323, 324, 325, 327, 328, 329, 330, 331, 333, 334, 335, 340, 342, 343, 344, 347, 348, 350, 353, 354, 355, 356, 357, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 371, 372, 374, 375, 376, 377, 379, 380, 381, 382, 383, 384, 385, 386, 388, 390, 391, 392, 394, 395, 396, 397, 398, 399, 401, 402, 403, 404, 405, 406, 407, 409, 410, 411, 413, 414, 417, 419, 421, 422, 423, 424, 425, 426, 427, 428, 429, 431, 432, 433, 434, 435, 437, 438, 443, 445, 446, 447, 448, 451, 452, 454, 457, 458, 459, 460, 461, 462, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 476, 477, 479, 480, 481, 482, 484, 485, 486, 487, 488, 489, 490, 491, 493, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 507, 508, 509, 510, 511, 512, 513, 514, 516, 
517, 518, 520, 521, 524, 526, 528, 529, 530, 531, 532, 533, 534, 535, 536, 538, 539, 540, 541, 542, 544, 545, 546, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 561, 562, 563], "independ": [2, 47, 48, 49, 50, 56, 72, 77, 79, 80, 82, 83, 87, 124, 177, 178, 179, 181, 183, 185, 199, 200, 201, 203, 205, 207, 223, 224, 225, 227, 229, 233, 251, 252, 253, 255, 257, 293, 299, 348, 354, 355, 358, 362, 398, 452, 457, 459, 460, 462, 463, 467, 504], "depend": [2, 9, 12, 13, 16, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 65, 66, 72, 78, 79, 80, 81, 82, 94, 103, 105, 108, 109, 110, 111, 115, 128, 133, 134, 146, 164, 172, 177, 178, 185, 187, 194, 199, 200, 207, 210, 220, 223, 224, 231, 232, 233, 237, 251, 252, 264, 273, 275, 278, 279, 280, 281, 285, 296, 298, 299, 315, 333, 334, 335, 342, 348, 353, 354, 355, 356, 357, 369, 378, 380, 383, 384, 385, 386, 390, 401, 419, 437, 445, 446, 452, 458, 459, 460, 461, 462, 474, 483, 485, 488, 489, 490, 491, 495, 508, 513, 526, 544, 556, 557], "where": [2, 4, 7, 12, 18, 19, 21, 29, 36, 37, 39, 47, 48, 49, 51, 54, 55, 65, 71, 72, 74, 78, 79, 80, 81, 82, 91, 93, 96, 99, 101, 102, 105, 109, 110, 111, 115, 116, 121, 128, 132, 137, 140, 148, 156, 164, 166, 167, 176, 178, 185, 186, 187, 192, 197, 198, 199, 200, 207, 209, 210, 215, 220, 221, 222, 223, 224, 232, 233, 236, 237, 243, 248, 249, 250, 251, 252, 261, 263, 266, 269, 271, 272, 275, 279, 280, 281, 285, 286, 291, 296, 298, 299, 301, 325, 333, 334, 335, 342, 347, 348, 350, 353, 354, 355, 356, 357, 366, 368, 371, 374, 376, 377, 380, 384, 385, 386, 390, 391, 396, 401, 405, 413, 429, 437, 445, 451, 452, 454, 458, 459, 460, 461, 462, 471, 473, 476, 479, 481, 482, 485, 489, 490, 491, 495, 496, 501, 508, 512, 517, 520, 528, 536, 544, 546, 547, 558, 559], "us": [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17, 23, 25, 26, 27, 28, 29, 31, 32, 33, 34, 37, 39, 40, 42, 45, 47, 48, 49, 55, 57, 58, 59, 60, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 118, 119, 120, 121, 122, 124, 125, 128, 130, 131, 133, 134, 135, 137, 138, 140, 141, 142, 144, 145, 146, 148, 149, 150, 152, 154, 156, 157, 158, 159, 161, 162, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 178, 182, 183, 184, 185, 187, 190, 192, 193, 194, 195, 197, 198, 199, 200, 204, 205, 206, 207, 208, 210, 213, 215, 216, 217, 218, 220, 221, 222, 223, 224, 228, 229, 230, 232, 233, 235, 237, 241, 243, 244, 245, 246, 248, 249, 250, 251, 252, 256, 257, 258, 259, 260, 261, 263, 264, 266, 267, 268, 269, 270, 271, 272, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 294, 296, 298, 299, 300, 302, 303, 306, 307, 309, 310, 311, 313, 315, 317, 318, 319, 321, 323, 326, 327, 328, 331, 333, 334, 335, 336, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 360, 361, 362, 363, 364, 365, 366, 368, 369, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 393, 394, 395, 396, 397, 398, 399, 401, 403, 404, 406, 407, 408, 410, 411, 413, 414, 415, 417, 418, 419, 421, 422, 423, 425, 427, 429, 430, 431, 432, 435, 437, 438, 439, 440, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 498, 499, 
500, 501, 502, 504, 505, 508, 510, 511, 513, 514, 515, 517, 518, 520, 521, 522, 524, 525, 526, 528, 529, 530, 532, 534, 536, 537, 538, 539, 541, 542, 544, 545, 546, 547, 549, 550, 551, 552, 553, 554, 556, 557, 558, 559, 560, 561, 562, 563], "multipl": [2, 4, 5, 8, 9, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 63, 68, 72, 74, 79, 80, 81, 82, 87, 88, 89, 93, 94, 97, 101, 105, 107, 109, 110, 118, 119, 125, 128, 132, 134, 144, 146, 169, 172, 173, 175, 177, 183, 184, 185, 187, 190, 194, 195, 197, 199, 200, 205, 206, 207, 209, 210, 213, 217, 220, 221, 223, 224, 229, 230, 232, 233, 236, 237, 241, 245, 249, 251, 252, 257, 258, 259, 263, 264, 267, 271, 275, 277, 279, 280, 289, 294, 296, 299, 301, 313, 315, 334, 335, 340, 344, 348, 350, 354, 355, 356, 357, 362, 363, 364, 368, 369, 372, 376, 380, 382, 384, 385, 394, 399, 401, 405, 417, 419, 443, 448, 452, 454, 459, 460, 461, 462, 467, 468, 469, 473, 474, 477, 481, 485, 487, 489, 490, 498, 499, 505, 508, 512, 524, 526, 560], "portabl": [2, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 141, 187, 210, 237, 310, 414, 521], "those": [2, 4, 5, 10, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 55, 69, 72, 79, 80, 81, 82, 85, 87, 93, 96, 97, 99, 104, 105, 107, 114, 116, 122, 125, 136, 141, 164, 177, 178, 181, 183, 185, 187, 199, 200, 203, 205, 207, 210, 218, 223, 224, 227, 229, 233, 237, 246, 251, 252, 255, 257, 263, 266, 267, 269, 274, 275, 277, 284, 286, 292, 294, 297, 299, 305, 310, 334, 345, 348, 354, 355, 356, 357, 360, 362, 368, 371, 372, 374, 379, 380, 382, 389, 391, 397, 399, 409, 414, 437, 449, 452, 459, 460, 461, 462, 465, 467, 473, 476, 477, 479, 484, 485, 487, 494, 496, 502, 505, 516, 521, 544, 563], "exclus": [2, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 79, 172, 194, 207, 233, 299, 354, 459], "enabl": [2, 7, 8, 9, 12, 14, 16, 18, 19, 20, 22, 25, 26, 31, 32, 35, 36, 37, 38, 39, 40, 43, 44, 47, 48, 49, 54, 63, 66, 67, 68, 71, 72, 74, 78, 79, 80, 82, 87, 88, 90, 91, 96, 99, 102, 103, 105, 109, 110, 111, 115, 116, 121, 128, 136, 137, 144, 149, 150, 152, 156, 159, 161, 162, 169, 171, 172, 173, 177, 178, 181, 183, 184, 185, 187, 190, 193, 194, 195, 197, 199, 200, 203, 205, 206, 207, 210, 213, 216, 217, 220, 221, 223, 224, 227, 229, 230, 231, 232, 233, 237, 241, 244, 245, 248, 249, 251, 252, 255, 257, 258, 260, 261, 263, 272, 273, 275, 279, 280, 281, 285, 291, 296, 298, 299, 305, 306, 313, 321, 331, 335, 340, 343, 344, 347, 348, 350, 353, 354, 355, 357, 362, 363, 365, 366, 368, 377, 378, 380, 384, 385, 386, 390, 396, 401, 409, 410, 417, 422, 423, 425, 429, 435, 443, 446, 447, 448, 451, 452, 454, 458, 459, 460, 462, 467, 468, 470, 471, 476, 479, 482, 483, 485, 489, 490, 491, 495, 496, 501, 508, 516, 517, 524, 529, 530, 532, 536, 539, 541, 542, 559], "should": [2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 25, 26, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 54, 55, 60, 63, 68, 71, 72, 74, 77, 78, 79, 80, 81, 82, 85, 86, 87, 88, 91, 94, 102, 103, 104, 111, 115, 121, 122, 128, 130, 131, 132, 140, 144, 146, 158, 164, 165, 169, 173, 175, 176, 177, 179, 182, 183, 184, 185, 187, 190, 195, 197, 198, 199, 201, 204, 205, 206, 207, 210, 213, 217, 220, 221, 222, 223, 225, 228, 229, 230, 231, 233, 236, 237, 241, 245, 248, 249, 250, 251, 252, 253, 256, 257, 258, 261, 264, 272, 273, 274, 281, 285, 291, 292, 296, 298, 299, 300, 301, 313, 315, 327, 333, 334, 335, 340, 344, 347, 348, 350, 353, 354, 355, 356, 357, 360, 361, 362, 363, 366, 369, 377, 378, 379, 386, 390, 396, 397, 401, 
403, 404, 405, 413, 417, 419, 431, 437, 438, 443, 448, 451, 452, 454, 457, 458, 459, 460, 461, 462, 465, 466, 467, 468, 471, 474, 482, 483, 484, 491, 495, 501, 502, 508, 510, 511, 512, 520, 524, 526, 538, 544, 545, 551, 552, 555, 556, 557, 559, 563], "port": [2, 11, 13, 27, 47, 49, 54, 74, 86, 95, 175, 182, 185, 197, 204, 207, 221, 228, 233, 240, 249, 256, 265, 350, 361, 370, 454, 466, 475], "christoph": 2, "siden": 2, "2012": [2, 183, 187, 188, 211, 238], "01": [2, 16, 35, 36, 37, 38, 39, 49, 66, 173, 195, 217, 231, 446], "internet": [2, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "archiv": [2, 9, 16, 32, 35, 36, 38, 40, 56], "wayback": 2, "machin": [2, 10, 16, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 52, 54, 62, 63, 71, 78, 93, 94, 109, 110, 111, 115, 128, 137, 156, 161, 169, 185, 187, 190, 207, 210, 213, 220, 233, 237, 240, 241, 248, 263, 264, 281, 285, 296, 298, 306, 339, 340, 347, 353, 368, 369, 386, 390, 401, 410, 429, 442, 443, 451, 458, 473, 474, 489, 490, 491, 495, 508, 517, 536, 541], "particular": [2, 11, 32, 47, 54, 55, 72, 74, 79, 80, 88, 132, 175, 177, 184, 185, 186, 197, 199, 206, 207, 209, 221, 222, 223, 230, 233, 236, 249, 250, 251, 258, 299, 301, 348, 350, 354, 355, 363, 405, 413, 452, 454, 459, 460, 468, 512], "legaci": [2, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 35, 36, 38, 43, 44, 48, 54, 72, 78, 79, 80, 82, 83, 85, 92, 93, 94, 103, 104, 105, 108, 113, 118, 122, 128, 137, 162, 179, 181, 185, 187, 200, 201, 203, 207, 210, 223, 224, 225, 227, 231, 232, 233, 237, 251, 252, 253, 255, 273, 274, 275, 283, 292, 296, 298, 299, 306, 331, 348, 353, 354, 355, 357, 358, 360, 378, 379, 380, 388, 397, 401, 410, 435, 452, 458, 459, 460, 462, 463, 465, 472, 473, 474, 483, 484, 485, 488, 493, 498, 502, 508, 517, 542], "still": [2, 11, 18, 19, 20, 22, 26, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 71, 72, 75, 79, 80, 81, 83, 87, 88, 89, 91, 93, 102, 105, 111, 115, 119, 121, 132, 133, 134, 137, 141, 166, 167, 178, 184, 185, 187, 199, 200, 206, 207, 209, 210, 220, 223, 224, 230, 232, 233, 236, 237, 248, 251, 252, 258, 259, 261, 263, 272, 275, 281, 285, 289, 291, 299, 301, 302, 306, 310, 334, 336, 347, 348, 351, 354, 355, 356, 358, 363, 364, 366, 368, 377, 380, 386, 390, 394, 396, 405, 406, 410, 414, 439, 440, 451, 452, 455, 459, 460, 461, 463, 467, 468, 469, 471, 473, 482, 485, 491, 495, 499, 501, 512, 513, 517, 521, 546, 547, 552, 559], "exist": [2, 8, 10, 11, 14, 15, 16, 18, 19, 20, 22, 25, 26, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 55, 58, 66, 68, 72, 75, 78, 79, 80, 81, 82, 86, 88, 90, 91, 92, 93, 94, 96, 97, 98, 99, 101, 102, 103, 104, 105, 107, 109, 110, 111, 112, 115, 116, 121, 122, 125, 128, 131, 134, 135, 137, 140, 154, 155, 156, 164, 169, 173, 176, 177, 178, 182, 184, 185, 187, 190, 195, 198, 199, 200, 204, 206, 207, 208, 210, 213, 217, 222, 223, 224, 228, 230, 232, 233, 235, 237, 241, 245, 250, 251, 252, 256, 258, 260, 261, 262, 263, 264, 266, 267, 268, 269, 271, 272, 274, 275, 277, 279, 280, 281, 282, 285, 286, 291, 292, 294, 296, 298, 299, 300, 303, 304, 306, 323, 324, 325, 333, 334, 335, 344, 348, 351, 353, 354, 355, 356, 357, 361, 363, 365, 366, 367, 368, 369, 371, 372, 373, 374, 376, 377, 378, 379, 380, 382, 384, 385, 386, 387, 390, 391, 396, 397, 399, 401, 404, 407, 408, 410, 413, 427, 428, 429, 437, 446, 448, 452, 455, 458, 459, 460, 461, 462, 466, 468, 470, 471, 472, 473, 474, 476, 477, 478, 479, 481, 482, 483, 484, 485, 487, 489, 490, 491, 492, 495, 496, 501, 502, 505, 508, 511, 514, 515, 517, 520, 534, 535, 536, 544, 550, 
552, 556, 558, 559, 563], "1": [2, 3, 4, 5, 8, 9, 14, 16, 21, 25, 27, 28, 31, 32, 47, 48, 49, 54, 57, 59, 60, 61, 71, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 89, 90, 92, 93, 94, 95, 96, 99, 100, 101, 103, 105, 108, 109, 110, 111, 113, 115, 116, 118, 119, 120, 123, 127, 128, 131, 132, 133, 137, 138, 140, 141, 146, 148, 152, 156, 159, 162, 163, 164, 168, 175, 176, 177, 183, 184, 185, 186, 187, 189, 197, 198, 199, 205, 206, 207, 208, 209, 210, 212, 220, 221, 222, 223, 229, 230, 231, 232, 233, 235, 236, 237, 239, 248, 249, 250, 251, 252, 257, 258, 265, 266, 269, 270, 271, 273, 275, 281, 283, 285, 286, 290, 296, 298, 299, 300, 301, 315, 317, 328, 332, 333, 334, 335, 338, 347, 348, 350, 353, 354, 355, 356, 357, 362, 363, 368, 370, 371, 374, 375, 376, 378, 380, 386, 388, 390, 391, 395, 401, 404, 405, 413, 419, 421, 429, 432, 436, 437, 451, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 469, 470, 472, 473, 474, 475, 476, 479, 480, 481, 483, 485, 488, 489, 490, 491, 493, 495, 496, 498, 499, 500, 503, 507, 508, 511, 512, 513, 517, 518, 520, 521, 526, 528, 532, 536, 539, 542, 543, 544, 548, 556, 561, 562, 564], "28": [2, 36, 75, 80, 87, 134, 140, 169, 172, 178, 181, 183, 185, 186, 187, 188, 190, 194, 199, 200, 203, 205, 209, 211, 213, 220, 224, 227, 229, 236, 238, 252, 257, 351, 355, 362, 455, 460, 467, 555], "zpool": [2, 4, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 58, 67, 68, 72, 75, 76, 77, 78, 79, 81, 82, 83, 84, 87, 88, 90, 91, 93, 102, 103, 109, 110, 111, 115, 121, 124, 128, 132, 165, 171, 172, 173, 174, 176, 177, 179, 180, 183, 184, 185, 186, 193, 194, 195, 196, 198, 199, 201, 202, 205, 206, 207, 209, 216, 217, 219, 222, 223, 225, 226, 229, 230, 231, 233, 236, 244, 245, 247, 250, 251, 253, 254, 257, 258, 260, 261, 263, 272, 273, 279, 280, 281, 285, 291, 293, 296, 298, 299, 301, 334, 335, 343, 344, 348, 351, 352, 353, 354, 356, 357, 358, 359, 362, 363, 365, 366, 368, 377, 378, 384, 385, 386, 390, 396, 398, 401, 405, 438, 447, 448, 452, 455, 456, 457, 458, 459, 461, 462, 463, 464, 467, 468, 470, 471, 473, 482, 483, 489, 490, 491, 495, 501, 504, 508, 512, 545, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "7": [2, 5, 8, 9, 12, 32, 47, 48, 49, 54, 55, 59, 60, 61, 67, 68, 69, 72, 74, 85, 87, 90, 91, 92, 93, 96, 97, 99, 100, 101, 102, 103, 104, 107, 109, 110, 111, 115, 116, 117, 118, 120, 121, 122, 123, 124, 125, 127, 128, 133, 134, 137, 140, 142, 144, 146, 148, 152, 154, 157, 158, 159, 160, 161, 162, 164, 168, 173, 175, 176, 183, 185, 187, 195, 197, 198, 199, 205, 207, 210, 217, 218, 221, 222, 223, 229, 231, 233, 237, 245, 246, 249, 250, 251, 257, 273, 296, 299, 333, 343, 344, 345, 348, 350, 360, 362, 365, 366, 367, 368, 371, 372, 374, 375, 376, 377, 378, 379, 382, 384, 385, 386, 390, 391, 392, 393, 395, 396, 397, 398, 399, 401, 406, 407, 410, 413, 415, 417, 421, 425, 427, 430, 431, 432, 433, 434, 435, 437, 441, 447, 448, 449, 452, 454, 465, 467, 470, 471, 472, 473, 476, 477, 479, 480, 481, 482, 483, 484, 487, 489, 490, 491, 495, 496, 497, 498, 500, 501, 502, 503, 504, 505, 507, 508, 513, 514, 517, 520, 522, 524, 526, 528, 532, 534, 537, 538, 539, 540, 541, 542, 544, 548, 561, 562], "man": [2, 4, 10, 11, 18, 33, 48, 54, 59, 60, 171, 172, 176, 181, 182, 185, 186, 193, 194, 198, 203, 204, 207, 209, 210, 216, 222, 227, 228, 233, 236, 237, 244, 250, 255, 296, 301, 309, 333, 335, 559], "page": [2, 4, 5, 7, 8, 10, 11, 12, 14, 16, 18, 19, 20, 22, 24, 25, 28, 29, 30, 31, 32, 33, 35, 38, 42, 43, 44, 49, 52, 54, 59, 60, 62, 71, 72, 82, 
88, 105, 133, 134, 142, 144, 148, 154, 157, 158, 164, 171, 172, 173, 176, 181, 182, 184, 185, 186, 192, 193, 194, 195, 198, 203, 204, 206, 207, 209, 210, 215, 216, 217, 220, 222, 223, 227, 228, 230, 232, 233, 236, 237, 240, 243, 244, 245, 248, 250, 251, 255, 258, 275, 296, 301, 302, 303, 306, 309, 311, 313, 317, 323, 326, 327, 333, 335, 339, 347, 348, 357, 363, 380, 406, 407, 415, 417, 421, 427, 430, 431, 437, 442, 451, 452, 462, 468, 485, 513, 514, 522, 524, 528, 534, 537, 538, 544, 559], "5": [2, 3, 21, 25, 27, 32, 47, 48, 49, 51, 54, 55, 57, 68, 71, 72, 78, 79, 81, 83, 85, 86, 89, 96, 99, 103, 105, 116, 117, 119, 128, 133, 134, 137, 140, 146, 156, 161, 164, 168, 171, 173, 179, 181, 182, 185, 187, 188, 189, 193, 195, 201, 203, 204, 207, 208, 210, 211, 212, 216, 217, 218, 225, 227, 228, 231, 232, 233, 235, 237, 238, 239, 244, 245, 246, 253, 255, 256, 260, 273, 275, 279, 280, 281, 285, 287, 296, 298, 299, 300, 306, 309, 311, 321, 326, 331, 333, 334, 335, 338, 344, 347, 348, 353, 354, 356, 358, 360, 361, 378, 380, 392, 401, 413, 429, 437, 441, 448, 451, 452, 458, 459, 461, 463, 465, 466, 469, 476, 479, 483, 485, 496, 497, 499, 508, 513, 517, 520, 526, 536, 541, 544, 548], "matrix": 2, "flagread": 2, "onlycompatibleopenzf": 2, "linux": [2, 4, 8, 9, 10, 11, 13, 18, 19, 20, 22, 23, 25, 32, 33, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 47, 48, 53, 57, 58, 59, 60, 66, 68, 71, 72, 74, 78, 79, 80, 82, 83, 88, 89, 104, 109, 110, 119, 122, 128, 136, 144, 149, 150, 164, 171, 172, 173, 175, 176, 179, 181, 184, 185, 186, 192, 193, 194, 195, 197, 198, 199, 200, 201, 203, 205, 206, 207, 208, 209, 210, 215, 216, 217, 218, 220, 221, 224, 225, 227, 229, 230, 233, 235, 236, 237, 245, 248, 249, 251, 252, 253, 255, 258, 259, 274, 279, 280, 289, 292, 296, 299, 310, 313, 333, 335, 344, 347, 348, 350, 354, 355, 357, 358, 363, 364, 379, 384, 385, 394, 397, 401, 409, 417, 422, 423, 437, 446, 448, 451, 452, 454, 458, 459, 460, 462, 463, 468, 469, 484, 489, 490, 499, 502, 508, 516, 524, 529, 530, 544], "freebsd": [2, 8, 41, 47, 48, 49, 54, 58, 59, 60, 72, 79, 80, 100, 120, 144, 173, 251, 252, 270, 290, 299, 313, 348, 354, 355, 375, 395, 417, 452, 459, 460, 480, 500, 524], "13": [2, 5, 27, 32, 35, 47, 49, 54, 79, 91, 102, 121, 128, 146, 164, 185, 187, 207, 210, 233, 237, 261, 272, 291, 296, 299, 333, 354, 366, 377, 396, 401, 437, 459, 471, 482, 501, 508, 526, 544, 561, 562], "pre": [2, 8, 9, 27, 48, 66, 72, 75, 80, 200, 223, 224, 251, 252, 348, 351, 355, 446, 452, 455, 460], "openzfsillumosjoyentnetbsdnexentaomnio": 2, "ceopenzf": 2, "x": [2, 3, 8, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 58, 62, 63, 66, 68, 72, 78, 79, 81, 87, 109, 110, 111, 115, 144, 159, 169, 172, 177, 183, 185, 187, 190, 194, 199, 205, 207, 210, 213, 223, 229, 233, 237, 240, 241, 251, 257, 279, 280, 298, 299, 313, 328, 334, 339, 340, 348, 353, 354, 356, 362, 384, 385, 417, 432, 442, 443, 446, 452, 458, 459, 461, 467, 489, 490, 491, 495, 524, 539, 550, 551, 552, 553, 555, 556, 557, 558, 559, 561, 562, 563], "0": [2, 5, 8, 9, 11, 14, 16, 18, 19, 20, 21, 22, 25, 27, 31, 32, 33, 34, 35, 36, 37, 38, 39, 43, 44, 45, 46, 47, 48, 49, 50, 54, 55, 57, 59, 60, 61, 63, 65, 66, 67, 68, 71, 72, 74, 79, 80, 81, 82, 87, 88, 96, 99, 105, 106, 116, 128, 130, 131, 132, 134, 140, 146, 152, 159, 163, 164, 166, 167, 169, 171, 173, 175, 176, 177, 183, 184, 185, 186, 187, 190, 192, 193, 195, 197, 198, 199, 205, 206, 207, 209, 210, 213, 215, 216, 217, 220, 221, 222, 223, 229, 230, 232, 233, 236, 237, 241, 243, 244, 245, 248, 249, 250, 
251, 257, 258, 275, 276, 296, 299, 300, 301, 306, 332, 333, 334, 335, 340, 342, 343, 344, 347, 348, 350, 354, 356, 357, 362, 380, 381, 401, 403, 404, 405, 413, 436, 437, 443, 445, 446, 447, 448, 451, 452, 454, 459, 460, 461, 462, 467, 468, 476, 479, 485, 486, 496, 508, 510, 511, 512, 520, 526, 532, 539, 543, 544, 546, 547, 550, 551, 552, 553, 555, 556, 557, 558, 559, 561, 562, 563], "6": [2, 5, 21, 25, 32, 47, 48, 49, 54, 55, 57, 59, 60, 61, 72, 74, 78, 79, 81, 96, 99, 116, 128, 134, 137, 140, 164, 175, 176, 185, 187, 197, 198, 207, 210, 221, 222, 223, 233, 237, 249, 250, 251, 296, 299, 333, 348, 350, 354, 401, 413, 437, 452, 454, 458, 459, 461, 476, 479, 496, 508, 517, 520, 544, 559], "110": [2, 72, 251, 348, 452], "130": 2, "8": [2, 4, 5, 8, 9, 11, 16, 25, 28, 31, 32, 37, 39, 48, 49, 54, 55, 59, 60, 61, 65, 67, 68, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 168, 171, 172, 175, 176, 177, 178, 189, 192, 193, 194, 195, 197, 198, 199, 200, 212, 215, 216, 217, 220, 221, 222, 223, 224, 243, 244, 245, 248, 249, 250, 251, 252, 338, 342, 343, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 441, 445, 447, 448, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 548, 561, 562], "62": 2, "72": [2, 10, 12, 59, 60, 564], "152": [2, 48, 210, 237], "2": [2, 3, 5, 8, 9, 11, 12, 14, 16, 25, 26, 27, 31, 32, 45, 47, 48, 49, 54, 59, 60, 61, 62, 65, 66, 68, 71, 72, 74, 79, 80, 81, 82, 87, 89, 92, 93, 94, 96, 97, 99, 100, 105, 107, 109, 110, 111, 113, 115, 116, 118, 119, 120, 125, 128, 131, 132, 133, 134, 137, 140, 146, 148, 152, 153, 158, 159, 162, 164, 169, 172, 173, 175, 176, 177, 183, 184, 185, 187, 190, 194, 195, 197, 198, 199, 205, 206, 207, 208, 209, 210, 213, 217, 220, 221, 222, 223, 229, 230, 231, 232, 233, 235, 236, 237, 240, 241, 245, 248, 249, 250, 251, 257, 258, 267, 270, 273, 275, 277, 281, 285, 290, 294, 296, 299, 300, 301, 315, 317, 328, 332, 333, 334, 335, 339, 342, 344, 347, 348, 350, 354, 356, 357, 362, 371, 372, 374, 375, 380, 382, 391, 395, 399, 401, 404, 405, 410, 413, 419, 421, 426, 431, 432, 436, 437, 442, 445, 446, 448, 451, 452, 454, 459, 460, 461, 462, 467, 469, 472, 473, 474, 476, 477, 479, 480, 485, 487, 489, 490, 491, 493, 495, 496, 498, 499, 500, 505, 508, 511, 512, 513, 517, 520, 526, 528, 532, 533, 538, 539, 542, 544, 555, 556, 557, 564], "3master12": 2, "012": 2, "0mastermaster9": 2, "3main4": 2, "fpmasterr151046r151048master2": 2, "02": [2, 66, 185, 446], "22": [2, 36, 37, 40, 41, 128, 156, 185, 207, 233, 296, 401, 508, 536], "3rc4main": 2, "zfsonlinux": [2, 9, 10, 12, 17, 18, 19, 20, 22, 25, 26, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 56, 80, 200, 224, 252, 355, 460], "allocation_classesyesnonoyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesy": 2, "com": [2, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 25, 27, 29, 31, 35, 36, 37, 38, 39, 40, 43, 44, 47, 49, 54, 67, 80, 96, 99, 116, 128, 166, 167, 171, 172, 178, 179, 181, 185, 186, 192, 193, 194, 200, 201, 203, 207, 209, 215, 216, 224, 225, 227, 233, 236, 243, 244, 252, 253, 255, 296, 301, 343, 355, 401, 447, 460, 476, 479, 496, 508, 546, 547], "delphix": [2, 12, 67, 80, 171, 178, 193, 200, 216, 224, 244, 252, 343, 355, 447, 460], "async_destroyyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "blake3nonononononoyesyesnononononononononononoyesyesyesy": 2, "fudosecur": [2, 80, 460], "block_cloningyesnononononoyesyesnononononononononononoyesyesyesy": 2, "datto": [2, 80, 224, 252, 355, 460], "bookmark_v2nononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, 
"bookmark_writtennonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "bookmarksyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "nexenta": 2, "class_of_storageyesnononononononononononononoyesyesnonononononono": 2, "device_rebuildyesnononoyesyesyesyesnononononononononononoyesyesyesy": 2, "device_removalnononoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesy": 2, "draidnononononoyesyesyesnononononononononononoyesyesyesy": 2, "edonrnoyes1yes1yes1yes1yes1yes1yesnonoyesyesnononoyesyesyesyesyesyesyesy": 2, "embedded_datanoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "empty_bpobjyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "enabled_txgyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "encryptionnononoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "extensible_datasetnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "joyent": [2, 80, 178, 200, 224, 252, 355, 460], "filesystem_limitsyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "head_errlognonononononoyesyesnononononononononononoyesyesyesy": 2, "hole_birthnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "open": [2, 7, 8, 10, 12, 14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 54, 56, 58, 67, 72, 79, 80, 81, 87, 91, 102, 121, 126, 133, 140, 144, 146, 148, 158, 159, 171, 176, 178, 185, 187, 193, 198, 199, 200, 207, 210, 216, 222, 223, 224, 233, 237, 244, 250, 251, 252, 261, 272, 291, 295, 299, 302, 313, 315, 317, 327, 328, 334, 343, 348, 354, 355, 356, 366, 377, 396, 400, 406, 413, 417, 419, 421, 431, 432, 447, 452, 459, 460, 461, 467, 471, 482, 501, 506, 513, 520, 524, 526, 528, 538, 539, 549, 550, 551, 552, 553, 555, 559, 563], "large_blocksnoyesyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "large_dnodenonoyesyesyesyesyesyesnoyesyesyesnonononoyesyesyesyesyesyesy": 2, "livelistyesnononoyesyesyesyesnononononononononononoyesyesyesy": 2, "log_spacemapyesnononoyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "lz4_compressnoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "meta_devicesyesnononononononononononononoyesyesnonononononono": 2, "multi_vdev_crash_dumpnonoyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "obsolete_countsyesnonoyesyesyesyesyesyesyesyesyesnononoyesyesyesyesyesyesyesy": 2, "project_quotayesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "raidz_expansionnononononononoyesnonononononononononononoyesyesy": 2, "redacted_datasetsnonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "redaction_bookmarksnonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "redaction_list_spillnononononononoyesnononononononononononoyesyesyesy": 2, "resilver_deferyesnonoyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "sha512nonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "skeinnonoyesyesyesyesyesyesyesyesyesyesyesyesnoyesyesyesyesyesyesyesy": 2, "spacemap_histogramyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesyesy": 2, "spacemap_v2yesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesy": 2, "userobj_accountingyesnoyesyesyesyesyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "vdev_propertiesyesnononononononononononononoyesyesnonononononono": 2, "klarasystem": [2, 80, 460], "vdev_zaps_v2nonononononoyesyesnononononononononononoyesyesyesy": 2, "wbcnononononononononononononononoyesnonononononono": 
2, "zilsaxattryesnononononoyesyesnonoyesyesnonononoyesyesyesyesyesyesy": 2, "zpool_checkpointyesnonoyesyesyesyesyesyesyesyesyesnonononoyesyesyesyesyesyesy": 2, "zstd_compressnonononoyesyesyesyesnononononononononononoyesyesyesy": 2, "up": [2, 3, 5, 8, 10, 12, 14, 16, 19, 20, 23, 25, 27, 28, 31, 33, 36, 37, 38, 39, 43, 44, 48, 49, 50, 51, 55, 58, 68, 71, 72, 75, 77, 79, 81, 82, 87, 93, 106, 111, 115, 124, 133, 136, 138, 146, 148, 149, 150, 156, 158, 159, 162, 164, 173, 177, 185, 187, 195, 199, 205, 207, 210, 217, 220, 223, 229, 233, 237, 245, 248, 251, 257, 263, 281, 285, 293, 299, 302, 307, 315, 317, 325, 327, 328, 331, 333, 335, 344, 347, 348, 351, 354, 356, 357, 362, 368, 381, 386, 390, 398, 406, 409, 411, 419, 421, 422, 423, 429, 431, 432, 435, 437, 448, 451, 452, 455, 457, 459, 461, 462, 467, 473, 486, 491, 495, 504, 513, 516, 518, 526, 528, 529, 530, 536, 538, 539, 542, 544, 559], "releas": [2, 5, 8, 12, 23, 25, 26, 27, 31, 39, 41, 43, 48, 49, 54, 56, 58, 59, 60, 71, 72, 75, 79, 82, 84, 88, 89, 98, 119, 128, 137, 184, 185, 206, 207, 220, 230, 233, 237, 248, 251, 254, 258, 259, 268, 289, 296, 299, 347, 348, 351, 354, 357, 359, 363, 364, 373, 394, 401, 410, 451, 452, 455, 459, 462, 464, 468, 469, 478, 499, 508, 517], "tabl": [2, 14, 16, 25, 28, 31, 48, 72, 78, 81, 87, 91, 102, 105, 121, 152, 177, 181, 183, 199, 203, 205, 223, 227, 229, 232, 233, 237, 251, 255, 257, 261, 272, 275, 291, 321, 334, 348, 356, 362, 366, 377, 380, 396, 425, 452, 458, 461, 467, 471, 482, 485, 501, 532], "gener": [2, 5, 8, 9, 11, 12, 14, 16, 18, 19, 20, 22, 23, 25, 28, 29, 31, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 51, 52, 58, 62, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 78, 79, 80, 81, 82, 84, 86, 87, 88, 89, 105, 109, 110, 111, 115, 119, 124, 128, 131, 140, 146, 151, 164, 166, 167, 169, 175, 176, 177, 182, 183, 184, 185, 187, 190, 197, 198, 199, 200, 204, 205, 206, 207, 208, 210, 213, 218, 220, 221, 222, 223, 224, 226, 228, 229, 230, 232, 233, 235, 237, 240, 241, 243, 244, 245, 246, 248, 249, 250, 251, 252, 254, 256, 257, 258, 259, 275, 279, 280, 281, 285, 289, 293, 296, 298, 299, 300, 309, 315, 320, 333, 335, 336, 339, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 359, 361, 362, 363, 364, 380, 384, 385, 386, 390, 394, 398, 401, 404, 413, 419, 424, 437, 439, 440, 442, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 458, 459, 460, 461, 462, 464, 466, 467, 468, 469, 485, 489, 490, 491, 495, 499, 504, 508, 511, 520, 526, 531, 544, 546, 547], "pars": [2, 54, 66, 75, 79, 86, 87, 96, 99, 116, 182, 183, 185, 204, 205, 207, 228, 229, 233, 256, 257, 266, 269, 286, 351, 354, 361, 362, 371, 374, 391, 446, 455, 459, 466, 467, 476, 479, 496], "manpag": [2, 185, 561, 562], "entir": [2, 5, 8, 11, 12, 37, 39, 47, 48, 49, 54, 71, 72, 79, 80, 81, 89, 91, 102, 103, 105, 109, 110, 119, 121, 140, 177, 178, 185, 199, 200, 207, 210, 220, 223, 224, 232, 233, 237, 248, 251, 252, 259, 261, 272, 275, 279, 280, 289, 291, 299, 309, 334, 347, 348, 354, 355, 356, 364, 366, 377, 378, 380, 384, 385, 394, 396, 413, 451, 452, 459, 460, 461, 469, 471, 482, 483, 485, 489, 490, 499, 501, 520, 554, 556], "good": [2, 9, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 49, 54, 57, 79, 80, 81, 140, 187, 200, 210, 222, 224, 237, 250, 252, 299, 334, 354, 355, 356, 413, 459, 460, 461, 520], "accur": [2, 48, 72, 251, 348, 452], "document": [2, 10, 13, 18, 19, 20, 22, 23, 27, 29, 33, 35, 36, 37, 38, 39, 41, 43, 44, 45, 47, 48, 49, 58, 59, 63, 78, 80, 87, 105, 111, 115, 128, 161, 165, 169, 173, 176, 178, 183, 190, 192, 195, 
198, 200, 205, 207, 213, 215, 217, 222, 224, 229, 232, 233, 237, 241, 243, 245, 250, 252, 257, 275, 281, 285, 330, 340, 355, 362, 380, 386, 390, 401, 434, 438, 443, 458, 460, 467, 485, 491, 495, 508, 541, 545, 549, 557, 560, 561, 562], "last": [2, 10, 12, 32, 36, 37, 38, 39, 48, 72, 75, 79, 80, 82, 94, 109, 110, 111, 115, 133, 140, 144, 146, 148, 156, 158, 159, 162, 172, 176, 177, 178, 185, 187, 194, 198, 199, 200, 207, 210, 222, 223, 224, 233, 237, 250, 251, 252, 264, 279, 280, 281, 285, 302, 313, 315, 317, 325, 327, 328, 331, 335, 348, 351, 355, 357, 369, 384, 385, 386, 390, 406, 413, 417, 419, 421, 429, 431, 432, 435, 452, 455, 459, 460, 462, 474, 489, 490, 491, 495, 513, 520, 524, 526, 528, 536, 538, 539, 542, 560], "updat": [2, 4, 9, 10, 11, 12, 14, 16, 18, 19, 20, 22, 23, 25, 26, 31, 32, 35, 36, 37, 38, 39, 40, 43, 44, 48, 52, 54, 72, 79, 81, 82, 85, 88, 96, 99, 109, 110, 116, 130, 144, 160, 164, 181, 184, 185, 187, 199, 203, 206, 207, 210, 223, 227, 230, 231, 233, 237, 240, 251, 255, 258, 273, 279, 280, 299, 313, 329, 333, 334, 335, 348, 354, 356, 357, 360, 363, 384, 385, 403, 417, 433, 437, 452, 459, 461, 462, 465, 468, 476, 479, 489, 490, 496, 510, 524, 540, 544, 559], "2024": [2, 16, 43, 44, 72, 82, 101, 140, 481], "03": [2, 5, 37, 39, 49, 66, 185, 446], "28t09": 2, "44": [2, 5], "55": [2, 25, 146, 159, 164, 333, 437, 526, 539, 544], "376137z": 2, "compatibility_matrix": 2, "py": [2, 8, 27, 48], "tl": 3, "dr": 3, "effect": [3, 5, 47, 48, 49, 50, 51, 54, 71, 72, 77, 79, 80, 81, 82, 87, 89, 91, 94, 102, 105, 109, 110, 111, 113, 115, 119, 121, 128, 164, 177, 178, 183, 185, 187, 199, 200, 205, 207, 210, 220, 223, 224, 229, 232, 233, 237, 248, 251, 252, 257, 259, 261, 264, 272, 275, 279, 280, 281, 283, 285, 289, 291, 296, 299, 335, 347, 348, 354, 355, 356, 357, 362, 364, 366, 369, 377, 380, 384, 385, 386, 388, 390, 394, 396, 401, 437, 451, 452, 457, 459, 460, 461, 462, 467, 469, 471, 474, 482, 485, 489, 490, 491, 493, 495, 499, 501, 508, 544], "larg": [3, 5, 8, 9, 12, 36, 38, 47, 48, 49, 68, 71, 72, 79, 80, 81, 94, 105, 111, 115, 126, 156, 172, 173, 177, 178, 185, 194, 195, 199, 200, 207, 217, 220, 223, 224, 232, 233, 245, 248, 251, 252, 264, 275, 281, 285, 295, 299, 344, 347, 348, 354, 355, 356, 369, 380, 386, 390, 400, 429, 448, 451, 452, 459, 460, 461, 474, 485, 491, 495, 506, 536], "size": [3, 5, 7, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 48, 54, 62, 65, 68, 71, 72, 77, 79, 80, 81, 82, 87, 91, 93, 102, 105, 111, 115, 121, 128, 133, 137, 140, 146, 148, 152, 154, 159, 164, 165, 166, 167, 172, 173, 176, 177, 178, 183, 185, 187, 192, 194, 195, 198, 199, 200, 205, 207, 210, 215, 217, 220, 222, 223, 224, 229, 232, 233, 237, 240, 243, 245, 248, 250, 251, 252, 257, 261, 263, 272, 275, 281, 285, 291, 299, 306, 315, 317, 323, 328, 333, 334, 335, 339, 342, 344, 347, 348, 354, 355, 356, 357, 362, 366, 368, 377, 380, 386, 390, 396, 401, 410, 413, 419, 421, 425, 427, 432, 437, 438, 442, 445, 448, 451, 452, 457, 459, 460, 461, 462, 467, 471, 473, 482, 485, 491, 495, 501, 508, 513, 517, 520, 526, 528, 532, 534, 539, 544, 545, 546, 547], "sequenti": [3, 5, 48, 52, 72, 80, 81, 134, 154, 156, 199, 223, 251, 252, 303, 323, 348, 355, 356, 407, 427, 429, 452, 460, 461, 514, 534, 536], "workload": [3, 5, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 52, 54, 59, 60, 71, 72, 79, 80, 81, 159, 178, 185, 187, 200, 207, 210, 220, 224, 233, 237, 248, 252, 299, 328, 334, 347, 348, 354, 355, 356, 432, 451, 452, 459, 460, 461, 539], "variat": [3, 72, 187, 210, 223, 237, 251, 334, 348, 
356, 452], "better": [3, 9, 11, 47, 48, 49, 54, 72, 79, 80, 81, 177, 178, 185, 187, 199, 200, 207, 210, 223, 224, 233, 237, 251, 252, 299, 334, 348, 354, 355, 356, 452, 459, 460, 461], "pariti": [3, 4, 5, 48, 49, 65, 68, 72, 77, 79, 80, 81, 82, 134, 173, 187, 192, 195, 199, 207, 210, 215, 217, 223, 233, 237, 243, 245, 251, 299, 334, 335, 342, 344, 348, 354, 355, 356, 357, 445, 448, 452, 457, 459, 460, 461, 462], "elimin": [3, 5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 80, 178, 187, 200, 210, 224, 237, 252, 334, 355, 356, 460], "hole": [3, 47, 48, 72, 79, 80, 81, 82, 91, 102, 121, 177, 178, 187, 199, 200, 210, 223, 224, 233, 237, 251, 252, 261, 272, 291, 299, 334, 335, 348, 354, 355, 356, 357, 366, 377, 396, 452, 459, 460, 461, 462, 471, 482, 501], "inconsist": [3, 47, 48, 54, 72, 81, 87, 144, 177, 181, 183, 187, 199, 203, 205, 210, 223, 227, 229, 237, 251, 255, 257, 313, 334, 348, 356, 362, 417, 452, 461, 467, 524], "power": [3, 14, 16, 25, 28, 31, 37, 39, 48, 49, 52, 54, 72, 77, 78, 79, 80, 81, 82, 136, 149, 150, 159, 164, 178, 185, 187, 194, 200, 207, 210, 223, 224, 233, 237, 251, 252, 298, 299, 334, 335, 348, 353, 354, 355, 356, 357, 409, 422, 423, 432, 437, 452, 457, 458, 459, 460, 461, 462, 516, 529, 530, 539, 544, 563], "loss": [3, 47, 48, 49, 54, 72, 81, 82, 109, 110, 134, 185, 187, 210, 223, 237, 251, 334, 335, 348, 356, 357, 452, 461, 462, 489, 490, 555, 563], "stripe": [3, 5, 47, 48, 72, 79, 81, 177, 187, 199, 207, 210, 223, 233, 237, 251, 299, 334, 348, 354, 356, 452, 459, 461], "within": [3, 47, 48, 63, 66, 69, 72, 78, 79, 80, 81, 82, 87, 88, 89, 98, 100, 109, 110, 112, 113, 119, 120, 123, 127, 128, 129, 133, 136, 137, 138, 141, 144, 146, 148, 164, 169, 177, 178, 183, 184, 185, 187, 190, 199, 200, 205, 206, 207, 210, 213, 223, 224, 229, 230, 233, 237, 241, 251, 252, 257, 258, 259, 268, 270, 279, 280, 282, 283, 289, 290, 296, 297, 298, 299, 305, 306, 307, 310, 313, 315, 317, 333, 334, 335, 340, 345, 348, 353, 354, 355, 356, 357, 362, 363, 364, 373, 375, 384, 385, 387, 388, 394, 395, 401, 402, 409, 410, 414, 417, 419, 421, 437, 443, 446, 449, 452, 458, 459, 460, 461, 462, 467, 468, 469, 478, 480, 489, 490, 492, 493, 499, 500, 503, 507, 508, 509, 513, 516, 517, 518, 521, 524, 526, 528, 544, 553, 555, 556, 563], "group": [3, 5, 8, 22, 25, 31, 35, 36, 37, 38, 39, 48, 49, 51, 54, 55, 66, 68, 72, 79, 80, 81, 82, 87, 88, 89, 97, 105, 107, 119, 125, 128, 132, 134, 137, 177, 178, 184, 185, 186, 187, 199, 200, 206, 207, 209, 210, 223, 224, 230, 232, 233, 236, 237, 251, 252, 258, 259, 267, 275, 277, 289, 294, 296, 299, 301, 306, 334, 335, 344, 348, 354, 355, 356, 357, 363, 364, 372, 380, 382, 394, 399, 401, 405, 410, 446, 448, 452, 459, 460, 461, 462, 467, 468, 469, 477, 485, 487, 499, 505, 508, 512, 517, 563], "A": [3, 4, 5, 7, 8, 12, 14, 16, 18, 19, 20, 22, 25, 29, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 56, 63, 66, 71, 72, 74, 77, 78, 79, 80, 81, 82, 86, 87, 88, 89, 96, 99, 101, 105, 109, 110, 111, 115, 116, 119, 128, 132, 134, 141, 142, 144, 146, 152, 156, 157, 161, 164, 169, 175, 177, 178, 182, 183, 184, 185, 186, 187, 190, 194, 197, 199, 200, 204, 205, 206, 207, 209, 210, 213, 220, 221, 223, 224, 228, 229, 230, 232, 233, 236, 237, 241, 248, 249, 251, 252, 256, 257, 258, 263, 266, 269, 270, 271, 275, 279, 280, 281, 285, 286, 290, 296, 298, 299, 301, 310, 311, 313, 315, 321, 326, 330, 333, 334, 335, 340, 347, 348, 350, 353, 354, 355, 356, 357, 361, 362, 363, 371, 374, 376, 380, 384, 385, 386, 390, 391, 401, 405, 414, 415, 417, 419, 425, 429, 430, 434, 437, 443, 
446, 451, 452, 454, 457, 458, 459, 460, 461, 462, 466, 467, 468, 469, 476, 479, 481, 485, 489, 490, 491, 495, 496, 499, 508, 512, 521, 522, 524, 526, 532, 536, 537, 541, 544, 550, 551, 552, 553, 555, 556, 557, 563], "doubl": [3, 37, 39, 47, 49, 68, 72, 81, 87, 187, 195, 210, 217, 237, 245, 257, 334, 344, 348, 356, 362, 448, 452, 461, 467], "tripl": [3, 49, 81, 187, 210, 237, 334, 356, 461], "mean": [3, 5, 9, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 55, 63, 71, 72, 79, 80, 81, 82, 87, 91, 102, 105, 109, 110, 121, 132, 159, 169, 172, 177, 183, 185, 187, 190, 194, 199, 200, 205, 207, 209, 210, 213, 220, 224, 229, 232, 233, 236, 237, 241, 248, 251, 252, 257, 261, 272, 275, 279, 280, 291, 299, 301, 328, 334, 335, 340, 347, 348, 354, 355, 356, 357, 362, 366, 377, 380, 384, 385, 396, 405, 432, 443, 451, 452, 459, 460, 461, 462, 467, 471, 482, 485, 489, 490, 501, 512, 539, 550, 551], "sustain": [3, 47, 48, 81, 140, 176, 187, 198, 210, 222, 237, 250, 334, 356, 413, 461, 520], "one": [3, 4, 5, 8, 10, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 50, 54, 66, 68, 71, 72, 74, 79, 80, 81, 82, 83, 87, 91, 92, 93, 94, 96, 97, 99, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 118, 121, 125, 126, 128, 130, 132, 136, 140, 145, 148, 154, 155, 156, 161, 163, 164, 165, 169, 172, 173, 175, 177, 178, 185, 186, 187, 190, 194, 195, 197, 199, 200, 207, 209, 210, 213, 217, 220, 221, 222, 223, 224, 231, 232, 233, 236, 237, 241, 245, 248, 249, 250, 251, 252, 261, 263, 266, 267, 269, 270, 271, 272, 273, 275, 276, 277, 279, 280, 281, 284, 285, 286, 290, 291, 294, 295, 296, 299, 301, 305, 314, 323, 324, 325, 330, 332, 333, 334, 335, 344, 347, 348, 350, 354, 355, 356, 357, 358, 366, 368, 371, 372, 374, 376, 377, 378, 380, 381, 382, 384, 385, 386, 389, 390, 391, 396, 399, 400, 401, 403, 405, 409, 413, 418, 427, 428, 429, 434, 436, 437, 438, 446, 448, 451, 452, 454, 459, 460, 461, 462, 463, 467, 471, 472, 473, 474, 476, 477, 479, 481, 482, 483, 485, 486, 487, 488, 489, 490, 491, 493, 494, 495, 496, 498, 501, 505, 506, 508, 510, 512, 516, 520, 525, 528, 534, 535, 536, 541, 543, 544, 545, 550, 551, 554], "two": [3, 8, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 50, 54, 72, 74, 77, 79, 80, 81, 82, 87, 93, 105, 111, 115, 128, 132, 133, 134, 137, 146, 156, 164, 175, 177, 185, 187, 194, 197, 199, 207, 209, 210, 221, 223, 224, 232, 233, 236, 237, 249, 251, 252, 257, 263, 265, 275, 281, 285, 296, 299, 301, 303, 306, 333, 334, 348, 350, 354, 355, 356, 362, 368, 380, 386, 390, 401, 405, 407, 410, 429, 437, 452, 454, 457, 459, 460, 461, 462, 467, 473, 485, 491, 495, 508, 512, 513, 514, 517, 526, 536, 544, 563], "three": [3, 8, 18, 19, 20, 33, 36, 37, 38, 39, 42, 43, 44, 47, 48, 66, 72, 79, 80, 81, 134, 172, 178, 183, 185, 187, 194, 200, 207, 210, 224, 233, 237, 252, 299, 303, 334, 348, 354, 355, 356, 407, 446, 452, 459, 460, 461, 514], "failur": [3, 5, 12, 18, 19, 20, 22, 35, 36, 37, 38, 43, 44, 48, 49, 67, 71, 72, 81, 82, 109, 110, 111, 115, 132, 134, 140, 156, 159, 171, 173, 176, 177, 186, 187, 193, 195, 198, 199, 207, 209, 210, 216, 217, 220, 222, 223, 233, 236, 237, 244, 245, 248, 250, 251, 279, 280, 281, 285, 301, 325, 328, 334, 335, 343, 347, 348, 356, 357, 384, 385, 386, 390, 405, 413, 429, 432, 447, 451, 452, 461, 462, 489, 490, 491, 495, 512, 520, 536, 539, 550, 551, 552, 553, 557, 564], "respect": [3, 21, 47, 48, 49, 66, 72, 77, 79, 81, 96, 99, 109, 110, 111, 115, 116, 128, 159, 177, 185, 187, 199, 207, 210, 223, 233, 
237, 251, 266, 269, 281, 285, 286, 296, 299, 334, 348, 354, 356, 371, 374, 386, 390, 391, 401, 446, 452, 457, 459, 461, 476, 479, 489, 490, 491, 495, 496, 508, 539], "without": [3, 4, 8, 12, 14, 16, 18, 19, 20, 21, 22, 25, 27, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 66, 68, 71, 72, 78, 79, 80, 81, 82, 87, 91, 94, 95, 102, 105, 106, 111, 115, 121, 128, 133, 134, 137, 144, 146, 158, 160, 161, 164, 166, 167, 173, 178, 185, 187, 195, 199, 200, 207, 210, 217, 220, 223, 224, 232, 233, 237, 245, 248, 251, 252, 261, 264, 265, 272, 275, 276, 281, 285, 291, 296, 298, 299, 302, 306, 313, 315, 327, 329, 330, 333, 334, 335, 344, 347, 348, 353, 354, 355, 356, 357, 366, 369, 370, 377, 380, 381, 386, 390, 396, 401, 406, 410, 417, 419, 431, 433, 434, 437, 446, 448, 451, 452, 458, 459, 460, 461, 462, 467, 471, 474, 475, 482, 485, 486, 491, 495, 501, 508, 513, 517, 524, 526, 538, 540, 541, 544, 546, 547, 555], "lose": [3, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 81, 187, 210, 237, 334, 356, 461], "raidz1": [3, 81, 148, 164, 187, 210, 237, 333, 334, 356, 437, 461, 528, 544, 559], "vdev": [3, 6, 7, 8, 19, 20, 21, 36, 38, 43, 44, 47, 49, 51, 54, 68, 72, 74, 77, 79, 80, 81, 82, 86, 87, 132, 133, 134, 137, 140, 142, 144, 146, 148, 152, 153, 157, 158, 159, 161, 163, 164, 173, 175, 176, 177, 182, 183, 186, 187, 195, 197, 198, 199, 200, 204, 205, 207, 209, 210, 217, 221, 222, 223, 224, 228, 229, 233, 236, 237, 245, 249, 250, 251, 252, 256, 257, 299, 301, 302, 303, 306, 308, 313, 315, 317, 321, 322, 327, 328, 330, 333, 334, 335, 344, 348, 350, 354, 355, 356, 357, 361, 362, 405, 406, 407, 410, 413, 417, 419, 421, 425, 426, 431, 432, 434, 437, 448, 452, 454, 457, 459, 460, 461, 462, 466, 467, 512, 513, 514, 517, 520, 522, 524, 526, 528, 532, 533, 537, 538, 539, 541, 544], "type": [3, 4, 5, 7, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 57, 63, 72, 77, 78, 79, 80, 81, 82, 87, 88, 89, 92, 95, 96, 97, 99, 101, 105, 107, 109, 110, 116, 119, 125, 126, 128, 132, 140, 161, 163, 164, 166, 167, 169, 176, 183, 184, 185, 186, 187, 190, 198, 205, 206, 207, 209, 210, 213, 222, 223, 229, 230, 231, 232, 233, 236, 237, 241, 250, 251, 257, 258, 259, 262, 265, 266, 267, 269, 271, 273, 275, 277, 279, 280, 286, 289, 294, 295, 296, 298, 299, 301, 330, 332, 333, 334, 335, 340, 348, 353, 354, 355, 356, 357, 362, 363, 364, 367, 370, 371, 372, 374, 376, 380, 382, 384, 385, 391, 394, 399, 400, 401, 405, 413, 434, 436, 437, 443, 452, 457, 458, 459, 460, 461, 462, 467, 468, 469, 472, 475, 476, 477, 479, 481, 485, 487, 489, 490, 496, 499, 505, 506, 508, 512, 520, 541, 543, 544, 546, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "specifi": [3, 5, 7, 8, 9, 18, 19, 20, 21, 22, 27, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 51, 54, 62, 66, 67, 68, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 91, 92, 93, 94, 96, 97, 98, 99, 100, 101, 102, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 131, 132, 133, 134, 136, 137, 140, 142, 143, 144, 145, 146, 147, 148, 149, 150, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 171, 172, 173, 175, 176, 177, 178, 182, 183, 184, 185, 186, 187, 192, 193, 194, 195, 197, 198, 199, 200, 204, 205, 206, 207, 208, 209, 210, 215, 216, 217, 220, 221, 222, 223, 224, 228, 229, 230, 232, 233, 235, 236, 237, 240, 243, 244, 245, 248, 249, 250, 251, 252, 256, 257, 258, 259, 261, 262, 263, 264, 266, 267, 268, 269, 
270, 271, 272, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 298, 299, 300, 301, 302, 305, 306, 311, 312, 313, 314, 315, 316, 317, 318, 319, 321, 323, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 339, 343, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 366, 367, 368, 369, 371, 372, 373, 374, 375, 376, 377, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 404, 405, 406, 409, 410, 413, 415, 416, 417, 418, 419, 420, 421, 422, 423, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 446, 447, 448, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 471, 472, 473, 474, 476, 477, 478, 479, 480, 481, 482, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 511, 512, 513, 516, 517, 520, 522, 523, 524, 525, 526, 527, 528, 529, 530, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 549], "raidz2": [3, 18, 19, 20, 22, 35, 36, 38, 43, 44, 81, 134, 187, 210, 237, 334, 356, 461], "raidz3": [3, 18, 19, 20, 22, 35, 36, 38, 43, 44, 81, 187, 210, 237, 334, 356, 461], "alia": [3, 48, 54, 62, 72, 74, 79, 80, 81, 86, 88, 105, 109, 110, 118, 166, 167, 175, 182, 184, 185, 187, 197, 204, 206, 207, 210, 221, 223, 228, 230, 233, 237, 240, 249, 251, 256, 258, 275, 279, 280, 288, 296, 299, 333, 334, 339, 348, 350, 354, 356, 361, 363, 380, 384, 385, 393, 439, 440, 442, 452, 454, 459, 460, 461, 466, 468, 485, 489, 490, 498, 546, 547], "n": [3, 5, 14, 16, 18, 19, 20, 22, 23, 25, 27, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 66, 77, 79, 81, 85, 87, 91, 93, 94, 97, 102, 105, 107, 109, 110, 111, 115, 121, 123, 125, 127, 133, 137, 144, 146, 152, 153, 158, 165, 172, 177, 181, 185, 187, 194, 199, 203, 207, 210, 223, 227, 232, 233, 237, 251, 255, 261, 263, 264, 267, 272, 275, 277, 279, 280, 281, 285, 291, 294, 299, 302, 306, 313, 315, 321, 322, 327, 334, 354, 356, 360, 362, 366, 368, 369, 372, 377, 380, 382, 384, 385, 386, 390, 396, 399, 406, 410, 417, 419, 425, 426, 431, 438, 446, 457, 459, 461, 465, 467, 471, 473, 474, 477, 482, 485, 487, 489, 490, 491, 495, 501, 503, 505, 507, 513, 517, 524, 526, 532, 533, 538, 545, 555], "p": [3, 5, 9, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 62, 63, 66, 68, 79, 80, 81, 86, 87, 88, 92, 93, 94, 95, 96, 97, 98, 99, 101, 106, 107, 111, 112, 113, 115, 116, 125, 132, 133, 142, 146, 148, 152, 156, 157, 158, 159, 163, 164, 169, 172, 173, 182, 183, 184, 185, 186, 187, 190, 194, 195, 204, 205, 206, 207, 209, 210, 213, 217, 224, 228, 229, 230, 233, 236, 237, 240, 241, 245, 252, 256, 257, 258, 262, 263, 264, 265, 266, 267, 269, 271, 276, 277, 281, 283, 285, 286, 294, 299, 301, 302, 311, 315, 317, 321, 325, 326, 327, 328, 332, 333, 334, 339, 340, 344, 354, 355, 356, 361, 362, 363, 367, 368, 369, 370, 371, 372, 374, 376, 381, 382, 386, 388, 390, 391, 399, 405, 406, 415, 419, 421, 425, 429, 430, 431, 432, 436, 437, 442, 443, 446, 448, 459, 460, 461, 466, 467, 468, 472, 473, 474, 475, 476, 477, 478, 479, 481, 486, 487, 491, 492, 493, 495, 496, 505, 512, 513, 522, 526, 528, 532, 536, 537, 538, 539, 543, 544], "hold": [3, 5, 45, 48, 66, 72, 79, 81, 84, 89, 94, 105, 109, 110, 111, 112, 115, 118, 119, 128, 185, 187, 207, 210, 232, 233, 237, 251, 254, 259, 264, 275, 279, 280, 281, 282, 285, 288, 289, 296, 299, 334, 348, 354, 356, 359, 364, 
369, 380, 384, 385, 386, 387, 390, 393, 394, 401, 446, 452, 459, 461, 464, 469, 474, 485, 489, 490, 491, 492, 495, 498, 499, 508], "approxim": [3, 5, 12, 48, 49, 72, 80, 81, 91, 102, 121, 159, 177, 178, 187, 199, 200, 210, 224, 237, 251, 252, 261, 272, 291, 328, 334, 348, 355, 356, 366, 377, 396, 432, 452, 460, 461, 471, 482, 501, 539, 555], "byte": [3, 47, 48, 49, 54, 62, 68, 72, 77, 79, 80, 81, 82, 87, 96, 99, 105, 106, 116, 140, 161, 163, 166, 167, 172, 173, 176, 177, 178, 183, 185, 187, 194, 195, 198, 199, 200, 205, 207, 210, 217, 222, 223, 224, 229, 232, 233, 237, 240, 245, 250, 251, 252, 257, 266, 269, 275, 286, 296, 299, 330, 332, 334, 339, 344, 348, 354, 355, 356, 362, 371, 374, 380, 381, 391, 413, 434, 436, 442, 448, 452, 457, 459, 460, 461, 462, 467, 476, 479, 485, 486, 496, 520, 541, 543, 546, 547], "withstand": [3, 81, 187, 210, 237, 334, 356, 461], "devic": [3, 5, 7, 8, 11, 18, 19, 20, 22, 28, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 58, 67, 69, 71, 72, 74, 77, 78, 79, 80, 81, 82, 86, 87, 89, 93, 95, 96, 99, 100, 103, 109, 110, 116, 119, 120, 123, 127, 128, 130, 132, 133, 134, 136, 137, 138, 139, 140, 141, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 158, 159, 161, 163, 164, 168, 175, 176, 177, 181, 182, 183, 185, 186, 187, 197, 198, 199, 200, 203, 204, 205, 207, 209, 210, 218, 221, 222, 223, 224, 227, 228, 229, 231, 233, 236, 237, 246, 249, 250, 251, 252, 255, 256, 257, 259, 263, 265, 270, 273, 279, 280, 289, 290, 296, 298, 299, 301, 302, 303, 305, 306, 307, 308, 309, 310, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 327, 328, 330, 332, 333, 334, 335, 345, 347, 348, 350, 353, 354, 355, 356, 357, 361, 362, 364, 368, 370, 375, 378, 384, 385, 394, 395, 401, 403, 405, 406, 407, 409, 410, 411, 412, 413, 414, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 431, 432, 434, 436, 437, 441, 447, 449, 451, 452, 454, 457, 458, 459, 460, 461, 462, 466, 467, 469, 473, 475, 476, 479, 480, 483, 489, 490, 496, 499, 500, 503, 507, 508, 510, 512, 513, 514, 516, 517, 518, 519, 520, 521, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 538, 539, 541, 543, 544, 548, 549, 555, 556, 561, 562, 563, 564], "fail": [3, 5, 16, 18, 19, 20, 21, 23, 25, 31, 35, 36, 38, 43, 44, 47, 48, 50, 54, 65, 66, 71, 72, 79, 81, 86, 87, 93, 105, 109, 110, 111, 115, 128, 132, 133, 134, 137, 140, 144, 145, 154, 159, 161, 164, 166, 167, 172, 173, 176, 177, 182, 183, 185, 186, 187, 192, 194, 195, 198, 199, 204, 205, 207, 209, 210, 215, 217, 220, 222, 223, 228, 229, 232, 233, 236, 237, 243, 245, 248, 250, 251, 256, 257, 263, 275, 279, 280, 281, 285, 296, 299, 301, 302, 306, 309, 313, 314, 323, 328, 330, 333, 334, 342, 347, 348, 354, 356, 361, 362, 368, 380, 384, 385, 386, 390, 401, 405, 406, 410, 413, 417, 418, 427, 432, 434, 437, 445, 446, 451, 452, 459, 461, 466, 467, 473, 485, 489, 490, 491, 495, 508, 512, 513, 517, 520, 524, 525, 534, 539, 541, 544, 546, 547, 550, 563, 564], "minimum": [3, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44, 46, 48, 50, 51, 54, 68, 71, 72, 79, 80, 81, 82, 132, 154, 173, 176, 177, 185, 187, 195, 198, 199, 200, 207, 209, 210, 217, 220, 223, 224, 233, 236, 237, 245, 248, 251, 252, 299, 301, 323, 334, 335, 344, 348, 354, 355, 356, 357, 405, 427, 448, 452, 459, 460, 461, 462, 512, 534], "more": [3, 4, 5, 7, 9, 10, 11, 18, 19, 20, 21, 22, 25, 33, 35, 36, 38, 43, 44, 47, 48, 49, 51, 54, 58, 62, 63, 67, 68, 71, 72, 74, 77, 78, 79, 80, 81, 82, 83, 87, 88, 89, 91, 93, 95, 96, 99, 100, 101, 102, 103, 104, 105, 111, 
114, 115, 116, 119, 120, 121, 122, 128, 132, 136, 137, 140, 142, 144, 145, 146, 156, 157, 158, 159, 161, 169, 171, 173, 175, 176, 177, 178, 183, 184, 185, 187, 190, 193, 195, 197, 198, 199, 200, 205, 206, 207, 209, 210, 213, 216, 217, 220, 221, 222, 223, 224, 229, 230, 232, 233, 236, 237, 240, 241, 244, 245, 248, 249, 250, 251, 252, 257, 258, 259, 261, 263, 265, 266, 269, 270, 272, 274, 275, 281, 284, 285, 286, 289, 290, 291, 292, 296, 298, 299, 301, 305, 306, 309, 311, 313, 314, 315, 326, 327, 328, 330, 333, 334, 335, 339, 340, 343, 344, 347, 348, 350, 353, 354, 355, 356, 357, 358, 362, 363, 364, 366, 368, 370, 371, 374, 375, 377, 378, 379, 380, 386, 389, 390, 391, 394, 395, 396, 397, 401, 405, 409, 410, 413, 415, 417, 418, 419, 429, 430, 431, 432, 434, 442, 443, 447, 448, 451, 452, 454, 457, 458, 459, 460, 461, 462, 463, 467, 468, 469, 471, 473, 475, 476, 479, 480, 481, 482, 483, 484, 485, 491, 494, 495, 496, 499, 500, 501, 502, 508, 512, 516, 517, 520, 522, 524, 525, 526, 536, 537, 538, 539, 541, 550, 551, 552, 553, 554, 556, 557, 558, 559, 561, 562, 563], "than": [3, 5, 10, 11, 12, 18, 19, 20, 22, 32, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 50, 51, 58, 62, 66, 67, 68, 71, 72, 78, 79, 80, 81, 82, 83, 87, 88, 89, 96, 99, 100, 103, 105, 109, 110, 111, 114, 115, 116, 119, 120, 132, 134, 140, 154, 171, 173, 177, 178, 183, 185, 187, 193, 195, 199, 200, 205, 207, 209, 210, 216, 217, 220, 222, 223, 224, 229, 232, 233, 236, 237, 240, 244, 245, 248, 250, 251, 252, 257, 258, 259, 266, 269, 270, 275, 279, 280, 281, 284, 285, 286, 289, 290, 298, 299, 301, 323, 334, 335, 339, 343, 344, 347, 348, 353, 354, 355, 356, 357, 358, 362, 363, 364, 371, 374, 375, 378, 380, 384, 385, 386, 389, 390, 391, 394, 395, 405, 413, 427, 442, 446, 447, 448, 451, 452, 458, 459, 460, 461, 462, 463, 467, 468, 469, 476, 479, 480, 483, 485, 489, 490, 491, 494, 495, 496, 499, 500, 512, 520, 534], "recommend": [3, 5, 8, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 32, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 52, 54, 72, 75, 78, 79, 81, 88, 93, 109, 110, 137, 146, 153, 164, 184, 185, 187, 206, 207, 210, 223, 230, 233, 237, 251, 258, 298, 299, 315, 322, 333, 334, 348, 351, 353, 354, 356, 363, 419, 426, 437, 452, 455, 458, 459, 461, 468, 473, 489, 490, 517, 526, 533, 544, 555], "between": [3, 5, 12, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 51, 54, 55, 68, 71, 72, 78, 79, 80, 81, 95, 105, 128, 131, 132, 141, 173, 177, 178, 185, 186, 187, 195, 199, 200, 207, 208, 209, 210, 217, 220, 223, 224, 231, 232, 233, 235, 236, 237, 245, 248, 251, 252, 265, 273, 275, 296, 298, 299, 300, 301, 310, 334, 344, 347, 348, 353, 354, 355, 356, 370, 380, 401, 404, 405, 414, 448, 451, 452, 458, 459, 460, 461, 475, 485, 508, 511, 512, 521, 558, 559], "3": [3, 5, 21, 25, 31, 32, 42, 45, 47, 48, 49, 50, 54, 66, 72, 74, 79, 80, 81, 87, 89, 91, 94, 96, 99, 102, 105, 116, 118, 119, 121, 123, 127, 128, 131, 132, 134, 137, 140, 164, 166, 167, 172, 175, 176, 177, 183, 184, 185, 187, 194, 197, 198, 199, 200, 205, 206, 207, 209, 210, 221, 222, 223, 224, 229, 230, 231, 232, 233, 236, 237, 249, 250, 251, 252, 257, 258, 261, 272, 273, 275, 281, 285, 291, 296, 299, 300, 301, 333, 334, 348, 350, 354, 355, 356, 362, 366, 377, 380, 396, 401, 404, 405, 413, 437, 446, 452, 454, 459, 460, 461, 467, 469, 471, 474, 476, 479, 482, 485, 496, 498, 499, 501, 503, 507, 508, 511, 512, 517, 520, 544, 546, 547, 557, 564], "9": [3, 9, 21, 25, 31, 32, 36, 37, 38, 39, 47, 48, 49, 54, 65, 68, 72, 79, 81, 82, 128, 139, 142, 143, 144, 148, 149, 150, 157, 160, 
164, 173, 185, 187, 192, 195, 207, 210, 215, 217, 223, 233, 237, 243, 245, 251, 270, 290, 295, 296, 299, 302, 304, 305, 306, 307, 308, 309, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 324, 325, 326, 327, 329, 331, 333, 334, 335, 342, 344, 348, 354, 356, 357, 401, 412, 415, 416, 417, 421, 422, 423, 425, 430, 433, 435, 437, 445, 448, 452, 459, 461, 462, 508, 519, 522, 523, 524, 528, 529, 530, 537, 540, 544, 560], "help": [3, 5, 10, 12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 62, 65, 68, 72, 81, 86, 88, 111, 115, 128, 164, 165, 172, 173, 182, 184, 185, 187, 192, 194, 195, 199, 204, 206, 207, 210, 215, 217, 223, 228, 230, 233, 237, 240, 243, 245, 251, 256, 258, 281, 285, 296, 333, 334, 339, 342, 344, 348, 356, 361, 363, 386, 390, 401, 437, 438, 442, 445, 448, 451, 452, 461, 466, 468, 491, 495, 508, 544, 545], "actual": [3, 18, 19, 20, 22, 35, 36, 37, 38, 39, 47, 48, 49, 54, 62, 65, 68, 72, 79, 80, 82, 85, 93, 105, 109, 110, 111, 115, 133, 137, 144, 152, 154, 158, 173, 176, 177, 185, 187, 192, 195, 198, 199, 207, 210, 215, 217, 222, 223, 232, 233, 237, 240, 243, 245, 250, 251, 263, 275, 279, 280, 281, 285, 299, 302, 306, 313, 321, 323, 327, 335, 339, 342, 344, 348, 354, 357, 360, 368, 380, 384, 385, 386, 390, 406, 410, 413, 417, 425, 427, 431, 442, 445, 448, 452, 459, 460, 462, 465, 473, 485, 489, 490, 491, 495, 513, 517, 524, 532, 534, 538, 555], "base": [3, 4, 5, 7, 8, 9, 11, 12, 14, 16, 18, 19, 25, 27, 28, 30, 31, 38, 41, 43, 44, 47, 48, 49, 54, 59, 60, 71, 72, 74, 79, 80, 82, 86, 101, 105, 137, 146, 164, 175, 177, 181, 182, 185, 187, 197, 199, 203, 204, 207, 210, 220, 221, 223, 224, 227, 228, 231, 232, 233, 237, 248, 249, 251, 252, 255, 256, 271, 273, 275, 299, 315, 333, 335, 347, 348, 350, 354, 355, 357, 361, 376, 380, 419, 437, 451, 452, 454, 459, 460, 462, 466, 481, 485, 517, 526, 544], "sever": [3, 9, 10, 36, 47, 48, 49, 54, 72, 79, 82, 185, 187, 207, 210, 233, 237, 251, 299, 335, 348, 354, 357, 452, 459, 462, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "point": [3, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 35, 36, 38, 43, 44, 46, 47, 49, 54, 55, 72, 75, 78, 79, 80, 81, 82, 85, 89, 90, 93, 94, 96, 99, 100, 104, 113, 116, 118, 119, 120, 122, 123, 127, 128, 135, 137, 176, 177, 178, 185, 187, 198, 199, 200, 207, 210, 222, 223, 224, 233, 237, 250, 251, 252, 260, 264, 270, 274, 283, 290, 292, 296, 298, 299, 306, 334, 335, 348, 351, 353, 354, 355, 356, 357, 360, 365, 369, 375, 379, 388, 395, 397, 401, 408, 410, 452, 455, 458, 459, 460, 461, 462, 465, 469, 470, 473, 474, 476, 479, 480, 484, 493, 496, 499, 500, 502, 503, 507, 508, 515, 517, 557, 561, 562], "minim": [3, 11, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 48, 49, 71, 72, 78, 128, 185, 207, 220, 233, 248, 251, 296, 347, 348, 401, 451, 452, 458, 508], "sector": [3, 5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 77, 79, 80, 81, 82, 152, 178, 187, 200, 210, 224, 233, 237, 251, 252, 299, 335, 348, 354, 355, 356, 357, 425, 452, 457, 459, 460, 461, 462, 532], "via": [3, 4, 7, 11, 33, 36, 37, 38, 39, 47, 48, 49, 50, 51, 54, 66, 72, 75, 79, 80, 85, 88, 89, 91, 93, 102, 105, 119, 121, 128, 146, 164, 177, 178, 184, 185, 199, 200, 206, 207, 223, 224, 230, 232, 233, 237, 251, 252, 258, 259, 261, 263, 272, 275, 276, 289, 291, 296, 299, 315, 333, 348, 351, 354, 355, 360, 363, 364, 366, 368, 377, 380, 394, 396, 401, 419, 437, 446, 452, 455, 459, 460, 465, 468, 469, 471, 473, 482, 485, 499, 501, 508, 526, 544, 559], "ashift": [3, 11, 14, 16, 18, 19, 20, 22, 25, 28, 
31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 54, 65, 72, 77, 82, 133, 134, 140, 154, 164, 176, 187, 192, 198, 210, 215, 222, 223, 237, 243, 250, 251, 302, 303, 323, 333, 335, 342, 348, 357, 406, 407, 413, 427, 437, 445, 452, 457, 462, 513, 514, 520, 534, 544], "width": [3, 5, 81, 177, 199, 223, 251, 356, 461], "dynam": [3, 11, 48, 71, 72, 81, 82, 89, 119, 185, 187, 199, 207, 210, 220, 223, 233, 237, 248, 251, 259, 289, 334, 335, 347, 348, 356, 357, 364, 394, 451, 452, 461, 462, 469, 499], "start": [3, 5, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 50, 54, 55, 59, 60, 63, 66, 72, 75, 80, 81, 87, 88, 132, 134, 140, 154, 155, 156, 164, 169, 172, 176, 177, 186, 187, 190, 192, 194, 198, 199, 200, 209, 210, 213, 215, 222, 223, 224, 231, 236, 237, 241, 243, 250, 251, 252, 257, 258, 273, 301, 303, 323, 324, 325, 333, 334, 340, 348, 351, 355, 356, 362, 363, 405, 407, 413, 427, 428, 429, 437, 443, 446, 452, 455, 460, 461, 467, 468, 512, 514, 520, 534, 535, 536, 544], "least": [3, 36, 38, 47, 48, 49, 72, 75, 78, 80, 81, 111, 115, 140, 156, 172, 177, 185, 194, 199, 207, 222, 223, 231, 233, 237, 250, 251, 273, 281, 285, 298, 334, 348, 351, 353, 356, 386, 390, 413, 452, 455, 458, 460, 461, 491, 495, 520, 536], "part": [3, 5, 7, 8, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 45, 46, 47, 48, 63, 72, 78, 79, 80, 81, 82, 88, 94, 104, 113, 117, 122, 134, 140, 144, 147, 149, 150, 152, 156, 166, 167, 169, 176, 177, 178, 181, 185, 187, 190, 198, 199, 200, 203, 207, 210, 213, 220, 222, 223, 224, 227, 233, 237, 241, 248, 250, 251, 252, 255, 264, 274, 283, 287, 288, 292, 299, 303, 313, 316, 318, 319, 321, 334, 335, 340, 348, 354, 355, 356, 357, 363, 369, 379, 388, 392, 393, 397, 407, 413, 417, 420, 422, 423, 425, 429, 443, 452, 458, 459, 460, 461, 462, 468, 474, 484, 493, 497, 498, 502, 514, 520, 524, 527, 529, 530, 532, 536, 546, 547, 549, 554, 557], "count": [3, 4, 36, 37, 38, 39, 46, 48, 49, 50, 62, 67, 68, 72, 79, 87, 94, 95, 105, 140, 146, 148, 159, 166, 167, 171, 177, 183, 185, 187, 193, 199, 205, 207, 210, 216, 222, 223, 229, 232, 233, 237, 240, 244, 250, 251, 257, 264, 265, 275, 299, 315, 317, 328, 336, 339, 343, 344, 348, 354, 362, 369, 370, 380, 413, 419, 421, 432, 439, 440, 442, 447, 448, 452, 459, 467, 474, 475, 485, 520, 526, 528, 539, 546, 547, 557], "minu": [3, 87, 257, 362, 467], "records": [3, 18, 19, 20, 35, 36, 37, 38, 39, 43, 44, 48, 54, 72, 78, 79, 80, 89, 96, 99, 111, 115, 116, 119, 128, 177, 178, 185, 199, 200, 207, 223, 224, 233, 251, 252, 259, 281, 285, 289, 296, 299, 348, 354, 355, 364, 386, 390, 394, 401, 452, 458, 459, 460, 469, 476, 479, 491, 495, 496, 499, 508], "split": [3, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 81, 84, 87, 111, 115, 135, 139, 152, 156, 164, 187, 207, 210, 223, 229, 233, 237, 251, 254, 257, 281, 285, 304, 308, 321, 333, 334, 348, 356, 359, 362, 386, 390, 408, 412, 425, 429, 437, 452, 461, 464, 467, 491, 495, 515, 519, 532, 536, 544], "equal": [3, 48, 49, 71, 72, 77, 79, 80, 82, 93, 109, 110, 140, 154, 177, 185, 187, 199, 207, 210, 220, 222, 223, 231, 233, 237, 248, 250, 251, 263, 273, 279, 280, 299, 323, 335, 347, 348, 354, 357, 368, 384, 385, 413, 427, 451, 452, 457, 459, 460, 462, 473, 489, 490, 520, 534], "addit": [3, 5, 8, 10, 11, 12, 18, 19, 20, 22, 32, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 51, 54, 63, 65, 68, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 91, 98, 102, 103, 105, 111, 112, 115, 121, 128, 132, 133, 134, 135, 140, 143, 146, 148, 159, 164, 169, 173, 177, 178, 
184, 185, 187, 190, 192, 195, 197, 199, 200, 205, 206, 207, 209, 210, 213, 215, 217, 221, 222, 223, 224, 229, 230, 232, 233, 236, 237, 241, 243, 245, 249, 250, 251, 252, 257, 258, 261, 268, 272, 275, 281, 282, 285, 291, 296, 298, 299, 301, 304, 312, 315, 317, 333, 334, 335, 340, 342, 344, 348, 350, 353, 354, 355, 356, 357, 362, 363, 366, 373, 377, 378, 380, 386, 387, 390, 396, 401, 405, 408, 413, 416, 419, 421, 437, 443, 445, 448, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 471, 478, 482, 483, 485, 491, 492, 495, 501, 508, 512, 513, 515, 520, 523, 526, 528, 539, 544, 550, 554, 557, 559], "per": [3, 5, 6, 7, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 62, 65, 66, 68, 71, 72, 74, 78, 79, 80, 81, 87, 111, 115, 140, 156, 161, 172, 175, 177, 185, 192, 194, 197, 199, 207, 215, 220, 221, 222, 223, 233, 237, 240, 243, 248, 249, 250, 251, 252, 261, 272, 281, 285, 291, 298, 299, 330, 339, 342, 344, 347, 348, 350, 353, 354, 355, 356, 386, 390, 413, 429, 434, 442, 445, 446, 448, 451, 452, 454, 458, 459, 460, 461, 467, 491, 495, 520, 536, 541, 559], "due": [3, 11, 12, 17, 32, 36, 38, 48, 49, 54, 68, 71, 72, 78, 79, 81, 82, 87, 93, 109, 110, 111, 115, 128, 133, 137, 140, 148, 156, 164, 166, 167, 177, 183, 185, 187, 199, 205, 207, 210, 220, 222, 223, 229, 233, 237, 248, 250, 251, 257, 263, 279, 280, 281, 285, 296, 298, 299, 302, 306, 325, 333, 334, 335, 344, 347, 348, 353, 354, 356, 357, 362, 368, 384, 385, 386, 390, 401, 406, 410, 413, 429, 437, 448, 451, 452, 458, 459, 461, 462, 467, 473, 489, 490, 491, 495, 508, 513, 517, 520, 528, 536, 544, 546, 547, 550, 552, 553, 555, 556, 559, 563], "input": [3, 14, 16, 25, 28, 31, 47, 79, 105, 109, 110, 128, 165, 166, 167, 185, 207, 232, 233, 275, 279, 280, 296, 336, 354, 380, 384, 385, 401, 438, 439, 440, 459, 485, 489, 490, 508, 545, 546, 547], "less": [3, 5, 10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 62, 67, 71, 72, 78, 79, 81, 111, 115, 134, 140, 171, 177, 185, 193, 199, 207, 216, 220, 222, 223, 233, 240, 244, 248, 250, 251, 281, 285, 298, 299, 334, 339, 343, 347, 348, 353, 354, 356, 386, 390, 413, 442, 447, 451, 452, 458, 459, 461, 491, 495, 520], "": [3, 5, 7, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 49, 51, 53, 54, 62, 63, 65, 66, 68, 72, 75, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 169, 172, 173, 176, 177, 178, 181, 182, 183, 184, 185, 186, 187, 190, 192, 194, 195, 198, 199, 200, 203, 204, 205, 206, 207, 208, 209, 210, 213, 215, 217, 218, 220, 222, 223, 224, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 240, 241, 243, 245, 246, 248, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 339, 340, 342, 344, 347, 348, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 
368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 448, 452, 455, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 559, 563], "effict": 3, "mirror": [3, 5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 49, 54, 68, 72, 79, 80, 81, 82, 133, 134, 137, 139, 140, 144, 146, 149, 150, 152, 154, 156, 158, 159, 164, 173, 176, 177, 185, 187, 195, 198, 199, 200, 207, 210, 217, 222, 223, 224, 233, 237, 245, 250, 251, 252, 299, 303, 306, 308, 318, 319, 321, 323, 325, 327, 333, 334, 335, 344, 348, 354, 355, 356, 357, 407, 410, 412, 413, 422, 423, 425, 427, 429, 431, 437, 448, 452, 459, 460, 461, 462, 513, 514, 517, 519, 520, 524, 526, 529, 530, 532, 534, 536, 538, 539, 544, 550, 552, 557, 558, 563], "same": [3, 5, 9, 18, 19, 20, 21, 36, 38, 43, 44, 45, 47, 48, 49, 54, 55, 62, 67, 72, 74, 77, 78, 79, 80, 81, 82, 86, 87, 89, 91, 92, 93, 94, 101, 102, 103, 105, 109, 110, 111, 115, 118, 119, 121, 128, 130, 137, 144, 152, 154, 164, 171, 175, 182, 183, 185, 187, 193, 197, 204, 205, 207, 210, 216, 221, 223, 228, 229, 232, 233, 237, 240, 244, 249, 251, 256, 257, 259, 261, 262, 263, 264, 270, 271, 272, 275, 279, 280, 281, 285, 288, 289, 290, 291, 296, 298, 299, 306, 313, 321, 323, 333, 334, 335, 339, 343, 348, 350, 353, 354, 355, 356, 357, 361, 362, 364, 366, 367, 368, 369, 376, 377, 378, 380, 384, 385, 386, 390, 393, 394, 396, 401, 403, 410, 417, 425, 427, 437, 442, 447, 452, 454, 457, 458, 459, 460, 461, 462, 466, 467, 469, 471, 472, 473, 474, 481, 482, 483, 485, 489, 490, 491, 495, 498, 499, 501, 508, 510, 517, 524, 532, 534, 544, 550, 552, 553, 557], "exampl": [3, 4, 5, 7, 8, 12, 14, 16, 18, 19, 21, 25, 27, 28, 31, 32, 33, 47, 48, 49, 50, 51, 54, 63, 66, 67, 68, 72, 74, 77, 78, 79, 80, 81, 87, 89, 90, 92, 93, 94, 95, 96, 99, 101, 103, 105, 108, 109, 110, 111, 113, 114, 115, 116, 118, 119, 123, 127, 128, 130, 131, 132, 133, 137, 138, 140, 141, 144, 146, 148, 152, 156, 159, 162, 164, 166, 167, 169, 171, 173, 175, 176, 178, 183, 185, 187, 190, 193, 195, 197, 198, 200, 205, 207, 208, 209, 210, 213, 216, 217, 221, 222, 224, 229, 231, 232, 233, 235, 236, 237, 241, 244, 245, 249, 250, 251, 252, 257, 259, 264, 267, 271, 273, 275, 277, 279, 280, 281, 285, 289, 294, 296, 298, 299, 300, 301, 325, 333, 334, 340, 343, 344, 348, 350, 353, 354, 355, 356, 362, 364, 369, 376, 378, 380, 384, 385, 386, 390, 394, 401, 403, 404, 405, 413, 429, 437, 443, 446, 447, 448, 452, 454, 457, 458, 459, 460, 461, 467, 469, 470, 472, 473, 474, 475, 476, 479, 481, 483, 485, 488, 489, 490, 491, 493, 494, 495, 496, 498, 499, 503, 507, 508, 510, 511, 512, 513, 517, 518, 520, 521, 524, 526, 528, 532, 536, 539, 542, 544, 546, 547, 556, 557], "12": [3, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 33, 35, 36, 37, 38, 39, 47, 49, 54, 72, 82, 90, 109, 110, 128, 164, 185, 187, 207, 210, 223, 233, 237, 251, 296, 333, 
335, 348, 357, 386, 390, 401, 437, 452, 462, 470, 489, 490, 508, 544, 555], "4k": [3, 5, 47, 49, 71, 79, 207, 210, 220, 233, 237, 248, 251, 299, 347, 354, 451, 459], "we": [3, 8, 10, 11, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 50, 54, 72, 79, 80, 82, 111, 115, 176, 177, 198, 199, 220, 222, 223, 224, 237, 250, 251, 252, 281, 285, 334, 335, 348, 355, 356, 357, 386, 390, 451, 452, 459, 460, 462, 491, 495], "alloc": [3, 5, 54, 62, 65, 68, 71, 72, 77, 79, 80, 81, 82, 87, 105, 134, 146, 148, 152, 159, 161, 164, 173, 177, 183, 187, 195, 199, 205, 207, 210, 217, 220, 223, 224, 229, 232, 233, 237, 245, 248, 251, 252, 257, 275, 299, 317, 321, 328, 330, 333, 334, 335, 339, 342, 344, 347, 348, 354, 355, 356, 357, 362, 380, 421, 425, 432, 434, 437, 442, 445, 448, 451, 452, 457, 459, 460, 461, 462, 467, 485, 526, 528, 532, 539, 541, 544], "usabl": [3, 5, 33, 48, 49, 56, 80, 81, 87, 94, 111, 115, 128, 185, 205, 207, 229, 233, 257, 264, 281, 285, 296, 356, 362, 369, 386, 390, 401, 461, 467, 474, 491, 495, 508, 555], "ratio": [3, 5, 48, 49, 51, 54, 72, 79, 80, 81, 82, 87, 91, 102, 121, 134, 177, 178, 183, 185, 199, 200, 205, 207, 223, 224, 229, 233, 251, 252, 257, 261, 272, 291, 299, 348, 354, 355, 356, 362, 366, 377, 396, 452, 459, 460, 461, 462, 467, 471, 482, 501], "50": [3, 12, 32, 37, 39, 48, 68, 72, 80, 96, 99, 116, 128, 177, 178, 185, 199, 200, 207, 223, 224, 233, 251, 252, 296, 344, 348, 355, 401, 448, 452, 460, 476, 479, 496, 508], "anoth": [3, 5, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 72, 78, 79, 80, 81, 95, 97, 100, 107, 111, 115, 120, 123, 125, 127, 128, 136, 137, 144, 154, 164, 177, 178, 185, 187, 199, 200, 207, 210, 223, 224, 233, 237, 251, 252, 265, 267, 270, 277, 281, 285, 290, 294, 296, 298, 299, 305, 306, 313, 323, 333, 334, 348, 353, 354, 355, 356, 370, 372, 375, 382, 386, 390, 395, 399, 401, 409, 410, 417, 427, 437, 452, 458, 459, 460, 461, 475, 477, 480, 487, 491, 495, 500, 503, 505, 507, 508, 516, 517, 524, 534, 544, 550, 552, 557, 559, 560], "128k": [3, 49, 54, 72, 96, 99, 116, 128, 185, 194, 207, 210, 223, 233, 237, 251, 296, 348, 401, 452, 476, 479, 496, 508], "total": [3, 5, 48, 57, 62, 68, 72, 77, 78, 79, 80, 81, 82, 87, 97, 107, 125, 146, 152, 165, 173, 177, 185, 187, 195, 199, 207, 210, 217, 223, 224, 233, 237, 240, 245, 251, 252, 267, 277, 294, 298, 299, 315, 321, 335, 339, 344, 348, 353, 354, 355, 356, 357, 372, 382, 399, 419, 425, 429, 438, 442, 448, 452, 457, 458, 459, 460, 461, 462, 467, 477, 487, 505, 526, 532, 545], "becaus": [3, 9, 12, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 54, 63, 68, 71, 72, 78, 79, 80, 81, 82, 83, 89, 103, 105, 109, 110, 111, 115, 119, 126, 128, 140, 144, 156, 159, 163, 166, 167, 169, 173, 176, 177, 179, 185, 187, 190, 195, 198, 199, 200, 201, 207, 210, 213, 217, 220, 222, 223, 224, 225, 231, 232, 233, 237, 241, 245, 248, 250, 251, 252, 253, 259, 273, 275, 279, 280, 281, 285, 289, 295, 296, 298, 299, 313, 325, 328, 332, 334, 335, 340, 344, 347, 348, 353, 354, 355, 356, 357, 358, 364, 378, 380, 384, 385, 386, 390, 394, 400, 401, 413, 417, 429, 432, 436, 443, 448, 451, 452, 458, 459, 460, 461, 462, 463, 469, 483, 485, 489, 490, 491, 495, 499, 506, 508, 520, 524, 536, 539, 543, 546, 547, 552, 553, 555, 557, 559], "8k": [3, 49, 79, 177, 199, 207, 223, 233, 251, 299, 354, 459], "16": [3, 5, 32, 47, 48, 49, 54, 68, 72, 79, 80, 82, 88, 89, 92, 93, 94, 95, 96, 99, 104, 108, 113, 114, 116, 118, 119, 122, 128, 133, 137, 138, 141, 144, 146, 148, 
152, 156, 159, 162, 164, 171, 177, 179, 185, 193, 199, 201, 207, 210, 216, 220, 223, 225, 233, 235, 237, 248, 251, 274, 279, 280, 292, 296, 310, 333, 335, 344, 348, 357, 363, 379, 384, 385, 397, 401, 414, 429, 437, 448, 452, 459, 460, 462, 468, 469, 472, 473, 474, 475, 476, 479, 484, 488, 493, 494, 496, 498, 499, 502, 508, 513, 517, 518, 521, 524, 526, 528, 532, 536, 539, 542, 544], "12k": 3, "192k": 3, "case": [3, 4, 7, 8, 11, 12, 18, 19, 20, 21, 22, 33, 35, 36, 37, 38, 39, 43, 44, 45, 46, 47, 48, 49, 51, 54, 55, 62, 63, 66, 72, 77, 79, 80, 81, 82, 85, 86, 88, 94, 95, 97, 105, 106, 107, 109, 110, 111, 113, 115, 124, 125, 128, 134, 140, 144, 148, 152, 154, 164, 169, 176, 177, 178, 182, 184, 185, 187, 190, 198, 199, 200, 204, 206, 207, 210, 213, 220, 222, 223, 224, 228, 230, 232, 233, 237, 240, 241, 250, 251, 252, 256, 258, 264, 265, 267, 275, 277, 279, 280, 281, 283, 285, 293, 294, 296, 299, 303, 313, 321, 323, 333, 334, 335, 339, 340, 348, 354, 355, 356, 357, 360, 361, 363, 369, 370, 372, 380, 381, 382, 384, 385, 386, 388, 390, 398, 399, 401, 407, 413, 417, 425, 427, 437, 442, 443, 446, 452, 457, 459, 460, 461, 462, 465, 466, 468, 474, 475, 477, 485, 486, 487, 489, 490, 491, 493, 495, 504, 505, 508, 514, 520, 524, 528, 532, 534, 544, 554, 556, 557], "66": 3, "wider": 3, "greater": [3, 46, 48, 49, 54, 72, 79, 82, 87, 154, 177, 183, 185, 187, 199, 205, 207, 210, 223, 229, 233, 237, 251, 257, 299, 323, 335, 348, 354, 357, 362, 427, 452, 459, 462, 467, 534], "you": [3, 4, 7, 8, 9, 10, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 42, 43, 44, 47, 48, 49, 54, 55, 63, 68, 71, 72, 77, 78, 79, 80, 81, 82, 96, 99, 100, 101, 105, 109, 110, 111, 114, 115, 116, 120, 123, 127, 128, 130, 136, 141, 146, 149, 150, 151, 164, 166, 167, 169, 173, 177, 185, 187, 190, 195, 199, 207, 210, 213, 217, 220, 223, 224, 231, 232, 233, 237, 241, 245, 248, 251, 252, 270, 271, 273, 275, 276, 279, 280, 281, 284, 285, 290, 296, 298, 299, 310, 315, 320, 334, 335, 340, 344, 347, 348, 353, 354, 355, 356, 357, 375, 376, 380, 384, 385, 386, 389, 390, 395, 401, 403, 409, 414, 419, 422, 423, 424, 437, 443, 448, 451, 452, 457, 458, 459, 460, 461, 462, 476, 479, 480, 481, 485, 489, 490, 491, 494, 495, 496, 500, 503, 507, 508, 510, 516, 521, 526, 529, 530, 531, 544, 546, 547, 549, 551, 557, 560, 561, 562, 563], "find": [3, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 54, 63, 66, 71, 72, 144, 169, 187, 190, 199, 210, 213, 220, 223, 233, 237, 241, 248, 251, 276, 313, 340, 347, 348, 417, 443, 446, 451, 452, 524, 557], "cost": [3, 37, 39, 47, 48, 54, 71, 72, 79, 91, 102, 121, 177, 199, 220, 223, 233, 248, 251, 261, 272, 291, 299, 347, 348, 354, 366, 377, 396, 451, 452, 459, 471, 482, 501], "here": [3, 9, 10, 14, 18, 19, 20, 22, 24, 26, 32, 33, 34, 35, 36, 37, 38, 39, 43, 44, 47, 50, 54, 55, 72, 79, 109, 110, 140, 176, 177, 198, 199, 222, 223, 233, 250, 251, 279, 280, 299, 348, 354, 384, 385, 413, 452, 459, 489, 490, 520], "full": [3, 5, 8, 9, 12, 47, 48, 49, 50, 72, 75, 79, 80, 81, 82, 87, 105, 109, 110, 111, 115, 128, 140, 146, 148, 158, 159, 164, 176, 177, 183, 185, 187, 198, 199, 205, 207, 210, 222, 223, 229, 232, 233, 237, 250, 251, 252, 257, 275, 279, 280, 281, 285, 296, 299, 315, 317, 327, 328, 333, 334, 335, 348, 351, 354, 355, 356, 357, 362, 380, 384, 385, 386, 390, 401, 413, 419, 421, 431, 432, 437, 452, 455, 459, 460, 461, 462, 467, 485, 489, 490, 491, 495, 508, 520, 526, 528, 538, 539, 544], "One": [3, 5, 10, 12, 47, 48, 49, 54, 79, 81, 101, 111, 
115, 185, 187, 207, 210, 220, 233, 237, 271, 281, 285, 299, 334, 354, 356, 376, 386, 390, 459, 461, 481, 491, 495, 550, 551, 552, 553, 554, 556, 557, 563], "iop": [3, 5, 47, 48, 49, 50, 72, 81, 177, 199, 223, 251, 348, 356, 452, 461], "slowest": [3, 49], "worst": [3, 47, 48, 72, 79, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "draft": 4, "contain": [4, 7, 8, 9, 12, 14, 16, 18, 19, 20, 22, 23, 25, 26, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 65, 66, 72, 75, 77, 79, 80, 81, 82, 83, 86, 87, 88, 93, 97, 103, 105, 107, 109, 110, 111, 115, 125, 128, 137, 138, 140, 144, 146, 152, 159, 164, 166, 167, 176, 177, 178, 179, 182, 183, 184, 185, 187, 192, 198, 199, 200, 201, 204, 205, 206, 207, 210, 215, 220, 222, 223, 224, 225, 228, 229, 230, 232, 233, 237, 243, 248, 250, 251, 252, 253, 256, 257, 258, 263, 267, 275, 277, 281, 285, 294, 296, 299, 306, 307, 313, 315, 333, 334, 335, 336, 342, 348, 351, 354, 355, 356, 357, 358, 361, 362, 363, 368, 372, 378, 380, 382, 386, 390, 399, 401, 410, 413, 417, 419, 425, 437, 439, 440, 445, 446, 452, 455, 457, 459, 460, 461, 462, 463, 466, 467, 468, 473, 477, 483, 485, 487, 489, 490, 491, 495, 505, 508, 517, 518, 520, 524, 526, 532, 539, 544, 546, 547, 552, 559], "tip": [4, 8, 49], "what": [4, 5, 7, 9, 10, 11, 47, 48, 49, 51, 55, 58, 68, 72, 79, 81, 94, 95, 96, 99, 105, 111, 115, 116, 126, 128, 142, 157, 163, 173, 176, 177, 185, 187, 195, 198, 199, 207, 210, 217, 222, 223, 232, 233, 237, 245, 250, 251, 264, 266, 269, 275, 281, 285, 286, 295, 296, 299, 311, 326, 332, 334, 344, 348, 354, 356, 369, 371, 374, 380, 386, 390, 391, 400, 401, 415, 430, 436, 448, 452, 459, 461, 474, 475, 476, 479, 485, 491, 495, 496, 506, 508, 522, 537, 543], "info": [4, 48, 81, 91, 102, 121, 233, 237, 261, 272, 291, 334, 356, 366, 377, 396, 461, 471, 482, 501], "might": [4, 7, 12, 16, 25, 26, 31, 33, 36, 38, 47, 48, 54, 72, 74, 75, 79, 80, 81, 89, 100, 119, 120, 123, 127, 128, 175, 179, 185, 187, 197, 201, 207, 210, 221, 225, 233, 237, 249, 253, 259, 270, 289, 290, 296, 299, 334, 348, 350, 351, 354, 355, 356, 364, 375, 394, 395, 401, 452, 454, 455, 459, 460, 461, 469, 480, 499, 500, 503, 507, 508], "want": [4, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 23, 25, 28, 31, 33, 35, 36, 37, 38, 39, 40, 42, 43, 44, 47, 48, 49, 55, 103, 111, 115, 130, 185, 187, 231, 233, 273, 281, 285, 378, 386, 390, 403, 483, 491, 495, 510, 563], "bug": [4, 12, 17, 18, 19, 20, 22, 25, 29, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 55, 72, 80, 88, 166, 167, 177, 179, 184, 199, 201, 206, 223, 224, 225, 230, 251, 252, 253, 258, 348, 355, 363, 452, 460, 468, 546, 547], "triag": 4, "veri": [4, 10, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 47, 48, 49, 54, 72, 75, 78, 79, 80, 87, 103, 105, 156, 177, 183, 185, 187, 199, 200, 205, 207, 210, 220, 223, 224, 229, 231, 232, 233, 237, 248, 251, 252, 257, 273, 275, 298, 299, 325, 348, 351, 353, 354, 355, 362, 378, 380, 429, 452, 455, 458, 459, 460, 467, 483, 485, 536, 557], "interest": [4, 45], "inform": [4, 7, 8, 11, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 49, 54, 72, 75, 77, 78, 79, 80, 81, 82, 87, 88, 89, 91, 93, 94, 95, 96, 99, 100, 101, 102, 103, 104, 105, 109, 110, 111, 115, 116, 119, 120, 121, 122, 124, 128, 137, 140, 142, 144, 147, 148, 157, 158, 159, 160, 161, 164, 166, 167, 176, 178, 183, 184, 185, 187, 198, 200, 205, 206, 207, 210, 222, 223, 224, 229, 230, 231, 232, 233, 237, 240, 250, 251, 252, 257, 258, 259, 261, 263, 264, 266, 269, 270, 271, 272, 273, 274, 275, 279, 280, 281, 285, 286, 289, 290, 291, 292, 293, 
296, 298, 299, 306, 309, 311, 313, 316, 317, 326, 327, 328, 329, 330, 333, 334, 335, 336, 348, 351, 353, 354, 355, 356, 357, 362, 363, 364, 366, 368, 369, 371, 374, 375, 376, 377, 378, 379, 380, 384, 385, 386, 390, 391, 394, 395, 396, 397, 398, 401, 410, 413, 415, 417, 420, 421, 430, 431, 432, 433, 434, 437, 439, 440, 452, 455, 457, 458, 459, 460, 461, 462, 467, 468, 469, 471, 473, 474, 475, 476, 479, 480, 481, 482, 483, 484, 485, 489, 490, 491, 495, 496, 499, 500, 501, 502, 504, 508, 517, 520, 522, 524, 527, 528, 537, 538, 539, 540, 541, 544, 546, 547, 557, 559, 561, 562], "correl": [4, 79, 185, 207, 233, 299, 354, 459], "system": [4, 7, 8, 9, 11, 15, 17, 26, 27, 29, 32, 33, 34, 47, 48, 49, 50, 53, 58, 60, 67, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 171, 172, 175, 176, 177, 178, 179, 181, 182, 184, 185, 186, 187, 188, 189, 193, 194, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 216, 218, 220, 221, 222, 223, 224, 225, 227, 228, 229, 230, 232, 233, 235, 236, 237, 238, 239, 244, 246, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 343, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 447, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "pro": [4, 47], "infrastructur": [4, 176, 198, 222, 250], "tool": [4, 8, 9, 12, 14, 16, 17, 18, 19, 20, 22, 25, 27, 31, 33, 34, 35, 36, 37, 38, 39, 43, 44, 48, 49, 65, 67, 68, 78, 79, 87, 134, 171, 173, 183, 185, 192, 193, 195, 205, 207, 215, 216, 217, 229, 233, 243, 244, 245, 257, 298, 299, 342, 343, 344, 353, 354, 362, 445, 447, 448, 458, 459, 467, 559], "like": [4, 5, 9, 10, 12, 18, 19, 20, 21, 22, 23, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 62, 63, 65, 68, 72, 75, 78, 80, 81, 82, 86, 91, 97, 102, 105, 107, 111, 115, 121, 125, 128, 130, 140, 144, 146, 165, 169, 173, 177, 182, 185, 190, 192, 195, 199, 204, 207, 210, 213, 215, 217, 
222, 223, 228, 232, 233, 237, 240, 241, 243, 245, 250, 251, 256, 261, 272, 275, 276, 281, 285, 291, 296, 298, 313, 315, 335, 339, 340, 342, 344, 348, 351, 353, 356, 357, 361, 366, 372, 377, 380, 382, 386, 390, 396, 399, 401, 403, 413, 417, 419, 438, 442, 443, 445, 448, 452, 455, 458, 460, 461, 462, 466, 471, 477, 482, 485, 487, 491, 495, 501, 505, 508, 510, 520, 524, 526, 545, 555, 557], "elasticsearch": 4, "fluentd": 4, "influxdb": [4, 165, 438, 545], "splunk": 4, "simplifi": [4, 5, 32, 54], "analysi": [4, 8, 48, 71, 91, 102, 111, 115, 121, 177, 199, 220, 223, 248, 251, 261, 272, 281, 285, 291, 347, 348, 366, 377, 386, 390, 396, 451, 471, 482, 491, 495, 501], "typic": [4, 32, 47, 48, 49, 50, 51, 54, 66, 72, 74, 79, 80, 81, 82, 111, 115, 128, 175, 177, 178, 185, 187, 197, 199, 200, 207, 210, 220, 221, 223, 224, 233, 237, 248, 249, 251, 252, 281, 285, 296, 299, 334, 335, 348, 350, 354, 355, 356, 357, 386, 390, 401, 446, 452, 454, 459, 460, 461, 462, 491, 495, 508, 563], "avail": [4, 5, 7, 8, 9, 11, 12, 14, 16, 17, 18, 25, 26, 27, 28, 29, 31, 32, 34, 38, 39, 42, 43, 44, 45, 47, 48, 49, 54, 58, 62, 66, 71, 72, 79, 80, 81, 82, 87, 89, 91, 93, 96, 99, 101, 102, 103, 104, 105, 108, 111, 115, 116, 117, 119, 121, 122, 128, 130, 133, 137, 140, 142, 144, 148, 149, 150, 157, 158, 162, 164, 169, 176, 177, 178, 183, 185, 187, 190, 198, 199, 200, 205, 207, 210, 213, 220, 222, 223, 224, 229, 231, 232, 233, 237, 240, 241, 248, 250, 251, 252, 257, 259, 261, 263, 271, 272, 273, 274, 275, 278, 281, 285, 287, 289, 291, 292, 296, 299, 311, 313, 318, 319, 326, 327, 331, 333, 334, 335, 339, 340, 347, 348, 354, 355, 356, 357, 362, 364, 366, 368, 376, 377, 378, 379, 380, 383, 386, 390, 392, 394, 396, 397, 401, 403, 413, 415, 417, 422, 423, 430, 431, 435, 437, 442, 446, 451, 452, 459, 460, 461, 462, 467, 469, 471, 473, 476, 479, 481, 482, 483, 484, 485, 488, 491, 495, 496, 497, 499, 501, 502, 508, 510, 513, 517, 520, 522, 524, 528, 529, 530, 537, 538, 542, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 560], "dmesg": [4, 48, 54], "var": [4, 8, 18, 19, 20, 22, 25, 35, 36, 37, 38, 39, 43, 44, 66, 87, 128, 205, 229, 257, 362, 446, 467, 508], "syslog": 4, "sent": [4, 47, 48, 49, 55, 72, 79, 109, 110, 111, 115, 128, 177, 185, 199, 207, 210, 223, 233, 251, 279, 280, 281, 285, 296, 299, 348, 354, 384, 385, 386, 390, 401, 452, 459, 489, 490, 491, 495, 508], "eg": [4, 48, 172, 183, 185, 194, 199, 205, 223, 229, 251, 257], "rsyslogd": 4, "intern": [4, 5, 47, 48, 49, 54, 72, 74, 77, 79, 80, 81, 82, 86, 87, 105, 126, 128, 143, 146, 148, 159, 163, 175, 177, 178, 182, 183, 185, 187, 197, 199, 200, 204, 205, 207, 210, 221, 223, 224, 228, 229, 232, 233, 237, 249, 251, 252, 256, 257, 275, 295, 296, 299, 312, 315, 317, 328, 332, 334, 335, 348, 350, 354, 355, 356, 357, 361, 362, 380, 400, 401, 416, 419, 421, 432, 436, 452, 454, 457, 459, 460, 461, 462, 466, 467, 485, 506, 508, 523, 526, 528, 539, 543], "buffer": [4, 48, 49, 68, 72, 80, 87, 88, 177, 199, 200, 205, 217, 223, 224, 229, 245, 251, 252, 257, 344, 348, 355, 362, 448, 452, 460, 467, 468], "detail": [4, 7, 8, 14, 16, 25, 31, 33, 47, 48, 49, 54, 78, 79, 80, 82, 86, 88, 89, 90, 92, 96, 99, 104, 105, 109, 110, 111, 115, 116, 118, 119, 122, 128, 137, 144, 148, 159, 162, 164, 165, 176, 178, 182, 184, 185, 187, 198, 199, 200, 204, 206, 207, 210, 222, 224, 228, 230, 232, 233, 237, 250, 252, 256, 258, 259, 260, 262, 266, 269, 274, 275, 279, 280, 281, 285, 286, 288, 289, 292, 296, 298, 299, 306, 313, 328, 331, 333, 335, 353, 354, 355, 357, 361, 363, 364, 365, 367, 371, 374, 379, 
380, 384, 385, 386, 390, 391, 393, 394, 397, 401, 410, 417, 432, 435, 437, 438, 458, 459, 460, 462, 466, 468, 469, 470, 472, 476, 479, 484, 485, 489, 490, 491, 495, 496, 498, 499, 502, 508, 517, 524, 528, 539, 542, 544, 545, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 560, 561, 562, 563], "pseudo": [4, 81, 187, 210, 237, 334, 356, 461], "dbgmsg": [4, 48, 72, 105, 177, 199, 223, 232, 251, 275, 348, 380, 452, 485], "build": [4, 9, 10, 11, 12, 13, 25, 27, 29, 31, 32, 33, 34, 48, 54, 59, 60, 68, 72, 173, 177, 184, 195, 199, 206, 217, 223, 230, 245, 251, 258, 344, 348, 448, 452], "zfs_dbgmsg_enabl": [4, 72, 177, 199, 223, 251, 348, 452], "symptom": [4, 48], "command": [4, 5, 7, 8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 23, 25, 28, 31, 32, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 62, 63, 65, 66, 67, 68, 69, 72, 74, 77, 78, 79, 80, 81, 82, 85, 87, 88, 89, 91, 92, 93, 94, 96, 98, 99, 101, 102, 104, 105, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 121, 122, 126, 128, 130, 131, 133, 135, 137, 138, 139, 141, 143, 144, 145, 146, 148, 149, 150, 152, 156, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 171, 172, 173, 175, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 192, 193, 194, 195, 197, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 215, 216, 217, 218, 221, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 238, 239, 240, 241, 243, 244, 245, 246, 249, 251, 252, 255, 257, 258, 259, 260, 261, 263, 264, 266, 268, 269, 271, 272, 273, 274, 275, 278, 279, 280, 281, 282, 284, 285, 286, 287, 289, 291, 292, 293, 295, 296, 298, 299, 300, 302, 304, 306, 307, 308, 310, 312, 313, 314, 315, 317, 318, 319, 321, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 340, 342, 343, 344, 345, 348, 350, 353, 354, 355, 356, 357, 360, 362, 363, 364, 366, 368, 369, 371, 373, 374, 376, 377, 379, 380, 383, 384, 385, 386, 387, 389, 390, 391, 392, 394, 396, 397, 400, 401, 403, 404, 406, 408, 410, 411, 412, 414, 416, 417, 418, 419, 421, 422, 423, 425, 429, 431, 432, 433, 434, 436, 437, 438, 439, 440, 441, 442, 443, 445, 446, 447, 448, 449, 452, 454, 457, 458, 459, 460, 461, 462, 465, 467, 468, 469, 471, 472, 473, 474, 476, 478, 479, 481, 482, 484, 485, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 501, 502, 506, 508, 510, 511, 513, 515, 517, 518, 519, 521, 523, 524, 525, 526, 528, 529, 530, 532, 536, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 552, 557, 560, 561, 562], "appear": [4, 18, 19, 20, 22, 23, 28, 35, 36, 38, 43, 44, 47, 48, 69, 72, 74, 79, 93, 105, 128, 132, 133, 134, 137, 144, 154, 164, 165, 172, 175, 185, 187, 194, 197, 199, 207, 209, 210, 221, 223, 232, 233, 236, 237, 249, 251, 263, 275, 299, 301, 302, 303, 306, 313, 323, 345, 348, 350, 354, 368, 380, 405, 406, 407, 410, 417, 427, 438, 449, 452, 454, 459, 473, 485, 508, 512, 513, 514, 517, 524, 534, 544, 545, 549], "hung": [4, 48, 72, 140, 177, 199, 222, 223, 250, 251, 348, 413, 452, 520], "doe": [4, 10, 11, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 55, 58, 63, 67, 72, 75, 79, 80, 81, 82, 87, 91, 94, 102, 105, 109, 110, 111, 113, 115, 121, 126, 128, 134, 140, 144, 146, 152, 156, 159, 166, 167, 169, 171, 176, 177, 178, 179, 181, 183, 185, 187, 190, 193, 198, 199, 200, 201, 203, 205, 207, 208, 210, 213, 216, 222, 223, 224, 225, 227, 229, 232, 233, 235, 237, 241, 244, 250, 251, 252, 253, 255, 257, 261, 264, 272, 275, 279, 280, 283, 291, 295, 296, 299, 313, 315, 325, 328, 335, 340, 343, 348, 351, 354, 355, 357, 362, 366, 
369, 377, 380, 384, 385, 388, 396, 400, 401, 413, 417, 419, 425, 429, 432, 443, 447, 452, 455, 459, 460, 461, 462, 467, 471, 474, 482, 485, 489, 490, 491, 493, 495, 501, 506, 508, 520, 524, 526, 532, 536, 539, 546, 547, 557, 559], "return": [4, 5, 10, 12, 43, 44, 48, 49, 72, 79, 80, 81, 82, 83, 87, 88, 98, 105, 112, 126, 128, 130, 134, 135, 144, 145, 152, 154, 156, 161, 163, 164, 177, 178, 179, 184, 185, 187, 199, 200, 201, 205, 206, 207, 210, 218, 220, 223, 224, 225, 229, 230, 232, 233, 237, 246, 248, 251, 252, 253, 257, 258, 268, 275, 282, 295, 296, 299, 303, 304, 313, 314, 321, 323, 325, 330, 332, 333, 334, 335, 348, 354, 355, 356, 357, 358, 362, 363, 373, 380, 387, 400, 401, 403, 407, 408, 417, 418, 425, 427, 429, 434, 436, 437, 452, 459, 460, 461, 462, 463, 467, 468, 478, 485, 492, 506, 508, 510, 514, 515, 524, 525, 532, 534, 536, 541, 543, 544, 555, 559], "killabl": 4, "caus": [4, 10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 51, 54, 55, 63, 67, 71, 72, 78, 79, 80, 87, 91, 94, 96, 99, 102, 104, 105, 109, 110, 111, 115, 116, 117, 121, 122, 128, 140, 158, 161, 164, 169, 171, 177, 185, 187, 190, 193, 199, 207, 210, 213, 216, 220, 222, 223, 224, 232, 233, 237, 241, 244, 248, 250, 251, 252, 261, 264, 266, 269, 272, 274, 275, 279, 280, 281, 285, 286, 291, 292, 296, 298, 299, 327, 330, 333, 340, 343, 347, 348, 353, 354, 355, 366, 369, 371, 374, 377, 379, 380, 384, 385, 386, 390, 391, 392, 396, 397, 401, 413, 431, 434, 437, 443, 447, 451, 452, 458, 459, 460, 467, 471, 474, 476, 479, 482, 484, 485, 489, 490, 491, 495, 496, 497, 501, 502, 508, 520, 538, 541, 544, 557], "thread": [4, 48, 49, 50, 68, 71, 72, 79, 80, 172, 173, 177, 184, 194, 195, 199, 206, 217, 220, 223, 230, 245, 248, 251, 252, 258, 344, 347, 348, 355, 448, 451, 452, 460], "panic": [4, 71, 72, 82, 87, 132, 140, 176, 183, 186, 187, 198, 205, 209, 210, 220, 222, 223, 229, 236, 237, 248, 250, 251, 257, 301, 335, 347, 348, 357, 362, 405, 413, 451, 452, 462, 467, 512, 520], "stuck": [4, 33, 71, 220, 222, 248, 250, 347, 413, 451], "backtrac": [4, 54], "until": [4, 12, 18, 19, 20, 22, 29, 35, 36, 38, 43, 44, 46, 47, 48, 49, 72, 75, 80, 81, 82, 88, 94, 100, 120, 123, 126, 127, 134, 135, 144, 145, 146, 148, 152, 154, 156, 158, 161, 163, 164, 177, 178, 185, 187, 199, 200, 207, 210, 223, 224, 233, 237, 251, 252, 264, 270, 290, 295, 303, 304, 313, 314, 315, 317, 321, 323, 325, 327, 330, 332, 333, 334, 335, 348, 351, 355, 356, 357, 363, 369, 375, 395, 400, 407, 408, 417, 418, 419, 421, 425, 427, 429, 431, 434, 436, 437, 452, 455, 460, 461, 462, 468, 474, 480, 500, 503, 506, 507, 514, 515, 524, 525, 526, 528, 532, 534, 536, 538, 541, 543, 544, 551, 554, 559], "deadman": [4, 48, 72, 87, 140, 177, 199, 222, 223, 229, 250, 251, 257, 348, 362, 413, 452, 467, 520], "timer": [4, 48, 87, 156, 161, 177, 229, 257, 362, 429, 467, 536, 541], "expir": [4, 32, 43, 44, 48, 72, 177, 199, 220, 223, 248, 251, 348, 452], "tunabl": [4, 48, 49, 50, 51, 55, 71, 72, 79, 128, 177, 199, 207, 220, 223, 233, 248, 251, 299, 347, 348, 354, 451, 452, 459, 508], "interfac": [4, 8, 9, 11, 12, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 48, 52, 54, 72, 83, 97, 105, 107, 125, 128, 164, 185, 207, 210, 232, 233, 237, 267, 275, 277, 294, 296, 333, 358, 372, 380, 382, 399, 401, 437, 452, 463, 477, 485, 487, 505, 508, 544], "consum": [4, 18, 19, 20, 35, 36, 38, 47, 48, 54, 66, 72, 75, 78, 79, 81, 97, 107, 108, 125, 128, 140, 164, 177, 185, 199, 207, 210, 223, 233, 237, 251, 267, 277, 278, 294, 296, 298, 299, 309, 333, 334, 348, 351, 353, 354, 356, 372, 382, 383, 
399, 401, 413, 437, 446, 452, 455, 458, 459, 461, 477, 487, 488, 505, 508, 520, 544], "run": [4, 7, 9, 10, 12, 18, 19, 20, 22, 27, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 65, 66, 68, 71, 72, 74, 79, 80, 81, 82, 83, 86, 87, 88, 91, 93, 94, 100, 102, 103, 105, 109, 110, 111, 115, 120, 121, 123, 124, 127, 130, 132, 134, 140, 144, 145, 146, 155, 156, 158, 159, 161, 162, 164, 165, 166, 167, 169, 172, 173, 175, 176, 177, 178, 179, 182, 184, 185, 186, 187, 190, 192, 194, 195, 197, 198, 199, 200, 201, 204, 205, 206, 207, 209, 210, 213, 215, 217, 220, 221, 222, 223, 224, 225, 228, 229, 230, 231, 232, 233, 236, 237, 241, 243, 245, 248, 249, 250, 251, 252, 253, 256, 257, 258, 261, 263, 264, 270, 272, 273, 275, 279, 280, 281, 285, 290, 291, 293, 299, 301, 303, 313, 314, 315, 324, 327, 328, 330, 333, 334, 335, 336, 342, 344, 347, 348, 350, 354, 355, 356, 357, 358, 361, 362, 363, 366, 368, 369, 375, 377, 378, 380, 384, 385, 386, 390, 395, 396, 398, 403, 405, 407, 413, 417, 418, 419, 428, 431, 432, 434, 437, 438, 439, 440, 445, 446, 448, 451, 452, 454, 459, 460, 461, 462, 463, 466, 467, 468, 471, 473, 474, 480, 482, 483, 485, 489, 490, 491, 495, 500, 501, 503, 504, 507, 510, 512, 514, 520, 524, 525, 526, 535, 536, 538, 539, 541, 542, 544, 545, 546, 547, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "daemon": [4, 5, 22, 35, 36, 38, 48, 72, 77, 82, 88, 103, 165, 184, 206, 210, 230, 231, 237, 258, 273, 335, 348, 357, 363, 378, 438, 452, 457, 462, 468, 483, 545], "zed": [4, 5, 11, 18, 19, 20, 36, 37, 38, 39, 43, 44, 72, 82, 84, 103, 140, 164, 176, 180, 198, 199, 202, 210, 222, 223, 226, 231, 237, 250, 251, 254, 273, 309, 333, 335, 348, 357, 359, 378, 413, 437, 452, 462, 464, 483, 520, 544], "userland": [4, 49, 105, 128, 164, 232, 233, 237, 275, 296, 333, 380, 401, 437, 485, 508, 544], "listen": [4, 185, 207, 233, 296], "them": [4, 8, 10, 14, 16, 18, 19, 20, 21, 22, 25, 27, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 54, 55, 63, 66, 71, 72, 75, 77, 78, 79, 80, 81, 82, 87, 92, 93, 94, 108, 109, 110, 111, 113, 115, 118, 128, 133, 146, 164, 169, 177, 185, 187, 190, 199, 207, 210, 213, 220, 223, 224, 231, 233, 237, 241, 248, 251, 252, 273, 281, 285, 296, 298, 299, 333, 334, 340, 347, 348, 351, 353, 354, 355, 356, 386, 390, 401, 437, 443, 446, 451, 452, 455, 457, 458, 459, 460, 461, 462, 467, 472, 473, 474, 488, 489, 490, 491, 493, 495, 498, 508, 513, 526, 544, 559], "extens": [4, 8, 11, 33, 48, 49, 72, 185, 199, 223, 251, 348, 452], "shell": [4, 25, 31, 66, 83, 172, 179, 194, 201, 225, 253, 358, 446, 463], "script": [4, 8, 9, 12, 14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 34, 35, 37, 39, 43, 44, 47, 66, 79, 88, 96, 99, 100, 101, 105, 116, 120, 128, 130, 140, 142, 146, 148, 157, 159, 163, 164, 184, 185, 187, 206, 207, 210, 230, 231, 232, 233, 237, 258, 266, 269, 270, 271, 273, 275, 286, 290, 296, 309, 311, 315, 317, 326, 328, 332, 333, 354, 363, 371, 374, 375, 376, 380, 391, 395, 401, 403, 413, 415, 419, 421, 430, 432, 436, 437, 446, 459, 468, 476, 479, 480, 481, 485, 496, 500, 508, 510, 520, 522, 526, 528, 537, 539, 543, 544], "program": [4, 14, 47, 48, 72, 78, 82, 84, 87, 128, 132, 172, 181, 186, 187, 194, 203, 209, 210, 223, 226, 227, 233, 236, 237, 251, 254, 255, 296, 301, 335, 348, 357, 359, 401, 405, 452, 458, 462, 464, 467, 508, 512, 557], "subscrib": 4, "take": [4, 5, 10, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 54, 71, 72, 74, 75, 78, 79, 80, 81, 82, 89, 96, 99, 103, 105, 111, 115, 116, 119, 128, 132, 133, 135, 
144, 146, 149, 150, 162, 164, 175, 177, 178, 185, 187, 197, 199, 200, 207, 209, 210, 220, 221, 223, 224, 232, 233, 236, 237, 248, 249, 251, 252, 266, 269, 275, 281, 285, 286, 296, 298, 299, 301, 304, 313, 318, 319, 333, 334, 335, 347, 348, 350, 351, 353, 354, 355, 356, 357, 371, 374, 378, 380, 386, 390, 391, 401, 405, 408, 417, 422, 423, 435, 437, 451, 452, 454, 455, 458, 459, 460, 461, 462, 469, 476, 479, 483, 485, 491, 495, 496, 499, 508, 512, 513, 515, 524, 526, 529, 530, 542, 544, 549, 560, 561, 562], "action": [4, 5, 43, 44, 48, 63, 72, 81, 130, 144, 151, 164, 187, 199, 210, 223, 237, 251, 320, 333, 334, 348, 356, 403, 424, 437, 443, 452, 461, 510, 524, 531, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "usual": [4, 5, 9, 33, 47, 48, 49, 51, 54, 62, 72, 77, 106, 109, 110, 140, 176, 181, 185, 198, 203, 207, 222, 227, 233, 240, 250, 255, 276, 279, 280, 339, 348, 381, 384, 385, 413, 442, 452, 457, 486, 489, 490, 520, 559], "instal": [4, 9, 10, 12, 13, 32, 33, 41, 48, 49, 58, 72, 80, 82, 83, 88, 179, 184, 187, 201, 206, 210, 225, 230, 237, 251, 253, 258, 335, 348, 355, 357, 358, 363, 452, 460, 462, 463, 468], "etc": [4, 8, 16, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 43, 44, 48, 58, 63, 65, 67, 71, 72, 74, 75, 78, 79, 80, 82, 83, 85, 86, 87, 111, 115, 128, 131, 140, 146, 164, 169, 171, 175, 176, 177, 179, 181, 182, 183, 185, 190, 192, 193, 197, 198, 199, 201, 203, 204, 205, 207, 208, 210, 213, 215, 216, 220, 221, 222, 223, 225, 227, 228, 229, 233, 235, 237, 241, 243, 244, 248, 249, 250, 251, 253, 255, 256, 257, 281, 285, 296, 298, 299, 300, 315, 333, 340, 342, 343, 347, 348, 350, 351, 353, 354, 355, 357, 358, 360, 361, 362, 386, 390, 401, 404, 413, 419, 437, 443, 445, 447, 451, 452, 454, 455, 458, 459, 460, 462, 463, 465, 466, 467, 491, 495, 508, 511, 520, 526, 544], "d": [4, 5, 8, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 65, 66, 67, 68, 74, 79, 80, 81, 82, 86, 87, 88, 89, 94, 96, 99, 101, 103, 105, 106, 109, 110, 111, 115, 116, 119, 128, 132, 135, 137, 144, 146, 148, 159, 161, 163, 164, 166, 167, 171, 173, 175, 177, 182, 183, 184, 185, 186, 187, 192, 193, 195, 197, 199, 204, 205, 206, 207, 209, 210, 215, 216, 217, 220, 221, 223, 228, 229, 230, 231, 232, 233, 236, 237, 238, 243, 244, 245, 249, 251, 256, 257, 258, 259, 264, 266, 269, 271, 273, 275, 276, 279, 280, 281, 285, 286, 289, 296, 299, 301, 304, 306, 313, 315, 317, 328, 330, 332, 333, 334, 336, 337, 342, 343, 344, 350, 354, 355, 356, 357, 361, 362, 363, 364, 369, 371, 374, 376, 378, 380, 381, 384, 385, 386, 390, 391, 394, 401, 405, 408, 410, 417, 419, 421, 432, 434, 436, 437, 439, 440, 445, 446, 447, 448, 454, 459, 460, 461, 462, 466, 467, 468, 469, 474, 476, 479, 481, 483, 485, 486, 489, 490, 491, 495, 496, 499, 508, 512, 515, 517, 524, 526, 528, 539, 541, 543, 544, 546, 547, 549, 551, 554], "sh": [4, 7, 9, 10, 12, 25, 27, 31, 36, 37, 39, 43, 44, 75, 103, 231, 273, 351, 378, 455, 483], "histori": [4, 48, 72, 84, 87, 94, 103, 111, 113, 115, 118, 128, 131, 159, 162, 164, 177, 183, 185, 187, 199, 205, 207, 210, 223, 229, 233, 237, 251, 254, 257, 281, 285, 296, 300, 328, 331, 333, 348, 359, 362, 378, 386, 390, 401, 404, 432, 435, 437, 452, 464, 467, 474, 483, 491, 493, 495, 498, 508, 511, 539, 542, 544], "begin": [4, 8, 12, 47, 48, 74, 77, 79, 80, 81, 82, 89, 103, 119, 134, 137, 145, 155, 156, 164, 166, 167, 175, 185, 187, 188, 197, 207, 210, 211, 221, 224, 231, 233, 237, 238, 249, 252, 259, 273, 
289, 299, 303, 306, 314, 324, 325, 333, 334, 337, 350, 354, 355, 356, 364, 378, 394, 407, 410, 418, 428, 429, 437, 454, 457, 459, 460, 461, 462, 469, 483, 499, 514, 517, 525, 535, 536, 544, 546, 547, 552, 557], "These": [4, 5, 8, 9, 11, 12, 21, 26, 32, 37, 39, 47, 48, 49, 54, 57, 71, 72, 75, 77, 79, 80, 81, 82, 88, 89, 111, 115, 119, 126, 133, 140, 142, 146, 148, 157, 158, 159, 162, 163, 164, 172, 176, 177, 178, 184, 185, 187, 194, 198, 199, 200, 206, 207, 210, 220, 222, 223, 224, 230, 233, 237, 248, 250, 251, 252, 258, 259, 281, 285, 289, 295, 299, 302, 309, 311, 315, 317, 326, 327, 328, 331, 332, 333, 334, 335, 347, 348, 351, 354, 355, 356, 357, 363, 364, 386, 390, 394, 400, 406, 413, 415, 419, 421, 430, 431, 432, 435, 436, 437, 451, 452, 455, 457, 459, 460, 461, 462, 468, 469, 491, 495, 499, 506, 513, 520, 522, 526, 528, 537, 538, 539, 542, 543, 544, 559, 563], "ram": [4, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 72, 78, 177, 185, 199, 207, 223, 233, 251, 298, 348, 353, 452, 458], "limit": [4, 5, 14, 16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 50, 51, 54, 63, 68, 71, 72, 77, 78, 79, 80, 82, 87, 91, 96, 99, 101, 102, 105, 111, 115, 116, 121, 130, 169, 173, 177, 178, 183, 185, 190, 195, 199, 200, 205, 207, 213, 217, 220, 223, 224, 229, 232, 233, 241, 245, 248, 251, 252, 257, 261, 266, 269, 271, 272, 275, 281, 285, 286, 291, 299, 340, 344, 347, 348, 354, 355, 362, 366, 371, 374, 376, 377, 380, 386, 390, 391, 396, 403, 443, 448, 451, 452, 457, 458, 459, 460, 462, 467, 471, 476, 479, 481, 482, 485, 491, 495, 496, 501, 510, 557], "valu": [4, 5, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 54, 55, 62, 65, 66, 68, 71, 72, 74, 75, 77, 79, 80, 81, 82, 86, 87, 88, 89, 91, 92, 93, 96, 99, 100, 101, 102, 105, 106, 109, 110, 111, 115, 116, 118, 119, 120, 121, 123, 126, 127, 128, 130, 131, 132, 133, 134, 137, 140, 142, 144, 146, 148, 152, 154, 157, 158, 159, 163, 164, 165, 166, 167, 173, 175, 176, 177, 178, 182, 183, 184, 185, 187, 192, 195, 197, 198, 199, 200, 204, 205, 206, 207, 208, 209, 210, 215, 217, 220, 221, 222, 223, 224, 228, 229, 230, 232, 233, 235, 236, 237, 240, 243, 245, 248, 249, 250, 251, 252, 256, 257, 258, 261, 262, 263, 266, 269, 270, 271, 272, 275, 276, 279, 280, 281, 285, 286, 288, 290, 291, 295, 296, 299, 300, 301, 302, 303, 306, 311, 313, 315, 317, 321, 323, 326, 327, 328, 332, 333, 334, 335, 339, 342, 344, 347, 348, 350, 351, 354, 355, 356, 357, 361, 362, 363, 366, 367, 368, 371, 374, 375, 376, 377, 380, 381, 384, 385, 386, 390, 391, 393, 395, 396, 400, 401, 403, 404, 405, 406, 407, 410, 413, 415, 417, 419, 421, 425, 427, 430, 431, 432, 436, 437, 438, 442, 445, 446, 448, 451, 452, 454, 455, 457, 459, 460, 461, 462, 466, 467, 468, 469, 471, 472, 473, 476, 479, 480, 481, 482, 485, 486, 489, 490, 491, 495, 496, 498, 499, 500, 501, 503, 506, 507, 508, 510, 511, 512, 513, 514, 517, 520, 522, 524, 526, 528, 532, 534, 537, 538, 539, 543, 544, 545, 546, 547, 559], "zfs_event_len_max": 4, "throttl": [4, 46, 47, 48, 50, 72, 177, 199, 223, 251, 348, 452], "prevent": [4, 26, 45, 47, 48, 49, 54, 71, 72, 78, 79, 80, 81, 82, 98, 112, 128, 137, 164, 177, 185, 187, 199, 200, 207, 210, 220, 223, 224, 233, 237, 248, 251, 252, 268, 282, 296, 298, 299, 306, 333, 334, 335, 347, 348, 353, 354, 355, 356, 357, 373, 387, 401, 410, 437, 451, 452, 458, 459, 460, 461, 462, 478, 492, 508, 517, 544, 553, 555, 559], "overconsumpt": 4, "resourc": [4, 10, 59, 60, 78, 79, 88, 184, 185, 206, 207, 230, 233, 258, 298, 299, 353, 354, 363, 458, 459, 
468], "v": [4, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 49, 50, 58, 62, 63, 65, 68, 72, 79, 85, 87, 88, 93, 94, 104, 109, 110, 111, 115, 122, 124, 128, 129, 133, 140, 146, 148, 156, 159, 162, 164, 166, 167, 169, 172, 173, 176, 177, 181, 183, 184, 185, 187, 188, 190, 192, 194, 195, 198, 199, 203, 205, 206, 207, 210, 211, 213, 215, 217, 222, 223, 227, 229, 230, 233, 237, 238, 240, 241, 243, 245, 250, 251, 255, 257, 258, 263, 264, 274, 279, 280, 281, 285, 292, 293, 296, 297, 299, 309, 315, 317, 328, 331, 333, 335, 336, 337, 339, 340, 342, 344, 348, 354, 357, 360, 362, 363, 368, 369, 379, 384, 385, 386, 390, 397, 398, 401, 402, 413, 419, 421, 432, 435, 437, 439, 440, 442, 443, 445, 448, 452, 459, 465, 467, 468, 473, 474, 484, 489, 490, 491, 495, 502, 504, 508, 509, 513, 520, 526, 528, 536, 539, 542, 544, 546, 547, 556, 558, 561, 562], "content": [4, 14, 16, 25, 28, 31, 41, 45, 48, 51, 62, 72, 78, 79, 80, 81, 87, 92, 95, 109, 110, 114, 128, 132, 133, 146, 164, 177, 178, 181, 183, 185, 186, 187, 199, 200, 203, 205, 207, 209, 210, 223, 224, 227, 229, 233, 236, 237, 238, 251, 252, 255, 257, 265, 279, 280, 296, 298, 299, 301, 333, 334, 337, 339, 348, 353, 354, 355, 356, 362, 370, 384, 385, 401, 405, 437, 442, 452, 458, 459, 460, 461, 467, 472, 475, 489, 490, 494, 508, 512, 513, 526, 544], "verbos": [4, 25, 31, 63, 65, 68, 87, 88, 93, 94, 105, 109, 110, 111, 115, 129, 146, 148, 156, 159, 166, 167, 169, 172, 173, 181, 183, 184, 185, 187, 188, 190, 192, 194, 195, 203, 205, 206, 207, 210, 211, 213, 215, 217, 227, 229, 230, 232, 233, 237, 238, 241, 243, 245, 255, 257, 258, 263, 264, 275, 279, 280, 281, 285, 297, 315, 317, 328, 336, 337, 340, 342, 344, 362, 363, 368, 369, 380, 384, 385, 386, 390, 402, 419, 421, 432, 439, 440, 443, 445, 448, 467, 468, 473, 474, 485, 489, 490, 491, 495, 509, 526, 528, 536, 539, 546, 547], "subject": [4, 12, 47, 49, 72, 111, 115, 162, 177, 185, 199, 223, 251, 281, 285, 348, 386, 390, 435, 452, 491, 495, 542], "time": [4, 5, 7, 8, 9, 11, 12, 13, 14, 16, 18, 19, 20, 21, 22, 25, 31, 32, 35, 37, 39, 41, 43, 44, 48, 49, 50, 51, 54, 62, 65, 66, 68, 71, 72, 78, 79, 80, 81, 82, 87, 88, 89, 90, 91, 93, 95, 97, 101, 102, 103, 107, 109, 110, 111, 114, 115, 118, 119, 121, 125, 126, 128, 130, 132, 134, 140, 144, 146, 148, 156, 158, 159, 163, 164, 165, 172, 173, 176, 177, 178, 183, 184, 185, 186, 187, 192, 194, 195, 198, 199, 200, 205, 206, 207, 209, 210, 215, 217, 220, 222, 223, 224, 229, 230, 233, 236, 237, 240, 243, 245, 248, 250, 251, 252, 257, 258, 259, 260, 261, 263, 265, 267, 271, 272, 277, 279, 280, 281, 284, 285, 288, 289, 291, 294, 295, 296, 298, 299, 301, 313, 315, 317, 325, 327, 328, 332, 333, 334, 335, 339, 342, 344, 347, 348, 353, 354, 355, 356, 357, 362, 363, 364, 365, 366, 368, 370, 372, 376, 377, 378, 382, 384, 385, 386, 389, 390, 393, 394, 396, 399, 400, 401, 403, 405, 413, 417, 419, 421, 429, 431, 432, 436, 437, 438, 442, 445, 446, 448, 451, 452, 458, 459, 460, 461, 462, 467, 468, 469, 470, 471, 473, 475, 477, 481, 482, 483, 487, 489, 490, 491, 494, 495, 498, 499, 501, 505, 506, 508, 510, 512, 520, 524, 526, 528, 536, 538, 539, 543, 544, 545, 557, 563], "class": [4, 46, 48, 51, 68, 72, 79, 80, 81, 88, 177, 184, 199, 206, 223, 224, 230, 233, 237, 251, 252, 258, 299, 334, 344, 348, 354, 355, 356, 363, 448, 452, 459, 460, 461, 468], "identifi": [4, 47, 48, 49, 54, 67, 68, 71, 74, 79, 80, 81, 82, 86, 87, 88, 94, 97, 100, 106, 107, 120, 123, 125, 127, 128, 140, 144, 151, 164, 171, 175, 176, 178, 182, 183, 184, 185, 187, 193, 195, 197, 198, 200, 204, 205, 
206, 207, 210, 216, 217, 220, 221, 222, 224, 228, 229, 230, 233, 237, 244, 245, 248, 249, 250, 252, 256, 257, 258, 264, 267, 270, 276, 277, 290, 294, 296, 299, 313, 320, 333, 334, 335, 343, 344, 347, 350, 354, 355, 356, 357, 361, 362, 363, 369, 372, 375, 381, 382, 395, 399, 401, 413, 417, 424, 437, 447, 448, 451, 454, 459, 460, 461, 462, 466, 467, 468, 474, 477, 480, 486, 487, 500, 503, 505, 507, 508, 520, 524, 531, 544, 549, 552, 554, 560], "filter": [4, 48, 146, 188, 211, 237, 238, 315, 337, 419, 451, 526], "commonli": [4, 54, 80, 111, 115, 185, 207, 233, 281, 285, 355, 386, 390, 460, 491, 495], "seen": [4, 43, 44, 72, 223, 251, 348, 452, 558], "relat": [4, 11, 17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 41, 43, 44, 48, 54, 56, 72, 87, 91, 102, 121, 183, 199, 205, 223, 229, 233, 251, 257, 261, 272, 291, 348, 362, 366, 377, 396, 452, 467, 471, 482, 501], "manag": [4, 9, 12, 14, 16, 18, 19, 20, 22, 25, 26, 28, 31, 32, 35, 36, 37, 38, 39, 43, 44, 49, 54, 58, 77, 78, 79, 80, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 182, 185, 187, 204, 205, 207, 208, 210, 224, 228, 229, 232, 233, 235, 237, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 353, 354, 355, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 457, 458, 459, 460, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547], "sysev": 4, "f": [4, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 62, 68, 75, 79, 80, 82, 85, 87, 88, 94, 95, 101, 103, 104, 105, 109, 110, 111, 113, 114, 115, 122, 128, 131, 132, 133, 134, 137, 138, 140, 141, 144, 147, 149, 150, 154, 164, 172, 173, 176, 181, 183, 184, 185, 186, 187, 194, 195, 198, 200, 203, 205, 206, 207, 209, 210, 217, 222, 224, 227, 229, 230, 231, 232, 233, 236, 237, 240, 245, 250, 252, 255, 257, 258, 264, 265, 273, 274, 275, 279, 280, 281, 283, 284, 285, 292, 295, 296, 299, 300, 301, 302, 303, 306, 307, 309, 310, 313, 316, 318, 319, 323, 333, 335, 339, 344, 351, 354, 355, 357, 360, 362, 363, 369, 370, 378, 379, 380, 384, 385, 386, 388, 389, 390, 397, 401, 404, 405, 406, 407, 410, 411, 413, 414, 417, 420, 422, 423, 427, 437, 442, 
448, 455, 459, 460, 462, 465, 467, 468, 474, 475, 481, 483, 484, 485, 489, 490, 491, 493, 494, 495, 502, 508, 511, 512, 513, 514, 517, 518, 520, 521, 524, 527, 529, 530, 534, 544, 555, 560], "export": [4, 8, 9, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 54, 72, 75, 77, 78, 79, 81, 82, 84, 87, 93, 96, 99, 101, 116, 117, 128, 137, 140, 144, 147, 151, 156, 160, 163, 164, 176, 177, 183, 185, 187, 198, 199, 205, 207, 210, 222, 223, 229, 233, 237, 250, 251, 254, 257, 263, 287, 296, 298, 299, 306, 313, 316, 320, 325, 329, 332, 333, 334, 335, 348, 351, 353, 354, 356, 357, 359, 362, 368, 392, 401, 410, 413, 417, 420, 424, 429, 433, 436, 437, 452, 455, 457, 458, 459, 461, 462, 464, 467, 473, 476, 479, 481, 496, 497, 508, 517, 520, 524, 527, 531, 536, 540, 543, 544, 550, 551, 552, 553, 556, 559, 560], "error": [4, 5, 11, 12, 14, 18, 19, 20, 22, 25, 29, 35, 36, 38, 43, 44, 48, 54, 55, 63, 66, 68, 72, 77, 79, 80, 81, 82, 83, 87, 93, 105, 109, 110, 111, 115, 128, 132, 136, 137, 140, 144, 146, 152, 156, 159, 164, 169, 176, 177, 179, 183, 185, 186, 187, 190, 198, 199, 201, 205, 207, 209, 210, 213, 222, 223, 224, 225, 229, 232, 233, 236, 237, 241, 250, 251, 252, 253, 257, 263, 275, 279, 280, 281, 285, 296, 299, 301, 305, 306, 313, 315, 321, 325, 328, 333, 334, 335, 340, 344, 348, 354, 355, 356, 357, 358, 362, 368, 380, 384, 385, 386, 390, 401, 405, 409, 410, 413, 417, 419, 425, 429, 432, 437, 443, 446, 448, 452, 457, 459, 460, 461, 462, 463, 467, 473, 485, 489, 490, 491, 495, 508, 512, 516, 517, 520, 524, 526, 532, 536, 539, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "ereport": [4, 72, 140, 176, 198, 222, 250, 251, 348, 413, 452, 520], "invalu": 4, "fault": [4, 47, 77, 81, 82, 83, 132, 140, 148, 149, 150, 156, 164, 176, 179, 186, 187, 198, 201, 209, 210, 222, 225, 236, 237, 250, 253, 301, 318, 319, 325, 333, 334, 335, 356, 357, 358, 405, 413, 422, 423, 429, 437, 457, 461, 462, 463, 512, 520, 528, 529, 530, 536, 544, 550, 551, 552, 553, 554, 555, 557, 558, 561, 562, 563], "variou": [4, 7, 18, 19, 20, 35, 36, 37, 38, 39, 43, 44, 49, 54, 62, 77, 79, 80, 81, 140, 176, 185, 198, 207, 222, 233, 237, 240, 250, 299, 334, 339, 354, 356, 413, 442, 457, 459, 460, 461, 520], "layer": [4, 8, 11, 25, 47, 48, 49, 72, 81, 82, 140, 172, 176, 187, 194, 198, 210, 222, 237, 250, 251, 334, 335, 348, 356, 357, 413, 452, 461, 462, 520], "softwar": [4, 9, 43, 44, 45, 47, 49, 54, 58, 80, 109, 110, 128, 162, 164, 178, 185, 187, 200, 207, 210, 224, 233, 237, 252, 279, 280, 293, 296, 331, 333, 355, 384, 385, 401, 437, 460, 489, 490, 508, 542, 544, 558, 559], "deal": [4, 33, 48, 49, 72, 177, 199, 223, 251, 348, 452], "simpl": [4, 33, 48, 49, 54, 63, 74, 78, 80, 169, 175, 185, 190, 197, 207, 213, 221, 233, 241, 249, 298, 340, 350, 353, 355, 443, 454, 458, 460], "faulti": [4, 54, 55, 557], "could": [4, 11, 21, 22, 33, 35, 47, 48, 49, 54, 55, 65, 71, 72, 79, 81, 82, 105, 126, 133, 136, 140, 146, 163, 164, 176, 184, 185, 187, 192, 198, 206, 207, 210, 215, 220, 222, 223, 230, 232, 233, 237, 243, 248, 250, 251, 258, 275, 295, 299, 305, 332, 333, 334, 335, 342, 347, 348, 354, 356, 357, 380, 400, 409, 413, 436, 437, 445, 451, 452, 459, 461, 462, 485, 506, 513, 516, 520, 526, 543, 544, 550, 551, 552, 553, 556, 563], "io": [4, 5, 47, 48, 49, 72, 77, 87, 132, 140, 176, 177, 185, 186, 198, 199, 207, 209, 210, 222, 223, 233, 236, 237, 250, 251, 298, 299, 301, 315, 321, 328, 348, 353, 354, 405, 413, 425, 432, 452, 457, 467, 512, 520, 550, 551, 552, 553, 554, 555, 556, 557, 559, 560, 
561, 562, 563], "dure": [4, 7, 8, 9, 18, 19, 43, 44, 47, 48, 51, 54, 66, 68, 72, 79, 80, 81, 82, 109, 110, 113, 134, 152, 154, 156, 173, 177, 178, 185, 187, 192, 195, 199, 200, 207, 210, 215, 217, 223, 224, 231, 233, 237, 243, 245, 251, 252, 273, 279, 280, 283, 299, 303, 321, 323, 325, 334, 335, 344, 348, 354, 355, 356, 357, 384, 385, 388, 407, 425, 427, 429, 446, 448, 452, 459, 460, 461, 462, 489, 490, 493, 514, 532, 534, 536, 550, 551, 553, 555, 558], "erport": 4, "checksum": [4, 5, 6, 14, 16, 25, 31, 36, 38, 47, 54, 55, 59, 60, 67, 72, 77, 79, 80, 81, 87, 89, 91, 96, 99, 102, 109, 110, 111, 115, 116, 119, 121, 128, 132, 134, 140, 144, 154, 156, 166, 167, 176, 177, 183, 185, 186, 187, 188, 198, 199, 200, 205, 207, 209, 210, 211, 222, 223, 224, 229, 233, 236, 237, 238, 250, 251, 252, 257, 259, 261, 272, 279, 280, 281, 285, 289, 291, 296, 299, 301, 303, 313, 323, 325, 334, 336, 337, 348, 354, 355, 356, 362, 364, 366, 377, 384, 385, 386, 390, 394, 396, 401, 405, 407, 413, 417, 427, 429, 439, 440, 447, 452, 457, 459, 460, 461, 467, 469, 471, 476, 479, 482, 489, 490, 491, 495, 496, 499, 501, 508, 512, 514, 520, 524, 534, 536, 546, 547, 557], "level": [4, 5, 7, 8, 36, 38, 48, 51, 54, 58, 72, 77, 78, 79, 80, 81, 105, 128, 132, 133, 137, 140, 146, 152, 164, 166, 167, 176, 177, 185, 186, 187, 198, 199, 207, 209, 210, 222, 223, 224, 232, 233, 236, 237, 250, 251, 252, 275, 296, 298, 299, 301, 302, 306, 315, 321, 333, 334, 348, 353, 354, 355, 356, 380, 401, 405, 406, 410, 413, 419, 425, 437, 452, 457, 458, 459, 460, 461, 485, 508, 512, 513, 517, 520, 526, 532, 544, 546, 547, 550, 552, 557, 564], "reflect": [4, 54, 79, 86, 87, 89, 119, 182, 183, 185, 204, 205, 207, 228, 229, 233, 256, 257, 259, 289, 299, 354, 361, 362, 364, 394, 459, 466, 467, 469, 499], "counter": [4, 48, 72, 222, 223, 250, 251, 348, 413, 452], "statu": [4, 5, 11, 12, 13, 37, 39, 47, 48, 54, 59, 60, 72, 80, 81, 83, 84, 88, 105, 128, 130, 134, 135, 136, 143, 144, 146, 148, 152, 155, 156, 163, 164, 165, 177, 184, 185, 187, 199, 206, 207, 210, 223, 230, 232, 233, 237, 251, 254, 258, 275, 296, 304, 305, 312, 313, 315, 317, 321, 324, 325, 332, 333, 334, 348, 355, 356, 358, 359, 363, 380, 401, 403, 408, 409, 416, 417, 419, 421, 425, 428, 429, 436, 437, 438, 452, 460, 461, 463, 464, 468, 485, 508, 510, 515, 516, 523, 524, 526, 528, 532, 535, 536, 543, 544, 545, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "If": [4, 5, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 42, 43, 44, 46, 47, 48, 49, 50, 51, 54, 55, 63, 66, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 89, 91, 92, 93, 94, 96, 97, 98, 99, 101, 102, 103, 104, 105, 106, 107, 109, 110, 111, 112, 113, 115, 116, 119, 121, 122, 124, 125, 126, 128, 130, 131, 134, 136, 139, 140, 144, 145, 146, 147, 148, 149, 150, 152, 154, 155, 156, 159, 161, 162, 163, 164, 166, 167, 169, 175, 177, 178, 182, 183, 184, 185, 187, 190, 197, 199, 200, 204, 205, 206, 207, 208, 210, 213, 220, 221, 222, 223, 224, 228, 229, 230, 231, 232, 233, 235, 237, 241, 248, 249, 250, 251, 252, 256, 257, 258, 259, 261, 262, 263, 264, 266, 267, 268, 269, 271, 272, 273, 274, 275, 276, 277, 279, 280, 281, 282, 283, 285, 286, 289, 291, 292, 293, 294, 295, 296, 298, 299, 300, 303, 305, 308, 313, 314, 315, 316, 317, 318, 319, 321, 323, 324, 325, 328, 330, 331, 332, 333, 334, 335, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 364, 366, 367, 368, 369, 371, 372, 373, 374, 376, 377, 378, 379, 380, 381, 382, 384, 385, 
386, 387, 388, 390, 391, 394, 396, 397, 398, 399, 400, 401, 403, 404, 407, 409, 412, 413, 417, 418, 419, 420, 421, 422, 423, 425, 427, 428, 429, 432, 434, 435, 436, 437, 443, 446, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 469, 471, 472, 473, 474, 476, 477, 478, 479, 481, 482, 483, 484, 485, 486, 487, 489, 490, 491, 492, 493, 495, 496, 499, 501, 502, 504, 505, 506, 508, 510, 511, 514, 516, 519, 520, 524, 525, 526, 527, 528, 529, 530, 532, 534, 535, 536, 539, 541, 542, 543, 544, 546, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "correspond": [4, 47, 48, 49, 87, 88, 91, 97, 102, 105, 107, 111, 115, 121, 125, 137, 140, 184, 185, 187, 206, 207, 210, 222, 230, 232, 233, 237, 250, 258, 261, 267, 272, 275, 277, 281, 285, 288, 291, 294, 306, 363, 366, 372, 377, 380, 382, 386, 390, 393, 396, 399, 410, 413, 467, 468, 471, 477, 482, 485, 487, 491, 495, 498, 501, 505, 517, 520], "output": [4, 8, 12, 14, 16, 19, 21, 25, 28, 29, 31, 37, 39, 43, 44, 47, 48, 55, 62, 63, 66, 71, 72, 79, 80, 82, 87, 93, 95, 96, 97, 98, 99, 101, 103, 105, 106, 107, 111, 112, 115, 116, 125, 128, 131, 146, 159, 164, 165, 166, 167, 169, 172, 175, 181, 183, 185, 187, 188, 190, 194, 197, 203, 205, 207, 210, 211, 213, 220, 221, 227, 229, 231, 232, 233, 237, 238, 240, 241, 248, 251, 255, 257, 263, 265, 266, 267, 268, 269, 271, 273, 275, 277, 281, 282, 285, 286, 294, 299, 315, 328, 333, 335, 336, 337, 339, 340, 347, 348, 354, 357, 362, 368, 370, 371, 372, 373, 374, 376, 378, 380, 382, 386, 387, 390, 391, 399, 401, 404, 419, 429, 432, 437, 438, 439, 440, 442, 443, 446, 451, 452, 459, 460, 462, 467, 473, 475, 476, 477, 478, 479, 481, 483, 485, 486, 487, 491, 492, 495, 496, 505, 508, 511, 526, 539, 544, 545, 546, 547, 555], "describ": [5, 10, 11, 12, 27, 32, 46, 48, 54, 66, 72, 75, 81, 82, 85, 87, 96, 99, 101, 104, 105, 109, 110, 116, 122, 128, 133, 137, 140, 164, 176, 177, 181, 183, 185, 187, 198, 199, 203, 205, 207, 210, 222, 223, 227, 229, 232, 233, 237, 250, 251, 255, 257, 266, 269, 271, 274, 275, 279, 280, 286, 292, 296, 302, 306, 334, 335, 348, 351, 354, 356, 357, 360, 362, 371, 374, 376, 379, 380, 384, 385, 391, 397, 401, 406, 410, 413, 437, 446, 452, 455, 461, 462, 465, 467, 476, 479, 481, 484, 485, 489, 490, 496, 502, 508, 513, 517, 520, 544, 550, 559], "function": [5, 8, 10, 11, 12, 18, 19, 20, 22, 25, 32, 35, 43, 44, 46, 47, 48, 49, 54, 56, 58, 63, 65, 67, 68, 72, 75, 79, 80, 81, 91, 100, 102, 105, 120, 121, 131, 132, 140, 169, 173, 176, 177, 178, 185, 186, 187, 190, 195, 198, 199, 200, 207, 209, 210, 213, 217, 220, 222, 223, 224, 232, 233, 236, 237, 241, 245, 250, 251, 252, 261, 272, 275, 291, 299, 301, 334, 340, 342, 344, 348, 351, 354, 355, 356, 366, 375, 377, 380, 395, 396, 404, 405, 413, 443, 445, 447, 448, 452, 455, 459, 460, 461, 471, 480, 482, 485, 500, 501, 511, 512, 520, 550, 551, 552, 553, 557, 559, 563], "been": [5, 8, 9, 11, 12, 18, 19, 20, 22, 24, 25, 26, 30, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 66, 71, 72, 74, 79, 80, 81, 82, 88, 89, 91, 95, 100, 102, 105, 109, 110, 111, 115, 119, 120, 121, 123, 126, 127, 134, 136, 140, 146, 148, 154, 156, 159, 161, 163, 164, 175, 176, 177, 178, 184, 185, 187, 197, 198, 199, 200, 206, 207, 210, 220, 221, 222, 223, 224, 230, 232, 233, 237, 248, 249, 250, 251, 252, 258, 259, 261, 265, 270, 272, 275, 279, 280, 281, 285, 289, 290, 291, 295, 299, 305, 315, 323, 330, 332, 333, 334, 335, 347, 348, 350, 354, 355, 356, 357, 363, 364, 366, 370, 375, 377, 380, 384, 385, 386, 390, 394, 395, 396, 400, 409, 413, 
419, 427, 429, 434, 436, 437, 446, 451, 452, 454, 459, 460, 461, 462, 468, 469, 471, 475, 480, 482, 485, 489, 490, 491, 495, 499, 500, 501, 503, 506, 507, 516, 520, 526, 528, 534, 536, 539, 541, 543, 544, 549, 550, 551, 552, 554, 555, 556, 557, 559, 560, 563], "ad": [5, 7, 11, 17, 18, 19, 20, 22, 27, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 54, 55, 66, 68, 72, 79, 80, 81, 82, 87, 88, 89, 109, 110, 119, 132, 133, 134, 139, 140, 146, 155, 159, 164, 173, 177, 184, 185, 186, 187, 195, 199, 206, 207, 209, 210, 217, 223, 230, 233, 236, 237, 245, 251, 258, 259, 279, 280, 289, 299, 301, 302, 308, 324, 333, 334, 335, 344, 348, 354, 355, 356, 357, 363, 364, 384, 385, 394, 405, 406, 412, 413, 428, 437, 446, 448, 452, 459, 460, 461, 462, 467, 468, 469, 489, 490, 499, 512, 513, 519, 520, 526, 535, 539, 544, 555], "variant": [5, 14, 16, 25, 31, 58, 72, 80, 81, 348, 355, 356, 452, 460, 461], "raidz": [5, 6, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 49, 54, 59, 60, 65, 68, 72, 80, 81, 82, 134, 137, 148, 149, 150, 152, 154, 156, 164, 173, 187, 192, 195, 199, 200, 210, 215, 217, 223, 224, 237, 243, 245, 251, 252, 303, 306, 318, 319, 321, 323, 325, 333, 334, 335, 342, 344, 348, 355, 356, 357, 407, 410, 422, 423, 425, 427, 429, 437, 445, 448, 452, 460, 461, 462, 514, 517, 528, 529, 530, 532, 534, 536, 544], "provid": [5, 7, 8, 9, 11, 12, 17, 18, 19, 20, 22, 23, 26, 32, 33, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 49, 54, 60, 66, 72, 74, 75, 78, 79, 80, 81, 82, 86, 87, 88, 89, 91, 102, 105, 109, 110, 111, 115, 119, 121, 128, 129, 131, 136, 156, 161, 163, 164, 165, 166, 167, 176, 183, 184, 185, 187, 198, 200, 205, 206, 207, 208, 210, 222, 223, 224, 229, 230, 232, 233, 235, 237, 249, 250, 251, 252, 256, 257, 258, 261, 272, 275, 279, 280, 281, 285, 291, 296, 297, 298, 299, 300, 325, 332, 333, 334, 335, 336, 348, 350, 351, 353, 354, 355, 356, 357, 361, 362, 363, 366, 377, 380, 384, 385, 386, 390, 396, 401, 402, 404, 409, 429, 436, 437, 438, 439, 440, 446, 452, 454, 455, 458, 459, 460, 461, 462, 466, 467, 468, 469, 471, 482, 485, 489, 490, 491, 495, 499, 501, 508, 509, 511, 516, 536, 541, 543, 544, 545, 546, 547, 550, 552, 557], "integr": [5, 8, 9, 11, 12, 14, 16, 22, 25, 31, 47, 48, 49, 54, 58, 79, 80, 81, 185, 187, 207, 210, 233, 237, 299, 334, 354, 355, 356, 459, 460, 461], "hot": [5, 48, 80, 81, 137, 140, 152, 164, 187, 210, 237, 309, 321, 333, 334, 355, 356, 413, 425, 437, 460, 461, 517, 520, 532, 544, 550, 552], "faster": [5, 19, 20, 36, 37, 38, 39, 43, 44, 48, 49, 72, 79, 80, 81, 105, 132, 178, 185, 186, 200, 207, 209, 224, 232, 233, 236, 251, 252, 275, 299, 301, 348, 354, 355, 356, 380, 405, 452, 459, 460, 461, 485, 512], "resilv": [5, 51, 72, 80, 81, 84, 91, 102, 121, 134, 140, 149, 150, 153, 154, 156, 158, 159, 163, 164, 176, 177, 187, 198, 199, 210, 222, 223, 224, 233, 237, 250, 251, 252, 254, 261, 272, 291, 303, 318, 319, 322, 323, 325, 327, 328, 332, 333, 348, 355, 356, 359, 366, 377, 396, 407, 413, 422, 423, 426, 427, 429, 431, 432, 436, 437, 452, 460, 461, 464, 471, 482, 501, 514, 520, 529, 530, 533, 534, 536, 538, 539, 543, 544, 550, 552, 557], "retain": [5, 72, 80, 81, 109, 110, 134, 144, 187, 207, 210, 233, 237, 251, 279, 280, 313, 348, 355, 356, 384, 385, 417, 452, 460, 461, 489, 490, 524], "benefit": [5, 33, 36, 38, 47, 48, 49, 54, 72, 79, 80, 81, 82, 109, 110, 111, 115, 177, 185, 199, 207, 223, 233, 237, 251, 279, 280, 281, 285, 299, 335, 348, 354, 355, 356, 357, 384, 385, 386, 390, 452, 459, 460, 461, 462, 489, 490, 491, 495], "construct": [5, 48, 72, 79, 81, 87, 140, 169, 176, 183, 
185, 190, 198, 205, 207, 213, 222, 223, 229, 233, 241, 250, 251, 257, 299, 340, 348, 354, 356, 362, 413, 452, 459, 461, 467, 520], "children": [5, 72, 75, 77, 79, 81, 91, 94, 96, 99, 100, 101, 102, 105, 106, 116, 120, 121, 123, 127, 185, 207, 232, 233, 261, 264, 266, 269, 270, 271, 272, 275, 276, 286, 290, 291, 299, 348, 351, 354, 356, 366, 369, 371, 374, 375, 376, 377, 380, 381, 391, 395, 396, 452, 455, 457, 459, 461, 471, 474, 476, 479, 480, 481, 482, 485, 486, 496, 500, 501, 503, 507], "order": [5, 9, 10, 12, 18, 19, 20, 21, 22, 32, 33, 34, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 54, 72, 75, 79, 80, 81, 82, 87, 97, 101, 103, 107, 109, 110, 111, 114, 115, 125, 134, 140, 177, 178, 185, 187, 199, 200, 207, 210, 222, 223, 224, 229, 231, 233, 237, 250, 251, 252, 257, 267, 271, 273, 277, 279, 280, 281, 284, 285, 294, 299, 334, 335, 348, 351, 354, 355, 356, 357, 362, 372, 376, 378, 382, 384, 385, 386, 389, 390, 399, 413, 452, 455, 459, 460, 461, 462, 467, 477, 481, 483, 487, 489, 490, 491, 494, 495, 505, 520], "fulli": [5, 9, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 74, 79, 80, 81, 96, 99, 105, 111, 115, 116, 126, 128, 144, 175, 178, 185, 197, 200, 207, 221, 224, 232, 233, 237, 249, 252, 275, 281, 285, 295, 296, 299, 313, 334, 350, 354, 355, 356, 380, 386, 390, 400, 401, 417, 454, 459, 460, 461, 476, 479, 485, 491, 495, 496, 506, 508, 524], "util": [5, 8, 9, 14, 16, 22, 25, 27, 31, 35, 37, 39, 40, 43, 44, 47, 48, 54, 58, 67, 68, 72, 79, 80, 81, 87, 97, 107, 109, 110, 125, 128, 129, 131, 148, 164, 166, 167, 171, 172, 177, 183, 185, 187, 188, 193, 194, 195, 199, 200, 205, 207, 208, 210, 211, 216, 217, 223, 224, 229, 233, 235, 237, 238, 240, 244, 245, 251, 252, 257, 267, 277, 279, 280, 294, 296, 297, 299, 300, 333, 336, 337, 343, 344, 348, 354, 355, 356, 362, 372, 382, 384, 385, 399, 401, 402, 404, 437, 439, 440, 447, 448, 452, 459, 460, 461, 467, 477, 487, 489, 490, 505, 508, 509, 511, 528, 544, 546, 547], "known": [5, 33, 47, 48, 49, 50, 54, 55, 72, 75, 79, 81, 109, 110, 128, 156, 177, 185, 187, 199, 207, 210, 220, 223, 233, 237, 248, 251, 279, 280, 296, 299, 334, 348, 351, 354, 356, 384, 385, 401, 452, 455, 459, 461, 489, 490, 508, 536, 550, 551, 552, 553, 554, 557, 558, 559, 563], "declust": 5, "activ": [5, 18, 19, 20, 22, 26, 27, 32, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 51, 72, 78, 79, 80, 81, 82, 87, 94, 101, 105, 111, 115, 126, 128, 138, 141, 144, 145, 146, 147, 163, 164, 177, 178, 183, 185, 187, 199, 200, 205, 207, 210, 223, 224, 229, 231, 232, 233, 237, 251, 252, 257, 264, 273, 275, 281, 285, 295, 296, 298, 299, 307, 313, 315, 316, 332, 333, 334, 335, 348, 353, 354, 355, 356, 357, 362, 369, 380, 386, 390, 400, 401, 411, 414, 417, 418, 419, 420, 436, 437, 452, 458, 459, 460, 461, 462, 467, 474, 481, 485, 491, 495, 506, 508, 518, 521, 524, 525, 526, 527, 543, 544, 549, 550, 551, 552, 553, 559, 560], "area": [5, 47, 49, 185, 207, 233, 263, 368], "research": [5, 47, 111, 115, 281, 285, 386, 390, 491, 495], "imag": [5, 14, 16, 18, 19, 20, 22, 23, 25, 28, 29, 31, 33, 35, 36, 37, 38, 39, 43, 44, 49, 118], "below": [5, 7, 8, 9, 14, 15, 16, 18, 19, 20, 21, 22, 23, 25, 26, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 42, 43, 44, 47, 48, 49, 63, 66, 72, 77, 79, 85, 87, 89, 103, 105, 119, 126, 140, 163, 169, 176, 177, 181, 183, 185, 190, 198, 199, 203, 205, 207, 213, 222, 223, 227, 229, 231, 232, 233, 241, 250, 251, 255, 257, 259, 273, 275, 289, 295, 299, 332, 340, 348, 354, 360, 362, 364, 378, 380, 394, 400, 413, 436, 443, 446, 452, 457, 459, 465, 467, 469, 483, 485, 499, 506, 
520, 543, 550, 551, 552, 553, 555, 558, 559], "illustr": [5, 92, 93, 94, 108, 113, 118, 128, 185, 207, 233, 296, 401, 472, 473, 474, 488, 493, 498, 508], "differ": [5, 9, 10, 11, 12, 18, 19, 20, 21, 22, 25, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 54, 55, 65, 68, 72, 77, 78, 79, 80, 81, 82, 91, 95, 102, 105, 106, 109, 110, 111, 115, 118, 121, 128, 132, 134, 137, 140, 141, 154, 156, 161, 173, 176, 177, 178, 185, 187, 195, 198, 199, 200, 207, 209, 210, 217, 220, 222, 223, 224, 232, 233, 236, 237, 245, 248, 250, 251, 252, 261, 265, 272, 275, 276, 279, 280, 281, 285, 291, 296, 299, 301, 306, 310, 323, 325, 330, 334, 335, 342, 344, 348, 354, 355, 356, 357, 366, 370, 377, 380, 381, 384, 385, 386, 390, 396, 401, 405, 410, 413, 414, 427, 429, 434, 445, 448, 452, 457, 458, 459, 460, 461, 462, 471, 475, 482, 485, 486, 489, 490, 491, 495, 501, 508, 512, 517, 520, 521, 534, 536, 541, 549, 551, 552, 554, 557], "addition": [5, 7, 9, 36, 38, 39, 48, 54, 66, 72, 74, 82, 88, 103, 109, 110, 128, 177, 184, 185, 187, 197, 199, 206, 207, 210, 221, 223, 230, 231, 233, 237, 249, 251, 258, 273, 279, 280, 335, 348, 350, 357, 363, 378, 384, 385, 446, 452, 454, 462, 468, 483, 489, 490, 508], "must": [5, 8, 9, 11, 12, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 51, 54, 55, 63, 66, 71, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 89, 90, 96, 98, 99, 101, 103, 105, 108, 109, 110, 111, 112, 114, 115, 116, 119, 124, 128, 131, 137, 141, 147, 149, 150, 151, 152, 154, 156, 158, 169, 172, 175, 177, 184, 185, 187, 190, 194, 197, 199, 205, 206, 207, 208, 210, 213, 220, 221, 223, 224, 229, 230, 231, 232, 233, 235, 237, 241, 248, 249, 251, 257, 258, 259, 260, 266, 268, 269, 271, 273, 275, 278, 279, 280, 281, 282, 284, 285, 286, 289, 293, 296, 298, 299, 300, 306, 310, 316, 318, 319, 320, 321, 323, 327, 334, 335, 340, 347, 348, 350, 353, 354, 355, 356, 357, 362, 363, 364, 365, 371, 373, 374, 376, 378, 380, 383, 384, 385, 386, 387, 389, 390, 391, 394, 398, 401, 404, 410, 414, 420, 422, 423, 424, 425, 427, 431, 443, 446, 451, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 469, 470, 476, 478, 479, 481, 483, 485, 488, 489, 490, 491, 492, 494, 495, 496, 499, 504, 508, 511, 517, 521, 527, 529, 530, 531, 532, 534, 536, 538, 551, 553, 555, 558, 559, 563], "shuffl": 5, "its": [5, 7, 8, 12, 18, 19, 20, 22, 25, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 51, 54, 63, 66, 68, 72, 75, 79, 80, 81, 82, 83, 87, 88, 89, 91, 94, 95, 96, 98, 99, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 112, 115, 116, 118, 119, 121, 122, 128, 140, 144, 148, 149, 150, 164, 169, 173, 176, 177, 178, 179, 184, 185, 187, 190, 195, 198, 199, 200, 201, 205, 206, 207, 210, 213, 217, 222, 223, 224, 225, 229, 230, 231, 232, 233, 237, 241, 245, 250, 251, 252, 253, 257, 258, 259, 261, 266, 268, 269, 271, 272, 273, 274, 275, 276, 278, 279, 280, 282, 286, 289, 291, 292, 296, 299, 313, 318, 319, 333, 334, 335, 340, 344, 348, 351, 354, 355, 356, 357, 358, 362, 363, 364, 366, 371, 373, 374, 376, 377, 378, 379, 380, 381, 383, 384, 385, 386, 387, 390, 391, 394, 396, 397, 401, 413, 417, 422, 423, 437, 443, 446, 448, 452, 455, 459, 460, 461, 462, 463, 467, 468, 469, 471, 474, 475, 476, 478, 479, 481, 482, 483, 484, 485, 486, 488, 489, 490, 491, 492, 495, 496, 498, 499, 501, 502, 508, 520, 524, 528, 529, 530, 544, 549, 552, 555, 560], "child": [5, 22, 35, 48, 49, 78, 79, 91, 93, 96, 99, 102, 105, 108, 109, 110, 111, 114, 115, 116, 121, 128, 185, 207, 232, 233, 261, 272, 275, 278, 279, 280, 281, 284, 285, 291, 296, 298, 
299, 353, 354, 366, 377, 380, 383, 384, 385, 386, 389, 390, 396, 401, 458, 459, 471, 473, 476, 479, 482, 485, 488, 489, 490, 491, 494, 495, 496, 501, 508], "wai": [5, 7, 8, 10, 11, 12, 18, 19, 20, 21, 22, 27, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 72, 74, 77, 78, 79, 82, 91, 102, 111, 115, 121, 128, 133, 134, 137, 144, 164, 169, 175, 177, 185, 187, 190, 197, 199, 207, 210, 213, 220, 221, 223, 233, 237, 241, 249, 251, 261, 272, 281, 285, 291, 296, 298, 299, 303, 313, 333, 348, 350, 353, 354, 366, 377, 386, 390, 396, 401, 407, 417, 437, 452, 454, 457, 458, 459, 462, 471, 482, 491, 495, 501, 508, 513, 514, 517, 524, 544, 559], "regardless": [5, 18, 19, 20, 22, 35, 43, 44, 47, 48, 49, 51, 72, 79, 82, 101, 111, 115, 133, 146, 148, 158, 159, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 271, 299, 302, 315, 317, 327, 328, 335, 348, 354, 357, 376, 406, 419, 421, 431, 432, 452, 459, 462, 481, 491, 495, 513, 526, 528, 538, 539], "drive": [5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 49, 52, 54, 68, 72, 74, 86, 91, 102, 121, 130, 140, 155, 164, 175, 176, 177, 182, 187, 197, 198, 199, 204, 221, 222, 223, 228, 237, 249, 250, 251, 256, 261, 272, 291, 324, 348, 350, 361, 366, 377, 396, 403, 413, 428, 437, 448, 452, 454, 466, 471, 482, 501, 510, 520, 535, 544], "both": [5, 8, 9, 12, 18, 19, 21, 22, 33, 35, 36, 38, 45, 47, 48, 49, 51, 58, 72, 79, 80, 81, 82, 87, 89, 94, 97, 103, 105, 107, 109, 110, 119, 125, 146, 152, 159, 169, 177, 183, 185, 187, 190, 199, 205, 207, 210, 213, 220, 223, 229, 232, 233, 237, 241, 248, 251, 257, 259, 264, 267, 275, 277, 279, 280, 289, 294, 299, 315, 321, 328, 334, 348, 354, 356, 362, 364, 369, 372, 378, 380, 382, 384, 385, 394, 399, 419, 425, 432, 452, 459, 460, 461, 462, 467, 469, 474, 477, 483, 485, 487, 489, 490, 499, 505, 526, 532, 539], "evenli": [5, 48, 54, 71, 80, 220, 248, 347, 355, 451, 460], "among": [5, 78, 81, 131, 134, 185, 187, 207, 208, 210, 233, 235, 237, 298, 300, 334, 353, 356, 404, 458, 461, 511], "surviv": [5, 47, 78, 185, 207, 233, 298, 353, 458], "accomplish": [5, 10, 48, 79, 233, 299, 354, 459, 559], "carefulli": [5, 33], "chosen": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 50, 72, 79, 91, 102, 121, 177, 199, 223, 233, 251, 261, 272, 291, 299, 348, 354, 366, 377, 396, 451, 452, 459, 471, 482, 501], "precomput": 5, "permut": 5, "map": [5, 47, 48, 54, 65, 72, 74, 79, 80, 86, 87, 91, 97, 102, 105, 107, 121, 125, 128, 132, 152, 175, 178, 182, 183, 185, 186, 197, 200, 204, 205, 207, 209, 221, 223, 224, 228, 229, 232, 233, 236, 237, 249, 251, 252, 256, 257, 261, 267, 272, 275, 277, 291, 294, 296, 299, 301, 321, 342, 348, 350, 354, 355, 361, 362, 366, 372, 377, 380, 382, 396, 399, 401, 405, 425, 445, 452, 454, 459, 460, 466, 467, 471, 477, 482, 485, 487, 501, 505, 508, 512, 532], "keep": [5, 8, 10, 12, 18, 19, 20, 22, 32, 35, 36, 37, 38, 39, 43, 44, 48, 49, 50, 54, 62, 71, 72, 81, 94, 106, 113, 118, 128, 177, 185, 187, 199, 207, 210, 220, 223, 233, 237, 240, 248, 251, 276, 296, 334, 339, 347, 348, 356, 381, 401, 442, 451, 452, 461, 474, 486, 493, 498, 508, 549], "creation": [5, 47, 48, 49, 54, 55, 71, 72, 79, 80, 82, 91, 93, 96, 99, 102, 116, 121, 128, 130, 133, 137, 164, 177, 185, 187, 199, 207, 210, 220, 223, 224, 233, 237, 248, 251, 252, 261, 263, 272, 291, 296, 299, 302, 306, 333, 335, 347, 348, 354, 355, 357, 366, 368, 377, 396, 401, 403, 406, 410, 437, 451, 452, 459, 460, 462, 471, 473, 476, 479, 482, 496, 501, 508, 510, 513, 517, 544], "fast": [5, 48, 49, 72, 79, 199, 223, 251, 299, 348, 354, 452, 459], 
"make": [5, 8, 9, 12, 13, 17, 18, 19, 20, 21, 22, 25, 27, 28, 29, 32, 33, 35, 36, 37, 38, 39, 40, 43, 44, 47, 48, 49, 54, 63, 71, 72, 77, 78, 79, 80, 81, 82, 87, 91, 92, 93, 94, 102, 108, 111, 113, 115, 118, 121, 128, 146, 164, 169, 176, 177, 178, 183, 185, 187, 190, 198, 199, 200, 205, 207, 210, 213, 220, 223, 224, 229, 233, 237, 241, 248, 251, 252, 257, 261, 272, 278, 281, 285, 291, 296, 298, 299, 315, 333, 334, 340, 347, 348, 353, 354, 355, 356, 362, 366, 377, 383, 386, 390, 396, 401, 419, 437, 443, 451, 452, 457, 458, 459, 460, 461, 462, 467, 471, 472, 473, 474, 482, 488, 491, 493, 495, 498, 501, 508, 526, 544, 561, 562, 563], "imposs": [5, 47], "damag": [5, 47, 48, 54, 72, 80, 81, 136, 140, 144, 156, 176, 187, 198, 210, 222, 223, 237, 250, 251, 252, 305, 313, 325, 348, 355, 356, 409, 413, 417, 429, 452, 460, 461, 516, 520, 524, 536, 550, 552, 556, 557], "lost": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 79, 140, 144, 185, 187, 207, 210, 222, 233, 237, 250, 299, 313, 354, 413, 417, 459, 520, 524, 553, 555, 556, 557, 563], "fix": [5, 9, 11, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 63, 67, 71, 72, 79, 81, 105, 169, 177, 185, 190, 199, 207, 213, 220, 223, 232, 233, 241, 248, 251, 275, 299, 340, 347, 348, 354, 356, 380, 443, 447, 451, 452, 459, 461, 485, 559, 563], "pad": [5, 81, 356, 461], "necessari": [5, 9, 18, 19, 20, 21, 22, 32, 33, 35, 36, 37, 38, 39, 43, 44, 48, 72, 79, 80, 81, 89, 105, 111, 115, 119, 130, 185, 187, 199, 207, 210, 223, 232, 233, 237, 251, 259, 275, 281, 285, 289, 299, 334, 348, 354, 355, 356, 364, 380, 386, 390, 394, 403, 452, 459, 460, 461, 469, 485, 491, 495, 499, 510, 557, 561, 562], "zero": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 51, 54, 62, 66, 71, 72, 79, 80, 81, 88, 93, 94, 105, 106, 130, 177, 178, 184, 185, 199, 200, 206, 207, 220, 223, 224, 230, 232, 233, 237, 240, 248, 251, 252, 258, 263, 264, 275, 276, 299, 334, 339, 347, 348, 354, 355, 356, 363, 368, 369, 380, 381, 403, 442, 446, 451, 452, 459, 460, 461, 468, 473, 474, 485, 486, 510, 557, 559], "howev": [5, 7, 9, 10, 12, 14, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 54, 68, 71, 72, 78, 79, 80, 81, 82, 83, 88, 97, 105, 107, 109, 110, 111, 115, 125, 134, 137, 146, 156, 166, 167, 173, 177, 178, 185, 187, 195, 199, 200, 206, 207, 210, 217, 220, 223, 224, 230, 232, 233, 237, 245, 248, 251, 252, 258, 267, 275, 277, 279, 280, 281, 285, 294, 298, 299, 306, 315, 334, 335, 336, 344, 347, 348, 353, 354, 355, 356, 357, 358, 363, 372, 380, 382, 384, 385, 386, 390, 399, 410, 419, 439, 440, 448, 451, 452, 458, 459, 460, 461, 462, 463, 468, 477, 485, 487, 489, 490, 491, 495, 505, 517, 526, 536, 546, 547, 557, 559], "significantli": [5, 11, 47, 48, 49, 72, 79, 80, 81, 178, 185, 199, 200, 207, 223, 224, 233, 251, 252, 299, 348, 354, 355, 356, 452, 459, 460, 461], "capac": [5, 47, 48, 54, 58, 72, 77, 81, 82, 93, 133, 146, 148, 159, 164, 177, 187, 199, 210, 223, 237, 251, 263, 317, 333, 335, 348, 356, 357, 368, 421, 437, 452, 457, 461, 462, 473, 513, 526, 528, 539, 544], "32k": [5, 48, 49, 68, 71, 87, 173, 183, 195, 205, 217, 220, 229, 245, 248, 257, 344, 347, 362, 448, 451, 467], "compress": [5, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 54, 58, 62, 67, 72, 78, 79, 80, 81, 87, 89, 91, 96, 99, 102, 109, 110, 111, 115, 116, 119, 121, 128, 166, 167, 171, 177, 178, 183, 185, 193, 199, 200, 205, 207, 216, 223, 224, 229, 233, 240, 244, 251, 252, 257, 259, 261, 272, 279, 280, 281, 285, 289, 291, 296, 298, 299, 339, 343, 348, 353, 354, 355, 356, 362, 364, 
366, 377, 384, 385, 386, 390, 394, 396, 401, 442, 447, 452, 458, 459, 460, 461, 467, 469, 471, 476, 479, 482, 489, 490, 491, 495, 496, 499, 501, 508, 546, 547], "rel": [5, 33, 37, 39, 47, 48, 50, 71, 72, 79, 80, 81, 82, 87, 101, 177, 185, 187, 199, 205, 207, 210, 220, 223, 229, 233, 237, 248, 251, 257, 271, 299, 334, 347, 348, 354, 355, 356, 357, 362, 376, 451, 452, 459, 460, 461, 462, 467, 481], "reduc": [5, 18, 19, 20, 22, 35, 43, 44, 47, 48, 49, 51, 62, 72, 77, 78, 79, 80, 81, 82, 152, 164, 165, 177, 178, 185, 187, 199, 200, 207, 210, 223, 224, 233, 237, 240, 251, 252, 298, 299, 321, 333, 339, 348, 353, 354, 355, 356, 425, 437, 438, 442, 452, 457, 458, 459, 460, 461, 462, 532, 544, 545], "volblocks": [5, 48, 72, 79, 81, 89, 93, 119, 185, 199, 207, 223, 233, 251, 259, 263, 289, 299, 348, 354, 356, 364, 368, 394, 452, 459, 461, 469, 473, 499], "account": [5, 10, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 79, 80, 81, 82, 87, 96, 99, 103, 108, 116, 128, 185, 187, 199, 200, 207, 210, 223, 224, 233, 237, 251, 252, 257, 278, 296, 299, 335, 348, 354, 355, 356, 357, 362, 378, 383, 401, 452, 459, 460, 461, 462, 467, 476, 479, 483, 488, 496, 508], "signific": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 78, 79, 81, 82, 185, 207, 220, 233, 237, 251, 298, 299, 335, 348, 353, 354, 356, 357, 452, 458, 459, 461, 462], "amount": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 50, 51, 54, 72, 77, 78, 79, 80, 81, 82, 87, 96, 99, 105, 116, 128, 146, 152, 159, 163, 165, 177, 178, 183, 185, 187, 199, 200, 205, 207, 210, 223, 224, 229, 232, 233, 237, 251, 252, 257, 266, 269, 275, 286, 296, 298, 299, 315, 321, 328, 332, 335, 348, 353, 354, 355, 356, 357, 362, 371, 374, 380, 391, 401, 419, 425, 432, 436, 438, 452, 457, 458, 459, 460, 461, 462, 467, 476, 479, 485, 496, 508, 526, 532, 539, 543, 545], "small": [5, 11, 12, 14, 16, 19, 20, 25, 28, 31, 33, 36, 38, 43, 44, 47, 48, 49, 50, 54, 68, 71, 72, 79, 81, 82, 173, 177, 185, 187, 195, 199, 207, 210, 217, 220, 223, 233, 237, 245, 248, 251, 299, 334, 335, 344, 347, 348, 354, 356, 357, 448, 451, 452, 459, 461, 462], "add": [5, 7, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 49, 54, 67, 68, 72, 78, 80, 81, 82, 84, 89, 98, 109, 110, 112, 119, 128, 130, 132, 134, 137, 145, 146, 152, 164, 165, 171, 172, 173, 181, 185, 187, 193, 194, 195, 203, 207, 209, 210, 216, 217, 224, 227, 231, 233, 236, 237, 244, 245, 251, 252, 254, 255, 259, 268, 273, 279, 280, 282, 289, 296, 301, 303, 306, 314, 315, 321, 333, 334, 335, 343, 344, 348, 355, 356, 357, 359, 364, 373, 384, 385, 387, 394, 401, 403, 405, 407, 410, 418, 419, 425, 437, 438, 447, 448, 452, 458, 460, 461, 462, 464, 469, 478, 489, 490, 492, 499, 508, 510, 512, 514, 517, 525, 526, 532, 544, 545], "special": [5, 12, 19, 20, 34, 36, 38, 43, 44, 47, 48, 49, 72, 78, 79, 80, 81, 87, 88, 109, 110, 130, 152, 165, 168, 184, 185, 187, 206, 207, 210, 223, 224, 230, 231, 233, 237, 251, 252, 258, 273, 279, 280, 299, 321, 334, 348, 354, 355, 356, 363, 384, 385, 403, 425, 438, 441, 452, 458, 459, 460, 461, 467, 468, 489, 490, 510, 532, 545, 548], "regard": [5, 7, 12, 32, 47, 81, 87, 140, 183, 205, 222, 229, 250, 257, 356, 362, 413, 461, 467, 520], "similar": [5, 9, 11, 18, 19, 20, 22, 33, 35, 36, 38, 43, 44, 47, 48, 49, 72, 79, 81, 87, 89, 95, 105, 111, 115, 119, 128, 134, 144, 148, 156, 164, 165, 177, 183, 185, 187, 199, 205, 207, 210, 223, 229, 232, 233, 237, 251, 257, 265, 275, 281, 285, 296, 299, 325, 333, 348, 354, 
356, 362, 370, 380, 386, 390, 401, 429, 437, 438, 452, 459, 461, 467, 469, 475, 485, 491, 495, 499, 508, 524, 528, 536, 544, 545], "sinc": [5, 11, 18, 19, 20, 22, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 51, 54, 55, 62, 68, 71, 72, 75, 78, 79, 80, 81, 88, 91, 100, 102, 105, 109, 110, 111, 114, 115, 120, 121, 123, 127, 140, 146, 156, 159, 176, 177, 178, 184, 185, 187, 195, 198, 199, 200, 206, 207, 210, 217, 220, 222, 223, 224, 230, 233, 237, 240, 245, 248, 250, 251, 252, 258, 261, 270, 272, 275, 279, 280, 281, 284, 285, 290, 291, 298, 299, 315, 328, 334, 339, 344, 347, 348, 351, 353, 354, 355, 356, 363, 366, 375, 377, 380, 384, 385, 386, 389, 390, 395, 396, 413, 419, 429, 432, 442, 448, 451, 452, 455, 458, 459, 460, 461, 468, 471, 480, 482, 485, 489, 490, 491, 494, 495, 500, 501, 503, 507, 520, 526, 536, 539, 559, 561, 562], "access": [5, 18, 19, 20, 22, 35, 36, 38, 43, 44, 48, 49, 54, 62, 72, 78, 79, 80, 81, 82, 87, 88, 89, 91, 96, 99, 100, 102, 105, 109, 110, 111, 115, 116, 119, 120, 121, 123, 124, 127, 128, 136, 137, 146, 162, 172, 177, 184, 185, 187, 194, 199, 205, 206, 207, 210, 220, 223, 224, 229, 230, 232, 233, 237, 240, 248, 251, 257, 258, 259, 261, 270, 272, 275, 279, 280, 281, 285, 289, 290, 291, 293, 296, 298, 299, 306, 315, 331, 335, 339, 348, 353, 354, 356, 357, 362, 363, 364, 366, 375, 377, 380, 384, 385, 386, 390, 394, 395, 396, 398, 401, 409, 410, 419, 435, 442, 452, 458, 459, 460, 461, 462, 467, 468, 469, 471, 476, 479, 480, 482, 485, 489, 490, 491, 495, 496, 499, 500, 501, 503, 504, 507, 508, 516, 517, 526, 542, 556, 559, 560], "deliv": [5, 48, 81, 356, 461], "random": [5, 8, 14, 16, 25, 28, 31, 47, 48, 49, 54, 68, 72, 79, 80, 81, 131, 173, 185, 187, 195, 200, 207, 208, 210, 217, 224, 233, 235, 237, 245, 251, 252, 299, 300, 334, 344, 348, 354, 355, 356, 404, 448, 452, 459, 460, 461, 511], "reason": [5, 8, 9, 11, 12, 21, 36, 38, 47, 48, 49, 54, 63, 71, 72, 77, 79, 80, 81, 82, 88, 109, 110, 111, 115, 130, 177, 185, 187, 190, 199, 200, 206, 207, 210, 213, 220, 223, 224, 230, 233, 237, 241, 248, 251, 252, 258, 279, 280, 281, 285, 299, 335, 340, 347, 348, 354, 355, 356, 357, 363, 384, 385, 386, 390, 403, 443, 451, 452, 457, 459, 460, 461, 462, 468, 489, 490, 491, 495, 510], "floor": [5, 48, 68, 81, 173, 177, 195, 199, 217, 223, 245, 251, 344, 356, 448, 461], "summari": [5, 7, 12, 54, 65, 66, 68, 72, 86, 88, 103, 131, 144, 165, 166, 167, 173, 182, 184, 187, 192, 195, 204, 206, 210, 215, 217, 228, 230, 237, 243, 245, 251, 256, 258, 300, 313, 336, 342, 344, 348, 361, 363, 378, 404, 417, 438, 439, 440, 445, 446, 448, 452, 466, 468, 483, 511, 524, 545, 546, 547], "enumer": 5, "immedi": [5, 47, 48, 54, 71, 72, 79, 80, 82, 89, 94, 109, 110, 119, 126, 133, 134, 161, 163, 164, 177, 178, 185, 187, 199, 200, 207, 210, 220, 223, 224, 233, 237, 248, 251, 252, 259, 264, 279, 280, 289, 295, 299, 303, 330, 332, 333, 335, 347, 348, 354, 355, 357, 364, 369, 384, 385, 394, 400, 407, 434, 436, 437, 451, 452, 459, 460, 462, 469, 474, 489, 490, 499, 506, 513, 514, 541, 543, 544], "unlik": [5, 18, 19, 20, 36, 37, 38, 39, 43, 44, 47, 48, 49, 72, 78, 79, 81, 82, 83, 87, 140, 165, 176, 179, 183, 185, 198, 199, 201, 205, 207, 222, 223, 225, 229, 233, 237, 250, 251, 253, 257, 298, 299, 335, 348, 353, 354, 356, 357, 358, 362, 413, 438, 452, 458, 459, 461, 462, 463, 467, 520, 545, 550], "colon": [5, 77, 79, 82, 87, 137, 164, 183, 185, 187, 205, 207, 210, 229, 233, 237, 257, 299, 306, 333, 354, 362, 410, 437, 457, 459, 462, 467, 517, 544], "separ": [5, 7, 8, 11, 18, 19, 20, 22, 32, 33, 35, 36, 37, 
38, 39, 43, 44, 47, 49, 54, 62, 68, 77, 79, 80, 81, 82, 87, 89, 93, 94, 95, 96, 99, 101, 103, 104, 116, 119, 122, 128, 129, 132, 137, 140, 142, 144, 146, 148, 157, 163, 164, 173, 177, 183, 185, 187, 190, 195, 205, 207, 210, 213, 217, 224, 229, 231, 233, 236, 237, 240, 241, 245, 252, 257, 259, 263, 264, 265, 266, 269, 271, 273, 274, 286, 289, 292, 297, 299, 301, 309, 311, 313, 315, 317, 326, 332, 333, 334, 339, 344, 354, 355, 356, 357, 362, 364, 368, 369, 370, 371, 374, 376, 378, 379, 391, 394, 397, 402, 405, 413, 415, 417, 419, 421, 430, 436, 437, 442, 448, 457, 459, 460, 461, 462, 467, 469, 473, 474, 475, 476, 479, 481, 483, 484, 496, 499, 502, 508, 509, 512, 517, 520, 522, 524, 526, 528, 537, 543, 544], "option": [5, 9, 10, 11, 12, 13, 14, 16, 21, 25, 26, 27, 29, 31, 34, 36, 37, 38, 39, 47, 48, 49, 54, 59, 60, 62, 63, 65, 66, 67, 68, 72, 74, 78, 79, 81, 82, 83, 85, 86, 87, 88, 89, 91, 93, 94, 95, 96, 97, 99, 101, 102, 103, 104, 105, 107, 109, 110, 111, 113, 114, 115, 116, 118, 119, 121, 122, 124, 125, 128, 130, 131, 132, 133, 135, 137, 140, 144, 153, 156, 158, 159, 161, 162, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 179, 181, 182, 183, 184, 185, 186, 187, 188, 190, 192, 193, 194, 195, 197, 198, 199, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 215, 216, 217, 221, 222, 223, 225, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 238, 240, 241, 243, 244, 245, 249, 250, 251, 253, 255, 256, 257, 258, 259, 261, 263, 264, 265, 266, 267, 269, 271, 272, 273, 274, 275, 276, 277, 279, 280, 281, 283, 284, 285, 286, 289, 291, 292, 293, 294, 296, 298, 299, 300, 301, 302, 306, 310, 313, 315, 327, 328, 330, 331, 333, 334, 335, 337, 339, 340, 342, 343, 344, 348, 350, 353, 354, 356, 357, 358, 360, 361, 362, 363, 364, 366, 368, 369, 370, 371, 372, 374, 376, 377, 378, 379, 380, 382, 384, 385, 386, 388, 389, 390, 391, 394, 396, 397, 398, 399, 401, 403, 404, 405, 406, 408, 410, 413, 417, 426, 429, 431, 432, 434, 435, 437, 438, 442, 443, 445, 446, 447, 448, 452, 454, 458, 459, 461, 462, 463, 465, 466, 467, 468, 469, 471, 473, 474, 475, 476, 477, 479, 481, 482, 483, 484, 485, 487, 489, 490, 491, 493, 494, 495, 496, 499, 501, 502, 504, 505, 508, 510, 511, 512, 513, 515, 517, 520, 524, 533, 536, 538, 539, 541, 542, 544, 545, 546, 547, 549, 551, 554, 555, 556, 560], "most": [5, 9, 18, 19, 20, 22, 32, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 51, 54, 65, 71, 72, 77, 78, 79, 80, 82, 83, 85, 87, 88, 109, 110, 114, 124, 128, 176, 177, 178, 183, 185, 192, 198, 199, 200, 205, 207, 215, 220, 222, 223, 224, 229, 233, 237, 243, 248, 250, 251, 252, 257, 279, 280, 284, 293, 296, 299, 335, 342, 347, 348, 354, 355, 357, 358, 360, 362, 363, 384, 385, 389, 398, 401, 445, 451, 452, 457, 458, 459, 460, 462, 463, 465, 467, 468, 489, 490, 494, 504, 508, 557, 563], "control": [5, 7, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 52, 54, 71, 72, 74, 77, 78, 79, 80, 81, 82, 87, 88, 96, 99, 116, 128, 161, 175, 177, 178, 185, 187, 197, 199, 200, 207, 210, 220, 221, 223, 224, 233, 237, 248, 249, 251, 252, 257, 266, 269, 286, 296, 298, 299, 330, 334, 335, 347, 348, 350, 353, 354, 355, 356, 357, 362, 363, 371, 374, 391, 401, 434, 451, 452, 454, 457, 458, 459, 460, 461, 462, 467, 468, 476, 479, 496, 508, 541], "By": [5, 7, 9, 10, 12, 26, 32, 48, 49, 54, 66, 68, 71, 72, 78, 79, 80, 81, 82, 87, 93, 94, 101, 111, 114, 115, 137, 158, 165, 173, 177, 178, 183, 185, 187, 195, 199, 200, 205, 207, 210, 217, 220, 223, 224, 229, 233, 237, 245, 248, 251, 252, 257, 263, 264, 271, 281, 284, 285, 298, 299, 306, 327, 334, 
335, 344, 347, 348, 353, 354, 355, 356, 357, 362, 368, 369, 376, 386, 389, 390, 410, 431, 438, 446, 448, 451, 452, 458, 459, 460, 461, 462, 467, 473, 474, 481, 491, 494, 495, 517, 538, 545], "unspecifi": [5, 79, 158, 185, 187, 207, 210, 233, 237, 299, 327, 354, 431, 459, 538], "c": [5, 10, 11, 12, 16, 18, 19, 20, 21, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 54, 62, 63, 66, 67, 68, 74, 75, 80, 82, 86, 87, 89, 95, 105, 106, 109, 110, 111, 115, 119, 128, 132, 140, 144, 145, 146, 159, 161, 164, 166, 167, 169, 171, 172, 175, 176, 182, 183, 185, 186, 187, 188, 190, 193, 194, 197, 198, 204, 205, 207, 209, 210, 211, 213, 216, 221, 222, 224, 228, 229, 232, 233, 236, 237, 238, 240, 241, 244, 249, 250, 252, 256, 257, 259, 265, 275, 276, 279, 280, 281, 285, 289, 296, 301, 306, 309, 313, 314, 315, 317, 328, 330, 333, 335, 336, 337, 339, 340, 343, 344, 350, 351, 355, 357, 361, 362, 364, 370, 380, 381, 384, 385, 386, 390, 394, 401, 405, 413, 417, 418, 419, 432, 434, 437, 439, 440, 442, 443, 446, 447, 448, 454, 455, 460, 462, 466, 467, 469, 475, 485, 486, 489, 490, 491, 495, 499, 508, 512, 520, 524, 525, 526, 539, 541, 544, 546, 547], "smaller": [5, 47, 48, 49, 51, 71, 72, 79, 80, 81, 82, 111, 115, 177, 178, 199, 200, 207, 220, 223, 224, 233, 237, 248, 251, 252, 281, 285, 299, 334, 335, 347, 348, 354, 355, 356, 357, 386, 390, 451, 452, 459, 460, 461, 462, 491, 495], "speed": [5, 8, 12, 19, 20, 36, 38, 43, 44, 48, 49, 50, 55, 72, 79, 80, 81, 177, 185, 199, 207, 223, 233, 251, 252, 299, 348, 354, 355, 356, 452, 459, 460, 461], "expens": [5, 36, 38, 47, 48, 49, 54, 71, 72, 79, 81, 87, 183, 199, 205, 220, 223, 229, 233, 248, 251, 257, 299, 347, 348, 354, 356, 362, 451, 452, 459, 461, 467], "unless": [5, 8, 10, 14, 16, 18, 19, 20, 25, 28, 31, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 66, 72, 79, 80, 81, 87, 93, 103, 106, 111, 115, 137, 144, 149, 150, 153, 166, 167, 177, 178, 185, 187, 199, 200, 207, 210, 220, 223, 224, 233, 237, 248, 251, 252, 257, 276, 299, 306, 313, 318, 319, 322, 348, 354, 355, 356, 362, 368, 378, 381, 410, 417, 422, 423, 426, 446, 452, 459, 460, 461, 467, 473, 483, 486, 491, 495, 517, 524, 529, 530, 533, 546, 547], "expect": [5, 12, 33, 36, 38, 47, 48, 49, 58, 71, 72, 77, 79, 81, 82, 83, 91, 102, 105, 121, 134, 158, 164, 165, 176, 179, 185, 187, 198, 201, 207, 210, 220, 222, 225, 232, 233, 237, 248, 250, 251, 253, 261, 272, 275, 291, 299, 327, 333, 334, 335, 347, 348, 354, 356, 357, 358, 366, 377, 380, 396, 413, 431, 437, 438, 451, 452, 457, 459, 461, 462, 463, 471, 482, 485, 501, 538, 544, 545, 557], "cross": [5, 46, 48, 49, 72, 81, 177, 199, 223, 251, 348, 356, 452, 461], "list": [5, 7, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 43, 44, 48, 49, 54, 57, 58, 59, 60, 62, 66, 67, 71, 72, 77, 78, 79, 80, 81, 82, 84, 88, 89, 91, 93, 94, 96, 98, 99, 102, 103, 104, 105, 106, 111, 112, 115, 116, 119, 121, 122, 124, 126, 128, 132, 133, 134, 135, 137, 140, 142, 144, 146, 154, 157, 158, 159, 163, 164, 169, 171, 175, 176, 177, 178, 181, 182, 184, 185, 186, 187, 190, 193, 197, 198, 199, 200, 203, 204, 206, 207, 209, 210, 213, 216, 220, 221, 222, 223, 224, 227, 228, 230, 231, 232, 233, 236, 237, 240, 241, 244, 248, 250, 251, 252, 254, 255, 258, 259, 261, 263, 264, 266, 268, 269, 272, 273, 274, 275, 276, 281, 282, 285, 286, 289, 291, 292, 293, 295, 296, 298, 299, 301, 302, 303, 304, 306, 309, 311, 313, 315, 323, 326, 327, 328, 332, 333, 334, 335, 339, 343, 347, 348, 353, 354, 355, 356, 357, 359, 363, 364, 366, 368, 369, 371, 373, 374, 
377, 378, 379, 380, 381, 386, 387, 390, 391, 394, 396, 397, 398, 400, 401, 405, 406, 407, 408, 410, 413, 415, 417, 419, 427, 430, 431, 432, 436, 437, 442, 446, 447, 451, 452, 457, 458, 459, 460, 461, 462, 464, 468, 469, 471, 473, 474, 476, 478, 479, 482, 483, 484, 485, 486, 491, 492, 495, 496, 499, 501, 502, 504, 506, 508, 512, 513, 514, 515, 517, 520, 522, 524, 526, 534, 537, 538, 539, 543, 544, 549, 553, 554, 556, 558, 559, 561, 562], "11": [5, 22, 35, 57, 87, 128, 148, 164, 183, 185, 187, 205, 207, 210, 229, 233, 237, 257, 296, 333, 362, 401, 437, 467, 508, 520, 528, 544], "4": [5, 14, 16, 21, 25, 26, 28, 31, 32, 47, 48, 49, 50, 54, 68, 74, 75, 79, 80, 81, 82, 83, 86, 87, 89, 96, 99, 116, 118, 119, 128, 131, 134, 137, 140, 146, 159, 164, 166, 167, 168, 173, 175, 176, 177, 179, 182, 183, 185, 187, 195, 197, 198, 199, 200, 201, 204, 205, 207, 210, 217, 220, 221, 222, 223, 224, 225, 228, 229, 233, 237, 245, 248, 249, 250, 251, 252, 253, 256, 257, 281, 285, 296, 299, 333, 344, 350, 351, 354, 355, 357, 358, 361, 362, 401, 404, 413, 437, 441, 448, 454, 455, 459, 460, 461, 462, 463, 466, 467, 469, 476, 479, 496, 498, 499, 508, 511, 517, 520, 526, 539, 544, 546, 547, 548, 564], "tank": [5, 33, 49, 54, 67, 89, 95, 96, 99, 109, 110, 116, 119, 123, 127, 128, 133, 137, 138, 141, 144, 146, 148, 152, 159, 164, 171, 185, 187, 193, 207, 210, 216, 233, 237, 244, 279, 280, 296, 333, 343, 384, 385, 401, 437, 447, 469, 475, 476, 479, 489, 490, 496, 499, 503, 507, 508, 513, 517, 518, 521, 524, 526, 528, 532, 539, 544, 560], "4d": 5, "11c": 5, "dev": [5, 8, 9, 14, 16, 18, 19, 20, 22, 23, 25, 27, 28, 29, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 58, 69, 72, 74, 75, 79, 81, 82, 86, 93, 100, 120, 123, 127, 128, 130, 133, 146, 148, 154, 158, 159, 164, 175, 182, 185, 187, 197, 199, 204, 207, 210, 218, 221, 223, 228, 233, 237, 246, 249, 251, 256, 263, 270, 290, 299, 302, 315, 317, 323, 327, 328, 334, 335, 345, 348, 350, 351, 354, 356, 357, 361, 368, 375, 395, 403, 406, 419, 421, 427, 431, 432, 449, 452, 454, 455, 459, 461, 462, 466, 473, 480, 500, 503, 507, 508, 510, 513, 526, 528, 534, 538, 539, 544, 549], "sd": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 49, 130, 146, 210, 237, 315, 403, 419, 510, 526], "k": [5, 14, 18, 19, 20, 22, 25, 35, 36, 38, 43, 44, 68, 77, 79, 87, 96, 99, 106, 116, 146, 172, 173, 185, 194, 195, 207, 210, 217, 229, 233, 237, 245, 257, 266, 269, 276, 286, 299, 315, 344, 354, 362, 371, 374, 381, 391, 419, 448, 457, 459, 467, 476, 479, 486, 496, 526], "state": [5, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 63, 68, 72, 77, 79, 80, 81, 82, 85, 87, 88, 94, 95, 104, 105, 109, 110, 111, 114, 115, 122, 128, 132, 135, 140, 144, 145, 146, 149, 150, 152, 156, 159, 164, 169, 176, 177, 178, 184, 185, 186, 187, 190, 198, 199, 200, 206, 207, 209, 210, 213, 222, 223, 224, 229, 230, 231, 232, 233, 236, 237, 241, 250, 251, 252, 257, 258, 264, 273, 274, 275, 279, 280, 281, 284, 285, 292, 296, 299, 301, 304, 313, 318, 319, 325, 333, 334, 335, 340, 344, 348, 354, 355, 356, 357, 360, 362, 363, 369, 379, 380, 384, 385, 386, 389, 390, 397, 401, 405, 408, 413, 417, 418, 422, 423, 429, 437, 443, 448, 452, 457, 459, 460, 461, 462, 465, 467, 468, 474, 475, 484, 485, 489, 490, 491, 494, 495, 502, 508, 512, 515, 520, 524, 525, 526, 529, 530, 532, 536, 539, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "onlin": [5, 37, 39, 48, 54, 58, 72, 75, 77, 81, 82, 84, 103, 111, 115, 133, 134, 136, 144, 145, 146, 148, 149, 151, 152, 154, 155, 
158, 159, 164, 177, 187, 199, 210, 223, 237, 251, 254, 281, 285, 302, 303, 313, 314, 318, 320, 323, 324, 327, 333, 334, 335, 348, 351, 356, 357, 359, 378, 386, 390, 406, 407, 409, 417, 418, 422, 424, 427, 428, 431, 432, 437, 452, 455, 457, 461, 462, 464, 483, 491, 495, 513, 514, 516, 524, 525, 526, 528, 529, 531, 532, 534, 535, 538, 539, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "config": [5, 9, 10, 14, 16, 25, 26, 28, 31, 32, 37, 39, 43, 44, 54, 68, 72, 87, 132, 140, 144, 152, 164, 176, 177, 186, 187, 198, 199, 209, 210, 222, 223, 236, 237, 250, 251, 301, 333, 344, 348, 405, 413, 437, 448, 452, 467, 512, 520, 524, 532, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "cksum": [5, 54, 146, 152, 159, 164, 187, 210, 237, 333, 437, 526, 532, 539, 544, 550, 551, 552, 553, 555, 556, 557, 558, 559, 561, 562, 563], "draid1": [5, 81, 356, 461], "sda": [5, 54, 81, 86, 130, 133, 134, 137, 144, 148, 152, 164, 182, 187, 204, 210, 228, 237, 256, 333, 334, 356, 361, 403, 437, 461, 466, 510, 513, 517, 524, 528, 532, 544], "sdb": [5, 54, 81, 133, 137, 144, 148, 152, 164, 187, 210, 237, 333, 334, 356, 437, 461, 513, 517, 524, 528, 532, 544], "sdc": [5, 54, 81, 133, 137, 146, 148, 152, 164, 187, 210, 237, 333, 334, 356, 437, 461, 513, 517, 526, 528, 532, 544], "sdd": [5, 54, 81, 133, 137, 146, 152, 164, 187, 210, 237, 333, 334, 356, 437, 461, 513, 517, 526, 532, 544], "sde": [5, 137, 152, 164, 187, 210, 237, 333, 437, 517, 532, 544], "sdf": [5, 137, 152, 164, 187, 210, 237, 333, 437, 517, 532, 544], "sdg": [5, 210, 237], "sdh": 5, "sdi": 5, "sdj": 5, "sdk": 5, "furthermor": [5, 47, 49, 72, 348, 452], "logic": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 78, 79, 87, 93, 103, 128, 132, 146, 159, 164, 177, 183, 185, 186, 187, 205, 207, 209, 210, 229, 233, 236, 237, 251, 257, 263, 296, 298, 299, 301, 315, 328, 333, 348, 353, 354, 362, 368, 378, 401, 405, 419, 432, 437, 452, 458, 459, 467, 473, 483, 508, 512, 526, 539, 544], "shown": [5, 9, 10, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 49, 57, 71, 72, 80, 146, 165, 177, 187, 199, 210, 220, 223, 237, 248, 251, 315, 347, 348, 355, 419, 438, 451, 452, 460, 526, 545, 559], "major": [5, 8, 18, 19, 20, 22, 32, 35, 43, 44, 47, 48, 49, 54, 71, 72, 177, 185, 199, 220, 223, 248, 251, 347, 348, 451, 452, 550, 552, 558, 560, 561, 562, 563], "heal": [5, 47, 72, 80, 109, 110, 166, 167, 252, 355, 452, 460, 489, 490, 546, 547], "scale": [5, 47, 48, 49, 50, 62, 72, 177, 199, 223, 240, 251, 339, 348, 442, 452], "divid": [5, 47, 48, 49, 51, 72, 77, 79, 82, 128, 177, 185, 199, 207, 223, 233, 251, 296, 299, 348, 354, 401, 452, 457, 459, 462, 508], "greatli": [5, 8, 48, 54, 220], "restor": [5, 36, 38, 47, 67, 72, 80, 81, 87, 96, 99, 105, 109, 110, 111, 115, 116, 128, 134, 135, 154, 164, 185, 207, 233, 237, 251, 252, 257, 266, 269, 275, 279, 280, 281, 285, 286, 296, 303, 304, 323, 333, 334, 348, 355, 356, 362, 371, 374, 380, 384, 385, 386, 390, 391, 401, 407, 408, 427, 437, 447, 452, 460, 461, 467, 476, 479, 485, 489, 490, 491, 495, 496, 508, 514, 515, 534, 544, 552, 553, 555, 556, 563], "fraction": [5, 48, 72, 132, 177, 186, 199, 209, 223, 236, 251, 301, 348, 405, 452, 512], "follow": [5, 8, 9, 10, 12, 14, 16, 18, 19, 20, 21, 22, 23, 25, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 49, 51, 54, 62, 63, 67, 72, 74, 77, 79, 80, 81, 82, 87, 88, 89, 90, 92, 93, 94, 95, 96, 97, 99, 101, 103, 105, 107, 108, 109, 110, 111, 113, 114, 115, 116, 118, 119, 123, 125, 127, 
128, 130, 132, 133, 135, 137, 138, 140, 141, 142, 144, 146, 148, 152, 157, 162, 164, 169, 173, 175, 176, 177, 178, 183, 184, 185, 187, 188, 190, 194, 197, 198, 199, 200, 205, 206, 207, 209, 210, 211, 213, 220, 221, 222, 223, 224, 229, 230, 231, 232, 233, 236, 237, 238, 240, 241, 248, 249, 250, 251, 252, 257, 258, 259, 266, 267, 269, 271, 273, 275, 277, 281, 285, 286, 289, 294, 296, 299, 301, 304, 309, 311, 315, 326, 333, 334, 335, 337, 339, 340, 343, 348, 350, 354, 355, 356, 357, 362, 363, 364, 371, 372, 374, 376, 378, 380, 382, 386, 390, 391, 394, 399, 401, 403, 405, 408, 413, 415, 419, 430, 437, 442, 443, 447, 452, 454, 457, 459, 460, 461, 462, 467, 468, 469, 470, 472, 473, 474, 475, 476, 477, 479, 481, 483, 485, 487, 488, 489, 490, 491, 493, 494, 495, 496, 498, 499, 503, 505, 507, 508, 510, 512, 513, 515, 517, 518, 520, 521, 522, 524, 526, 528, 532, 537, 542, 544, 555, 556, 557], "graph": [5, 165, 438, 545], "show": [5, 12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 62, 72, 80, 82, 87, 89, 94, 95, 96, 99, 106, 113, 116, 118, 119, 128, 136, 149, 150, 159, 183, 185, 187, 199, 205, 207, 210, 223, 229, 233, 237, 240, 251, 257, 276, 296, 328, 335, 339, 348, 355, 357, 362, 370, 381, 401, 409, 422, 423, 432, 442, 452, 460, 462, 467, 469, 474, 475, 476, 479, 486, 493, 496, 498, 499, 508, 516, 529, 530, 539], "hour": [5, 12, 48, 72, 79, 133, 146, 164, 185, 187, 207, 210, 223, 233, 237, 251, 299, 333, 354, 437, 452, 459, 513, 526, 544], "90": 5, "hdd": [5, 54, 72, 251, 348, 452], "fill": [5, 48, 72, 79, 80, 111, 115, 133, 146, 164, 177, 178, 187, 199, 200, 210, 223, 224, 233, 237, 251, 252, 281, 285, 299, 333, 348, 354, 355, 386, 390, 437, 452, 459, 460, 491, 495, 513, 526, 544], "process": [5, 6, 9, 10, 12, 18, 19, 20, 22, 28, 35, 36, 37, 38, 39, 43, 44, 48, 49, 50, 57, 66, 71, 72, 75, 78, 79, 80, 82, 88, 103, 104, 109, 110, 111, 113, 115, 117, 122, 132, 152, 158, 178, 184, 185, 187, 199, 200, 206, 207, 209, 210, 220, 223, 224, 230, 231, 233, 236, 237, 248, 251, 252, 258, 273, 274, 279, 280, 281, 283, 285, 287, 292, 298, 299, 301, 321, 327, 335, 347, 348, 351, 353, 354, 355, 357, 363, 378, 379, 384, 385, 386, 388, 390, 392, 397, 405, 425, 431, 446, 451, 452, 455, 458, 459, 460, 462, 468, 483, 484, 489, 490, 491, 493, 495, 497, 502, 512, 532, 538, 557], "handl": [5, 8, 11, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 45, 46, 47, 48, 49, 54, 71, 72, 74, 79, 81, 82, 85, 86, 105, 111, 115, 126, 128, 137, 140, 175, 176, 177, 181, 182, 184, 185, 187, 197, 198, 199, 203, 204, 206, 207, 210, 220, 221, 222, 223, 227, 228, 230, 232, 233, 237, 248, 249, 250, 251, 255, 256, 258, 275, 281, 285, 295, 299, 306, 334, 335, 347, 348, 350, 354, 356, 357, 360, 361, 380, 386, 390, 400, 401, 410, 413, 451, 452, 454, 459, 461, 462, 465, 466, 485, 491, 495, 506, 508, 517, 520], "almost": [5, 9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 88, 184, 206, 230, 258, 363, 468], "ident": [5, 9, 36, 79, 81, 97, 107, 111, 115, 125, 144, 164, 185, 187, 207, 210, 233, 237, 267, 277, 281, 285, 294, 299, 313, 333, 334, 354, 356, 372, 382, 386, 390, 399, 417, 437, 459, 461, 477, 487, 491, 495, 505, 524, 544], "event": [5, 6, 11, 36, 47, 48, 72, 81, 82, 84, 88, 91, 95, 102, 103, 121, 143, 159, 164, 174, 177, 184, 185, 187, 196, 199, 206, 207, 210, 219, 220, 223, 230, 231, 233, 237, 247, 248, 251, 254, 258, 261, 265, 272, 273, 291, 312, 328, 333, 335, 347, 348, 356, 357, 359, 363, 366, 370, 377, 378, 396, 416, 432, 437, 452, 461, 462, 464, 468, 471, 475, 482, 483, 501, 523, 539, 544, 557], "echo": [5, 16, 
18, 19, 20, 22, 25, 26, 28, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 48, 559], "offlin": [5, 48, 72, 77, 81, 82, 84, 87, 133, 139, 140, 146, 148, 150, 152, 158, 159, 164, 176, 177, 187, 198, 199, 210, 222, 223, 237, 250, 251, 254, 302, 308, 315, 317, 319, 321, 327, 328, 333, 334, 335, 348, 356, 357, 359, 406, 412, 413, 419, 421, 423, 425, 431, 432, 437, 452, 457, 461, 462, 464, 467, 513, 519, 520, 526, 528, 530, 532, 538, 539, 544], "sy": [5, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 72, 79, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459, 559], "replac": [5, 11, 18, 19, 20, 22, 27, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 72, 74, 75, 77, 79, 80, 81, 82, 84, 86, 92, 93, 94, 105, 108, 111, 113, 115, 118, 128, 130, 133, 134, 137, 139, 140, 145, 146, 147, 148, 152, 155, 156, 158, 159, 163, 164, 176, 178, 182, 185, 187, 197, 198, 199, 200, 204, 207, 210, 221, 222, 223, 224, 228, 232, 233, 237, 249, 250, 251, 252, 254, 256, 275, 281, 285, 296, 299, 302, 303, 306, 308, 309, 314, 315, 316, 317, 321, 324, 325, 327, 328, 332, 333, 334, 335, 348, 350, 351, 354, 355, 356, 357, 359, 361, 380, 386, 390, 401, 403, 406, 407, 410, 412, 413, 418, 419, 420, 421, 425, 428, 429, 431, 432, 436, 437, 452, 454, 455, 457, 459, 460, 461, 462, 464, 466, 472, 473, 474, 485, 488, 491, 493, 495, 498, 508, 510, 513, 514, 517, 519, 520, 525, 526, 527, 528, 532, 535, 536, 538, 539, 543, 544, 550, 552, 557, 563], "being": [5, 8, 10, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 71, 72, 77, 78, 79, 80, 81, 82, 85, 87, 89, 91, 102, 104, 109, 110, 111, 115, 117, 119, 121, 122, 130, 135, 137, 140, 141, 145, 161, 166, 167, 175, 176, 177, 178, 181, 185, 187, 197, 198, 199, 200, 203, 207, 210, 220, 221, 222, 223, 224, 227, 233, 237, 248, 250, 251, 252, 255, 257, 259, 261, 268, 272, 274, 279, 280, 281, 282, 285, 289, 291, 292, 298, 299, 304, 306, 310, 314, 330, 334, 335, 347, 348, 353, 354, 355, 356, 357, 360, 362, 364, 366, 377, 379, 384, 385, 386, 390, 392, 394, 396, 397, 403, 408, 410, 413, 414, 418, 434, 451, 452, 457, 458, 459, 460, 461, 462, 465, 467, 469, 471, 482, 484, 489, 490, 491, 495, 497, 499, 501, 502, 510, 515, 517, 520, 521, 525, 541, 546, 547, 556, 557, 559, 560], "continu": [5, 14, 16, 19, 20, 25, 31, 35, 36, 37, 39, 43, 44, 47, 48, 54, 58, 63, 72, 80, 81, 82, 91, 102, 105, 111, 115, 121, 140, 152, 162, 169, 176, 177, 187, 190, 198, 199, 210, 213, 222, 223, 232, 237, 240, 241, 250, 251, 261, 272, 275, 281, 285, 291, 321, 331, 334, 335, 340, 348, 355, 356, 357, 366, 377, 380, 386, 390, 396, 413, 425, 435, 443, 452, 460, 461, 462, 471, 482, 485, 491, 495, 501, 520, 532, 542, 550, 551, 552, 553, 557, 561, 562, 563], "possibli": [5, 11, 72, 185, 452], "wait": [5, 18, 19, 20, 22, 28, 35, 36, 37, 38, 39, 43, 44, 48, 49, 50, 69, 72, 75, 82, 84, 91, 102, 121, 128, 134, 135, 136, 140, 143, 144, 145, 146, 149, 150, 152, 154, 156, 158, 159, 161, 164, 176, 177, 187, 198, 199, 210, 218, 222, 223, 233, 237, 246, 250, 251, 254, 261, 272, 291, 296, 303, 304, 309, 312, 313, 314, 315, 321, 323, 325, 327, 328, 330, 333, 335, 345, 348, 351, 357, 359, 366, 377, 396, 401, 407, 408, 409, 413, 416, 417, 418, 419, 422, 423, 425, 427, 429, 431, 432, 434, 437, 449, 452, 455, 462, 464, 471, 482, 501, 508, 514, 515, 516, 520, 523, 524, 525, 526, 529, 530, 532, 534, 536, 538, 539, 541, 544, 557, 559, 561, 563], "complet": [5, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 68, 71, 72, 78, 79, 80, 81, 82, 92, 93, 105, 109, 110, 111, 114, 115, 118, 126, 128, 132, 134, 140, 146, 152, 
154, 156, 159, 163, 164, 173, 176, 177, 178, 185, 187, 195, 198, 199, 200, 207, 209, 210, 217, 220, 222, 223, 224, 232, 233, 236, 237, 245, 248, 250, 251, 252, 262, 263, 275, 279, 280, 281, 284, 285, 295, 296, 298, 299, 301, 303, 315, 321, 323, 325, 328, 332, 333, 334, 335, 344, 347, 348, 353, 354, 355, 356, 357, 367, 368, 380, 384, 385, 386, 389, 390, 400, 401, 405, 407, 413, 419, 425, 427, 429, 432, 436, 437, 448, 451, 452, 458, 459, 460, 461, 462, 472, 473, 485, 489, 490, 491, 494, 495, 506, 508, 512, 514, 520, 526, 532, 534, 536, 539, 543, 544, 550, 552, 554, 557, 559], "scan": [5, 18, 19, 20, 22, 35, 43, 44, 48, 49, 51, 54, 71, 72, 75, 79, 81, 87, 144, 156, 177, 185, 199, 207, 210, 220, 223, 233, 237, 248, 251, 257, 299, 313, 334, 347, 348, 351, 354, 356, 362, 417, 429, 451, 452, 455, 459, 461, 467, 524, 536, 549, 559], "progress": [5, 37, 39, 42, 48, 72, 80, 81, 87, 104, 122, 126, 134, 135, 152, 153, 154, 156, 159, 161, 163, 177, 183, 185, 187, 199, 205, 207, 210, 223, 224, 229, 233, 237, 251, 252, 257, 274, 292, 295, 304, 321, 322, 323, 325, 328, 330, 332, 334, 348, 355, 356, 362, 379, 397, 400, 408, 425, 426, 427, 429, 432, 434, 436, 452, 460, 461, 467, 484, 502, 506, 515, 532, 533, 534, 536, 539, 541, 543, 557, 559], "tue": [5, 96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "nov": [5, 173, 195, 217], "24": [5, 48, 71, 72, 79, 85, 128, 177, 185, 199, 207, 208, 222, 223, 233, 241, 243, 244, 245, 248, 250, 251, 252, 253, 255, 258, 273, 296, 299, 301, 337, 347, 348, 354, 360, 401, 451, 452, 459, 465, 508, 555], "14": [5, 14, 16, 25, 31, 47, 59, 60, 72, 128, 146, 148, 159, 164, 185, 187, 205, 207, 210, 229, 233, 237, 257, 296, 333, 348, 401, 437, 452, 508, 526, 528, 539, 544, 557, 564], "34": [5, 32, 72, 452], "25": [5, 48, 72, 78, 87, 132, 156, 177, 183, 185, 199, 205, 207, 209, 223, 229, 233, 236, 251, 257, 298, 301, 330, 332, 336, 348, 353, 362, 405, 429, 452, 458, 467, 512, 536], "2020": [5, 36, 37, 39, 47, 49, 71, 91, 102, 121, 129, 137, 231, 240, 241, 243, 244, 245, 248, 250, 252, 253, 255, 258, 261, 272, 273, 279, 280, 283, 291, 297, 301, 303, 310, 323, 328, 330, 332, 336, 337, 347, 362, 366, 368, 377, 384, 385, 388, 396, 402, 407, 410, 414, 451, 471, 482, 501, 509, 514, 517], "51t": 5, "4g": [5, 18, 19, 20, 22, 35, 43, 44, 54, 148, 164, 187, 210, 237, 333, 437, 528, 544], "59t": 5, "issu": [5, 8, 11, 12, 13, 17, 18, 19, 20, 22, 29, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 48, 51, 54, 55, 56, 58, 59, 60, 72, 78, 80, 82, 87, 128, 140, 146, 156, 164, 166, 167, 176, 177, 185, 198, 199, 207, 210, 222, 223, 224, 233, 237, 250, 251, 252, 257, 296, 298, 315, 325, 333, 335, 348, 353, 355, 357, 362, 401, 413, 419, 429, 437, 452, 458, 460, 462, 467, 508, 520, 526, 536, 544, 546, 547, 552, 559], "07g": 5, "13t": 5, "326g": 5, "57": 5, "17": [5, 32, 47, 72, 117, 128, 129, 164, 185, 207, 233, 296, 297, 348, 392, 401, 402, 452, 497, 508, 509, 544], "done": [5, 8, 9, 10, 11, 14, 16, 18, 19, 20, 21, 22, 25, 27, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 66, 72, 79, 88, 111, 115, 124, 128, 136, 140, 156, 159, 161, 162, 166, 167, 176, 177, 185, 187, 198, 199, 206, 207, 210, 222, 223, 230, 233, 237, 250, 251, 258, 281, 285, 293, 296, 299, 328, 330, 331, 348, 354, 363, 386, 390, 398, 401, 409, 413, 429, 432, 434, 435, 446, 452, 459, 468, 491, 495, 504, 508, 516, 520, 536, 539, 541, 542, 546, 547, 557, 559], "00": [5, 32, 54, 66, 74, 87, 156, 175, 183, 197, 205, 221, 229, 249, 257, 350, 362, 429, 446, 454, 467, 536], "21": [5, 96, 99, 116, 128, 185, 207, 210, 233, 237, 
296, 354, 401, 452, 476, 479, 496, 508], "go": [5, 10, 12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 109, 110, 156, 177, 199, 223, 251, 334, 348, 356, 429, 452, 489, 490, 536, 557], "unavail": [5, 23, 79, 81, 82, 91, 102, 121, 144, 158, 159, 187, 210, 233, 237, 261, 272, 291, 299, 313, 327, 328, 334, 335, 354, 356, 357, 366, 377, 396, 417, 431, 432, 459, 461, 462, 471, 482, 501, 524, 538, 539, 550, 551, 554, 555, 556, 558, 563], "inus": 5, "achiev": [5, 33, 47, 48, 49, 54, 72, 79, 82, 185, 207, 233, 237, 251, 299, 335, 348, 354, 357, 452, 459, 462], "goal": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 50, 58, 72, 79, 177, 199, 223, 233, 251, 299, 348, 354, 452, 459], "worth": [5, 48, 72, 94, 113, 118, 128, 177, 185, 199, 207, 223, 231, 233, 251, 273, 296, 348, 401, 452, 474, 493, 498, 508, 563], "moment": [5, 38, 133, 134, 154, 185, 187, 207, 210, 233, 237, 288, 302, 303, 323, 393, 406, 407, 427, 498, 513, 514, 534], "summar": [5, 156, 187, 210, 237, 325, 429, 536], "tree": [5, 7, 13, 27, 33, 48, 49, 72, 79, 80, 100, 106, 120, 123, 127, 178, 185, 200, 207, 223, 224, 233, 251, 252, 270, 276, 290, 299, 348, 354, 355, 375, 381, 395, 452, 459, 460, 480, 486, 500, 503, 507], "downsid": [5, 8, 9, 18, 19, 54], "ideal": [5, 11, 46, 47, 48, 49, 54, 71, 72, 91, 102, 121, 177, 199, 220, 223, 248, 251, 261, 272, 291, 347, 348, 366, 377, 396, 451, 452, 471, 482, 501], "space": [5, 6, 7, 8, 11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 58, 62, 63, 65, 67, 68, 72, 75, 77, 78, 79, 80, 81, 82, 87, 88, 91, 96, 97, 98, 99, 101, 102, 107, 108, 112, 116, 121, 125, 128, 132, 133, 134, 135, 137, 140, 142, 146, 148, 149, 150, 152, 157, 161, 163, 164, 169, 171, 173, 177, 178, 183, 184, 185, 186, 187, 190, 192, 193, 195, 199, 200, 205, 206, 207, 209, 210, 213, 215, 216, 217, 223, 224, 229, 230, 231, 233, 236, 237, 240, 241, 243, 244, 245, 251, 252, 257, 258, 261, 266, 267, 268, 269, 271, 272, 273, 277, 278, 282, 286, 291, 294, 296, 298, 299, 301, 304, 306, 309, 311, 315, 317, 318, 319, 321, 326, 330, 332, 333, 334, 335, 339, 340, 342, 343, 344, 348, 351, 353, 354, 355, 356, 357, 362, 363, 366, 371, 372, 373, 374, 376, 377, 382, 383, 387, 391, 396, 399, 401, 405, 408, 410, 413, 415, 419, 421, 422, 423, 425, 430, 434, 436, 437, 442, 443, 445, 447, 448, 452, 455, 457, 458, 459, 460, 461, 462, 467, 468, 471, 476, 477, 478, 479, 481, 482, 487, 488, 492, 496, 501, 505, 508, 512, 513, 515, 517, 520, 522, 526, 528, 529, 530, 532, 537, 541, 543, 544], "boundari": [5, 63, 82, 135, 140, 187, 190, 210, 213, 222, 237, 241, 250, 304, 335, 340, 357, 408, 413, 443, 462, 515, 520], "larger": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 58, 71, 72, 79, 80, 81, 82, 111, 115, 134, 146, 177, 178, 185, 187, 199, 200, 207, 210, 220, 223, 224, 233, 237, 248, 251, 252, 281, 285, 299, 315, 334, 335, 347, 348, 354, 355, 356, 357, 386, 390, 419, 451, 452, 459, 460, 461, 462, 491, 495, 526], "o": [5, 6, 11, 12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 50, 52, 54, 58, 59, 60, 62, 65, 66, 68, 72, 75, 77, 78, 79, 80, 81, 82, 85, 87, 88, 91, 92, 93, 96, 97, 99, 101, 102, 104, 107, 109, 110, 111, 115, 116, 118, 121, 122, 125, 128, 131, 132, 133, 134, 136, 137, 140, 142, 144, 146, 148, 152, 154, 156, 157, 158, 159, 162, 164, 165, 169, 172, 173, 176, 177, 178, 181, 183, 185, 187, 190, 192, 194, 195, 198, 199, 200, 203, 205, 207, 209, 210, 213, 215, 217, 222, 223, 224, 227, 229, 231, 233, 236, 237, 240, 241, 243, 245, 250, 251, 252, 255, 257, 
261, 262, 263, 266, 267, 269, 271, 272, 273, 274, 277, 279, 280, 281, 285, 286, 288, 291, 292, 294, 296, 299, 300, 301, 302, 303, 305, 306, 311, 313, 315, 317, 323, 325, 326, 327, 333, 334, 335, 339, 340, 342, 344, 348, 351, 354, 355, 356, 357, 360, 362, 363, 366, 367, 368, 371, 372, 374, 376, 377, 379, 382, 384, 385, 386, 390, 391, 393, 396, 397, 399, 401, 404, 405, 406, 407, 409, 410, 413, 415, 417, 419, 421, 427, 429, 430, 431, 435, 437, 438, 442, 445, 446, 448, 452, 455, 457, 458, 459, 460, 461, 462, 465, 467, 468, 471, 472, 473, 476, 477, 479, 481, 482, 484, 487, 489, 490, 491, 495, 496, 498, 501, 502, 505, 508, 511, 512, 513, 514, 516, 517, 520, 522, 524, 526, 528, 532, 534, 536, 537, 538, 539, 542, 544, 545, 555, 557, 559, 563, 564], "price": [5, 47], "pai": [5, 8, 79, 233, 299, 354, 459], "cannot": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 55, 63, 72, 77, 78, 79, 80, 81, 82, 88, 89, 91, 94, 100, 102, 105, 109, 110, 111, 115, 119, 120, 121, 123, 124, 127, 136, 140, 144, 156, 166, 167, 169, 176, 178, 185, 187, 190, 198, 199, 200, 207, 210, 213, 220, 222, 223, 224, 232, 233, 237, 241, 250, 251, 252, 258, 259, 261, 264, 270, 272, 275, 279, 280, 281, 285, 289, 290, 291, 293, 298, 299, 303, 306, 313, 334, 335, 340, 348, 353, 354, 355, 356, 357, 363, 364, 366, 369, 375, 377, 380, 384, 385, 386, 390, 394, 395, 396, 398, 407, 409, 413, 417, 443, 452, 457, 458, 459, 460, 461, 462, 468, 469, 471, 474, 480, 482, 485, 489, 490, 491, 495, 499, 500, 501, 503, 504, 507, 514, 516, 520, 524, 536, 546, 547, 550, 551, 553, 554, 555, 556, 558, 559, 560, 561, 562, 563], "therefor": [5, 8, 47, 48, 49, 54, 71, 72, 74, 79, 80, 109, 110, 140, 144, 166, 167, 175, 176, 177, 178, 185, 187, 197, 198, 199, 200, 207, 210, 220, 221, 222, 223, 224, 233, 237, 248, 249, 250, 251, 252, 279, 280, 299, 313, 336, 347, 348, 350, 354, 355, 384, 385, 413, 417, 439, 440, 451, 452, 454, 459, 460, 489, 490, 520, 524, 546, 547, 557], "depth": [5, 48, 72, 96, 99, 101, 116, 185, 199, 207, 223, 233, 251, 266, 269, 271, 286, 348, 371, 374, 376, 391, 452, 476, 479, 481, 496], "explan": [5, 63, 67, 169, 171, 190, 193, 213, 216, 241, 244, 340, 343, 443, 447], "out": [5, 8, 12, 17, 18, 19, 20, 21, 22, 29, 33, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 50, 54, 63, 66, 72, 79, 82, 92, 93, 94, 108, 113, 118, 128, 146, 156, 158, 159, 169, 177, 185, 187, 190, 199, 207, 210, 213, 220, 223, 233, 237, 241, 248, 251, 296, 299, 315, 325, 327, 328, 335, 340, 347, 348, 354, 357, 401, 419, 429, 431, 432, 443, 446, 451, 452, 459, 462, 472, 473, 474, 488, 493, 498, 508, 526, 536, 538, 539], "slide": 5, "present": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 55, 72, 79, 80, 82, 87, 88, 109, 110, 111, 115, 141, 159, 162, 164, 183, 184, 185, 187, 199, 205, 206, 207, 210, 223, 229, 230, 233, 237, 251, 257, 258, 279, 280, 281, 285, 299, 310, 328, 333, 348, 354, 355, 357, 362, 363, 384, 385, 386, 390, 414, 432, 435, 437, 452, 459, 460, 462, 467, 468, 489, 490, 491, 495, 521, 539, 542, 544, 555], "summit": [5, 58], "made": [5, 8, 10, 11, 12, 32, 47, 48, 49, 57, 72, 78, 79, 80, 82, 105, 111, 115, 118, 128, 133, 144, 149, 150, 164, 177, 178, 185, 187, 199, 200, 210, 223, 224, 232, 233, 237, 251, 252, 275, 281, 285, 296, 299, 313, 318, 319, 333, 335, 348, 354, 355, 357, 380, 386, 390, 401, 417, 422, 423, 437, 452, 458, 459, 460, 462, 485, 491, 495, 508, 513, 524, 529, 530, 544, 549, 551, 554, 557, 561, 562], "again": [5, 18, 19, 20, 21, 25, 36, 37, 38, 39, 43, 44, 54, 71, 72, 81, 144, 156, 183, 187, 210, 223, 237, 251, 313, 325, 334, 
348, 356, 417, 429, 452, 461, 524, 536, 550, 551, 552, 554, 559, 561, 562], "simpli": [5, 36, 38, 48, 54, 55, 66, 68, 72, 79, 81, 91, 102, 121, 132, 173, 185, 187, 195, 199, 207, 209, 210, 217, 223, 233, 236, 237, 245, 251, 261, 272, 291, 299, 301, 334, 344, 348, 354, 356, 366, 377, 396, 405, 446, 448, 452, 459, 461, 471, 482, 501, 512, 559], "new": [5, 8, 9, 10, 12, 14, 16, 17, 18, 19, 20, 22, 25, 26, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 56, 58, 60, 66, 67, 68, 71, 72, 74, 77, 79, 80, 81, 82, 88, 89, 90, 91, 93, 94, 95, 102, 105, 108, 109, 110, 111, 113, 115, 118, 119, 121, 124, 128, 134, 137, 140, 146, 149, 150, 151, 154, 155, 156, 158, 159, 164, 169, 171, 173, 175, 176, 177, 178, 185, 187, 190, 193, 195, 197, 198, 199, 200, 207, 210, 213, 216, 217, 220, 221, 222, 223, 224, 233, 237, 241, 244, 245, 248, 249, 250, 251, 252, 259, 260, 261, 263, 265, 272, 275, 278, 279, 280, 281, 283, 285, 289, 291, 293, 296, 299, 303, 306, 315, 318, 319, 320, 323, 324, 325, 327, 328, 333, 334, 335, 343, 344, 347, 348, 350, 354, 355, 356, 357, 363, 364, 365, 366, 368, 370, 377, 380, 383, 384, 385, 386, 388, 390, 394, 396, 398, 401, 407, 410, 413, 419, 422, 423, 424, 427, 428, 429, 431, 432, 437, 446, 447, 448, 451, 452, 454, 457, 459, 460, 461, 462, 468, 469, 470, 471, 473, 474, 475, 482, 485, 488, 489, 490, 491, 493, 495, 498, 499, 501, 504, 508, 514, 517, 520, 526, 529, 530, 531, 534, 535, 536, 538, 539, 544, 550, 552, 557, 559], "call": [5, 12, 42, 47, 48, 49, 54, 66, 72, 78, 79, 80, 81, 85, 88, 105, 109, 110, 118, 130, 178, 181, 184, 185, 187, 199, 200, 203, 206, 207, 210, 223, 224, 227, 230, 231, 232, 233, 237, 251, 252, 255, 258, 273, 275, 279, 280, 288, 299, 300, 334, 348, 354, 355, 356, 360, 363, 380, 384, 385, 393, 403, 446, 452, 458, 459, 460, 461, 465, 468, 485, 489, 490, 498, 510], "essenti": [5, 8, 9, 55, 75, 351, 455], "longer": [5, 19, 20, 35, 36, 37, 38, 47, 48, 65, 68, 71, 72, 80, 82, 92, 93, 94, 108, 111, 113, 115, 118, 124, 128, 140, 144, 161, 162, 164, 173, 176, 177, 185, 187, 192, 195, 198, 199, 207, 210, 215, 217, 220, 222, 223, 224, 233, 237, 243, 245, 248, 250, 251, 252, 278, 281, 285, 293, 296, 313, 330, 331, 333, 335, 342, 344, 347, 348, 355, 357, 383, 386, 390, 398, 401, 413, 417, 434, 435, 437, 445, 448, 451, 452, 460, 462, 472, 473, 474, 488, 491, 493, 495, 498, 504, 508, 520, 524, 541, 542, 544, 550, 551, 552, 553, 555], "need": [5, 7, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 22, 25, 27, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 55, 62, 63, 68, 71, 72, 78, 79, 80, 81, 82, 87, 88, 91, 92, 93, 94, 97, 100, 102, 107, 108, 109, 110, 111, 113, 115, 118, 120, 121, 123, 125, 127, 128, 130, 156, 166, 167, 169, 173, 177, 178, 184, 185, 187, 190, 195, 199, 200, 206, 207, 210, 213, 217, 220, 223, 224, 230, 233, 237, 240, 241, 245, 248, 251, 252, 258, 261, 267, 270, 272, 277, 279, 280, 281, 283, 285, 290, 291, 294, 296, 298, 299, 334, 335, 339, 340, 344, 347, 348, 353, 354, 355, 356, 357, 363, 366, 372, 375, 377, 382, 384, 385, 386, 388, 390, 395, 396, 399, 401, 403, 442, 443, 448, 451, 452, 458, 459, 460, 461, 462, 467, 468, 471, 472, 473, 474, 477, 480, 482, 487, 488, 489, 490, 491, 493, 495, 498, 500, 501, 503, 505, 507, 508, 510, 536, 546, 547, 557, 559], "subsequ": [5, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 66, 82, 159, 172, 184, 194, 206, 210, 230, 237, 335, 357, 446, 462, 539, 557], "sdl": 5, "45": 5, "82g": 5, "10t": 5, "78g": 5, "565g": 5, "99": [5, 43, 44], "04": [5, 40, 41, 43, 44, 66, 156, 429, 446, 536], "onc": [5, 9, 10, 12, 
18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 51, 66, 67, 68, 72, 79, 80, 81, 82, 87, 91, 92, 93, 94, 102, 105, 108, 109, 110, 113, 118, 121, 124, 128, 133, 134, 144, 146, 156, 162, 164, 171, 173, 177, 178, 183, 185, 187, 193, 195, 199, 200, 205, 207, 210, 216, 217, 223, 224, 229, 232, 233, 237, 244, 245, 251, 252, 257, 261, 263, 272, 275, 279, 280, 291, 293, 296, 299, 313, 315, 325, 331, 333, 334, 335, 343, 344, 348, 354, 355, 356, 357, 362, 366, 368, 377, 380, 384, 385, 396, 398, 401, 417, 419, 429, 435, 437, 446, 447, 448, 452, 459, 460, 461, 462, 467, 471, 472, 473, 474, 482, 485, 488, 489, 490, 493, 498, 501, 504, 508, 513, 524, 526, 536, 542, 544, 549, 550, 552, 557, 559], "normal": [5, 9, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 71, 72, 79, 81, 87, 88, 89, 91, 96, 97, 99, 102, 104, 107, 109, 110, 111, 115, 116, 119, 121, 122, 125, 128, 133, 146, 148, 158, 159, 164, 177, 185, 187, 199, 206, 207, 210, 220, 223, 230, 233, 237, 248, 251, 258, 259, 261, 267, 272, 274, 277, 279, 280, 281, 285, 289, 291, 292, 294, 296, 299, 302, 315, 317, 327, 328, 333, 334, 347, 348, 354, 356, 363, 364, 366, 372, 377, 379, 382, 384, 385, 386, 390, 394, 396, 397, 399, 401, 406, 419, 421, 431, 432, 437, 451, 452, 459, 461, 467, 468, 469, 471, 476, 477, 479, 482, 484, 487, 489, 490, 491, 495, 496, 499, 501, 502, 505, 508, 513, 526, 528, 538, 539, 544, 549, 559], "healthi": [5, 48, 72, 82, 109, 110, 140, 151, 187, 198, 210, 222, 223, 237, 250, 251, 320, 335, 348, 357, 413, 424, 452, 462, 489, 490, 520, 531, 559], "Their": [6, 59, 60], "algorithm": [6, 18, 19, 20, 22, 35, 43, 44, 48, 49, 67, 72, 79, 80, 87, 109, 110, 140, 166, 167, 171, 176, 178, 185, 193, 198, 199, 200, 207, 216, 222, 223, 224, 229, 233, 244, 250, 251, 252, 257, 279, 280, 299, 343, 348, 354, 355, 362, 384, 385, 413, 447, 452, 459, 460, 467, 489, 490, 520, 546, 547], "acceler": [6, 47, 48, 72, 199, 223, 251, 348, 452], "microbenchmark": [6, 48], "disabl": [6, 7, 11, 14, 16, 18, 19, 20, 22, 25, 26, 28, 31, 32, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 62, 71, 72, 77, 79, 80, 81, 82, 87, 88, 96, 99, 105, 111, 115, 116, 128, 137, 177, 178, 181, 183, 184, 185, 199, 200, 203, 205, 206, 207, 210, 220, 223, 224, 227, 229, 230, 232, 233, 237, 240, 248, 251, 252, 255, 257, 258, 275, 281, 285, 296, 299, 306, 334, 339, 347, 348, 354, 355, 356, 357, 362, 363, 380, 386, 390, 401, 410, 442, 451, 452, 457, 459, 460, 461, 462, 467, 468, 476, 479, 485, 491, 495, 496, 508, 517, 559], "flag": [6, 7, 47, 48, 49, 54, 59, 60, 67, 71, 72, 75, 79, 80, 82, 85, 87, 90, 91, 93, 94, 97, 102, 103, 105, 106, 107, 109, 110, 111, 113, 115, 121, 125, 128, 133, 136, 137, 140, 144, 145, 146, 148, 149, 150, 152, 158, 159, 161, 162, 171, 172, 176, 177, 181, 183, 185, 187, 193, 194, 198, 199, 203, 205, 207, 210, 216, 220, 222, 223, 224, 227, 229, 232, 233, 237, 244, 248, 250, 251, 252, 255, 257, 260, 261, 263, 264, 267, 272, 275, 276, 277, 279, 280, 281, 283, 285, 291, 293, 294, 296, 299, 302, 306, 313, 314, 315, 317, 318, 319, 321, 327, 328, 330, 331, 335, 343, 347, 348, 351, 354, 355, 357, 360, 362, 365, 366, 368, 369, 372, 377, 378, 380, 381, 382, 384, 385, 386, 388, 390, 396, 399, 401, 406, 409, 410, 413, 417, 418, 419, 421, 422, 423, 425, 431, 432, 434, 435, 447, 451, 452, 455, 459, 460, 462, 465, 467, 470, 471, 473, 474, 477, 482, 483, 485, 486, 487, 489, 490, 491, 493, 495, 501, 505, 508, 513, 516, 517, 520, 524, 525, 526, 528, 529, 530, 532, 538, 539, 541, 542, 560], "refer": [6, 10, 11, 14, 16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 
37, 38, 41, 43, 44, 48, 49, 54, 60, 67, 72, 78, 79, 80, 82, 87, 94, 98, 101, 111, 112, 115, 128, 129, 137, 156, 159, 171, 177, 178, 183, 185, 187, 193, 199, 200, 205, 207, 210, 216, 223, 224, 229, 233, 237, 244, 251, 252, 257, 264, 268, 281, 282, 285, 296, 297, 298, 299, 306, 328, 335, 343, 348, 353, 354, 355, 357, 362, 369, 373, 386, 387, 390, 401, 402, 410, 429, 432, 447, 452, 458, 459, 460, 462, 467, 474, 478, 481, 491, 492, 495, 508, 509, 517, 536, 539, 559], "materi": 6, "introduct": [6, 10, 52], "effici": [6, 47, 48, 49, 54, 58, 71, 72, 79, 80, 82, 156, 185, 207, 220, 223, 224, 233, 237, 248, 251, 252, 299, 335, 347, 348, 354, 355, 357, 429, 451, 452, 459, 460, 462, 536], "consider": [6, 47, 48, 49, 58], "troubleshoot": [6, 48, 59, 60, 71, 220, 248, 347, 451], "about": [6, 12, 14, 18, 19, 20, 33, 36, 38, 43, 44, 47, 48, 49, 54, 58, 60, 63, 72, 75, 77, 79, 80, 82, 87, 91, 93, 94, 102, 105, 109, 110, 111, 115, 121, 128, 137, 140, 144, 148, 159, 164, 166, 167, 169, 176, 177, 178, 183, 185, 187, 190, 198, 199, 200, 205, 207, 210, 213, 222, 223, 224, 229, 232, 233, 237, 241, 250, 251, 252, 257, 261, 263, 264, 272, 275, 279, 280, 281, 285, 291, 296, 299, 306, 309, 313, 328, 333, 335, 336, 340, 348, 351, 354, 355, 357, 362, 366, 368, 369, 377, 380, 384, 385, 386, 390, 396, 401, 410, 413, 417, 421, 432, 437, 439, 440, 443, 452, 455, 457, 459, 460, 462, 467, 471, 473, 474, 482, 485, 489, 490, 491, 495, 501, 508, 517, 520, 524, 528, 539, 544, 546, 547, 557], "log": [6, 8, 10, 12, 18, 19, 20, 22, 25, 35, 36, 37, 38, 39, 43, 44, 48, 49, 50, 54, 66, 71, 72, 75, 79, 80, 81, 87, 88, 105, 128, 132, 137, 140, 143, 144, 145, 152, 164, 172, 176, 177, 183, 185, 186, 187, 194, 198, 199, 205, 207, 209, 210, 220, 222, 223, 229, 232, 233, 236, 237, 248, 250, 251, 252, 257, 275, 296, 299, 301, 306, 312, 313, 314, 321, 333, 334, 347, 348, 351, 354, 355, 356, 362, 363, 380, 401, 405, 410, 413, 416, 417, 418, 425, 437, 446, 451, 452, 455, 459, 460, 461, 467, 468, 485, 508, 512, 517, 520, 523, 524, 525, 532, 544, 557, 564], "file": [6, 8, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 22, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 43, 44, 47, 48, 52, 55, 58, 62, 63, 66, 68, 71, 72, 74, 75, 78, 79, 80, 81, 82, 85, 86, 87, 88, 89, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 132, 137, 140, 144, 146, 156, 159, 164, 166, 167, 168, 169, 172, 173, 175, 176, 177, 178, 181, 182, 183, 184, 185, 186, 187, 189, 190, 194, 195, 197, 198, 199, 200, 203, 204, 205, 206, 207, 209, 210, 212, 213, 217, 220, 221, 222, 223, 224, 227, 228, 229, 230, 231, 232, 233, 236, 237, 239, 240, 241, 245, 248, 249, 250, 251, 252, 255, 256, 257, 258, 259, 261, 263, 264, 265, 267, 268, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 306, 313, 315, 333, 334, 335, 336, 338, 339, 340, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 360, 361, 362, 363, 364, 366, 368, 369, 370, 372, 373, 375, 376, 377, 378, 379, 380, 381, 382, 384, 385, 386, 387, 388, 389, 390, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 404, 405, 410, 413, 417, 419, 429, 437, 439, 440, 441, 442, 443, 446, 448, 451, 452, 454, 455, 458, 459, 460, 461, 462, 465, 466, 467, 468, 469, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 
500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 511, 512, 517, 520, 524, 526, 536, 539, 544, 546, 547, 548, 549, 556, 559], "unkil": 6, "draid": [6, 49, 59, 60, 68, 80, 81, 137, 156, 344, 355, 356, 410, 429, 448, 460, 461, 517, 536], "creat": [6, 7, 8, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 25, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 55, 58, 66, 68, 69, 71, 72, 74, 78, 79, 80, 81, 82, 84, 86, 87, 89, 90, 91, 92, 94, 95, 96, 97, 99, 102, 105, 107, 108, 109, 110, 111, 113, 115, 116, 118, 119, 121, 125, 128, 130, 131, 132, 133, 134, 144, 145, 158, 164, 172, 173, 175, 176, 177, 178, 182, 183, 185, 186, 187, 192, 194, 195, 197, 198, 199, 200, 204, 205, 207, 208, 209, 210, 215, 217, 218, 220, 221, 222, 223, 224, 228, 229, 231, 232, 233, 235, 236, 237, 243, 245, 246, 248, 249, 250, 251, 252, 254, 256, 257, 259, 260, 261, 262, 264, 265, 267, 272, 273, 275, 277, 278, 279, 280, 281, 283, 285, 288, 289, 291, 294, 296, 298, 299, 300, 301, 302, 303, 313, 314, 327, 333, 334, 335, 344, 345, 347, 348, 350, 353, 354, 355, 356, 357, 359, 361, 362, 364, 365, 366, 367, 369, 370, 372, 377, 380, 382, 383, 384, 385, 386, 388, 390, 393, 394, 396, 399, 401, 403, 404, 405, 406, 407, 417, 418, 431, 437, 446, 448, 449, 451, 452, 454, 458, 459, 460, 461, 462, 464, 466, 467, 469, 470, 471, 472, 474, 475, 476, 477, 479, 482, 485, 487, 488, 489, 490, 491, 493, 495, 496, 498, 499, 501, 505, 508, 510, 511, 512, 513, 514, 524, 525, 538, 544, 551, 553, 556, 558, 559], "rebuild": [6, 9, 18, 19, 25, 29, 32, 36, 38, 48, 49, 72, 81, 146, 251, 334, 348, 356, 452, 461, 526], "spare": [6, 49, 68, 80, 81, 137, 140, 141, 149, 150, 152, 164, 176, 187, 198, 210, 222, 237, 250, 306, 309, 310, 318, 319, 321, 333, 334, 344, 355, 356, 410, 413, 414, 422, 423, 425, 437, 448, 460, 461, 517, 520, 521, 529, 530, 532, 544, 550, 552, 557], "rebalanc": [6, 48], "There": [7, 8, 10, 12, 14, 16, 18, 19, 20, 25, 31, 33, 35, 36, 38, 42, 43, 44, 47, 54, 78, 91, 102, 105, 109, 110, 121, 137, 184, 187, 206, 210, 230, 232, 233, 237, 261, 272, 275, 279, 280, 291, 306, 366, 377, 380, 384, 385, 396, 410, 458, 471, 482, 485, 489, 490, 501, 517, 551, 553, 561, 562, 563], "how": [7, 8, 9, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44, 47, 48, 49, 54, 60, 62, 63, 71, 72, 74, 77, 79, 80, 82, 86, 89, 92, 93, 94, 95, 96, 99, 108, 111, 113, 115, 116, 118, 119, 128, 135, 140, 144, 146, 169, 175, 176, 177, 178, 182, 185, 190, 197, 198, 199, 200, 204, 207, 210, 213, 220, 221, 222, 223, 224, 228, 233, 237, 240, 241, 248, 249, 250, 251, 252, 256, 270, 281, 285, 290, 296, 299, 304, 313, 315, 335, 339, 340, 347, 348, 350, 354, 355, 357, 361, 386, 390, 401, 408, 413, 417, 419, 442, 443, 451, 452, 454, 457, 459, 460, 462, 466, 469, 472, 473, 474, 475, 476, 479, 488, 491, 493, 495, 496, 498, 499, 508, 515, 520, 524, 526, 557], "impact": [7, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 80, 81, 177, 178, 187, 199, 200, 210, 220, 223, 224, 237, 248, 251, 252, 334, 348, 355, 356, 452, 460, 461, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "found": [7, 8, 18, 19, 22, 36, 38, 43, 47, 48, 54, 66, 72, 79, 80, 82, 87, 105, 111, 115, 144, 146, 175, 187, 197, 199, 205, 210, 221, 223, 229, 232, 237, 251, 257, 275, 281, 285, 299, 313, 315, 335, 348, 354, 355, 357, 362, 380, 386, 390, 417, 419, 446, 452, 459, 460, 462, 467, 485, 491, 495, 524, 526, 559], "github": [7, 9, 12, 13, 16, 18, 19, 20, 22, 25, 27, 29, 35, 36, 37, 38, 39, 43, 44, 49, 54, 57, 59, 60, 63, 166, 167, 443, 546, 547, 550, 551, 552, 553, 554, 
555, 556, 557, 559, 560, 561, 562, 563], "your": [7, 8, 9, 12, 13, 17, 18, 19, 20, 22, 23, 26, 27, 28, 29, 33, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 47, 48, 49, 54, 57, 60, 63, 68, 72, 77, 78, 79, 81, 82, 91, 102, 111, 115, 121, 128, 131, 133, 144, 146, 164, 169, 173, 177, 185, 187, 190, 195, 199, 207, 208, 210, 213, 217, 223, 233, 235, 237, 241, 245, 251, 261, 272, 281, 285, 291, 296, 298, 299, 300, 313, 333, 334, 340, 344, 348, 353, 354, 356, 366, 377, 386, 390, 396, 401, 404, 417, 437, 443, 448, 452, 457, 458, 459, 461, 462, 471, 482, 491, 495, 501, 508, 511, 513, 524, 526, 544], "compil": [7, 8, 9, 12, 47, 48, 54, 72, 79, 199, 223, 251, 348, 354, 452, 459], "top": [7, 8, 10, 48, 49, 54, 72, 77, 79, 80, 81, 105, 109, 110, 152, 164, 177, 187, 199, 207, 210, 223, 224, 232, 233, 237, 251, 252, 275, 276, 279, 280, 299, 321, 333, 334, 348, 354, 355, 356, 380, 384, 385, 425, 437, 452, 457, 459, 460, 461, 485, 489, 490, 532, 544, 564], "basi": [7, 47, 49, 79, 140, 156, 161, 185, 207, 222, 233, 237, 250, 299, 330, 354, 413, 429, 434, 459, 520, 536, 541], "none": [7, 11, 14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 78, 79, 80, 82, 96, 99, 103, 111, 113, 115, 116, 128, 137, 140, 144, 145, 152, 153, 161, 164, 176, 178, 185, 187, 198, 199, 200, 207, 210, 222, 223, 224, 233, 237, 250, 251, 252, 266, 269, 281, 283, 285, 286, 296, 298, 299, 306, 313, 314, 330, 333, 335, 348, 353, 354, 357, 371, 374, 378, 386, 388, 390, 391, 401, 410, 413, 417, 418, 426, 434, 437, 452, 458, 459, 462, 476, 479, 483, 491, 493, 495, 496, 508, 517, 520, 524, 525, 532, 533, 541, 544, 550, 551, 552, 553, 556, 557, 558, 559, 561, 562, 563], "arch": [7, 25, 31, 41, 59, 60], "distro": [7, 18, 19, 20, 22, 30, 41, 48, 52, 59, 60], "perf": [7, 8], "coverag": [7, 48], "unstabl": [7, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 74, 87, 183, 197, 205, 221, 229, 249, 257, 350, 362, 454, 467], "messag": [7, 10, 12, 14, 18, 19, 20, 22, 23, 35, 36, 38, 43, 44, 48, 54, 59, 60, 62, 63, 72, 79, 80, 82, 85, 105, 128, 132, 146, 164, 169, 172, 181, 185, 186, 187, 190, 194, 199, 203, 207, 209, 210, 213, 223, 227, 232, 233, 236, 237, 240, 241, 251, 255, 275, 296, 299, 301, 315, 333, 335, 339, 340, 348, 354, 355, 357, 360, 380, 401, 405, 419, 437, 442, 443, 452, 459, 460, 462, 465, 485, 508, 512, 526, 544], "comma": [7, 77, 79, 80, 82, 89, 94, 96, 99, 101, 104, 116, 119, 122, 132, 142, 144, 148, 157, 169, 172, 185, 187, 190, 194, 207, 210, 213, 233, 236, 237, 241, 259, 264, 266, 269, 271, 274, 286, 289, 292, 299, 301, 311, 313, 317, 326, 354, 355, 357, 364, 369, 371, 374, 376, 379, 391, 394, 397, 405, 415, 417, 421, 430, 457, 459, 460, 462, 469, 474, 476, 479, 481, 484, 496, 499, 502, 512, 522, 524, 528, 537], "tag": [7, 8, 12, 47, 52, 54, 58, 98, 105, 112, 165, 185, 207, 233, 268, 275, 282, 373, 380, 387, 438, 478, 485, 492, 545], "architectur": [7, 48, 56, 58, 71, 220, 248, 347, 451], "exclud": [7, 18, 19, 20, 22, 26, 35, 38, 39, 43, 44, 72, 81, 109, 110, 111, 115, 207, 233, 237, 279, 280, 334, 348, 356, 384, 385, 452, 461, 489, 490, 491, 495], "fedora": [7, 8, 13, 32, 41, 59, 60], "rawhid": 7, "coupl": 7, "text": [7, 10, 36, 38, 48, 63, 77, 80, 82, 105, 169, 187, 190, 210, 213, 232, 233, 237, 241, 275, 335, 340, 355, 357, 380, 443, 457, 460, 462, 485], "bodi": [7, 12, 63, 169, 190, 213, 241, 340, 443], "sign": [7, 10, 12, 18, 19, 20, 32, 35, 36, 38, 43, 44, 47, 54, 58, 59, 60, 87, 94, 105, 185, 207, 232, 233, 257, 264, 275, 362, 369, 380, 467, 474, 485], "contributor": [7, 47], "email": [7, 10, 12, 18, 
19, 20, 22, 35, 38, 39, 43, 44], "attempt": [7, 23, 37, 39, 47, 48, 49, 63, 72, 78, 80, 81, 82, 87, 91, 98, 102, 104, 105, 109, 110, 111, 112, 115, 121, 122, 128, 136, 137, 144, 149, 150, 158, 164, 166, 167, 169, 177, 178, 183, 185, 187, 190, 199, 200, 205, 207, 210, 213, 223, 224, 229, 232, 233, 237, 241, 251, 252, 257, 261, 268, 272, 274, 275, 279, 280, 282, 291, 292, 296, 298, 306, 313, 318, 319, 327, 333, 334, 335, 340, 348, 353, 355, 356, 357, 362, 366, 373, 377, 379, 380, 384, 385, 387, 396, 397, 401, 409, 410, 417, 422, 423, 431, 437, 443, 452, 458, 460, 461, 462, 467, 471, 478, 482, 484, 485, 489, 490, 491, 492, 495, 501, 502, 508, 516, 517, 524, 529, 530, 538, 544, 546, 547, 554, 555, 556, 557], "correct": [7, 8, 12, 13, 18, 19, 36, 37, 38, 39, 47, 48, 49, 54, 63, 72, 75, 81, 91, 102, 109, 110, 121, 164, 169, 187, 190, 210, 213, 233, 237, 241, 261, 272, 291, 334, 340, 351, 356, 366, 377, 396, 437, 443, 452, 455, 461, 471, 482, 489, 490, 501, 544, 555, 557, 559], "against": [7, 8, 9, 10, 36, 38, 47, 48, 49, 54, 58, 68, 72, 79, 80, 81, 82, 91, 102, 105, 109, 110, 111, 115, 121, 169, 173, 177, 185, 190, 195, 199, 207, 213, 217, 223, 224, 232, 233, 237, 241, 245, 251, 252, 261, 272, 275, 279, 280, 281, 285, 291, 299, 335, 344, 348, 354, 355, 356, 357, 366, 377, 380, 384, 385, 386, 390, 396, 448, 452, 459, 460, 461, 462, 471, 482, 485, 489, 490, 491, 495, 501], "instruct": [7, 9, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 54, 62, 72, 103, 105, 199, 223, 231, 232, 233, 240, 251, 273, 275, 339, 348, 378, 380, 442, 452, 483, 485], "ref": [7, 67, 171, 193, 216, 244, 343, 447], "123": [7, 79, 96, 99, 116, 128, 185, 207, 233, 296, 299, 354, 401, 459, 476, 479, 496, 508], "head": [7, 10, 29, 47, 48, 80, 111, 115, 185, 207, 233, 281, 285, 386, 390, 460, 491, 495], "clone": [7, 9, 12, 13, 17, 18, 19, 20, 21, 22, 27, 29, 33, 35, 36, 37, 38, 39, 43, 44, 58, 72, 78, 79, 80, 82, 84, 87, 89, 91, 93, 94, 102, 105, 108, 109, 110, 111, 113, 114, 115, 118, 119, 121, 128, 185, 207, 232, 233, 251, 252, 254, 259, 261, 264, 272, 275, 278, 279, 280, 281, 284, 285, 288, 289, 291, 296, 298, 299, 348, 353, 354, 355, 359, 364, 366, 369, 377, 380, 383, 384, 385, 386, 389, 390, 393, 394, 396, 401, 452, 458, 459, 460, 462, 464, 467, 469, 471, 473, 474, 482, 485, 488, 489, 490, 491, 493, 494, 495, 498, 499, 501, 508], "master": [7, 8, 10, 11, 12, 27, 59, 60, 61, 80, 91, 102, 121, 261, 272, 291, 366, 377, 396, 460, 471, 482, 501], "v4": 7, "execut": [7, 8, 48, 50, 54, 66, 68, 72, 75, 79, 85, 88, 104, 105, 117, 122, 128, 161, 172, 173, 177, 184, 185, 194, 195, 199, 206, 207, 217, 223, 230, 232, 233, 237, 245, 251, 258, 274, 275, 292, 296, 299, 330, 344, 348, 351, 354, 360, 363, 379, 380, 392, 397, 401, 434, 446, 448, 452, 455, 459, 465, 468, 484, 485, 497, 502, 508, 541, 555], "prefer": [7, 9, 18, 19, 20, 22, 23, 27, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 71, 82, 111, 115, 187, 210, 220, 237, 248, 281, 285, 335, 347, 357, 386, 390, 451, 462, 491, 495], "scenario": [7, 33, 47, 49, 87, 111, 115, 281, 285, 386, 390, 467, 491, 495], "No": [7, 22, 34, 43, 47, 51, 54, 55, 72, 89, 91, 93, 94, 102, 105, 108, 111, 115, 119, 121, 152, 158, 165, 177, 185, 199, 207, 223, 232, 233, 251, 259, 261, 263, 264, 272, 275, 278, 281, 285, 289, 291, 348, 364, 366, 368, 369, 377, 380, 383, 386, 390, 394, 396, 425, 431, 438, 452, 469, 471, 473, 474, 482, 485, 488, 491, 495, 499, 501, 532, 538, 545, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "lint": [7, 11, 169, 190, 213, 241, 
340], "At": [7, 9, 11, 38, 47, 48, 49, 72, 81, 158, 187, 210, 237, 251, 327, 334, 348, 356, 431, 452, 461, 538, 561, 562], "variabl": [7, 8, 9, 18, 19, 20, 21, 35, 36, 37, 38, 39, 43, 44, 48, 50, 66, 68, 72, 79, 85, 87, 88, 103, 128, 130, 132, 136, 144, 146, 149, 150, 164, 173, 177, 183, 184, 185, 186, 187, 195, 199, 205, 206, 209, 210, 217, 223, 229, 230, 236, 237, 245, 251, 257, 258, 296, 301, 313, 315, 333, 344, 348, 354, 360, 362, 363, 378, 401, 403, 405, 409, 417, 419, 422, 423, 437, 446, 448, 452, 459, 465, 467, 468, 483, 508, 510, 512, 516, 524, 526, 529, 530, 544], "brief": [7, 48, 49], "descript": [7, 10, 12, 43, 48, 49, 51, 56, 62, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 169, 171, 172, 173, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 190, 192, 193, 194, 195, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 215, 216, 217, 218, 220, 221, 222, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 238, 240, 241, 243, 244, 245, 246, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 339, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "test_prepare_watchdog": 7, "watchdog": 7, "test_prepare_shar": 7, "nf": [7, 8, 18, 19, 20, 22, 35, 37, 38, 39, 43, 44, 48, 72, 79, 81, 89, 119, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 259, 289, 299, 334, 348, 354, 356, 364, 394, 452, 459, 461, 469, 499], "samba": [7, 8, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 79, 128, 185, 207, 233, 296, 299, 354, 401, 459, 508], "server": [7, 8, 10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 52, 54, 57, 79, 111, 115, 185, 207, 233, 281, 285, 299, 354, 386, 390, 459, 491, 495], "test_splat_skip": 7, "splat": [7, 171, 193], 
"test_splat_opt": 7, "line": [7, 10, 12, 18, 19, 20, 22, 25, 27, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 63, 66, 74, 79, 80, 81, 88, 93, 95, 101, 105, 128, 130, 131, 137, 146, 164, 165, 169, 175, 184, 185, 187, 190, 197, 206, 207, 208, 210, 213, 221, 230, 232, 233, 235, 237, 241, 249, 258, 263, 265, 271, 275, 296, 299, 300, 306, 315, 333, 334, 340, 350, 354, 355, 356, 363, 368, 370, 376, 380, 401, 403, 404, 410, 419, 437, 438, 443, 446, 454, 459, 460, 461, 468, 473, 475, 481, 485, 508, 510, 511, 517, 526, 544, 545], "test_ztest_skip": 7, "ztest": [7, 8, 64, 65, 67, 170, 171, 191, 192, 193, 214, 215, 216, 242, 243, 244, 341, 342, 343, 444, 445, 447], "test_ztest_timeout": 7, "length": [7, 10, 48, 49, 54, 72, 79, 81, 91, 102, 121, 128, 177, 185, 199, 207, 223, 233, 251, 261, 272, 291, 296, 299, 334, 348, 354, 356, 366, 377, 396, 401, 452, 459, 461, 471, 482, 501, 508], "test_ztest_dir": 7, "test_ztest_opt": 7, "pass": [7, 8, 9, 12, 47, 48, 49, 66, 67, 68, 72, 79, 88, 93, 105, 130, 146, 164, 172, 177, 184, 194, 199, 206, 210, 223, 230, 232, 233, 237, 251, 258, 263, 275, 299, 315, 343, 344, 348, 354, 363, 368, 380, 403, 419, 437, 446, 447, 448, 452, 459, 468, 473, 485, 510, 526, 544], "test_ztest_core_dir": 7, "core": [7, 8, 18, 19, 20, 22, 25, 31, 35, 36, 38, 43, 44, 48, 87, 137, 160, 164, 177, 183, 185, 187, 199, 205, 210, 223, 229, 237, 251, 257, 306, 329, 333, 362, 410, 433, 437, 467, 517, 540, 544], "dump": [7, 11, 37, 39, 48, 68, 79, 80, 82, 87, 137, 164, 166, 167, 183, 185, 187, 188, 200, 205, 207, 210, 211, 217, 224, 229, 233, 237, 238, 245, 252, 257, 299, 306, 333, 335, 336, 337, 344, 354, 355, 357, 362, 410, 437, 439, 440, 448, 459, 460, 462, 467, 517, 544, 546, 547], "test_zimport_skip": 7, "zimport": 7, "test_zimport_dir": 7, "test_zimport_vers": 7, "test_zimport_pool": 7, "test_zimport_opt": 7, "test_xfstests_skip": 7, "xfstest": 7, "test_xfstests_url": 7, "url": [7, 32, 75, 79, 103, 351, 354, 378, 455, 459, 483], "download": [7, 9, 10, 12, 14, 16, 25, 28, 31, 37, 39, 43, 44, 49, 57], "test_xfstests_v": 7, "tarbal": [7, 54], "test_xfstests_pool": 7, "test_xfstests_f": 7, "test_xfstests_vdev": 7, "test_xfstests_opt": 7, "test_zfstests_skip": 7, "test_zfstests_dir": 7, "loopback": [7, 8, 37, 39, 128, 401, 508], "test_zfstests_disk": 7, "delimit": [7, 97, 98, 106, 107, 112, 125, 172, 185, 194, 207, 233, 267, 268, 277, 282, 294, 372, 373, 381, 382, 387, 399, 477, 478, 486, 487, 492, 505], "test_zfstests_disks": 7, "test_zfstests_it": 7, "runner": [7, 64, 444], "test_zfstests_opt": 7, "test_zfstests_runfil": 7, "runfil": [7, 66, 446], "test_zfstests_tag": 7, "test_zfsstress_skip": 7, "zfsstress": 7, "test_zfsstress_url": 7, "test_zfsstress_v": 7, "test_zfsstress_runtim": 7, "durat": [7, 8, 48, 72, 104, 122, 185, 199, 207, 223, 233, 251, 274, 292, 348, 379, 397, 452, 484, 502], "runstress": 7, "test_zfsstress_pool": 7, "test_zfsstress_f": 7, "test_zfsstress_fsopt": 7, "test_zfsstress_vdev": 7, "test_zfsstress_opt": 7, "offici": [8, 9, 17, 25, 26, 32, 37, 39, 42, 43, 44, 54, 57], "maintain": [8, 12, 13, 26, 47, 48, 49, 54, 58, 79, 80, 82, 88, 94, 113, 118, 128, 134, 178, 184, 185, 200, 206, 207, 224, 230, 233, 240, 252, 258, 296, 299, 354, 355, 357, 363, 401, 459, 460, 462, 468, 474, 493, 498, 508], "organ": [8, 45, 54, 80, 81, 178, 187, 200, 210, 224, 237, 252, 334, 355, 356, 460, 461], "primari": [8, 18, 19, 20, 22, 32, 35, 36, 37, 38, 39, 43, 44, 48, 49, 79, 152, 160, 164, 181, 185, 203, 207, 210, 227, 233, 237, 255, 299, 321, 329, 333, 354, 425, 433, 437, 459, 532, 540, 544], "git": 
[8, 12, 13, 17, 18, 19, 20, 22, 27, 29, 35, 36, 37, 38, 39, 43, 44, 58, 59, 60], "project": [8, 10, 12, 14, 16, 25, 31, 42, 43, 44, 45, 49, 59, 60, 72, 79, 80, 84, 92, 93, 94, 97, 107, 108, 113, 118, 125, 128, 185, 207, 224, 233, 251, 252, 254, 267, 277, 294, 296, 299, 348, 354, 355, 359, 372, 382, 399, 401, 452, 459, 460, 464, 472, 473, 474, 477, 487, 488, 493, 498, 505, 508], "main": [8, 10, 18, 19, 20, 22, 23, 35, 36, 38, 40, 42, 43, 44, 48, 49, 54, 81, 105, 133, 146, 164, 187, 210, 232, 237, 275, 333, 334, 356, 380, 437, 461, 485, 513, 526, 544, 563], "compon": [8, 68, 74, 77, 78, 79, 81, 82, 86, 88, 111, 115, 128, 133, 146, 148, 158, 159, 173, 175, 182, 184, 185, 187, 195, 197, 204, 206, 207, 210, 217, 221, 228, 230, 233, 237, 245, 249, 256, 258, 281, 285, 298, 299, 302, 315, 317, 327, 328, 334, 344, 350, 353, 354, 356, 361, 363, 386, 390, 406, 419, 421, 431, 432, 448, 454, 457, 458, 459, 461, 462, 466, 468, 491, 495, 508, 513, 526, 528, 538, 539], "upstream": [8, 10, 11, 12, 18, 19, 20, 32, 36, 37, 38, 43, 44], "code": [8, 10, 11, 12, 13, 21, 45, 47, 48, 49, 56, 58, 63, 72, 79, 83, 88, 105, 128, 169, 179, 184, 185, 186, 190, 201, 206, 207, 209, 213, 223, 225, 230, 232, 233, 236, 241, 251, 253, 258, 275, 299, 301, 340, 348, 354, 358, 363, 380, 401, 443, 452, 459, 463, 468, 485, 508, 559], "extend": [8, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 62, 72, 79, 80, 181, 185, 199, 203, 207, 223, 227, 233, 240, 251, 255, 299, 339, 348, 354, 442, 452, 459, 460], "vast": [8, 451], "self": [8, 19, 20, 22, 35, 43, 44, 47], "modif": [8, 12, 33, 49, 78, 79, 88, 89, 118, 119, 184, 185, 206, 207, 230, 233, 258, 259, 288, 289, 299, 354, 363, 364, 393, 394, 458, 459, 468, 469, 498, 499], "thin": [8, 79, 83, 185, 207, 233, 299, 354, 358, 459, 463], "shim": [8, 18, 19, 20, 22, 35, 36, 38, 49], "respons": [8, 12, 47, 48, 49, 51, 71, 72, 78, 88, 177, 184, 185, 199, 206, 207, 220, 223, 230, 233, 248, 251, 258, 298, 347, 348, 353, 363, 451, 452, 458, 468, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "fundament": [8, 11], "It": [8, 9, 10, 11, 12, 14, 18, 19, 20, 21, 22, 26, 33, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 49, 54, 55, 63, 65, 71, 72, 74, 75, 78, 79, 80, 81, 82, 83, 87, 109, 110, 111, 115, 128, 130, 131, 144, 160, 164, 166, 167, 169, 175, 177, 178, 179, 181, 183, 185, 187, 190, 197, 199, 200, 201, 203, 205, 207, 208, 210, 213, 220, 221, 223, 224, 225, 227, 229, 231, 233, 235, 237, 241, 248, 249, 251, 252, 253, 255, 257, 273, 276, 281, 285, 296, 298, 299, 300, 313, 329, 333, 334, 335, 340, 342, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 362, 386, 390, 401, 403, 404, 417, 433, 437, 443, 445, 451, 452, 454, 455, 458, 459, 460, 461, 462, 463, 467, 489, 490, 491, 495, 508, 510, 511, 524, 540, 544, 546, 547, 554, 556, 557], "platform": [8, 9, 12, 22, 33, 47, 49, 54, 56, 58, 60, 79, 81, 100, 120, 141, 164, 173, 185, 187, 207, 210, 233, 237, 299, 310, 333, 334, 354, 356, 375, 395, 414, 437, 459, 461, 480, 500, 521, 544], "merg": [8, 11, 48, 66, 140, 146, 222, 237, 250, 315, 413, 419, 446, 520, 526], "first": [8, 9, 12, 13, 21, 23, 26, 32, 33, 40, 41, 47, 48, 49, 50, 51, 54, 57, 63, 72, 74, 75, 79, 80, 81, 86, 94, 95, 97, 105, 107, 109, 110, 111, 115, 125, 140, 144, 146, 166, 167, 169, 172, 175, 177, 178, 182, 185, 187, 190, 194, 197, 199, 200, 204, 207, 210, 213, 220, 221, 222, 223, 224, 228, 232, 233, 237, 241, 248, 249, 250, 251, 252, 256, 264, 265, 267, 275, 277, 279, 280, 281, 285, 294, 299, 313, 315, 334, 340, 347, 348, 350, 351, 354, 355, 
356, 361, 369, 370, 372, 380, 382, 384, 385, 386, 390, 399, 413, 417, 419, 443, 452, 454, 455, 459, 460, 461, 466, 474, 475, 477, 485, 487, 489, 490, 491, 495, 505, 520, 524, 526, 546, 547, 559, 563], "thing": [8, 9, 10, 12, 18, 19, 20, 33, 36, 37, 38, 39, 43, 44, 47, 49, 130, 403, 451, 510], "ll": [8, 9, 10, 11, 12, 33, 49, 54, 63, 87, 169, 190, 205, 213, 229, 241, 257, 340, 362, 443, 467], "prepar": [8, 9, 12, 13, 33, 48, 54, 130, 403, 510], "environ": [8, 9, 14, 16, 23, 25, 27, 31, 37, 39, 47, 48, 54, 63, 66, 68, 75, 77, 79, 82, 85, 87, 88, 103, 128, 130, 132, 136, 144, 146, 149, 150, 164, 172, 173, 183, 184, 185, 186, 187, 194, 195, 205, 206, 207, 209, 210, 217, 229, 230, 233, 236, 237, 245, 257, 258, 296, 299, 301, 313, 315, 333, 335, 344, 351, 354, 357, 360, 362, 363, 378, 401, 403, 405, 409, 417, 419, 422, 423, 437, 443, 446, 448, 455, 457, 459, 462, 465, 467, 468, 483, 508, 510, 512, 516, 524, 526, 529, 530, 544], "chain": 8, "header": [8, 9, 18, 19, 20, 22, 23, 25, 26, 35, 36, 37, 38, 39, 43, 44, 47, 48, 62, 72, 81, 87, 95, 96, 97, 98, 99, 101, 107, 112, 116, 125, 140, 142, 146, 147, 148, 157, 163, 166, 167, 169, 177, 183, 185, 187, 188, 190, 199, 205, 207, 210, 211, 213, 223, 229, 233, 237, 238, 240, 241, 251, 257, 265, 266, 267, 268, 269, 271, 277, 282, 286, 294, 309, 311, 315, 316, 317, 326, 332, 334, 336, 337, 339, 340, 348, 356, 362, 370, 371, 372, 373, 374, 376, 382, 387, 391, 399, 413, 415, 419, 420, 421, 430, 436, 439, 440, 442, 452, 461, 467, 475, 476, 477, 478, 479, 481, 487, 492, 496, 505, 520, 522, 526, 527, 528, 537, 543, 546, 547], "packag": [8, 13, 15, 16, 18, 19, 20, 22, 23, 25, 26, 27, 31, 32, 33, 35, 36, 37, 38, 39, 40, 42, 43, 44, 54, 59, 60, 77, 79, 82, 109, 110, 111, 115, 181, 185, 203, 207, 227, 233, 255, 279, 280, 281, 285, 299, 354, 384, 385, 386, 390, 457, 459, 462, 489, 490, 491, 495, 559], "aren": [8, 9, 48, 72, 111, 115, 177, 199, 223, 251, 281, 285, 348, 386, 390, 452, 491, 495], "won": [8, 9, 36, 37, 48, 49, 72, 81, 199, 223, 237, 251, 334, 348, 356, 452, 461], "properli": [8, 9, 11, 14, 16, 25, 31, 47, 48, 49, 50, 54, 65, 72, 105, 177, 178, 192, 199, 200, 215, 223, 224, 232, 243, 251, 252, 275, 342, 348, 380, 445, 452, 485], "latest": [8, 9, 14, 16, 25, 26, 31, 32, 33, 47, 48, 54, 72, 79, 159, 187, 207, 210, 233, 237, 299, 328, 348, 354, 432, 452, 459, 539], "rhel": [8, 13, 31, 41, 59, 60], "cento": [8, 13, 32], "sudo": [8, 9, 17, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44, 66, 446], "yum": [8, 9, 32], "epel": [8, 9, 31, 32], "gcc": [8, 9], "autoconf": [8, 9, 27], "automak": [8, 9, 27], "libtool": [8, 9], "rpm": [8, 9, 25, 26, 31, 32, 43, 44], "libtirpc": [8, 9], "devel": [8, 9, 25, 26, 27, 32, 56], "libblkid": [8, 9, 144, 210, 237, 313, 417, 524], "libuuid": [8, 9], "libudev": [8, 9], "openssl": [8, 9], "zlib": [8, 9], "libaio": [8, 9], "libattr": [8, 9], "elfutil": [8, 9], "libelf": [8, 9], "unam": [8, 9, 43, 44], "python": [8, 9, 17, 27, 240], "python2": [8, 9], "setuptool": [8, 9], "cffi": [8, 9], "libffi": [8, 9], "ncompress": [8, 9], "libcurl": [8, 79, 354, 459], "enablerepo": [8, 9], "dkm": [8, 18, 19, 20, 22, 23, 25, 41, 43], "dnf": [8, 9, 25, 26, 31, 32], "skip": [8, 9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 66, 72, 87, 103, 109, 110, 111, 115, 177, 199, 205, 223, 229, 233, 251, 257, 279, 280, 348, 362, 378, 384, 385, 386, 390, 446, 452, 467, 483, 489, 490, 491, 495], "broken": [8, 9, 48, 63, 146, 169, 190, 210, 213, 223, 237, 241, 251, 315, 340, 419, 443, 526, 563], "python3": [8, 9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 
43, 44], "powertool": [8, 9], "debian": [8, 13, 41, 43, 44, 49, 54, 59, 60, 62, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 232, 246, 249, 256, 257, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 271, 272, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 339, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547], "ubuntu": [8, 13, 14, 16, 18, 19, 20, 25, 31, 33, 41, 43, 44, 49, 54, 59, 60], "apt": [8, 9, 18, 19, 20, 22, 23, 35, 36, 37, 38, 39, 40], "gawk": [8, 9], "alien": [8, 9], "fakeroot": [8, 9], "uuid": [8, 9, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 54], "libssl": [8, 9], "zlib1g": [8, 9], "libattr1": [8, 9], "libcurl4": 8, "debhelp": [8, 9], "dh": [8, 9, 62, 240, 339, 442], "po": [8, 9], "debconf": [8, 9], "sphinx": [8, 9], "parallel": [8, 48, 68, 222, 250, 413, 448], "pkg": [8, 16, 18, 19, 20, 22, 27, 35, 43, 44], "autotool": [8, 9, 27], "gmake": [8, 27], "sysctl": [8, 27], "often": [8, 12, 23, 47, 48, 49, 54, 71, 72, 79, 81, 82, 187, 210, 237, 251, 334, 335, 348, 356, 357, 452, 459, 461, 462], "custom": [8, 13, 14, 16, 18, 19, 20, 22, 25, 28, 31, 32, 33, 35, 36, 38, 39, 43, 44, 54, 59, 60, 74, 88, 131, 144, 175, 197, 206, 208, 210, 221, 230, 235, 237, 249, 258, 300, 313, 350, 363, 404, 417, 454, 468, 511, 524], "best": [8, 9, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 72, 79, 80, 109, 110, 177, 185, 199, 207, 220, 223, 233, 251, 279, 280, 299, 348, 354, 384, 385, 452, 459, 460, 489, 490], "systemd": [8, 9, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 75, 78, 103, 156, 161, 207, 231, 233, 273, 298, 351, 353, 378, 429, 455, 458, 483, 536, 541], "dracut": [8, 25, 31, 43, 44, 76, 352, 456], "udev": [8, 14, 16, 25, 31, 43, 44, 48, 54, 69, 72, 74, 86, 175, 182, 197, 204, 218, 221, 228, 246, 249, 256, 345, 348, 350, 361, 449, 452, 454, 466], "rapidli": [8, 48, 50, 72, 177, 199, 220, 223, 248, 251, 347, 348, 452], "iter": [8, 12, 51, 72, 75, 79, 105, 111, 115, 177, 199, 
223, 232, 233, 251, 275, 281, 285, 299, 348, 351, 354, 380, 386, 390, 452, 455, 459, 485, 491, 495], "patch": [8, 11, 13, 21, 36, 37, 47, 49, 59, 60], "work": [8, 10, 12, 18, 19, 20, 22, 27, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 55, 60, 66, 68, 72, 75, 79, 81, 109, 110, 129, 132, 136, 140, 146, 149, 150, 163, 165, 173, 176, 177, 185, 187, 195, 198, 199, 207, 210, 217, 222, 223, 233, 236, 237, 245, 250, 251, 279, 280, 297, 299, 301, 315, 332, 334, 344, 348, 351, 354, 356, 384, 385, 402, 405, 409, 413, 419, 422, 423, 436, 438, 446, 448, 451, 452, 455, 459, 461, 489, 490, 509, 512, 516, 520, 526, 529, 530, 543, 545], "leverag": 8, "increment": [8, 12, 18, 19, 20, 22, 35, 36, 43, 44, 48, 54, 67, 72, 79, 80, 90, 105, 109, 110, 111, 115, 128, 171, 178, 185, 193, 199, 200, 207, 216, 223, 224, 232, 233, 244, 251, 252, 260, 275, 279, 280, 281, 285, 296, 299, 343, 348, 354, 355, 365, 380, 384, 385, 386, 390, 401, 447, 452, 459, 460, 470, 485, 489, 490, 491, 495, 508, 559], "unload": [8, 27, 48, 72, 79, 80, 84, 88, 89, 91, 102, 104, 119, 122, 128, 132, 177, 186, 199, 209, 223, 233, 236, 251, 254, 258, 259, 261, 272, 274, 289, 292, 296, 299, 301, 348, 354, 359, 363, 364, 366, 377, 379, 394, 397, 401, 405, 452, 459, 460, 464, 468, 469, 471, 482, 484, 499, 502, 508, 512], "suit": [8, 10, 48, 72, 79, 88, 91, 102, 121, 206, 223, 230, 233, 251, 258, 261, 272, 291, 299, 348, 354, 363, 366, 377, 396, 452, 459, 468, 471, 482, 501], "remaind": 8, "focus": [8, 54, 80, 460], "method": [8, 12, 25, 41, 43, 44, 47, 48, 49, 54, 65, 72, 74, 75, 80, 82, 109, 110, 175, 187, 192, 197, 210, 215, 220, 221, 237, 243, 248, 249, 251, 252, 335, 342, 348, 350, 351, 355, 357, 445, 452, 454, 455, 460, 462, 489, 490], "branch": [8, 10, 12, 17, 18, 19, 20, 22, 27, 29, 35, 36, 37, 38, 39, 42, 43, 44, 63, 169, 190, 213, 241, 340, 443], "seri": [8, 47, 48, 63, 66, 72, 169, 172, 177, 190, 194, 199, 213, 223, 241, 251, 340, 348, 443, 446, 452], "built": [8, 9, 12, 20, 25, 27, 32, 33, 37, 39, 47, 72, 105, 185, 199, 223, 232, 251, 275, 348, 380, 452, 485, 555], "y": [8, 9, 18, 19, 20, 22, 25, 26, 31, 32, 33, 35, 36, 37, 38, 39, 43, 87, 146, 187, 210, 229, 237, 257, 315, 362, 419, 467, 526], "z": [8, 9, 47, 48, 58, 65, 68, 72, 77, 79, 80, 87, 88, 96, 99, 116, 134, 137, 163, 164, 172, 173, 184, 185, 187, 194, 195, 206, 207, 210, 217, 230, 233, 237, 245, 257, 258, 266, 269, 286, 299, 333, 344, 354, 362, 363, 371, 374, 391, 437, 445, 448, 457, 459, 460, 467, 468, 476, 479, 496, 517, 544, 557], "match": [8, 11, 18, 19, 20, 32, 35, 36, 38, 43, 47, 48, 49, 54, 55, 63, 72, 74, 75, 79, 80, 81, 88, 91, 102, 109, 110, 121, 169, 175, 184, 185, 187, 190, 197, 199, 200, 206, 207, 210, 213, 221, 223, 224, 230, 233, 237, 241, 249, 251, 252, 258, 261, 272, 279, 280, 291, 299, 334, 340, 348, 350, 351, 354, 355, 356, 363, 366, 377, 384, 385, 396, 443, 452, 454, 455, 459, 460, 461, 468, 471, 482, 489, 490, 501], "http": [8, 9, 10, 12, 14, 16, 18, 19, 20, 22, 23, 25, 26, 27, 29, 31, 32, 34, 35, 36, 37, 38, 39, 40, 43, 44, 47, 49, 63, 75, 79, 103, 105, 166, 167, 169, 173, 190, 195, 213, 217, 232, 241, 245, 275, 340, 351, 354, 378, 380, 443, 455, 459, 483, 485, 546, 547, 550, 551, 552, 553, 554, 555, 556, 557, 559, 560, 561, 562, 563], "alwai": [8, 10, 12, 18, 19, 20, 22, 26, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 55, 72, 79, 80, 81, 91, 102, 103, 105, 109, 110, 111, 115, 121, 136, 140, 146, 149, 150, 177, 179, 185, 199, 201, 207, 210, 222, 223, 225, 231, 232, 233, 237, 250, 251, 253, 261, 272, 273, 275, 279, 280, 281, 285, 291, 299, 
315, 334, 348, 354, 356, 366, 377, 378, 380, 384, 385, 386, 390, 396, 409, 413, 419, 422, 423, 452, 459, 460, 461, 471, 482, 483, 485, 489, 490, 491, 495, 501, 516, 520, 526, 529, 530, 554], "topic": [8, 10, 41], "easi": [8, 12, 33, 49, 54], "pull": [8, 12, 13, 17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44, 49], "request": [8, 9, 12, 13, 17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 57, 71, 72, 78, 79, 80, 82, 88, 105, 111, 115, 132, 140, 144, 146, 152, 158, 162, 164, 185, 187, 199, 207, 209, 210, 220, 223, 232, 233, 236, 237, 248, 251, 258, 275, 299, 301, 313, 315, 327, 333, 335, 347, 348, 354, 355, 357, 363, 380, 405, 413, 417, 419, 431, 435, 437, 451, 452, 458, 459, 460, 462, 468, 485, 491, 495, 512, 520, 524, 526, 532, 538, 542, 544, 550, 551, 552, 553, 555, 556, 557, 558, 559, 561, 562, 563], "latter": [8, 12, 32, 47, 49, 55, 72, 348, 452], "kept": [8, 12, 48, 79, 103, 185, 199, 207, 223, 231, 233, 251, 273, 299, 354, 378, 459, 483], "stabl": [8, 9, 12, 27, 47, 48, 51, 54, 72, 79, 81, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 299, 334, 348, 354, 356, 452, 459, 461], "regress": [8, 12, 36, 37, 38, 39, 49, 68, 173, 195, 217, 245, 344, 448], "everi": [8, 12, 32, 47, 48, 49, 51, 54, 55, 72, 77, 79, 80, 81, 82, 87, 89, 103, 119, 126, 132, 140, 146, 148, 163, 165, 166, 167, 173, 176, 177, 178, 183, 185, 187, 195, 198, 199, 200, 205, 207, 209, 210, 217, 222, 223, 224, 229, 233, 236, 237, 245, 250, 251, 252, 257, 259, 289, 295, 299, 301, 315, 317, 332, 334, 335, 348, 354, 355, 356, 357, 362, 364, 378, 394, 400, 405, 413, 419, 421, 436, 438, 452, 457, 459, 460, 461, 462, 467, 469, 483, 499, 506, 512, 520, 526, 528, 543, 545, 546, 547, 563], "befor": [8, 12, 13, 14, 16, 18, 19, 20, 21, 22, 25, 26, 27, 28, 31, 32, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 55, 66, 67, 68, 69, 71, 72, 74, 75, 78, 79, 80, 81, 82, 87, 91, 95, 97, 100, 102, 103, 104, 105, 107, 109, 110, 111, 115, 117, 118, 120, 121, 122, 123, 124, 125, 127, 128, 130, 132, 134, 135, 136, 138, 141, 145, 146, 149, 150, 151, 152, 154, 156, 161, 172, 175, 177, 178, 185, 186, 187, 194, 197, 199, 200, 205, 207, 209, 210, 217, 218, 220, 221, 223, 224, 229, 231, 232, 233, 236, 237, 245, 246, 248, 249, 251, 252, 257, 261, 267, 270, 272, 273, 274, 275, 277, 279, 280, 281, 285, 290, 291, 292, 293, 294, 296, 298, 299, 301, 303, 304, 307, 310, 314, 315, 318, 319, 320, 321, 323, 325, 330, 334, 335, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 362, 366, 372, 375, 377, 378, 379, 380, 382, 384, 385, 386, 390, 392, 395, 396, 397, 398, 399, 401, 403, 405, 407, 408, 409, 411, 414, 418, 419, 422, 423, 424, 425, 427, 429, 434, 446, 447, 448, 449, 451, 452, 454, 455, 458, 459, 460, 461, 462, 467, 471, 475, 477, 480, 482, 483, 484, 485, 487, 489, 490, 491, 495, 497, 500, 501, 502, 503, 504, 505, 507, 508, 510, 512, 514, 515, 516, 518, 521, 525, 526, 529, 530, 531, 532, 534, 536, 541, 559, 563], "effort": [8, 48, 49, 58], "catch": [8, 37, 39, 48, 72, 177, 199, 223, 251, 348, 452], "defect": 8, "earli": [8, 16, 25, 31, 47, 48, 49, 72, 103, 231, 273, 378, 452, 483], "comfort": 8, "frequent": [8, 9, 48, 49], "rebas": [8, 10], "walk": [8, 48, 54, 71, 220, 248, 347, 451], "through": [8, 10, 12, 18, 19, 20, 22, 35, 36, 38, 42, 43, 44, 47, 48, 49, 51, 54, 72, 78, 79, 80, 87, 105, 109, 110, 111, 115, 118, 128, 177, 178, 185, 194, 199, 200, 207, 223, 224, 231, 232, 233, 251, 252, 257, 273, 275, 279, 280, 281, 285, 296, 298, 299, 348, 353, 354, 355, 362, 380, 384, 385, 386, 390, 401, 452, 458, 459, 460, 467, 485, 489, 
490, 491, 495, 508], "stock": [8, 27, 33, 37, 39], "desir": [8, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 71, 72, 79, 91, 102, 121, 166, 167, 177, 199, 220, 223, 233, 248, 251, 261, 272, 291, 347, 348, 366, 377, 396, 451, 452, 459, 471, 482, 501, 546, 547], "fashion": [8, 62, 81, 187, 210, 237, 240, 334, 339, 356, 442, 461], "cd": [8, 9, 10, 12, 16, 17, 25, 27, 28, 29, 31, 33, 37, 39], "checkout": [8, 10, 12], "autogen": [8, 9, 10, 12, 27], "j": [8, 12, 27, 88, 105, 172, 194, 232, 233, 275, 363, 380, 468, 485], "nproc": [8, 12], "path": [8, 9, 12, 17, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 66, 68, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 86, 87, 88, 89, 93, 95, 97, 103, 104, 105, 107, 109, 110, 117, 119, 122, 125, 128, 129, 130, 132, 133, 137, 140, 144, 146, 148, 154, 158, 159, 164, 171, 172, 175, 176, 181, 182, 183, 184, 185, 186, 187, 193, 194, 197, 198, 203, 204, 205, 206, 207, 209, 210, 216, 220, 221, 222, 227, 228, 229, 230, 231, 232, 233, 236, 237, 244, 248, 249, 250, 251, 255, 256, 257, 258, 259, 263, 265, 267, 273, 274, 275, 277, 279, 280, 287, 289, 292, 294, 296, 297, 298, 299, 301, 302, 306, 313, 315, 317, 323, 327, 328, 333, 334, 335, 344, 347, 348, 350, 351, 353, 354, 355, 356, 357, 361, 362, 363, 364, 368, 370, 372, 378, 379, 380, 382, 384, 385, 392, 394, 397, 399, 401, 402, 403, 405, 406, 410, 413, 417, 419, 421, 427, 431, 432, 437, 446, 448, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 466, 467, 468, 469, 473, 475, 477, 483, 484, 485, 487, 489, 490, 497, 499, 502, 505, 508, 509, 510, 512, 513, 517, 520, 524, 526, 528, 534, 538, 539, 544, 557, 559], "obj": [8, 9], "locat": [8, 9, 12, 32, 33, 48, 49, 54, 66, 68, 71, 79, 80, 81, 82, 86, 88, 91, 92, 102, 109, 110, 113, 121, 140, 146, 173, 176, 178, 182, 184, 185, 187, 195, 198, 200, 204, 206, 207, 210, 217, 220, 222, 224, 228, 230, 233, 237, 245, 248, 250, 252, 256, 258, 261, 262, 272, 279, 280, 283, 291, 299, 315, 334, 335, 344, 347, 354, 355, 356, 357, 361, 363, 366, 367, 377, 384, 385, 388, 396, 413, 419, 446, 448, 451, 459, 460, 461, 462, 466, 468, 471, 472, 482, 489, 490, 493, 501, 520, 526, 549, 551, 554, 556], "debug": [8, 11, 12, 18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 43, 44, 65, 67, 68, 71, 72, 75, 87, 103, 105, 132, 171, 173, 177, 183, 186, 192, 193, 195, 199, 205, 209, 215, 216, 217, 220, 223, 229, 232, 236, 243, 244, 245, 248, 251, 257, 275, 301, 342, 343, 344, 347, 348, 351, 362, 378, 380, 405, 445, 447, 448, 451, 452, 455, 467, 483, 485, 512], "assert": [8, 48, 71, 87, 105, 183, 205, 220, 229, 232, 248, 257, 275, 347, 362, 380, 451, 467, 485], "deb": [8, 9, 18, 19, 20, 22, 23, 35, 36, 38, 40], "convert": [8, 9, 18, 19, 48, 72, 88, 105, 109, 110, 129, 164, 166, 167, 184, 206, 230, 232, 233, 258, 275, 279, 280, 297, 336, 363, 380, 384, 385, 402, 439, 440, 452, 468, 485, 489, 490, 509, 544, 546, 547], "nativ": [8, 9, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44, 47, 49, 54, 77, 79, 80, 82, 93, 96, 99, 101, 103, 109, 110, 116, 128, 164, 185, 200, 207, 224, 233, 252, 263, 266, 269, 271, 279, 280, 286, 296, 299, 333, 354, 355, 368, 371, 374, 376, 378, 384, 385, 391, 401, 437, 457, 459, 460, 462, 473, 476, 479, 481, 483, 489, 490, 496, 508, 544], "overrid": [8, 9, 13, 25, 48, 54, 68, 72, 79, 109, 110, 164, 173, 177, 183, 185, 195, 199, 207, 210, 217, 223, 233, 237, 245, 251, 279, 280, 299, 333, 344, 348, 354, 384, 385, 437, 448, 452, 459, 489, 490, 544], "debain": 8, "kver": [8, 9, 25, 31, 43, 44], "ksrc": [8, 9], "kobj": [8, 9], "attent": [8, 18, 19, 20, 22, 49], "On": [8, 
10, 16, 18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 40, 43, 44, 47, 48, 49, 54, 72, 79, 82, 156, 161, 176, 179, 185, 198, 199, 201, 220, 223, 225, 248, 251, 253, 299, 335, 348, 354, 357, 429, 452, 459, 462, 536, 541, 557], "extra": [8, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 79, 91, 102, 105, 121, 148, 164, 177, 185, 187, 199, 207, 210, 223, 232, 233, 237, 251, 261, 272, 275, 291, 299, 333, 348, 354, 366, 377, 380, 396, 437, 452, 459, 471, 482, 485, 501, 528, 544], "standard": [8, 9, 10, 12, 35, 36, 37, 38, 39, 47, 48, 49, 54, 55, 62, 66, 71, 75, 77, 79, 82, 88, 105, 109, 110, 111, 115, 128, 143, 146, 148, 159, 163, 165, 166, 167, 185, 187, 207, 210, 220, 232, 233, 237, 240, 248, 275, 279, 280, 281, 285, 296, 299, 312, 315, 317, 328, 332, 336, 339, 347, 351, 354, 363, 380, 384, 385, 386, 390, 401, 416, 419, 421, 432, 436, 438, 439, 440, 442, 446, 451, 455, 457, 459, 462, 468, 485, 489, 490, 491, 495, 508, 523, 526, 528, 539, 543, 545, 546, 547], "depmod": 8, "search": [8, 48, 49, 54, 66, 67, 72, 75, 87, 144, 146, 164, 171, 183, 187, 193, 199, 205, 210, 216, 223, 229, 237, 244, 251, 257, 313, 315, 333, 343, 348, 351, 362, 417, 419, 437, 446, 447, 452, 455, 467, 524, 526, 544, 549, 551, 554], "edit": [8, 13, 14, 16, 18, 19, 20, 22, 25, 27, 28, 29, 31, 35, 36, 37, 38, 39, 59, 60, 66, 77, 78, 79, 93, 96, 99, 109, 110, 116, 128, 185, 207, 233, 263, 266, 269, 279, 280, 286, 296, 298, 299, 353, 354, 368, 371, 374, 384, 385, 391, 401, 446, 457, 458, 459, 473, 476, 479, 489, 490, 496, 508], "conf": [8, 14, 16, 18, 19, 20, 22, 25, 26, 27, 31, 32, 33, 35, 36, 38, 43, 44, 48, 49, 58, 73, 75, 82, 86, 117, 128, 174, 182, 185, 196, 204, 207, 210, 219, 228, 233, 237, 247, 256, 287, 296, 335, 349, 351, 357, 361, 392, 401, 453, 455, 462, 466, 497, 508], "ldconfig": 8, "uninstal": [8, 32], "wish": [8, 10, 18, 19, 20, 22, 32, 35, 36, 37, 38, 39, 43, 44, 47, 48, 72, 166, 167, 177, 199, 223, 251, 348, 452, 546, 547], "zt": [8, 48], "ksh": 8, "few": [8, 10, 25, 37, 39, 46, 47, 48, 49, 54, 71, 72, 79, 82, 91, 102, 121, 144, 177, 185, 187, 199, 207, 210, 220, 223, 233, 237, 248, 251, 261, 272, 291, 299, 313, 335, 347, 348, 354, 357, 366, 377, 396, 417, 451, 452, 459, 462, 471, 482, 501, 524], "bc": 8, "bzip2": 8, "fio": [8, 27], "acl": [8, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 79, 91, 102, 121, 128, 185, 207, 233, 261, 272, 291, 296, 299, 354, 366, 377, 396, 401, 459, 471, 482, 501, 508], "sysstat": 8, "mdadm": [8, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "lsscsi": 8, "attr": [8, 128, 207, 233, 296, 401, 508], "rng": 8, "pax": 8, "dbench": 8, "selinux": [8, 25, 31, 48, 79, 128, 181, 185, 203, 207, 227, 233, 255, 296, 299, 354, 401, 459, 508], "quota": [8, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 79, 82, 89, 96, 97, 99, 100, 105, 106, 107, 116, 119, 120, 123, 125, 127, 128, 160, 164, 177, 185, 199, 207, 210, 223, 232, 233, 237, 251, 259, 267, 270, 275, 276, 277, 289, 290, 294, 296, 299, 329, 333, 335, 348, 354, 357, 364, 372, 375, 380, 381, 382, 394, 395, 399, 401, 433, 437, 452, 459, 462, 469, 476, 477, 479, 480, 485, 486, 487, 496, 499, 500, 503, 505, 507, 508, 540, 544], "common": [8, 18, 19, 20, 22, 35, 43, 44, 45, 48, 54, 63, 72, 78, 80, 88, 169, 178, 184, 185, 190, 200, 206, 207, 213, 223, 224, 230, 233, 241, 251, 252, 258, 298, 340, 348, 353, 355, 363, 443, 452, 458, 460, 468], "base64": [8, 27], "bash": [8, 16, 18, 19, 20, 22, 25, 27, 31, 35, 36, 37, 38, 39, 43, 44], "checkbash": [8, 27], "h": [8, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44, 62, 63, 65, 66, 68, 85, 86, 
87, 88, 95, 96, 97, 98, 99, 101, 107, 109, 110, 111, 112, 115, 116, 125, 128, 131, 132, 140, 142, 146, 148, 157, 163, 165, 169, 172, 181, 182, 183, 184, 185, 186, 187, 190, 192, 194, 203, 204, 205, 206, 207, 209, 210, 213, 215, 227, 228, 229, 230, 231, 233, 236, 237, 240, 241, 243, 255, 256, 257, 258, 265, 266, 267, 268, 269, 271, 273, 277, 279, 280, 281, 282, 285, 286, 294, 296, 300, 301, 309, 311, 315, 317, 326, 332, 339, 340, 342, 344, 360, 361, 362, 363, 370, 371, 372, 373, 374, 376, 382, 384, 385, 386, 387, 390, 391, 399, 401, 404, 405, 413, 415, 419, 421, 430, 436, 438, 442, 443, 445, 446, 448, 465, 466, 467, 468, 475, 476, 477, 478, 479, 481, 487, 489, 490, 491, 492, 495, 496, 505, 508, 511, 512, 520, 522, 526, 528, 537, 543, 545], "shellcheck": [8, 10, 14, 16, 25, 27, 28, 31], "ksh93": [8, 27], "pamtest": [8, 27], "flake8": [8, 10, 27], "helper": [8, 83, 85, 86, 179, 181, 182, 201, 203, 204, 225, 227, 228, 253, 255, 256, 358, 360, 361, 463, 465, 466], "design": [8, 12, 47, 48, 49, 54, 63, 71, 78, 79, 81, 111, 115, 128, 169, 185, 187, 190, 207, 210, 213, 220, 233, 237, 241, 248, 281, 285, 296, 298, 299, 334, 340, 347, 353, 354, 356, 386, 390, 401, 443, 451, 458, 459, 461, 491, 495, 508], "aid": [8, 71, 220, 248, 347, 451], "certain": [8, 22, 23, 35, 47, 48, 49, 72, 79, 81, 87, 88, 130, 183, 184, 185, 187, 205, 206, 207, 210, 223, 229, 230, 233, 237, 251, 257, 258, 299, 334, 348, 354, 356, 362, 363, 403, 452, 459, 461, 467, 468, 510, 560], "e": [8, 16, 17, 18, 19, 20, 22, 23, 25, 26, 31, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 65, 68, 71, 72, 79, 80, 81, 82, 87, 89, 91, 96, 99, 102, 104, 105, 109, 110, 111, 115, 116, 119, 121, 122, 132, 134, 137, 149, 150, 156, 159, 165, 166, 167, 169, 172, 173, 176, 177, 178, 182, 183, 185, 186, 187, 190, 194, 195, 198, 199, 200, 204, 205, 207, 209, 210, 213, 217, 220, 222, 223, 224, 228, 229, 231, 232, 233, 236, 237, 241, 245, 248, 250, 251, 252, 257, 259, 261, 266, 269, 272, 273, 274, 275, 279, 280, 281, 285, 286, 289, 291, 292, 299, 301, 318, 319, 334, 335, 340, 342, 344, 347, 348, 354, 355, 356, 357, 362, 364, 366, 371, 374, 377, 379, 380, 384, 385, 386, 390, 391, 394, 396, 397, 405, 410, 422, 423, 432, 438, 445, 448, 451, 452, 459, 460, 461, 462, 467, 469, 471, 476, 479, 482, 484, 485, 489, 490, 491, 495, 496, 499, 501, 502, 512, 517, 529, 530, 536, 539, 545, 546, 547], "zvol": [8, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 58, 69, 72, 79, 80, 81, 87, 93, 105, 109, 110, 128, 177, 178, 181, 183, 185, 199, 200, 203, 205, 207, 218, 223, 224, 227, 229, 232, 233, 246, 251, 252, 255, 257, 261, 263, 272, 275, 279, 280, 291, 296, 299, 345, 348, 354, 355, 362, 368, 380, 384, 385, 401, 449, 452, 459, 460, 461, 467, 473, 485, 489, 490, 508], "symlink": [8, 43, 44, 54, 69, 74, 88, 184, 197, 206, 218, 221, 230, 246, 249, 258, 345, 350, 363, 449, 454, 468], "link": [8, 12, 18, 19, 20, 21, 22, 23, 32, 35, 36, 37, 38, 39, 40, 41, 54, 69, 72, 74, 86, 95, 109, 110, 128, 133, 146, 148, 158, 159, 164, 175, 182, 185, 187, 197, 204, 207, 210, 218, 221, 228, 233, 237, 246, 249, 256, 265, 279, 280, 296, 302, 315, 317, 327, 328, 333, 345, 348, 350, 361, 370, 384, 385, 401, 406, 419, 421, 431, 432, 437, 449, 452, 454, 466, 475, 489, 490, 508, 513, 526, 528, 538, 539, 544], "place": [8, 47, 48, 49, 55, 71, 72, 79, 81, 82, 87, 91, 102, 105, 109, 110, 121, 131, 133, 144, 146, 148, 156, 158, 159, 162, 187, 205, 208, 210, 220, 223, 229, 232, 233, 235, 237, 248, 251, 257, 261, 272, 275, 291, 299, 300, 302, 313, 315, 317, 325, 327, 328, 334, 335, 347, 348, 354, 
356, 357, 362, 366, 377, 380, 396, 404, 406, 417, 419, 421, 429, 431, 432, 435, 451, 452, 459, 461, 462, 467, 471, 482, 485, 489, 490, 501, 511, 513, 524, 526, 528, 536, 538, 539, 542, 550, 557], "successfulli": [8, 25, 48, 54, 72, 81, 92, 93, 105, 118, 185, 199, 207, 223, 232, 233, 237, 251, 262, 263, 275, 334, 348, 356, 367, 368, 380, 452, 461, 472, 473, 485, 559], "remov": [8, 11, 14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 32, 35, 36, 37, 38, 39, 43, 44, 47, 49, 54, 68, 72, 77, 78, 79, 80, 81, 82, 84, 88, 89, 91, 95, 98, 102, 109, 110, 111, 112, 115, 119, 121, 128, 130, 133, 135, 139, 140, 144, 146, 147, 148, 149, 150, 158, 159, 163, 164, 166, 167, 173, 176, 177, 178, 185, 187, 195, 198, 199, 200, 207, 210, 217, 220, 222, 223, 224, 233, 237, 245, 250, 251, 252, 254, 259, 261, 265, 268, 272, 279, 280, 281, 282, 285, 289, 291, 296, 298, 299, 302, 304, 308, 313, 315, 316, 317, 318, 319, 327, 328, 332, 333, 334, 335, 336, 344, 348, 353, 354, 355, 356, 357, 359, 364, 366, 370, 373, 377, 384, 385, 386, 387, 390, 394, 396, 401, 403, 406, 408, 412, 413, 417, 419, 420, 421, 422, 423, 431, 432, 436, 437, 439, 440, 448, 451, 452, 457, 458, 459, 460, 461, 462, 464, 468, 469, 471, 475, 478, 482, 489, 490, 491, 492, 495, 499, 501, 508, 510, 513, 515, 519, 520, 524, 526, 527, 528, 529, 530, 538, 539, 543, 544, 546, 547, 550, 552, 556, 557], "freshli": 8, "later": [8, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 49, 72, 77, 78, 79, 80, 81, 82, 95, 109, 110, 128, 135, 139, 141, 164, 177, 185, 187, 199, 207, 210, 223, 224, 233, 237, 251, 252, 265, 296, 299, 304, 308, 333, 334, 335, 348, 354, 355, 356, 357, 370, 401, 408, 412, 437, 452, 457, 458, 459, 460, 461, 462, 475, 489, 490, 508, 515, 519, 521, 544], "u": [8, 16, 18, 19, 22, 25, 31, 33, 37, 39, 45, 47, 48, 66, 67, 72, 74, 79, 80, 87, 89, 93, 96, 99, 104, 109, 110, 113, 116, 119, 122, 132, 145, 146, 148, 159, 163, 178, 183, 185, 186, 187, 197, 200, 205, 207, 209, 210, 220, 221, 224, 229, 233, 236, 237, 249, 251, 252, 257, 259, 274, 279, 280, 283, 289, 292, 301, 315, 317, 328, 332, 348, 350, 355, 362, 364, 368, 379, 384, 385, 388, 394, 397, 405, 418, 419, 421, 432, 436, 446, 447, 452, 454, 459, 460, 467, 469, 473, 476, 479, 484, 489, 490, 493, 496, 499, 502, 512, 525, 526, 528, 539, 543], "wrapper": [8, 83, 358, 463], "repeatedli": 8, "argument": [8, 54, 66, 74, 75, 81, 87, 89, 98, 105, 109, 110, 112, 113, 119, 128, 136, 160, 164, 171, 175, 182, 183, 185, 187, 193, 197, 204, 205, 207, 210, 216, 221, 228, 229, 231, 232, 233, 237, 244, 249, 257, 259, 268, 273, 275, 279, 280, 282, 283, 289, 305, 329, 333, 350, 351, 356, 362, 364, 373, 380, 384, 385, 387, 388, 394, 401, 409, 433, 437, 446, 454, 455, 461, 467, 469, 478, 485, 489, 490, 492, 493, 499, 508, 516, 540, 544, 550, 552], "user": [8, 9, 10, 11, 12, 14, 16, 18, 19, 20, 21, 22, 25, 27, 31, 32, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 49, 53, 54, 56, 60, 66, 67, 68, 72, 77, 78, 79, 80, 81, 82, 85, 86, 87, 88, 89, 91, 94, 96, 97, 98, 99, 101, 102, 105, 107, 111, 112, 113, 115, 116, 118, 119, 121, 123, 125, 127, 128, 130, 143, 146, 163, 164, 168, 171, 172, 173, 178, 182, 184, 185, 187, 189, 192, 193, 194, 195, 200, 204, 206, 207, 208, 210, 212, 215, 216, 217, 223, 224, 228, 230, 232, 233, 235, 237, 239, 244, 245, 251, 252, 256, 258, 259, 261, 264, 266, 267, 268, 269, 271, 272, 275, 277, 281, 282, 285, 286, 289, 291, 294, 296, 298, 299, 300, 312, 315, 332, 333, 334, 335, 338, 343, 344, 348, 353, 354, 355, 356, 357, 360, 361, 363, 364, 366, 369, 371, 372, 373, 374, 376, 377, 380, 382, 386, 387, 
390, 391, 394, 396, 399, 401, 403, 416, 419, 436, 437, 441, 446, 447, 448, 452, 457, 458, 459, 460, 461, 462, 465, 466, 467, 468, 469, 471, 474, 476, 477, 478, 479, 481, 482, 485, 487, 491, 492, 493, 495, 496, 498, 499, 501, 503, 505, 507, 508, 510, 523, 526, 543, 544, 548, 559], "stress": [8, 12, 48, 68, 72, 82, 172, 173, 194, 195, 217, 237, 245, 251, 335, 344, 348, 357, 448, 452, 462], "concurr": [8, 12, 46, 48, 49, 50, 51, 72, 88, 105, 132, 177, 184, 199, 206, 209, 220, 223, 230, 232, 233, 236, 248, 251, 258, 275, 301, 348, 363, 380, 405, 452, 468, 485, 512], "crash": [8, 11, 36, 37, 39, 47, 72, 80, 82, 187, 200, 210, 224, 237, 251, 252, 335, 348, 355, 357, 452, 460, 462], "encount": [8, 47, 48, 54, 72, 77, 80, 81, 87, 105, 152, 177, 178, 187, 199, 200, 210, 223, 224, 232, 233, 237, 251, 252, 257, 275, 321, 334, 348, 355, 356, 362, 380, 425, 452, 457, 460, 461, 467, 485, 532, 550, 551, 552, 553, 555], "associ": [8, 11, 45, 48, 72, 74, 79, 80, 81, 82, 86, 87, 88, 109, 110, 132, 136, 153, 164, 175, 177, 178, 182, 183, 184, 185, 187, 197, 199, 200, 204, 205, 206, 207, 209, 210, 221, 223, 224, 228, 229, 230, 231, 233, 236, 237, 249, 251, 252, 256, 257, 258, 273, 279, 280, 299, 301, 305, 322, 333, 334, 335, 348, 350, 354, 355, 356, 357, 361, 362, 363, 384, 385, 405, 409, 426, 437, 452, 454, 459, 460, 461, 462, 466, 467, 468, 489, 490, 512, 516, 533, 544], "collect": [8, 48, 78, 81, 109, 110, 164, 165, 185, 187, 207, 210, 233, 237, 279, 280, 298, 333, 334, 353, 356, 384, 385, 437, 438, 458, 461, 489, 490, 544, 545], "move": [8, 10, 18, 19, 20, 22, 24, 30, 35, 36, 37, 38, 39, 43, 44, 48, 49, 72, 80, 82, 108, 141, 177, 178, 185, 187, 199, 200, 207, 210, 220, 223, 224, 233, 237, 248, 251, 252, 278, 310, 335, 348, 355, 357, 383, 414, 452, 460, 462, 488, 521, 556, 558], "launch": 8, "spars": [8, 55, 72, 79, 80, 87, 89, 93, 119, 185, 205, 207, 229, 233, 251, 252, 257, 259, 263, 289, 299, 348, 354, 355, 362, 364, 368, 394, 452, 459, 460, 467, 469, 473, 499], "tmp": [8, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 66, 68, 103, 128, 131, 173, 185, 195, 207, 217, 231, 233, 245, 273, 296, 300, 344, 378, 401, 404, 446, 448, 483, 508, 511], "direct": [8, 32, 54, 74, 96, 99, 101, 105, 114, 116, 175, 185, 197, 207, 221, 232, 233, 249, 266, 269, 271, 275, 284, 286, 350, 371, 374, 376, 380, 389, 391, 454, 476, 479, 481, 485, 494, 496], "readm": [8, 10, 12, 33], "vx": 8, "deleg": [8, 79, 82, 89, 119, 123, 127, 128, 185, 187, 207, 210, 233, 237, 259, 289, 296, 299, 335, 354, 357, 364, 394, 401, 459, 462, 469, 499, 503, 507, 508], "permiss": [8, 27, 43, 44, 48, 79, 82, 88, 89, 91, 102, 119, 121, 128, 184, 185, 187, 206, 207, 210, 230, 233, 237, 258, 259, 261, 272, 289, 291, 296, 299, 335, 354, 357, 363, 364, 366, 377, 394, 396, 401, 459, 462, 468, 469, 471, 482, 499, 501, 508], "parent": [8, 72, 77, 78, 79, 80, 89, 91, 92, 93, 96, 99, 102, 105, 108, 111, 113, 115, 116, 119, 121, 128, 140, 176, 184, 185, 198, 206, 207, 222, 224, 230, 231, 232, 233, 250, 252, 258, 259, 261, 262, 263, 272, 273, 275, 278, 281, 283, 285, 289, 291, 296, 298, 299, 348, 353, 354, 355, 364, 366, 367, 368, 377, 380, 383, 386, 388, 390, 394, 396, 401, 413, 452, 457, 458, 459, 460, 469, 471, 472, 473, 476, 479, 482, 485, 488, 491, 493, 495, 496, 499, 501, 508, 520], "assum": [9, 12, 18, 19, 20, 33, 43, 44, 47, 72, 79, 81, 87, 111, 115, 133, 134, 164, 177, 183, 185, 187, 199, 205, 207, 210, 223, 229, 233, 237, 251, 257, 281, 285, 299, 333, 334, 348, 354, 356, 362, 386, 390, 437, 452, 459, 461, 467, 491, 495, 513, 544, 557], "newer": [9, 23, 
32, 47, 48, 49, 54, 78, 80, 82, 210, 237, 335, 355, 357, 458, 460, 462, 559], "directli": [9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 67, 72, 87, 97, 105, 107, 125, 171, 172, 185, 193, 194, 207, 216, 223, 231, 232, 233, 244, 251, 267, 273, 275, 277, 294, 343, 348, 372, 380, 382, 399, 447, 452, 467, 477, 485, 487, 505], "repositori": [9, 11, 12, 13, 16, 18, 19, 20, 22, 23, 25, 26, 31, 34, 35, 36, 38, 41, 42, 43, 44], "preferenti": [9, 89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "As": [9, 12, 18, 19, 20, 22, 25, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 54, 68, 69, 71, 72, 78, 79, 81, 82, 88, 91, 102, 121, 173, 177, 184, 185, 187, 195, 199, 206, 207, 210, 217, 218, 220, 223, 230, 233, 237, 245, 246, 248, 251, 258, 261, 272, 291, 298, 299, 334, 335, 344, 345, 347, 348, 353, 354, 356, 357, 363, 366, 377, 396, 448, 449, 451, 452, 458, 459, 461, 462, 468, 471, 482, 501], "rule": [9, 43, 44, 47, 48, 54, 63, 71, 74, 169, 175, 190, 197, 213, 220, 221, 241, 248, 249, 340, 347, 350, 443, 451, 454], "tightli": 9, "test": [9, 11, 13, 14, 16, 17, 18, 19, 20, 22, 25, 27, 29, 31, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 49, 54, 64, 65, 68, 71, 72, 79, 88, 92, 93, 94, 95, 103, 108, 109, 110, 111, 113, 115, 118, 128, 172, 173, 185, 192, 194, 195, 199, 206, 207, 215, 217, 220, 223, 230, 231, 233, 243, 245, 248, 251, 258, 273, 279, 280, 281, 285, 296, 299, 342, 344, 347, 348, 354, 363, 378, 384, 385, 386, 390, 401, 444, 445, 448, 451, 452, 459, 468, 472, 473, 474, 475, 483, 488, 489, 490, 491, 493, 495, 498, 508, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "choic": [9, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 556], "doesn": [9, 11, 18, 19, 33, 43, 44, 48, 49, 54, 75, 79, 105, 207, 232, 233, 275, 299, 351, 354, 380, 455, 459, 485], "re": [9, 10, 12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 68, 69, 72, 74, 75, 79, 81, 88, 91, 102, 103, 109, 110, 111, 115, 121, 139, 140, 145, 173, 175, 182, 185, 187, 195, 197, 199, 204, 207, 210, 217, 221, 223, 228, 231, 233, 237, 245, 249, 251, 258, 261, 272, 273, 279, 280, 281, 285, 291, 299, 308, 334, 344, 345, 348, 350, 351, 354, 356, 363, 366, 377, 378, 384, 385, 386, 390, 396, 412, 413, 418, 448, 449, 452, 454, 455, 459, 461, 468, 471, 482, 483, 489, 490, 491, 495, 501, 519, 520, 525, 551, 553, 556, 558, 559], "roll": [9, 11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 78, 94, 113, 114, 118, 128, 144, 185, 187, 207, 210, 233, 237, 284, 296, 298, 313, 353, 389, 401, 417, 458, 474, 493, 494, 498, 508, 524], "own": [9, 10, 12, 18, 19, 20, 22, 26, 35, 38, 39, 43, 44, 47, 48, 49, 54, 63, 79, 88, 89, 97, 98, 107, 108, 112, 119, 125, 128, 169, 184, 185, 190, 206, 207, 213, 230, 233, 241, 258, 267, 268, 277, 278, 282, 294, 296, 299, 340, 354, 363, 372, 373, 382, 383, 387, 399, 401, 443, 459, 468, 469, 477, 478, 487, 488, 492, 499, 505, 508], "awar": [9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 49, 58, 82, 111, 115, 237, 281, 285, 335, 357, 386, 390, 462, 491, 495], "capabl": [9, 47, 48, 72, 185, 199, 207, 223, 233, 251, 348, 452], "choos": [9, 13, 14, 18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54], "exactli": [9, 22, 48, 63, 109, 110, 111, 115, 169, 185, 190, 207, 208, 213, 223, 233, 235, 241, 251, 279, 280, 281, 285, 300, 340, 384, 385, 386, 390, 443, 489, 490, 491, 495], "upgrad": [9, 18, 19, 20, 22, 32, 33, 35, 36, 37, 38, 39, 43, 44, 72, 79, 80, 82, 84, 105, 128, 164, 178, 185, 187, 200, 207, 210, 224, 232, 233, 237, 252, 254, 275, 296, 299, 333, 335, 354, 355, 
357, 359, 380, 401, 437, 452, 459, 460, 462, 464, 485, 508, 544, 558, 559], "particularli": [9, 33, 48, 51, 54, 72, 79, 86, 177, 182, 185, 199, 204, 207, 223, 228, 233, 251, 256, 299, 348, 354, 361, 452, 459, 466, 554], "conveni": [9, 18, 19, 20, 22, 27, 35, 36, 38, 43, 44, 48, 49, 88, 184, 206, 230, 258, 363, 468], "desktop": [9, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44], "appropri": [9, 12, 18, 19, 20, 22, 35, 36, 38, 43, 44, 48, 49, 50, 54, 63, 72, 78, 79, 88, 132, 140, 169, 177, 179, 181, 184, 185, 186, 190, 199, 201, 203, 206, 207, 209, 213, 222, 223, 225, 227, 230, 233, 236, 241, 250, 251, 253, 255, 258, 298, 299, 301, 340, 348, 353, 354, 363, 405, 413, 443, 452, 458, 459, 468, 512, 520, 553, 555, 557], "deploy": 9, "binari": [9, 14, 16, 25, 27, 31, 33, 45, 80, 83, 355, 358, 460, 463], "specif": [9, 11, 12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 41, 43, 44, 47, 48, 52, 54, 62, 71, 72, 74, 75, 77, 79, 80, 81, 82, 87, 105, 109, 110, 118, 132, 133, 137, 144, 158, 175, 178, 183, 185, 187, 197, 199, 200, 205, 207, 210, 220, 221, 223, 224, 229, 231, 232, 233, 236, 237, 240, 248, 249, 251, 252, 257, 273, 275, 279, 280, 296, 299, 301, 302, 306, 313, 327, 334, 335, 339, 347, 348, 350, 351, 354, 355, 356, 357, 362, 380, 384, 385, 405, 406, 410, 417, 431, 442, 451, 452, 454, 455, 457, 459, 460, 461, 462, 467, 485, 489, 490, 512, 513, 517, 524, 538], "enterpris": [9, 47, 54], "red": 9, "hat": 9, "applic": [9, 11, 12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 51, 72, 77, 79, 81, 82, 91, 102, 121, 128, 132, 149, 150, 177, 185, 187, 199, 207, 210, 223, 233, 236, 237, 251, 261, 272, 291, 296, 299, 301, 318, 319, 334, 348, 354, 356, 366, 377, 396, 401, 405, 422, 423, 452, 457, 459, 461, 462, 471, 482, 501, 508, 512, 529, 530, 556, 557, 561, 562, 563], "style": [9, 10, 12, 32, 48, 63, 66, 72, 79, 169, 177, 185, 190, 199, 207, 213, 223, 233, 241, 251, 299, 340, 348, 354, 443, 446, 452, 459], "either": [9, 10, 18, 19, 20, 22, 27, 32, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 55, 72, 75, 77, 79, 80, 81, 82, 85, 88, 91, 94, 102, 105, 109, 110, 111, 114, 115, 121, 128, 134, 137, 142, 157, 166, 167, 177, 178, 184, 185, 187, 199, 200, 206, 207, 210, 223, 224, 230, 231, 232, 233, 237, 251, 252, 258, 261, 264, 272, 273, 275, 279, 280, 281, 284, 285, 291, 296, 299, 303, 306, 311, 326, 334, 336, 348, 351, 354, 355, 356, 357, 360, 363, 366, 369, 377, 380, 384, 385, 386, 389, 390, 396, 401, 407, 410, 415, 430, 439, 440, 452, 455, 457, 459, 460, 461, 462, 465, 468, 471, 474, 482, 485, 489, 490, 491, 494, 495, 501, 508, 514, 517, 522, 537, 546, 547, 555, 556, 558, 563], "rebuilt": 9, "streamlin": 9, "Be": [9, 54, 75, 82, 88, 184, 206, 230, 237, 258, 335, 351, 357, 363, 455, 462, 468], "gnu": [9, 18, 19, 20, 22, 23, 41, 45], "To": [9, 10, 15, 18, 19, 20, 22, 23, 26, 27, 29, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 47, 48, 49, 54, 63, 68, 72, 78, 79, 80, 81, 87, 89, 94, 100, 103, 105, 109, 110, 111, 113, 114, 115, 118, 119, 120, 123, 127, 128, 144, 146, 156, 169, 173, 176, 177, 178, 183, 185, 187, 190, 195, 198, 199, 200, 205, 207, 210, 213, 217, 222, 223, 224, 229, 231, 232, 233, 237, 241, 245, 250, 251, 252, 257, 259, 270, 273, 275, 279, 280, 281, 284, 285, 289, 290, 296, 298, 299, 313, 315, 325, 334, 340, 344, 348, 353, 354, 355, 356, 362, 364, 375, 378, 380, 384, 385, 386, 389, 390, 394, 395, 401, 417, 419, 429, 443, 448, 452, 458, 459, 460, 461, 467, 469, 474, 480, 483, 485, 489, 490, 491, 493, 494, 495, 498, 499, 500, 503, 507, 508, 524, 526, 536, 552, 559, 560, 563], "sure": [9, 10, 
12, 18, 19, 20, 21, 22, 27, 28, 32, 33, 35, 36, 37, 38, 39, 40, 43, 44, 49, 54, 63, 79, 169, 190, 213, 233, 241, 299, 340, 354, 443, 459, 559, 561, 562, 563], "macro": [9, 63, 169, 190, 213, 241, 340, 443], "abi": 9, "stablelist": 9, "sed": [9, 12, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44], "crb": 9, "j1": 9, "localinstal": 9, "noarch": [9, 25, 26, 31, 32], "know": [9, 32, 48, 63, 72, 79, 156, 169, 185, 187, 190, 210, 213, 233, 237, 241, 251, 299, 325, 340, 348, 354, 429, 443, 452, 459, 536], "educ": 9, "guess": [9, 87, 205, 229, 257, 362, 467], "unabl": [9, 11, 47, 48, 49, 54, 72, 80, 81, 88, 89, 119, 128, 177, 184, 185, 199, 206, 207, 223, 230, 233, 251, 258, 296, 348, 356, 363, 401, 452, 460, 461, 468, 469, 499, 508, 550, 551, 552, 553, 554, 555], "exact": [9, 49, 65, 78, 79, 96, 97, 99, 101, 105, 107, 116, 125, 142, 146, 148, 152, 157, 159, 163, 185, 187, 192, 207, 210, 215, 232, 233, 237, 243, 266, 267, 269, 271, 275, 277, 286, 294, 298, 299, 311, 315, 317, 321, 326, 328, 332, 342, 353, 354, 371, 372, 374, 376, 380, 382, 391, 399, 415, 419, 421, 425, 430, 432, 436, 445, 458, 459, 476, 477, 479, 481, 485, 487, 496, 505, 522, 526, 528, 532, 537, 539, 543, 554], "produc": [9, 36, 65, 72, 79, 80, 111, 115, 165, 185, 200, 207, 223, 224, 233, 251, 252, 281, 285, 299, 342, 348, 354, 355, 386, 390, 438, 445, 452, 459, 460, 491, 495, 545, 557], "spec": 9, "redhat": [9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "libpam0g": 9, "miss": [9, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 49, 62, 72, 79, 80, 88, 111, 115, 140, 144, 148, 164, 176, 177, 187, 198, 199, 207, 210, 222, 223, 233, 237, 240, 250, 251, 252, 281, 285, 299, 313, 333, 339, 348, 354, 355, 386, 390, 413, 417, 437, 442, 452, 459, 460, 468, 491, 495, 520, 524, 528, 544, 552, 553, 563, 564], "rm": [9, 14, 16, 18, 19, 25, 31, 35, 36, 37, 38, 39, 43], "dkms_": 9, "dracut_": 9, "initramf": [9, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44], "product": [9, 26, 32, 47, 48, 49, 72, 79, 88, 92, 93, 94, 108, 113, 118, 128, 184, 185, 206, 207, 223, 230, 233, 251, 258, 296, 299, 348, 354, 363, 401, 452, 459, 468, 472, 473, 474, 488, 493, 498, 508], "fetch": [9, 12, 79, 354, 459], "wget": [9, 12, 43], "tar": [9, 16, 25, 31, 37, 39], "gz": [9, 16, 25, 31, 33, 43], "xzf": [9, 33], "probabl": [9, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 82, 210, 223, 237, 251, 335, 357, 462], "weren": 9, "who": [9, 32, 47, 48, 72, 79, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "intend": [9, 47, 48, 49, 68, 72, 74, 81, 82, 137, 165, 176, 187, 195, 197, 198, 199, 210, 217, 221, 222, 223, 237, 245, 249, 250, 251, 306, 334, 335, 344, 348, 350, 356, 357, 410, 438, 448, 452, 454, 461, 462, 517, 545], "modifi": [9, 10, 11, 12, 33, 37, 39, 47, 48, 49, 54, 66, 72, 78, 79, 82, 95, 105, 111, 115, 128, 130, 164, 166, 167, 177, 185, 187, 199, 207, 210, 223, 232, 233, 237, 238, 251, 265, 275, 281, 285, 296, 299, 333, 335, 337, 348, 354, 357, 370, 380, 386, 390, 401, 403, 437, 446, 452, 458, 459, 462, 475, 485, 491, 495, 508, 510, 544, 546, 547], "decid": [9, 43, 44, 72, 251, 348, 452], "kind": [9, 33, 68, 81, 237, 334, 344, 356, 448, 461], "jump": 9, "section": [9, 10, 12, 18, 19, 20, 22, 28, 32, 33, 35, 36, 37, 38, 39, 43, 44, 46, 48, 49, 54, 66, 72, 77, 79, 80, 81, 82, 85, 92, 93, 96, 99, 101, 104, 105, 116, 118, 122, 128, 133, 137, 140, 159, 176, 177, 181, 185, 187, 198, 199, 203, 207, 210, 222, 223, 227, 232, 233, 237, 250, 251, 255, 262, 263, 266, 269, 271, 274, 275, 286, 288, 292, 296, 299, 302, 306, 328, 334, 348, 354, 355, 356, 
360, 367, 368, 371, 374, 376, 379, 380, 391, 393, 397, 401, 406, 410, 413, 432, 446, 452, 457, 459, 460, 461, 462, 465, 472, 473, 476, 479, 481, 484, 485, 496, 498, 502, 508, 513, 517, 520, 539, 550, 551, 552, 553, 555, 558, 559], "abov": [9, 10, 18, 19, 20, 22, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 51, 54, 55, 57, 62, 66, 68, 72, 75, 94, 103, 109, 110, 111, 115, 144, 161, 173, 177, 185, 187, 195, 199, 207, 210, 217, 223, 233, 237, 240, 245, 251, 264, 270, 279, 280, 281, 285, 290, 313, 330, 339, 344, 348, 351, 369, 378, 384, 385, 386, 390, 417, 434, 442, 446, 448, 452, 455, 474, 483, 489, 490, 491, 495, 524, 541, 550, 552], "basic": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 52, 59, 60, 66, 75, 87, 140, 176, 183, 198, 205, 222, 229, 250, 257, 351, 362, 413, 446, 455, 467, 520], "rundown": 10, "contribut": [10, 12, 41, 48, 60, 134], "md": [10, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47], "ve": [10, 12, 21, 63, 72, 169, 177, 190, 199, 213, 223, 241, 251, 340, 348, 443, 452], "never": [10, 11, 18, 19, 20, 22, 35, 36, 43, 44, 47, 48, 49, 71, 72, 77, 79, 80, 81, 82, 83, 85, 88, 111, 115, 176, 178, 179, 181, 185, 187, 198, 199, 200, 201, 203, 206, 207, 210, 220, 222, 223, 224, 225, 227, 230, 233, 237, 248, 250, 251, 252, 253, 255, 258, 281, 285, 299, 334, 335, 347, 348, 354, 355, 356, 357, 358, 360, 363, 386, 390, 451, 452, 457, 459, 460, 461, 462, 463, 465, 468, 491, 495], "littl": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 48, 49, 68, 78, 87, 185, 205, 207, 229, 233, 257, 298, 344, 353, 362, 448, 451, 458, 467], "global": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 67, 68, 77, 79, 87, 89, 105, 119, 185, 187, 205, 207, 220, 229, 232, 233, 257, 259, 275, 289, 299, 343, 344, 354, 362, 364, 380, 394, 447, 448, 457, 459, 467, 469, 485, 499], "my": [10, 49, 66, 199, 223, 251, 348, 446], "myemail": 10, "norepli": 10, "easiest": 10, "get": [10, 11, 13, 14, 16, 18, 19, 20, 21, 22, 25, 27, 31, 32, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 59, 60, 68, 72, 79, 80, 81, 82, 84, 89, 99, 101, 116, 119, 128, 130, 146, 157, 164, 173, 176, 185, 187, 195, 198, 207, 210, 217, 220, 222, 223, 233, 237, 245, 250, 251, 254, 269, 271, 286, 296, 299, 315, 326, 333, 334, 344, 348, 354, 356, 359, 374, 376, 391, 401, 403, 419, 430, 437, 448, 452, 459, 460, 461, 462, 464, 469, 479, 481, 496, 499, 508, 510, 526, 537, 544, 556, 559], "click": [10, 43, 44, 49, 111, 115, 281, 285, 386, 390, 491, 495], "fork": [10, 12, 17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44], "icon": [10, 49], "comput": [10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 72, 79, 177, 199, 223, 233, 251, 299, 348, 354, 452, 459], "come": [10, 20, 22, 48, 49, 72, 86, 96, 99, 111, 115, 116, 177, 182, 185, 199, 204, 207, 223, 228, 233, 251, 256, 266, 269, 281, 285, 286, 348, 361, 371, 374, 386, 390, 391, 452, 466, 476, 479, 491, 495, 496], "handi": [10, 33], "establish": [10, 79, 97, 107, 125, 185, 207, 233, 267, 277, 294, 299, 354, 372, 382, 399, 459, 477, 487, 505], "remot": [10, 12, 18, 19, 48, 58, 68, 109, 110, 111, 115, 128, 185, 207, 233, 279, 280, 281, 285, 296, 344, 384, 385, 386, 390, 401, 448, 489, 490, 491, 495, 508], "let": 10, "unrel": [10, 36], "b": [10, 12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 54, 63, 65, 68, 72, 74, 79, 80, 81, 87, 88, 93, 95, 96, 99, 109, 110, 111, 115, 116, 128, 132, 137, 164, 169, 172, 175, 177, 178, 183, 185, 186, 187, 190, 192, 194, 197, 199, 200, 205, 207, 209, 210, 213, 215, 221, 223, 224, 229, 233, 236, 237, 241, 243, 249, 
251, 252, 257, 263, 265, 266, 269, 281, 285, 286, 296, 299, 301, 333, 340, 342, 344, 348, 350, 354, 355, 362, 368, 370, 371, 374, 386, 390, 391, 401, 405, 437, 443, 445, 448, 452, 454, 459, 460, 461, 467, 468, 473, 475, 476, 479, 489, 490, 491, 495, 496, 508, 512, 517, 544], "next": [10, 36, 38, 46, 48, 49, 51, 63, 68, 72, 80, 87, 96, 99, 105, 116, 128, 169, 173, 177, 178, 185, 190, 195, 199, 200, 207, 213, 217, 223, 224, 232, 233, 241, 245, 251, 252, 257, 275, 296, 340, 344, 348, 355, 362, 380, 401, 443, 448, 452, 460, 467, 476, 479, 485, 496, 508], "step": [10, 12, 13, 33, 48, 72, 105, 111, 115, 251, 275, 281, 285, 348, 380, 386, 390, 452, 485, 491, 495, 559], "suno": 10, "local": [10, 12, 14, 16, 17, 18, 19, 20, 22, 25, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 43, 44, 48, 57, 72, 88, 89, 93, 96, 99, 103, 109, 110, 111, 115, 116, 119, 128, 142, 157, 177, 184, 185, 187, 199, 206, 207, 210, 223, 230, 231, 233, 237, 251, 258, 259, 263, 266, 269, 273, 279, 280, 281, 285, 286, 289, 296, 311, 326, 348, 363, 364, 368, 371, 374, 378, 384, 385, 386, 390, 391, 394, 401, 415, 430, 452, 468, 469, 473, 476, 479, 483, 489, 490, 491, 495, 496, 499, 508, 522, 537], "highli": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 71, 72, 80, 178, 200, 220, 223, 224, 248, 251, 252, 347, 348, 355, 451, 452, 460], "virtual": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 52, 54, 81, 82, 88, 133, 137, 140, 164, 176, 184, 187, 198, 206, 210, 220, 222, 230, 237, 250, 258, 302, 303, 306, 308, 322, 333, 334, 335, 356, 357, 363, 406, 410, 413, 437, 461, 462, 468, 513, 517, 520, 544, 554], "host": [10, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 49, 68, 72, 75, 81, 82, 96, 99, 109, 110, 111, 115, 116, 128, 131, 136, 144, 175, 185, 187, 195, 197, 207, 210, 217, 221, 223, 233, 237, 245, 251, 296, 305, 313, 334, 335, 344, 348, 351, 356, 357, 401, 404, 409, 417, 448, 452, 455, 461, 462, 476, 479, 489, 490, 491, 495, 496, 508, 511, 516, 524, 560], "checkstyl": 10, "correctli": [10, 18, 19, 20, 33, 35, 36, 38, 43, 44, 47, 49, 54, 105, 111, 115, 129, 156, 187, 210, 232, 237, 275, 281, 285, 297, 325, 380, 386, 390, 402, 429, 485, 491, 495, 509, 536, 559], "signoff": [10, 17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44], "editor": [10, 36, 38], "unstag": 10, "pleas": [10, 12, 17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44, 54, 72, 79, 80, 178, 185, 200, 207, 224, 233, 252, 296, 299, 354, 355, 459, 460, 550, 551, 552, 553, 559], "enter": [10, 14, 16, 18, 19, 20, 21, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 51, 68, 72, 91, 102, 121, 144, 158, 173, 177, 195, 199, 217, 223, 233, 237, 245, 251, 261, 272, 291, 313, 327, 344, 348, 366, 377, 396, 417, 431, 448, 452, 471, 482, 501, 524, 538, 559], "ignor": [10, 14, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 54, 55, 72, 74, 79, 80, 83, 85, 87, 88, 93, 96, 99, 103, 116, 144, 169, 175, 177, 179, 181, 184, 185, 187, 190, 197, 199, 200, 201, 203, 206, 207, 210, 213, 221, 223, 224, 225, 227, 230, 231, 233, 237, 241, 249, 251, 252, 253, 255, 258, 263, 266, 269, 273, 286, 299, 313, 340, 348, 350, 354, 355, 358, 360, 363, 368, 371, 374, 378, 391, 417, 452, 454, 459, 460, 463, 465, 467, 468, 473, 476, 479, 483, 496, 524, 563], "empti": [10, 18, 19, 20, 36, 37, 38, 39, 43, 44, 48, 63, 71, 72, 79, 80, 82, 104, 105, 109, 110, 111, 115, 122, 126, 128, 137, 169, 178, 181, 185, 187, 190, 199, 200, 203, 207, 210, 213, 220, 223, 224, 227, 232, 233, 237, 241, 248, 251, 252, 255, 274, 275, 292, 295, 296, 299, 306, 335, 340, 347, 348, 354, 355, 357, 379, 380, 
397, 400, 401, 410, 443, 451, 452, 459, 460, 462, 484, 485, 489, 490, 491, 495, 502, 506, 508, 517], "abort": [10, 72, 87, 109, 110, 111, 115, 140, 183, 205, 207, 222, 229, 233, 250, 257, 279, 280, 362, 384, 385, 386, 390, 413, 452, 467, 489, 490, 491, 495, 520], "reset": [10, 28, 88, 106, 184, 206, 230, 233, 258, 276, 363, 381, 468, 486], "hello": 10, "displai": [10, 48, 54, 62, 79, 87, 88, 89, 95, 96, 97, 99, 101, 104, 105, 107, 116, 119, 122, 124, 125, 128, 131, 133, 137, 140, 142, 143, 144, 146, 148, 152, 157, 158, 159, 162, 163, 164, 183, 184, 185, 187, 188, 205, 206, 207, 210, 211, 229, 230, 232, 233, 237, 238, 240, 257, 258, 259, 265, 266, 267, 269, 271, 274, 275, 277, 286, 289, 292, 293, 294, 296, 299, 300, 302, 306, 309, 311, 312, 313, 315, 317, 321, 326, 327, 328, 331, 332, 333, 337, 339, 354, 362, 363, 364, 370, 371, 372, 374, 376, 379, 380, 382, 391, 394, 397, 398, 399, 401, 404, 406, 410, 413, 415, 416, 417, 419, 421, 425, 430, 431, 432, 435, 436, 437, 442, 459, 467, 468, 469, 475, 476, 477, 479, 481, 484, 485, 487, 496, 499, 502, 504, 505, 508, 511, 513, 517, 520, 522, 523, 524, 526, 528, 532, 537, 538, 539, 542, 543, 544, 558], "guidelin": [10, 12], "charact": [10, 12, 36, 48, 74, 77, 79, 80, 82, 87, 88, 89, 95, 111, 115, 119, 128, 137, 146, 165, 175, 181, 183, 184, 185, 187, 197, 203, 205, 206, 207, 210, 221, 227, 229, 230, 233, 237, 249, 255, 257, 258, 259, 265, 281, 285, 289, 299, 306, 315, 335, 350, 354, 355, 357, 362, 363, 364, 370, 386, 390, 394, 410, 419, 438, 454, 457, 459, 460, 462, 467, 468, 469, 475, 491, 495, 499, 508, 517, 526, 545], "underneath": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 87, 467], "look": [10, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 54, 66, 72, 78, 87, 130, 133, 144, 146, 148, 158, 159, 164, 177, 187, 199, 205, 210, 223, 229, 237, 251, 257, 302, 313, 315, 317, 327, 328, 333, 348, 362, 403, 406, 417, 419, 421, 431, 432, 437, 446, 452, 458, 467, 510, 513, 524, 526, 528, 538, 539, 544, 550, 555], "close": [10, 45, 48, 49, 72, 79, 82, 126, 140, 176, 177, 185, 187, 198, 199, 207, 210, 222, 223, 233, 237, 250, 251, 295, 299, 335, 348, 354, 357, 400, 413, 452, 459, 462, 506, 520], "9998": 10, "9999": 10, "save": [10, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 49, 72, 79, 80, 87, 109, 110, 111, 115, 178, 200, 207, 223, 224, 233, 251, 252, 279, 280, 281, 285, 299, 348, 354, 355, 384, 385, 386, 390, 452, 459, 460, 467, 489, 490, 491, 495], "exit": [10, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 66, 68, 69, 71, 83, 87, 88, 105, 128, 130, 146, 148, 164, 179, 185, 187, 201, 205, 207, 210, 217, 225, 229, 232, 233, 237, 245, 253, 257, 258, 275, 296, 315, 317, 333, 344, 345, 358, 362, 363, 380, 401, 403, 419, 421, 437, 446, 448, 449, 463, 467, 468, 485, 508, 510, 526, 528, 544], "home": [10, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 54, 66, 78, 87, 89, 92, 93, 94, 96, 99, 101, 114, 116, 118, 119, 128, 172, 183, 185, 194, 205, 207, 229, 233, 257, 296, 298, 353, 362, 401, 446, 458, 467, 469, 472, 473, 474, 476, 479, 481, 494, 496, 498, 499, 508], "stretch": [10, 23, 41], "now": [10, 18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 63, 67, 72, 80, 82, 94, 100, 108, 120, 123, 127, 156, 161, 169, 171, 185, 190, 193, 207, 213, 216, 233, 241, 244, 251, 264, 270, 278, 290, 340, 343, 348, 357, 369, 375, 383, 395, 429, 443, 447, 452, 460, 462, 474, 480, 488, 500, 503, 507, 536, 541, 557, 559], "ask": [10, 48, 54, 72, 75, 79, 91, 102, 103, 104, 117, 121, 122, 231, 233, 251, 261, 
272, 273, 274, 291, 292, 299, 348, 351, 354, 366, 377, 378, 379, 392, 396, 397, 452, 455, 459, 471, 482, 483, 484, 497, 501, 502], "credenti": [10, 66, 446], "upload": 10, "button": 10, "recent": [10, 48, 49, 50, 71, 72, 82, 88, 109, 110, 114, 124, 140, 144, 164, 177, 184, 185, 199, 206, 207, 210, 220, 223, 230, 233, 237, 248, 251, 258, 279, 280, 284, 293, 309, 313, 333, 335, 347, 348, 357, 363, 384, 385, 389, 398, 413, 417, 437, 451, 452, 462, 468, 489, 490, 494, 504, 520, 524, 544, 558, 563], "sometim": [10, 43, 44, 80, 169, 190, 213, 241, 340, 355, 460], "plan": [10, 11, 14, 16, 25, 31, 48, 49, 54, 78, 185, 207, 233, 298, 353, 458], "along": [10, 48, 88, 91, 102, 105, 121, 126, 148, 163, 164, 184, 187, 206, 210, 230, 232, 233, 237, 258, 261, 272, 275, 291, 295, 317, 332, 333, 363, 366, 377, 380, 396, 400, 421, 436, 437, 468, 471, 482, 485, 501, 506, 528, 543, 544], "amend": [10, 12], "forc": [10, 12, 16, 18, 19, 20, 21, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 72, 75, 79, 87, 88, 104, 105, 109, 110, 113, 114, 122, 132, 133, 134, 137, 144, 149, 150, 154, 160, 164, 184, 185, 186, 187, 199, 206, 207, 209, 210, 223, 230, 231, 232, 233, 236, 237, 251, 258, 264, 273, 274, 275, 279, 280, 283, 284, 292, 299, 300, 301, 302, 303, 306, 307, 313, 318, 319, 323, 329, 333, 348, 351, 354, 362, 363, 379, 380, 384, 385, 388, 389, 397, 405, 406, 407, 410, 417, 422, 423, 427, 433, 437, 452, 455, 459, 467, 468, 484, 485, 489, 490, 493, 494, 502, 512, 513, 514, 517, 524, 529, 530, 534, 540, 544], "screen": [10, 18, 19, 20, 22, 35, 43, 44], "old": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 71, 72, 80, 87, 91, 102, 121, 134, 154, 169, 175, 177, 178, 183, 187, 190, 197, 199, 200, 205, 210, 213, 220, 221, 223, 224, 229, 237, 241, 248, 251, 252, 257, 261, 272, 291, 323, 347, 348, 355, 362, 366, 377, 396, 427, 451, 452, 460, 467, 471, 482, 501, 534, 557, 559], "ones": [10, 26, 49, 72, 80, 82, 88, 103, 111, 115, 237, 281, 285, 335, 357, 363, 378, 386, 390, 452, 460, 462, 468, 483, 491, 495, 559], "restart": [10, 18, 19, 20, 22, 28, 35, 36, 38, 43, 44, 48, 72, 80, 103, 153, 155, 156, 164, 177, 199, 210, 223, 224, 231, 237, 251, 252, 273, 322, 324, 325, 333, 348, 355, 378, 426, 428, 429, 437, 452, 460, 483, 533, 535, 536, 544], "excess": [10, 48, 49, 72, 251, 348, 452], "delai": [10, 52, 59, 60, 72, 75, 79, 82, 88, 132, 140, 176, 177, 184, 185, 198, 199, 206, 207, 209, 222, 223, 230, 233, 236, 237, 250, 251, 258, 299, 301, 335, 348, 351, 354, 357, 363, 405, 413, 452, 455, 459, 462, 468, 512, 520], "futur": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 66, 67, 72, 79, 88, 111, 115, 164, 166, 167, 171, 184, 185, 193, 206, 207, 210, 216, 230, 233, 237, 244, 258, 281, 285, 299, 333, 336, 343, 354, 363, 386, 390, 437, 439, 440, 446, 447, 452, 459, 468, 491, 495, 544, 546, 547, 549, 553, 555, 557, 563], "date": [10, 36, 37, 38, 39, 48, 66, 146, 148, 156, 159, 163, 187, 210, 237, 315, 317, 325, 328, 332, 419, 421, 429, 432, 436, 446, 526, 528, 536, 539, 543], "grab": [10, 48], "back": [10, 11, 12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 77, 78, 80, 81, 91, 102, 105, 111, 114, 115, 121, 128, 136, 144, 164, 185, 187, 199, 207, 210, 220, 223, 224, 232, 233, 237, 248, 251, 252, 261, 272, 275, 281, 284, 285, 291, 296, 298, 313, 334, 348, 353, 355, 356, 366, 377, 380, 386, 389, 390, 396, 401, 409, 417, 437, 452, 457, 458, 460, 461, 471, 482, 485, 491, 494, 495, 501, 508, 516, 524, 544, 559], "mani": [10, 11, 12, 18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 
49, 51, 54, 68, 72, 78, 79, 80, 132, 140, 172, 173, 176, 177, 178, 186, 194, 195, 198, 199, 200, 209, 217, 222, 223, 224, 233, 236, 240, 245, 250, 251, 252, 299, 301, 344, 348, 354, 355, 405, 413, 448, 452, 458, 459, 460, 512, 520], "Not": [10, 11, 47, 54, 82, 88, 133, 134, 137, 144, 154, 169, 184, 187, 190, 206, 210, 213, 230, 237, 241, 258, 302, 303, 306, 313, 323, 335, 340, 357, 363, 406, 407, 410, 417, 427, 462, 468, 513, 514, 517, 524, 534], "anyth": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 49, 79, 111, 115, 281, 285, 386, 390, 491, 495], "touch": [10, 18, 19, 20, 36, 37, 38, 39, 43, 44, 54, 72, 103, 231, 251, 273, 348, 378, 452, 483], "advanc": [10, 47, 48, 49, 58, 72, 79, 87, 205, 207, 229, 233, 257, 299, 348, 354, 362, 452, 459, 467, 557], "wiki": [10, 14, 16, 17, 22, 25, 31, 43, 44, 54], "articl": [10, 47, 54], "atlassian": 10, "tutori": [10, 18, 19, 20, 22, 35, 36, 38, 43, 44], "commit": [11, 12, 13, 17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44, 48, 49, 51, 57, 72, 79, 82, 111, 115, 128, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 281, 285, 296, 299, 335, 348, 354, 357, 386, 390, 401, 452, 459, 462, 491, 495, 508, 563], "explicitli": [11, 48, 54, 71, 75, 79, 81, 89, 96, 99, 106, 116, 119, 128, 144, 172, 185, 187, 194, 207, 210, 220, 233, 237, 248, 259, 266, 269, 276, 286, 289, 296, 299, 313, 334, 347, 351, 354, 356, 364, 371, 374, 381, 391, 394, 401, 417, 451, 455, 459, 461, 469, 476, 479, 486, 496, 499, 508, 524, 549, 551], "given": [11, 18, 19, 20, 22, 35, 36, 38, 43, 44, 47, 48, 49, 54, 55, 65, 68, 72, 74, 80, 81, 86, 87, 88, 89, 90, 91, 93, 94, 95, 96, 97, 98, 99, 102, 103, 104, 105, 106, 107, 109, 110, 111, 112, 113, 115, 116, 117, 119, 121, 122, 125, 126, 128, 132, 133, 134, 137, 138, 140, 141, 142, 144, 146, 148, 152, 154, 157, 159, 162, 163, 164, 175, 176, 177, 182, 184, 185, 187, 192, 197, 198, 199, 200, 204, 205, 206, 207, 210, 215, 221, 222, 223, 224, 228, 229, 230, 232, 233, 236, 237, 243, 249, 250, 251, 252, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 271, 272, 274, 275, 276, 277, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 291, 292, 294, 295, 296, 301, 302, 303, 306, 307, 310, 311, 313, 315, 317, 323, 326, 328, 331, 332, 333, 334, 342, 348, 350, 355, 356, 361, 362, 363, 364, 365, 366, 368, 369, 370, 371, 372, 373, 374, 377, 378, 379, 380, 381, 382, 384, 385, 386, 387, 388, 390, 391, 392, 394, 396, 397, 399, 400, 401, 405, 406, 407, 410, 411, 413, 414, 415, 417, 419, 421, 427, 430, 432, 435, 436, 437, 445, 448, 452, 454, 460, 461, 466, 467, 468, 469, 470, 471, 473, 474, 475, 476, 477, 478, 479, 482, 483, 484, 485, 486, 487, 489, 490, 491, 492, 493, 495, 496, 497, 499, 501, 502, 505, 506, 508, 512, 513, 514, 517, 518, 520, 521, 522, 524, 526, 528, 532, 534, 537, 539, 542, 543, 544], "varieti": [11, 33, 54, 79, 185, 207, 233, 299, 354, 459], "track": [11, 12, 13, 41, 47, 48, 55, 59, 60, 72, 78, 80, 81, 87, 91, 102, 103, 111, 115, 121, 178, 187, 200, 210, 224, 231, 233, 237, 251, 252, 261, 272, 273, 281, 285, 291, 334, 348, 355, 356, 366, 377, 378, 386, 390, 396, 452, 458, 460, 461, 467, 471, 482, 483, 491, 495, 501], "comment": [11, 12, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 54, 74, 77, 80, 82, 169, 175, 187, 190, 197, 210, 213, 221, 237, 241, 249, 335, 340, 350, 355, 357, 454, 457, 460, 462], "isn": [11, 32, 33, 48, 50, 54, 72, 79, 140, 176, 177, 198, 199, 207, 222, 223, 231, 233, 250, 251, 273, 299, 348, 354, 413, 452, 459, 520], "lack": [11, 47, 54, 72, 79, 135, 185, 199, 207, 223, 
233, 237, 251, 299, 304, 348, 354, 408, 452, 459, 515], "denot": [11, 48, 87, 184, 206, 230, 257, 258, 362, 467], "prior": [11, 47, 48, 49, 50, 55, 66, 72, 74, 79, 95, 128, 175, 177, 185, 197, 199, 207, 221, 223, 233, 249, 251, 296, 299, 348, 350, 354, 401, 446, 452, 454, 459, 475, 508], "appli": [11, 12, 22, 26, 28, 29, 32, 33, 35, 36, 37, 39, 47, 48, 49, 54, 66, 71, 72, 74, 79, 80, 91, 94, 96, 98, 99, 102, 109, 110, 111, 112, 115, 116, 121, 161, 175, 185, 197, 199, 207, 220, 221, 223, 233, 237, 248, 249, 251, 261, 264, 266, 268, 269, 272, 279, 280, 281, 282, 285, 286, 291, 299, 330, 347, 348, 350, 354, 355, 366, 369, 371, 373, 374, 377, 384, 385, 386, 387, 390, 391, 396, 434, 446, 451, 452, 454, 459, 460, 471, 474, 476, 478, 479, 482, 489, 490, 491, 492, 495, 496, 501, 541], "id": [11, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 57, 59, 60, 68, 74, 75, 77, 79, 80, 87, 88, 97, 105, 106, 107, 125, 128, 129, 131, 132, 140, 144, 164, 175, 176, 183, 184, 185, 186, 187, 195, 197, 198, 205, 206, 207, 209, 210, 217, 221, 222, 224, 229, 230, 232, 233, 236, 237, 245, 249, 250, 252, 257, 258, 267, 275, 276, 277, 294, 296, 297, 299, 301, 313, 333, 344, 350, 351, 354, 355, 362, 363, 372, 380, 381, 382, 399, 401, 402, 404, 405, 413, 417, 437, 448, 454, 455, 457, 459, 460, 467, 468, 477, 485, 486, 487, 505, 508, 509, 511, 512, 520, 524, 544, 564], "11453": 11, "check_disk": 11, "zol": [11, 12, 13, 18, 19, 20, 22, 35, 36, 38, 43, 44, 54, 55, 59, 60, 179, 185, 201, 225, 253], "11276": 11, "da68988": 11, "11052": 11, "2efea7c": 11, "11051": 11, "3b61ca3": 11, "10853": 11, "8dc2197": 11, "10844": 11, "61c3391": 11, "10842": 11, "d10b2f1": 11, "10841": 11, "944a372": 11, "10809": 11, "ee36c70": 11, "10808": 11, "2ef0f8c": 11, "10701": 11, "0091d66": 11, "10601": 11, "cc99f27": 11, "10573": 11, "48d3eb4": 11, "10572": 11, "edc1e71": 11, "10566": 11, "ab7615d": 11, "10554": 11, "bec1067": 11, "10500": 11, "03916905": 11, "10449": 11, "379ca9c": 11, "10406": 11, "da2feb4": 11, "10154": 11, "10067": 11, "remap": [11, 80, 86, 224, 252, 256, 355, 361, 460, 466], "9884": 11, "9851": 11, "9691": 11, "d9b4bf0": 11, "9683": 11, "devid": [11, 77, 164, 210, 237, 333, 437, 457, 544], "9680": 11, "9672": 11, "29445fe3": 11, "9647": 11, "a448a25": 11, "9626": 11, "59e6e7ca": 11, "9635": 11, "9623": 11, "22448f08": 11, "9621": 11, "305bc4b3": 11, "9539": 11, "5228cf01": 11, "9512": 11, "b4555c77": 11, "9487": 11, "48fbb9dd": 11, "9466": 11, "272b5d73": 11, "9440": 11, "f664f1e": 11, "ticket": 11, "land": [11, 48], "9433": 11, "0873bb63": 11, "9421": 11, "64c1dcef": 11, "9237": 11, "introduc": [11, 47], "8567": 11, "9194": 11, "9077": 11, "9027": 11, "4a5d7f82": 11, "9018": 11, "3ec34e55": 11, "8984": 11, "wip": 11, "nfsv4": [11, 36, 37, 38, 39, 79, 185, 207, 233, 299, 354, 459], "8969": 11, "8942": 11, "650258d7": 11, "8941": 11, "390d679a": 11, "8862": 11, "3b9edd7": 11, "8858": 11, "8856": 11, "encrypt": [11, 14, 16, 25, 27, 28, 31, 54, 72, 75, 78, 79, 80, 87, 89, 91, 102, 103, 104, 109, 110, 111, 115, 117, 119, 121, 122, 128, 144, 152, 156, 158, 166, 167, 223, 224, 231, 233, 237, 251, 252, 259, 261, 272, 273, 274, 279, 280, 281, 285, 289, 291, 292, 296, 299, 313, 321, 327, 348, 351, 354, 355, 364, 366, 377, 378, 379, 384, 385, 386, 390, 392, 394, 396, 397, 401, 417, 425, 431, 452, 455, 458, 459, 460, 467, 469, 471, 482, 483, 484, 489, 490, 491, 495, 497, 499, 501, 502, 508, 524, 532, 536, 538, 546, 547, 559], "b525630": 11, "8809": 11, "libfakekernel": 11, "refactor": 11, "8727": 11, 
"8713": 11, "871e0732": 11, "8661": 11, "1ce23dca": 11, "8648": 11, "f763c3d1": 11, "8602": 11, "a032ac4": 11, "8601": [11, 66, 446], "d99a015": 11, "equival": [11, 33, 47, 48, 49, 55, 63, 72, 75, 79, 81, 91, 93, 102, 103, 104, 105, 111, 115, 117, 121, 122, 132, 137, 154, 166, 167, 176, 181, 185, 187, 198, 203, 207, 209, 210, 227, 232, 233, 236, 237, 255, 261, 263, 272, 274, 275, 281, 285, 291, 292, 299, 301, 306, 323, 334, 336, 348, 351, 354, 356, 366, 368, 377, 378, 379, 380, 386, 390, 392, 396, 397, 405, 410, 427, 439, 440, 443, 452, 455, 459, 461, 471, 473, 482, 483, 484, 485, 491, 495, 497, 501, 502, 512, 517, 534, 546, 547], "initi": [11, 12, 13, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 48, 68, 72, 78, 79, 81, 82, 84, 91, 92, 94, 102, 109, 110, 111, 115, 121, 128, 133, 134, 143, 152, 154, 159, 161, 163, 164, 185, 187, 207, 210, 223, 233, 237, 251, 254, 261, 264, 272, 279, 280, 281, 285, 291, 296, 298, 299, 302, 303, 312, 321, 323, 328, 330, 332, 333, 334, 335, 344, 348, 353, 354, 356, 357, 359, 366, 369, 377, 384, 385, 386, 390, 396, 401, 406, 407, 416, 425, 427, 432, 434, 436, 437, 448, 452, 458, 459, 461, 462, 464, 471, 472, 474, 482, 489, 490, 491, 495, 501, 508, 513, 514, 523, 532, 534, 539, 541, 543, 544], "8590": 11, "935e2c2": 11, "8569": 11, "relev": [11, 12, 33, 43, 48, 100, 105, 120, 145, 161, 185, 207, 233, 237, 275, 299, 314, 330, 354, 375, 380, 395, 418, 434, 480, 485, 500, 525, 541], "8552": 11, "8521": 11, "ee6370a7": 11, "8502": 11, "7955": 11, "9485": 11, "1258bd7": 11, "8477": 11, "92e43c1": 11, "8454": 11, "8423": 11, "50c957f": 11, "8408": 11, "5f1346c": 11, "8379": 11, "8376": 11, "8311": 11, "assess": 11, "8304": 11, "8300": [11, 22, 35], "44f09cd": 11, "8265": 11, "large_dnod": [11, 79, 80, 200, 207, 224, 233, 252, 299, 354, 355, 459, 460], "8168": 11, "78d95ea": 11, "8138": 11, "spell": 11, "came": [11, 49, 559], "mdoc": 11, "convers": [11, 47, 48, 54, 71, 105, 220, 232, 248, 275, 347, 380, 451, 485], "8108": 11, "8068": 11, "a1d477c24c": 11, "evacu": [11, 80, 152, 224, 237, 252, 321, 355, 425, 460, 532], "8064": 11, "8022": 11, "e55ebf6": 11, "8021": 11, "7657def": 11, "8013": 11, "7982": 11, "7970": 11, "c30e58c": 11, "7956": 11, "cda0317": 11, "7869": 11, "df7eecc": 11, "7816": 11, "7803": 11, "upda": 11, "te_vdev_config_dev_str": 11, "7801": 11, "0eef1bd": 11, "f25efb3": 11, "7779": 11, "zfs_ctldir": 11, "rewritten": [11, 55], "7740": 11, "32d41fb": 11, "7739": 11, "582cc014": 11, "7730": 11, "e24e62a": 11, "7710": 11, "under": [11, 21, 26, 32, 33, 37, 39, 45, 47, 48, 49, 54, 69, 72, 74, 75, 78, 79, 80, 81, 82, 87, 88, 89, 96, 99, 116, 119, 128, 130, 148, 164, 176, 177, 181, 183, 184, 185, 187, 198, 199, 200, 203, 205, 206, 207, 210, 218, 220, 222, 223, 224, 227, 229, 230, 233, 237, 246, 248, 249, 250, 251, 252, 255, 257, 258, 259, 289, 296, 298, 299, 333, 334, 335, 345, 348, 350, 351, 353, 354, 355, 356, 357, 362, 363, 364, 394, 401, 403, 437, 449, 452, 454, 455, 458, 459, 460, 461, 462, 467, 468, 469, 476, 479, 496, 499, 508, 510, 528, 544, 557], "7602": 11, "7591": 11, "541a090": 11, "7586": 11, "c443487": 11, "7570": 11, "discard": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 54, 72, 79, 80, 81, 96, 99, 109, 110, 114, 116, 128, 135, 144, 163, 177, 185, 187, 199, 207, 210, 223, 224, 233, 237, 251, 252, 279, 280, 284, 296, 299, 304, 313, 332, 334, 348, 354, 355, 356, 384, 385, 389, 401, 408, 417, 436, 452, 459, 460, 461, 476, 479, 489, 490, 494, 496, 508, 515, 524, 543, 555], "asynchron": [11, 18, 19, 20, 22, 35, 36, 38, 43, 
44, 48, 51, 67, 69, 72, 81, 82, 146, 171, 177, 187, 193, 199, 210, 216, 218, 223, 237, 244, 246, 251, 315, 334, 335, 343, 345, 348, 356, 357, 419, 447, 449, 452, 461, 462, 526, 559], "unclear": 11, "purpos": [11, 47, 48, 49, 65, 68, 72, 77, 79, 80, 81, 82, 87, 111, 115, 137, 164, 173, 183, 185, 187, 192, 195, 199, 200, 205, 207, 210, 215, 217, 223, 224, 229, 233, 237, 243, 245, 251, 252, 257, 281, 285, 299, 333, 334, 335, 342, 344, 348, 354, 355, 356, 357, 362, 386, 390, 437, 445, 448, 452, 457, 459, 460, 461, 462, 467, 491, 495, 517, 544, 559], "7542": 11, "libshar": 11, "address": [11, 12, 14, 16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 79, 80, 88, 96, 99, 116, 128, 184, 185, 206, 207, 220, 224, 230, 233, 252, 258, 296, 354, 355, 363, 401, 459, 460, 468, 476, 479, 496, 508], "eventu": [11, 72, 251, 348, 452], "retir": [11, 72, 452], "flexibli": 11, "share": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 78, 79, 80, 81, 82, 84, 87, 88, 89, 94, 96, 99, 111, 115, 116, 119, 128, 133, 137, 141, 164, 172, 184, 185, 187, 194, 206, 207, 210, 230, 233, 237, 251, 254, 258, 259, 264, 281, 285, 289, 296, 298, 299, 302, 306, 310, 333, 334, 335, 348, 353, 354, 355, 356, 357, 359, 363, 364, 369, 386, 390, 394, 401, 406, 410, 414, 437, 452, 458, 459, 460, 461, 462, 464, 467, 468, 469, 474, 476, 479, 491, 495, 496, 499, 508, 513, 517, 521, 544, 555], "7512": 11, "7497": 11, "dtrace": [11, 105, 232, 275, 380, 485], "readili": 11, "7446": 11, "7430": 11, "68cbd56": 11, "7402": 11, "690fe64": 11, "7345": 11, "058ac9b": 11, "7278": 11, "arc": [11, 47, 49, 54, 62, 72, 79, 81, 132, 177, 185, 186, 199, 207, 209, 223, 233, 236, 240, 251, 299, 301, 334, 339, 348, 354, 356, 405, 442, 452, 459, 461, 512], "tune": [11, 18, 19, 20, 35, 36, 37, 38, 39, 43, 44, 47, 48, 50, 54, 59, 60, 72, 77, 79, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 457, 459], "slightli": [11, 47, 48, 49, 68, 72, 80, 134, 156, 173, 195, 199, 217, 220, 223, 237, 245, 251, 325, 344, 348, 355, 429, 448, 452, 460, 536], "cover": [11, 41, 72, 348, 452], "arc_tuning_upd": 11, "7238": 11, "zvol_swap": 11, "alreadi": [11, 18, 19, 20, 22, 32, 33, 35, 36, 37, 38, 39, 48, 49, 50, 54, 72, 74, 79, 80, 91, 92, 93, 98, 102, 105, 109, 110, 112, 121, 133, 144, 155, 164, 166, 167, 175, 177, 178, 185, 187, 197, 199, 200, 207, 210, 221, 223, 224, 232, 233, 237, 249, 251, 252, 261, 262, 263, 268, 272, 275, 279, 280, 282, 291, 299, 313, 324, 333, 348, 350, 354, 355, 366, 367, 368, 373, 377, 380, 384, 385, 387, 396, 417, 428, 437, 452, 454, 459, 460, 471, 472, 473, 478, 482, 485, 489, 490, 492, 501, 513, 524, 535, 544, 546, 547], "7194": 11, "d7958b4": 11, "7164": 11, "b1b85c87": 11, "7041": 11, "33c0819": 11, "7016": 11, "d3c2ae1": 11, "6914": 11, "arc_meta_limit": [11, 48], "zfs_arc_meta_limit_perc": [11, 199, 223, 251, 348], "6875": 11, "6843": 11, "f5f087e": 11, "6841": 11, "4254acb": 11, "6781": 11, "15313c5": 11, "6765": 11, "6764": 11, "6763": 11, "6762": 11, "6648": 11, "6bb24f4": 11, "6578": 11, "6577": 11, "6575": 11, "6568": 11, "6528": 11, "6494": 11, "vdev_disk": 11, "vdev_fil": 11, "rework": 11, "propos": 11, "6468": 11, "6465": 11, "6434": 11, "472e7c6": 11, "6421": 11, "ca0bf58": 11, "6418": 11, "131cc95": 11, "6391": 11, "ee06391": [11, 12], "6390": 11, "85802aa": 11, "6388": 11, "0de7c55": 11, "6386": 11, "485c581": 11, "6385": 11, "f3ad9cd": 11, "6369": 11, "6368": 11, "2024041": 11, "6346": 11, "6334": 11, "1a04bab": 11, "6290": 11, "017da6": 11, "6250": 11, "6249": 11, "6248": 11, "6220": 
11, "b_thaw": 11, "unus": [11, 47, 48, 54, 72, 80, 82, 199, 223, 237, 251, 335, 348, 355, 357, 452, 460, 462, 557], "6209": 11, "mutex": [11, 48, 72, 223, 251, 348, 452], "phtread": 11, "primit": [11, 48, 72, 223, 251, 348, 452], "6095": 11, "f866a4ea": 11, "6091": 11, "c11f100": 11, "6037": 11, "a8bd6dc": 11, "5984": 11, "480f626": 11, "5966": 11, "5961": 11, "22872ff": 11, "5882": 11, "83e9986": 11, "5815": 11, "5770": 11, "c3275b5": 11, "5769": 11, "dd26aa5": 11, "5768": 11, "5766": 11, "4dd1893": 11, "5693": 11, "0f7d2a4": 11, "5692": 11, "filefrag": 11, "5684": 11, "5503": 11, "0f676dc": 11, "deploi": [11, 47, 54], "7072": 11, "5502": 11, "f0ed6c7": 11, "5410": 11, "0bf8501": 11, "5409": 11, "b23d543": 11, "5379": 11, "zfs_putpag": 11, "5316": 11, "idmap": 11, "facil": [11, 48, 72, 78, 87, 177, 183, 199, 205, 223, 229, 251, 257, 348, 362, 452, 458, 467], "delta": [11, 177], "have_idmap": 11, "chunk": [11, 48, 72, 79, 172, 177, 185, 194, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "readabl": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 54, 67, 77, 79, 87, 91, 96, 99, 102, 116, 121, 171, 172, 183, 185, 193, 194, 205, 207, 216, 229, 233, 244, 257, 261, 266, 269, 272, 286, 291, 299, 343, 354, 362, 366, 371, 374, 377, 391, 396, 447, 457, 459, 467, 471, 476, 479, 482, 496, 501], "5313": 11, "ec8501": 11, "5312": 11, "fold": 11, "cleanup": [11, 12, 48, 72, 75, 172, 194, 251, 348, 351, 452, 455], "5219": 11, "ef56b07": 11, "5179": 11, "3f4058c": 11, "5154": 11, "9a49d3f": 11, "5149": 11, "zvol_max_discard_block": [11, 72, 177, 199, 223, 251, 348, 452], "5148": 11, "dkiocfre": 11, "ioctl": [11, 72, 140, 176, 198, 222, 250, 251, 348, 413, 452, 520], "5136": 11, "e8b96c6": 11, "4752": 11, "aa9af22": 11, "4745": 11, "411bf20": 11, "4698": 11, "4fcc437": 11, "4620": 11, "4573": 11, "10b7549": 11, "4571": 11, "6e1b9d0": 11, "4570": 11, "b1d13a6": 11, "4391": 11, "78e2739": 11, "4465": 11, "4263": 11, "4242": 11, "neither": [11, 66, 75, 79, 80, 89, 97, 107, 109, 110, 111, 115, 119, 125, 183, 185, 205, 207, 233, 259, 267, 277, 279, 280, 281, 285, 289, 294, 299, 351, 354, 364, 372, 382, 384, 385, 386, 390, 394, 399, 446, 455, 459, 460, 469, 477, 487, 489, 490, 491, 495, 499, 505], "vnode": 11, "4206": 11, "2820bc4": 11, "4188": 11, "2e7b765": 11, "4181": 11, "4161": 11, "reader": [11, 33, 37, 39, 48], "writer": [11, 48, 72, 199, 223, 251, 348, 452], "4128": 11, "ldi_ev_register_callback": 11, "notif": 11, "scsi": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 54, 74, 86, 175, 197, 221, 222, 249, 250, 256, 350, 361, 413, 454, 466], "handler": [11, 132, 186, 209, 236, 301, 405, 512], "4072": 11, "3998": 11, "417104bd": 11, "3947": 11, "7f9d994": 11, "3928": 11, "3871": 11, "d1d7e268": 11, "3747": 11, "090ff09": 11, "3705": 11, "lz4": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 67, 72, 79, 80, 166, 167, 171, 178, 185, 193, 200, 207, 216, 224, 233, 244, 252, 299, 343, 354, 355, 447, 452, 459, 460, 546, 547], "workspac": 11, "kmem": [11, 48, 71, 220, 248, 347, 451], "cach": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 58, 62, 67, 71, 72, 75, 79, 81, 82, 87, 103, 132, 133, 144, 146, 147, 152, 164, 171, 177, 183, 185, 186, 187, 193, 199, 205, 207, 209, 210, 216, 220, 223, 229, 231, 233, 236, 237, 240, 244, 248, 251, 257, 273, 299, 301, 313, 316, 321, 333, 334, 335, 339, 343, 347, 348, 351, 354, 356, 357, 362, 378, 405, 417, 420, 425, 437, 442, 447, 451, 452, 455, 459, 461, 462, 467, 483, 512, 513, 524, 526, 527, 532, 544, 564], "resolv": [11, 12, 16, 18, 19, 20, 25, 31, 
32, 36, 38, 43, 44, 47, 54, 85, 133, 146, 148, 158, 159, 187, 210, 237, 302, 315, 317, 327, 328, 360, 406, 419, 421, 431, 432, 465, 513, 526, 528, 538, 539, 549, 560, 561, 562, 563], "stack": [11, 47, 48, 54, 68, 105, 173, 176, 195, 198, 217, 222, 232, 245, 250, 275, 344, 380, 448, 485], "3606": 11, "c5b247f": 11, "3580": 11, "3543": 11, "8dca0a9": 11, "3512": 11, "67629d0": 11, "3507": 11, "43a696": 11, "3444": 11, "3371": 11, "3311": 11, "3301": 11, "3258": 11, "9d81146": 11, "3254": 11, "3246": 11, "cc92e9d": 11, "2933": 11, "2897": 11, "fb82700": 11, "2665": 11, "32a9872": 11, "2130": 11, "460a021": 11, "1974": 11, "restructur": 11, "1898": 11, "vm": [11, 14, 16, 23, 25, 28, 31], "1700": 11, "1618": 11, "ca67b33": 11, "1337": 11, "2402458": 11, "1126": 11, "e43b290": 11, "763": 11, "3cee226": 11, "742": 11, "701": 11, "348": 11, "243": 11, "manual": [11, 14, 16, 18, 19, 20, 22, 25, 27, 31, 35, 36, 37, 38, 39, 43, 44, 47, 52, 54, 62, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 169, 173, 175, 176, 177, 178, 181, 182, 185, 190, 192, 195, 197, 198, 199, 200, 203, 204, 205, 207, 208, 210, 213, 215, 217, 218, 220, 221, 222, 223, 224, 227, 228, 229, 231, 232, 233, 235, 237, 240, 241, 243, 244, 245, 246, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 339, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 559, 561, 562], "184": 11, "act": [12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 66, 72, 79, 105, 185, 199, 207, 223, 232, 233, 251, 275, 299, 348, 354, 380, 446, 452, 459, 485], "regularli": [12, 47, 54], "outstand": [12, 47, 48, 50, 51, 54, 71, 72, 87, 177, 183, 199, 205, 220, 223, 229, 248, 251, 257, 347, 348, 362, 451, 452, 467, 559], "submit": [12, 25, 48, 72, 132, 140, 176, 198, 199, 209, 222, 223, 236, 250, 251, 301, 348, 405, 413, 452, 512, 520], "inclus": 
[12, 72, 81, 82, 94, 130, 185, 207, 210, 233, 237, 264, 334, 335, 356, 357, 369, 403, 452, 461, 462, 474, 510], "great": [12, 49, 54], "familiar": 12, "yourself": [12, 18, 19, 20, 35, 36, 38], "quickli": [12, 47, 48, 49, 51, 54, 71, 72, 78, 79, 134, 154, 161, 177, 185, 199, 207, 220, 223, 233, 237, 248, 251, 298, 303, 323, 330, 347, 348, 353, 407, 427, 434, 451, 452, 458, 459, 514, 534, 541], "valuabl": 12, "guid": [12, 14, 16, 17, 18, 19, 20, 22, 25, 28, 31, 34, 35, 36, 37, 38, 39, 43, 44, 47, 48, 53, 67, 72, 77, 79, 80, 81, 82, 105, 132, 133, 140, 146, 148, 158, 159, 164, 171, 176, 178, 185, 186, 187, 193, 198, 200, 207, 209, 210, 216, 222, 224, 232, 233, 236, 237, 244, 250, 251, 252, 275, 299, 301, 302, 315, 317, 327, 328, 333, 334, 335, 343, 348, 354, 355, 356, 357, 380, 405, 406, 413, 419, 421, 431, 432, 437, 447, 452, 457, 459, 460, 461, 462, 485, 512, 513, 520, 526, 528, 538, 539, 544], "web": 12, "person": [12, 33, 54], "slow": [12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 72, 78, 81, 159, 176, 185, 198, 199, 207, 223, 233, 237, 251, 298, 328, 348, 353, 432, 452, 458, 539], "connect": [12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 74, 82, 86, 109, 110, 175, 182, 185, 187, 197, 204, 207, 210, 221, 228, 233, 237, 249, 256, 279, 280, 335, 350, 357, 361, 384, 385, 454, 462, 466, 489, 490, 557, 561, 562, 563], "consult": [12, 144, 210, 237, 313, 417, 524], "select": [12, 18, 19, 20, 22, 25, 26, 35, 36, 38, 43, 44, 48, 49, 51, 58, 71, 72, 75, 79, 80, 105, 166, 167, 177, 185, 199, 200, 207, 220, 223, 224, 232, 233, 248, 251, 252, 275, 299, 347, 348, 351, 354, 355, 380, 451, 452, 455, 459, 460, 485, 546, 547], "yet": [12, 18, 19, 20, 22, 26, 35, 37, 39, 43, 44, 48, 72, 78, 79, 80, 82, 178, 184, 185, 187, 199, 200, 206, 210, 223, 224, 230, 237, 251, 252, 258, 299, 335, 348, 354, 355, 357, 452, 458, 459, 460, 462], "easier": [12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 50, 72, 79, 177, 199, 223, 251, 348, 354, 452, 459], "learn": 12, "whole": [12, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 48, 54, 72, 78, 81, 82, 141, 187, 199, 210, 237, 251, 310, 334, 335, 348, 356, 357, 414, 452, 458, 461, 462, 521], "tri": [12, 62, 63, 80, 138, 169, 187, 190, 210, 213, 237, 240, 241, 307, 339, 340, 411, 442, 443, 460, 518], "mandatori": [12, 79, 172, 182, 185, 194, 204, 207, 228, 233, 299, 354, 459], "gitconfig": 12, "renamelimit": 12, "999999": 12, "mail": [12, 17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44, 47, 54, 58, 59, 60], "yourmail": 12, "raw": [12, 14, 16, 25, 28, 31, 48, 49, 54, 62, 72, 79, 87, 109, 110, 111, 115, 128, 146, 165, 183, 185, 205, 207, 210, 229, 233, 237, 240, 251, 257, 279, 280, 281, 285, 296, 299, 315, 339, 348, 354, 362, 384, 385, 386, 390, 401, 419, 438, 442, 452, 459, 467, 489, 490, 491, 495, 508, 526, 545, 559], "githubusercont": 12, "buildbot": [12, 13, 59, 60], "path_to_zfs_fold": 12, "openzfs_commit_hash": 12, "autoport": 12, "ozxxxx": 12, "xxxx": 12, "try": [12, 18, 19, 20, 21, 22, 27, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 68, 72, 78, 87, 140, 173, 176, 177, 183, 195, 198, 199, 205, 217, 222, 223, 229, 245, 250, 251, 257, 344, 348, 362, 413, 448, 452, 458, 467, 520, 551, 554], "cstyle": [12, 64, 170, 191, 214, 242, 341, 444], "success": [12, 48, 72, 105, 109, 110, 128, 130, 144, 164, 185, 187, 199, 207, 210, 223, 232, 233, 237, 251, 275, 279, 280, 288, 296, 313, 333, 348, 380, 384, 385, 393, 401, 403, 417, 437, 452, 485, 489, 490, 498, 508, 510, 524, 544, 555], "succe": [12, 47, 48, 71, 72, 78, 105, 132, 186, 207, 
209, 220, 232, 233, 236, 248, 275, 298, 301, 347, 353, 380, 405, 451, 452, 458, 485, 512], "conflict": [12, 80, 105, 108, 133, 137, 178, 185, 187, 200, 207, 210, 224, 232, 233, 237, 252, 275, 278, 302, 306, 355, 380, 383, 406, 410, 460, 485, 488, 513, 517], "readi": [12, 49, 111, 115, 281, 285, 386, 390, 491, 495], "congratul": 12, "otherwis": [12, 14, 16, 18, 19, 20, 22, 25, 27, 28, 31, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 71, 72, 74, 75, 78, 79, 80, 81, 87, 91, 96, 99, 102, 105, 111, 115, 116, 121, 130, 131, 141, 144, 146, 159, 160, 164, 166, 167, 175, 177, 183, 185, 187, 197, 199, 205, 207, 208, 210, 220, 221, 223, 224, 229, 232, 233, 235, 237, 248, 249, 251, 252, 257, 261, 266, 269, 272, 275, 281, 285, 286, 291, 298, 299, 300, 310, 313, 315, 328, 329, 333, 334, 347, 348, 350, 351, 353, 354, 355, 356, 362, 366, 371, 374, 377, 380, 386, 390, 391, 396, 403, 404, 414, 417, 419, 432, 433, 437, 451, 452, 454, 455, 458, 459, 460, 461, 467, 471, 476, 479, 482, 485, 491, 495, 496, 501, 510, 511, 521, 524, 526, 539, 540, 544, 546, 547, 550, 551, 552, 553, 555, 557, 558], "meld": 12, "diff": [12, 84, 89, 118, 119, 128, 185, 207, 233, 254, 259, 288, 289, 296, 359, 364, 393, 394, 401, 464, 469, 498, 499, 508], "mergetool": 12, "g": [12, 17, 18, 19, 20, 22, 23, 32, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 63, 66, 68, 72, 79, 81, 86, 87, 89, 91, 96, 99, 102, 104, 105, 109, 110, 116, 119, 121, 122, 132, 133, 134, 137, 146, 148, 158, 159, 164, 166, 167, 172, 173, 177, 178, 182, 183, 185, 186, 187, 194, 195, 199, 200, 204, 205, 207, 209, 210, 217, 223, 224, 228, 229, 231, 232, 233, 236, 237, 245, 251, 252, 256, 257, 259, 261, 266, 269, 272, 273, 274, 275, 279, 280, 286, 289, 291, 292, 299, 301, 302, 315, 317, 327, 328, 333, 334, 335, 344, 348, 354, 356, 361, 362, 364, 366, 371, 374, 377, 379, 380, 384, 385, 391, 394, 396, 397, 405, 406, 410, 419, 421, 431, 432, 437, 443, 446, 448, 452, 459, 461, 466, 467, 469, 471, 476, 479, 482, 484, 485, 489, 490, 496, 499, 501, 502, 512, 513, 517, 526, 528, 538, 539, 544, 546, 547], "someth": [12, 18, 19, 20, 22, 33, 35, 36, 38, 43, 44, 47, 63, 81, 111, 115, 169, 187, 190, 210, 213, 237, 241, 281, 285, 334, 340, 356, 386, 390, 443, 461, 491, 495, 555], "push": [12, 13, 17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44], "easili": [12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 54, 96, 99, 116, 185, 207, 233, 266, 269, 286, 371, 374, 391, 476, 479, 496], "nr": [12, 91, 102, 121, 233, 261, 272, 291, 366, 377, 396, 471, 482, 501], "notic": [12, 23, 33, 48, 68, 71, 79, 82, 140, 173, 176, 185, 187, 195, 198, 207, 210, 217, 220, 222, 233, 237, 245, 248, 250, 299, 335, 344, 347, 354, 357, 413, 448, 451, 459, 462, 520], "laid": [12, 63, 190, 213, 241, 340, 443], "organization": 12, "much": [12, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 37, 39, 43, 44, 47, 48, 49, 54, 62, 63, 72, 77, 78, 80, 81, 82, 87, 111, 115, 128, 135, 156, 169, 177, 185, 187, 190, 199, 207, 210, 213, 223, 229, 233, 237, 240, 241, 251, 252, 257, 281, 285, 296, 298, 304, 334, 335, 339, 340, 348, 353, 355, 356, 357, 362, 386, 390, 401, 408, 429, 442, 443, 452, 457, 458, 460, 461, 462, 467, 491, 495, 508, 515, 536], "flatter": 12, "That": [12, 18, 19, 20, 27, 36, 37, 38, 39, 43, 44, 49, 105, 118, 164, 166, 167, 232, 275, 380, 437, 485, 544, 546, 547], "zfs2zol": 12, "translat": [12, 47, 50, 72, 97, 105, 107, 125, 132, 177, 185, 186, 199, 207, 209, 223, 232, 233, 236, 251, 267, 275, 277, 294, 301, 348, 372, 380, 382, 399, 405, 452, 477, 485, 487, 505, 512], "stdout": 12, "hash": [12, 48, 49, 72, 74, 
79, 80, 175, 177, 197, 199, 200, 221, 223, 224, 233, 249, 251, 252, 299, 348, 350, 354, 355, 452, 454, 459, 460], "cleanli": [12, 559, 560], "mind": [12, 32, 47, 81, 187, 237, 334, 356, 461], "why": [12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 49, 68, 72, 80, 109, 110, 130, 173, 178, 195, 200, 217, 224, 245, 251, 252, 344, 348, 355, 403, 448, 452, 460, 489, 490, 510], "hunk": [12, 36, 37], "drop": [12, 18, 19, 20, 22, 35, 36, 38, 43, 44, 48, 72, 75, 177, 199, 223, 251, 348, 351, 452, 455, 559], "preserv": [12, 18, 19, 20, 36, 38, 43, 44, 48, 72, 88, 91, 101, 102, 111, 115, 121, 184, 185, 206, 207, 223, 230, 233, 251, 258, 261, 271, 272, 281, 285, 291, 348, 363, 366, 376, 377, 386, 390, 396, 452, 468, 471, 481, 482, 491, 495, 501, 559], "intent": [12, 48, 72, 81, 87, 137, 140, 164, 176, 177, 183, 187, 198, 199, 205, 210, 222, 223, 229, 237, 250, 251, 257, 333, 334, 348, 356, 362, 413, 437, 452, 461, 467, 517, 520, 544, 564], "am": [12, 57], "authorship": 12, "squash": 12, "care": [12, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 75, 79, 81, 82, 87, 94, 185, 187, 207, 210, 223, 233, 237, 251, 264, 299, 334, 335, 348, 351, 354, 356, 357, 369, 452, 455, 459, 461, 462, 467, 474], "long": [12, 18, 19, 20, 22, 25, 35, 36, 38, 43, 44, 47, 48, 49, 54, 65, 72, 77, 78, 79, 80, 89, 91, 102, 105, 109, 110, 119, 121, 131, 141, 143, 159, 177, 185, 187, 192, 199, 207, 208, 210, 215, 223, 232, 233, 235, 237, 243, 251, 259, 261, 272, 275, 279, 280, 289, 291, 298, 299, 300, 310, 312, 328, 342, 348, 353, 354, 364, 366, 377, 380, 384, 385, 394, 396, 404, 414, 416, 432, 445, 451, 452, 457, 458, 459, 469, 471, 482, 485, 489, 490, 499, 501, 511, 521, 523, 539, 557], "truncat": [12, 48, 71, 80, 200, 220, 224, 248, 252, 347, 355, 451, 460], "pretti": 12, "onelin": 12, "leav": [12, 18, 19, 20, 22, 25, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 72, 79, 105, 109, 110, 177, 199, 207, 223, 232, 233, 251, 275, 279, 280, 299, 348, 354, 380, 384, 385, 452, 459, 485, 489, 490, 560], "blank": [12, 77, 94, 130, 146, 185, 207, 210, 233, 237, 264, 315, 369, 403, 419, 457, 474, 510, 526], "Then": [12, 18, 19, 20, 22, 33, 35, 36, 38, 40, 43, 44, 49, 103, 231, 273, 378, 483, 559, 563], "wrap": [12, 48, 91, 102, 105, 121, 199, 223, 232, 251, 261, 272, 275, 291, 366, 377, 380, 396, 471, 482, 485, 501], "exce": [12, 46, 47, 48, 49, 51, 66, 71, 72, 79, 81, 140, 146, 177, 185, 187, 199, 207, 210, 220, 222, 223, 233, 237, 248, 250, 251, 299, 315, 334, 347, 348, 354, 356, 413, 419, 446, 451, 452, 459, 461, 520, 526], "final": [12, 13, 33, 49, 81, 111, 115, 237, 281, 285, 334, 356, 386, 390, 461, 491, 495], "contact": [12, 47], "form": [12, 18, 19, 20, 22, 27, 35, 43, 44, 45, 47, 48, 67, 74, 79, 80, 81, 87, 88, 89, 96, 99, 105, 111, 115, 116, 119, 128, 146, 154, 164, 171, 178, 183, 184, 185, 187, 193, 197, 200, 205, 206, 207, 210, 216, 221, 224, 229, 230, 232, 233, 237, 244, 249, 252, 257, 258, 259, 266, 269, 271, 275, 281, 285, 286, 289, 296, 299, 315, 323, 333, 334, 343, 350, 354, 355, 356, 362, 363, 364, 371, 374, 380, 386, 390, 391, 394, 401, 419, 427, 437, 447, 454, 459, 460, 461, 467, 468, 469, 476, 479, 485, 491, 495, 496, 499, 508, 526, 534, 544, 557], "author": [12, 39, 79, 171, 172, 173, 179, 181, 186, 192, 193, 194, 195, 201, 203, 209, 215, 216, 217, 225, 227, 236, 240, 243, 244, 245, 253, 255, 301, 354, 459], "review": [12, 47, 56], "approv": 12, "www": [12, 14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 47, 49, 63, 105, 169, 190, 213, 232, 241, 275, 340, 380, 443, 485], "6873": 12, 
"zfs_destroy_snaps_nvl": 12, "leak": [12, 48, 49, 72, 82, 87, 91, 102, 121, 177, 183, 199, 205, 223, 229, 233, 251, 257, 261, 272, 291, 348, 357, 362, 366, 377, 396, 452, 462, 467, 471, 482, 501], "errlist": 12, "chri": 12, "williamson": 12, "matthew": 12, "ahren": 12, "mahren": 12, "paul": 12, "dagneli": 12, "pcd": 12, "deni": [12, 79, 80, 89, 119, 185, 207, 233, 259, 289, 299, 354, 364, 394, 459, 460, 469, 499], "rtveliashvili": 12, "lzc_destroy_snap": 12, "nvlist": [12, 48, 72, 105, 132, 186, 209, 232, 236, 251, 275, 301, 348, 380, 405, 452, 485, 512], "nvlist_fre": 12, "warn": [12, 18, 19, 20, 22, 25, 26, 29, 35, 36, 37, 38, 39, 43, 44, 48, 54, 71, 72, 80, 87, 111, 115, 144, 159, 164, 183, 185, 187, 205, 210, 220, 229, 237, 248, 251, 257, 313, 328, 333, 347, 348, 355, 362, 386, 390, 417, 432, 437, 451, 452, 460, 467, 491, 495, 524, 539, 544], "checker": [12, 63, 83, 169, 179, 190, 201, 213, 225, 241, 253, 340, 358, 443, 463], "print": [12, 16, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 62, 63, 65, 66, 68, 71, 72, 82, 85, 86, 87, 93, 94, 97, 98, 101, 103, 105, 107, 109, 110, 111, 112, 115, 125, 129, 132, 140, 146, 148, 152, 158, 159, 163, 165, 166, 167, 172, 173, 181, 182, 183, 185, 186, 187, 192, 194, 195, 203, 204, 205, 207, 209, 210, 215, 217, 220, 223, 227, 228, 229, 232, 233, 236, 237, 240, 243, 245, 248, 251, 255, 256, 257, 263, 264, 267, 268, 271, 275, 276, 277, 279, 280, 281, 282, 285, 294, 297, 301, 309, 315, 317, 321, 327, 328, 332, 335, 336, 339, 342, 344, 347, 348, 357, 360, 361, 362, 368, 369, 372, 373, 376, 378, 380, 382, 384, 385, 386, 387, 390, 399, 402, 405, 413, 419, 421, 425, 431, 432, 436, 438, 439, 440, 442, 443, 445, 446, 448, 451, 452, 462, 465, 466, 467, 473, 474, 477, 478, 481, 483, 485, 487, 489, 490, 491, 492, 495, 505, 509, 512, 520, 526, 528, 532, 538, 539, 543, 545, 546, 547], "queu": [12, 48, 51, 72, 140, 146, 176, 177, 198, 199, 210, 222, 223, 237, 250, 251, 315, 348, 413, 419, 452, 520, 526], "autom": [12, 25, 43, 48, 140, 164, 210, 237, 309, 333, 413, 437, 520, 544, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "rang": [12, 32, 48, 49, 50, 54, 58, 71, 72, 80, 82, 87, 94, 132, 140, 156, 176, 177, 185, 186, 194, 198, 199, 207, 209, 220, 222, 223, 233, 236, 237, 248, 250, 251, 252, 257, 264, 301, 335, 347, 348, 355, 357, 362, 369, 405, 413, 429, 451, 452, 460, 462, 467, 474, 512, 520, 536], "batteri": 12, "post": [12, 18, 19, 20, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 66, 72, 88, 109, 110, 177, 184, 199, 206, 223, 230, 251, 258, 348, 363, 446, 452, 468, 489, 490], "investig": [12, 22, 35], "reproduc": [12, 54], "trigger": [12, 48, 54, 63, 72, 80, 169, 190, 199, 213, 223, 224, 241, 251, 252, 340, 348, 355, 443, 452, 460], "round": [12, 48, 49, 68, 72, 93, 173, 177, 185, 195, 199, 207, 217, 223, 233, 245, 251, 263, 344, 348, 368, 448, 452, 473], "lastli": [12, 89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "happi": [12, 33], "thei": [12, 18, 19, 20, 21, 22, 32, 35, 36, 37, 38, 39, 43, 44, 45, 47, 48, 49, 54, 55, 63, 66, 69, 71, 72, 77, 78, 79, 80, 81, 82, 88, 94, 104, 105, 108, 111, 113, 115, 117, 122, 129, 133, 137, 141, 144, 146, 164, 166, 167, 169, 177, 178, 184, 185, 187, 190, 199, 200, 206, 207, 210, 213, 220, 223, 224, 230, 232, 233, 237, 241, 248, 251, 252, 258, 264, 274, 275, 278, 281, 283, 285, 292, 298, 299, 302, 306, 313, 315, 333, 334, 340, 345, 347, 348, 353, 354, 355, 356, 363, 369, 379, 380, 383, 386, 388, 390, 392, 397, 402, 406, 410, 417, 419, 437, 443, 446, 449, 451, 452, 457, 458, 459, 460, 
461, 462, 468, 474, 484, 485, 488, 491, 493, 495, 497, 502, 509, 513, 517, 521, 524, 526, 544, 546, 547, 559], "mark": [12, 22, 35, 37, 39, 47, 48, 49, 54, 66, 72, 79, 80, 81, 89, 90, 94, 105, 119, 128, 141, 178, 185, 187, 199, 200, 207, 210, 223, 224, 232, 233, 237, 251, 252, 260, 264, 275, 296, 299, 310, 334, 348, 354, 355, 356, 365, 369, 380, 401, 414, 446, 452, 459, 460, 461, 469, 470, 474, 485, 499, 508, 521, 550, 557], "thank": 12, "builder": 13, "except": [13, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 59, 60, 66, 72, 77, 79, 82, 85, 87, 89, 91, 97, 102, 105, 107, 109, 110, 113, 119, 121, 125, 144, 177, 181, 185, 187, 199, 203, 207, 210, 223, 227, 232, 233, 237, 251, 255, 257, 259, 261, 267, 272, 275, 277, 279, 280, 283, 289, 291, 294, 299, 313, 335, 348, 354, 357, 360, 362, 364, 366, 372, 377, 380, 382, 384, 385, 388, 394, 396, 399, 417, 446, 452, 457, 459, 462, 465, 467, 469, 471, 477, 482, 485, 487, 489, 490, 493, 499, 501, 505, 524], "beginn": [13, 43, 44, 59, 60], "setup": [13, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 38, 43, 44, 54, 66, 79, 82, 111, 115, 207, 210, 233, 237, 276, 281, 285, 299, 335, 354, 357, 386, 390, 446, 459, 462, 491, 495], "word": [13, 36, 38, 80, 81, 87, 111, 115, 140, 176, 198, 199, 205, 222, 223, 224, 229, 250, 251, 252, 257, 281, 285, 355, 356, 362, 386, 390, 413, 460, 461, 467, 491, 495, 520], "zfsbootmenu": [14, 16, 25, 31], "bootload": [14, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47], "free": [14, 16, 18, 25, 31, 45, 48, 54, 62, 71, 72, 77, 79, 80, 82, 87, 91, 102, 121, 128, 132, 135, 138, 140, 146, 148, 159, 161, 163, 164, 176, 177, 178, 183, 185, 186, 187, 198, 199, 200, 205, 207, 209, 210, 220, 222, 223, 224, 229, 233, 236, 237, 240, 248, 250, 251, 252, 257, 261, 272, 291, 296, 299, 301, 304, 307, 317, 330, 332, 333, 335, 339, 347, 348, 354, 355, 357, 362, 366, 377, 396, 401, 405, 408, 411, 413, 421, 434, 436, 437, 442, 451, 452, 457, 459, 460, 462, 467, 471, 482, 501, 508, 512, 515, 518, 520, 526, 528, 539, 541, 543, 544, 559], "zbm": [14, 16, 25, 31], "layout": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 54, 81, 144, 187, 210, 237, 313, 417, 461, 524], "site": [14, 16, 25, 31, 47], "reboot": [14, 16, 18, 19, 20, 22, 25, 26, 27, 28, 29, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 72, 80, 81, 128, 149, 150, 178, 185, 187, 200, 207, 210, 223, 224, 233, 237, 251, 252, 296, 318, 319, 334, 348, 355, 356, 401, 422, 423, 452, 460, 461, 508, 529, 530, 561, 562], "well": [14, 16, 25, 31, 47, 48, 49, 54, 72, 75, 77, 78, 79, 80, 82, 85, 87, 93, 109, 110, 111, 114, 115, 128, 137, 144, 146, 165, 178, 185, 187, 200, 207, 210, 224, 233, 237, 251, 252, 263, 279, 280, 281, 284, 285, 296, 298, 299, 306, 313, 315, 335, 348, 351, 353, 354, 355, 357, 360, 368, 384, 385, 386, 389, 390, 401, 410, 417, 419, 438, 452, 455, 457, 458, 459, 460, 462, 465, 467, 473, 489, 490, 491, 494, 495, 508, 517, 524, 526, 545, 557, 559], "avoid": [14, 16, 18, 19, 20, 25, 27, 31, 32, 35, 36, 38, 43, 44, 47, 48, 49, 54, 71, 72, 79, 80, 87, 111, 115, 177, 178, 185, 187, 199, 200, 205, 207, 220, 223, 224, 229, 233, 248, 251, 252, 257, 281, 285, 299, 347, 348, 354, 355, 362, 386, 390, 451, 452, 459, 460, 467, 491, 495, 549], "paramount": [14, 16, 25, 31], "uefi": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 38, 43, 44, 49], "secur": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 47, 48, 80, 88, 91, 102, 109, 110, 111, 115, 121, 161, 184, 200, 206, 224, 230, 233, 237, 252, 258, 261, 272, 279, 280, 281, 285, 291, 330, 355, 363, 366, 377, 384, 385, 
386, 390, 396, 434, 460, 468, 471, 482, 489, 490, 491, 495, 501, 541], "live": [14, 16, 25, 28, 29, 31, 33, 37, 39, 105, 137, 156, 187, 210, 232, 237, 275, 306, 325, 380, 410, 429, 485, 517, 536], "gpg": [14, 16, 25, 31, 32, 43, 44, 57], "auto": [14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 54, 62, 75, 79, 82, 207, 210, 231, 233, 237, 240, 273, 299, 335, 339, 351, 354, 357, 442, 455, 459, 462], "retriev": [14, 16, 25, 31, 80, 105, 142, 157, 164, 187, 210, 232, 237, 275, 311, 326, 333, 380, 415, 430, 437, 460, 485, 522, 537, 544, 556], "keyserv": [14, 16, 25, 31, 57], "hkp": [14, 16, 25, 31, 57], "asc": [14, 16, 25, 31], "dd": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 79, 87, 183, 205, 229, 233, 257, 299, 354, 362, 459, 467], "1m": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 49, 50, 54, 66, 72, 87, 172, 177, 178, 183, 185, 188, 194, 199, 200, 205, 211, 223, 224, 229, 233, 251, 252, 257, 299, 348, 354, 355, 362, 446, 452, 467], "login": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44], "password": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 66, 75, 103, 111, 115, 128, 185, 207, 231, 233, 273, 281, 285, 296, 351, 378, 386, 390, 401, 446, 455, 483, 491, 495, 508], "network": [14, 16, 18, 19, 20, 22, 25, 28, 29, 31, 35, 36, 37, 38, 39, 43, 44, 75, 103, 109, 110, 137, 187, 207, 210, 233, 237, 279, 280, 306, 351, 378, 384, 385, 410, 455, 483, 489, 490, 517, 557], "servic": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 72, 74, 75, 79, 103, 111, 115, 123, 127, 132, 140, 176, 177, 185, 197, 198, 199, 207, 209, 221, 222, 223, 231, 233, 236, 249, 250, 251, 273, 281, 285, 299, 301, 348, 350, 351, 354, 378, 386, 390, 405, 413, 452, 454, 455, 459, 483, 491, 495, 503, 507, 512, 520, 561, 562], "wlan0": [14, 16, 25, 31], "wifi": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 43, 44], "ssid": [14, 16, 25, 31], "ip": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 43, 44, 96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "dhcp": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38], "finish": [14, 16, 22, 25, 31, 37, 39, 47, 49, 50, 72, 78, 80, 88, 105, 111, 115, 126, 134, 135, 140, 145, 176, 177, 178, 198, 199, 200, 207, 222, 223, 224, 232, 233, 250, 251, 252, 275, 281, 285, 295, 298, 303, 304, 314, 348, 353, 355, 363, 380, 386, 390, 400, 407, 408, 413, 418, 452, 458, 460, 468, 485, 491, 495, 506, 514, 515, 520, 525], "netconfig": [14, 16, 25, 31], "wireless": [14, 16, 25, 31], "further": [14, 16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 46, 47, 48, 51, 54, 71, 72, 81, 177, 187, 199, 210, 220, 223, 237, 248, 251, 334, 347, 348, 356, 451, 452, 461], "wpa_supplic": [14, 16, 25, 31], "apk": [14, 15, 16, 25, 31], "ssh": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 109, 110, 111, 115, 128, 185, 207, 233, 281, 285, 296, 386, 390, 401, 489, 490, 491, 495, 508], "sshd": [14, 16, 25, 28, 31, 43, 44], "openssh": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 43, 44], "prohibit": [14, 16, 25, 31, 135, 237, 304, 408, 515], "public": [14, 16, 25, 31, 32, 45, 47, 54, 57], "verbatim": [14, 47, 75, 87, 183, 205, 229, 257, 351, 362, 455, 467], "authorized_kei": [14, 16, 18, 19, 25, 28, 31], "strong": [14, 47], "192": [14, 16, 18, 19, 25, 28, 31, 48, 79, 233, 299, 354, 459], "168": [14, 16, 18, 19, 25, 28, 31], "91": [14, 16, 25, 28, 31, 156, 429, 536], "ntp": [14, 16, 18, 19, 25, 31], "client": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 48, 49, 79, 185, 207, 
233, 299, 354, 459], "synchron": [14, 16, 18, 19, 25, 31, 37, 39, 47, 48, 51, 72, 78, 79, 80, 81, 103, 146, 177, 178, 184, 185, 187, 199, 200, 206, 207, 210, 223, 224, 230, 231, 233, 237, 251, 252, 258, 273, 298, 299, 315, 334, 348, 353, 354, 355, 356, 378, 419, 452, 458, 459, 460, 461, 483, 526, 563], "busybox": [14, 16, 25, 31, 43, 44], "repo": [14, 16, 17, 25, 29, 31, 32, 41, 43, 44], "press": [14, 16, 18, 19, 20, 25, 31, 35, 36, 37, 38, 39, 43, 44, 187, 210, 237, 315, 317], "bar": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44, 49, 105, 232, 275, 380, 485], "apkrepo": [14, 16, 25, 31], "throughout": [14, 16, 25, 31, 88, 109, 110, 184, 206, 207, 230, 233, 258, 279, 280, 363, 384, 385, 468, 489, 490], "predict": [14, 16, 25, 31, 48, 67, 72, 87, 171, 183, 193, 199, 205, 216, 223, 229, 244, 251, 257, 343, 348, 362, 447, 452, 467], "eudev": [14, 16, 25, 31], "devd": [14, 16, 25, 31], "mdev": 14, "del": 14, "target": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 51, 62, 72, 75, 78, 92, 93, 94, 103, 105, 106, 109, 110, 111, 113, 115, 132, 145, 161, 177, 185, 199, 207, 209, 223, 231, 232, 233, 236, 237, 240, 251, 262, 263, 264, 273, 275, 276, 279, 280, 281, 283, 285, 298, 301, 314, 330, 339, 348, 351, 353, 367, 368, 369, 378, 380, 381, 384, 385, 386, 388, 390, 405, 418, 434, 442, 452, 455, 458, 472, 473, 474, 483, 485, 486, 489, 490, 491, 493, 495, 512, 525, 541], "virtio": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "bu": [14, 16, 25, 28, 31, 54], "serial": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44, 48, 49, 54, 66, 220, 446], "qemu": [14, 16, 25, 28, 31], "disk2": [14, 16, 18, 19, 20, 25, 28, 31, 35, 36, 38], "img": [14, 16, 22, 25, 28, 31, 37, 39], "aabb": [14, 16, 25, 28, 31], "libvirt": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "domain": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 77, 79, 82, 185, 207, 233, 299, 354, 457, 459, 462], "xml": [14, 16, 25, 28, 31], "declar": [14, 16, 25, 28, 31], "arrai": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44, 47, 48, 49, 72, 105, 140, 176, 177, 198, 199, 222, 223, 232, 250, 251, 275, 348, 380, 413, 452, 485, 520], "ata": [14, 16, 25, 28, 31, 48], "foo": [14, 16, 25, 28, 31, 105, 231, 232, 273, 275, 380, 485, 556], "nvme": [14, 16, 18, 19, 25, 28, 31, 54], "disk1": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "mount": [14, 16, 18, 19, 20, 21, 22, 25, 27, 28, 31, 33, 35, 37, 38, 39, 43, 44, 48, 54, 72, 75, 78, 79, 80, 82, 84, 87, 89, 91, 92, 93, 94, 96, 99, 100, 102, 105, 109, 110, 111, 113, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 137, 144, 158, 180, 185, 187, 202, 207, 210, 226, 232, 233, 237, 251, 252, 254, 259, 261, 262, 263, 264, 270, 272, 275, 279, 280, 281, 283, 285, 289, 290, 291, 292, 296, 297, 298, 299, 306, 313, 327, 335, 348, 351, 353, 354, 355, 357, 359, 364, 366, 367, 368, 369, 375, 377, 380, 384, 385, 386, 388, 390, 392, 394, 395, 396, 397, 401, 402, 410, 417, 431, 452, 455, 458, 459, 460, 462, 464, 467, 469, 471, 472, 473, 474, 476, 479, 480, 482, 485, 489, 490, 491, 493, 495, 496, 497, 498, 499, 500, 501, 502, 503, 507, 508, 509, 517, 524, 538, 551, 559], "mnt": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 128, 185, 207, 233, 296, 401, 508], "mktemp": [14, 16, 25, 28, 31, 37, 39], "partit": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 54, 77, 79, 81, 82, 137, 141, 146, 164, 177, 187, 199, 207, 210, 223, 233, 237, 251, 299, 310, 315, 333, 334, 335, 
354, 356, 357, 414, 419, 437, 457, 459, 461, 462, 517, 521, 526, 544], "swap": [14, 16, 25, 28, 31, 33, 36, 37, 38, 39, 58, 79, 87, 93, 176, 183, 185, 198, 205, 207, 229, 233, 257, 263, 299, 354, 362, 368, 459, 467, 473], "gb": [14, 16, 25, 28, 31, 47, 77, 79, 80, 185, 207, 233, 252, 299, 354, 355, 457, 459, 460], "too": [14, 16, 21, 25, 28, 31, 36, 37, 38, 39, 47, 48, 71, 72, 87, 105, 177, 183, 199, 205, 220, 223, 229, 232, 248, 251, 257, 275, 347, 348, 362, 380, 451, 452, 467, 485], "swapsiz": [14, 16, 25, 28, 31], "left": [14, 16, 21, 25, 28, 31, 33, 48, 49, 79, 94, 101, 144, 158, 166, 167, 185, 207, 233, 237, 264, 271, 299, 313, 327, 354, 369, 376, 417, 431, 459, 474, 481, 524, 538, 546, 547], "1gb": [14, 16, 25, 28, 31, 48, 177, 199, 223, 251, 334, 348, 356], "reserv": [14, 16, 25, 28, 31, 48, 72, 77, 79, 81, 82, 89, 93, 96, 99, 116, 119, 128, 135, 137, 185, 187, 207, 210, 223, 233, 237, 251, 259, 263, 289, 296, 299, 304, 306, 334, 335, 348, 354, 356, 357, 364, 368, 394, 401, 408, 410, 452, 457, 459, 461, 462, 469, 473, 476, 479, 496, 499, 508, 515, 517], "e2fsprog": [14, 16, 25, 31], "cryptsetup": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44], "clear": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 72, 77, 79, 82, 84, 91, 96, 99, 102, 105, 106, 111, 115, 116, 121, 128, 140, 145, 164, 176, 177, 185, 187, 198, 199, 207, 210, 222, 223, 233, 237, 250, 251, 254, 261, 266, 269, 272, 275, 276, 281, 285, 286, 291, 296, 299, 309, 333, 335, 348, 354, 357, 359, 366, 371, 374, 377, 380, 381, 386, 390, 391, 396, 401, 413, 418, 437, 452, 457, 459, 462, 464, 471, 476, 479, 482, 485, 486, 491, 495, 496, 501, 508, 520, 525, 544, 555, 557, 559, 561, 562, 563], "structur": [14, 16, 25, 28, 31, 47, 48, 72, 78, 80, 81, 87, 91, 102, 121, 178, 183, 199, 200, 205, 223, 224, 229, 233, 251, 252, 257, 261, 272, 291, 334, 348, 355, 356, 362, 366, 377, 396, 452, 458, 460, 461, 467, 471, 482, 501], "flash": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 49, 52], "blkdiscard": [14, 16, 18, 19, 25, 28, 31, 38, 82, 237, 335, 357, 462], "partition_disk": [14, 16, 25, 28, 31], "true": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 66, 105, 164, 179, 201, 210, 225, 231, 232, 237, 253, 273, 275, 333, 380, 437, 446, 485, 544], "align": [14, 16, 25, 28, 31, 47, 54, 68, 72, 82, 140, 173, 176, 187, 195, 198, 210, 217, 222, 223, 237, 245, 250, 251, 335, 344, 348, 357, 413, 448, 452, 462, 520], "mklabel": [14, 16, 25, 28, 31], "gpt": [14, 16, 25, 28, 31, 33, 36, 38, 49, 54, 75, 82, 335, 351, 357, 455, 462], "mkpart": [14, 16, 25, 28, 31], "efi": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 38, 43, 44, 77, 141, 187, 210, 237, 310, 414, 457, 521], "1mib": [14, 16, 25, 28, 31], "4gib": [14, 16, 25, 28, 31, 48, 72, 452], "rpool": [14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 54, 87, 90, 105, 128, 146, 148, 156, 159, 161, 164, 183, 185, 187, 205, 207, 210, 229, 232, 233, 237, 257, 275, 296, 333, 362, 380, 401, 429, 437, 467, 470, 485, 508, 526, 528, 536, 539, 541, 544], "gib": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 72, 78, 81, 148, 164, 207, 233, 298, 353, 452, 458, 461, 528, 544], "esp": [14, 16, 18, 19, 20, 25, 28, 31, 33, 36, 38, 43, 44], "partprob": [14, 16, 25, 28, 31, 37, 39], "temporari": [14, 16, 18, 19, 20, 25, 28, 31, 36, 38, 43, 44, 48, 49, 72, 75, 79, 82, 85, 96, 99, 104, 116, 122, 144, 149, 150, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 266, 269, 274, 286, 
292, 299, 313, 318, 319, 335, 348, 351, 354, 357, 360, 371, 374, 379, 391, 397, 417, 422, 423, 452, 455, 459, 462, 465, 476, 479, 484, 496, 502, 524, 529, 530], "memori": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 49, 52, 58, 62, 71, 72, 78, 80, 81, 87, 88, 105, 111, 115, 133, 146, 152, 164, 177, 184, 185, 187, 199, 206, 207, 210, 220, 223, 224, 230, 232, 233, 237, 240, 248, 251, 252, 258, 275, 281, 285, 298, 321, 333, 334, 339, 347, 348, 353, 355, 356, 363, 380, 386, 390, 425, 437, 442, 451, 452, 458, 460, 461, 467, 468, 485, 491, 495, 513, 526, 532, 544], "plain": [14, 16, 25, 28, 31, 87, 105, 134, 232, 233, 257, 275, 362, 380, 467, 485, 556], "part3": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "mkswap": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44, 185, 207, 233, 263], "mapper": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 58, 82, 130, 146, 210, 237, 315, 335, 357, 403, 419, 462, 510, 526], "swapon": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 185, 207, 233, 263, 368], "modprob": [14, 15, 16, 20, 22, 25, 26, 29, 31, 43, 44, 48, 54], "unencrypt": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 109, 110, 111, 115, 233, 279, 280, 281, 285, 384, 385, 386, 390, 489, 490, 491, 495], "sc2046": [14, 16, 25, 28, 31], "autotrim": [14, 16, 18, 19, 25, 28, 31, 36, 38, 82, 161, 237, 330, 335, 357, 434, 462, 541], "acltyp": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 79, 89, 96, 99, 116, 119, 128, 185, 207, 233, 259, 289, 296, 299, 354, 364, 394, 401, 459, 469, 476, 479, 496, 499, 508], "posixacl": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 79, 185, 207, 233, 299, 354, 459], "canmount": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 75, 79, 89, 96, 99, 103, 116, 119, 128, 185, 207, 231, 233, 259, 273, 289, 296, 299, 351, 354, 364, 378, 394, 401, 455, 459, 469, 476, 479, 483, 496, 499, 508], "dnodes": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 54, 79, 80, 89, 119, 200, 207, 224, 233, 252, 299, 354, 355, 364, 394, 459, 460, 469, 499], "formd": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 79, 185, 207, 233, 299, 354, 459], "relatim": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 49, 79, 89, 103, 119, 185, 207, 231, 233, 273, 299, 354, 364, 378, 394, 459, 469, 483, 499], "xattr": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 54, 72, 79, 80, 89, 96, 99, 116, 119, 128, 181, 185, 203, 207, 227, 233, 255, 259, 289, 296, 299, 354, 364, 394, 401, 452, 459, 460, 469, 476, 479, 496, 499, 508], "sa": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 45, 48, 54, 72, 74, 79, 80, 86, 175, 182, 185, 197, 204, 207, 221, 228, 233, 249, 256, 299, 350, 354, 361, 452, 454, 459, 460, 466], "mountpoint": [14, 16, 18, 19, 20, 21, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 75, 78, 79, 85, 89, 92, 93, 96, 99, 101, 103, 104, 113, 116, 117, 119, 122, 128, 137, 181, 185, 187, 203, 207, 210, 227, 231, 233, 237, 255, 259, 262, 263, 271, 273, 274, 283, 287, 289, 292, 296, 298, 299, 306, 351, 353, 354, 360, 364, 367, 368, 376, 378, 379, 388, 392, 394, 397, 401, 410, 455, 458, 459, 465, 469, 472, 473, 476, 479, 481, 483, 484, 493, 496, 497, 499, 502, 508, 517], "printf": [14, 16, 25, 28, 31], "part2": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "noauto": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 37, 39, 43, 44, 79, 103, 185, 207, 231, 233, 273, 299, 354, 
378, 459, 483], "mkdir": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 103, 378, 483], "afterward": [14, 16, 25, 28, 31, 72, 75, 79, 128, 185, 207, 223, 233, 251, 296, 299, 348, 351, 354, 401, 452, 455, 459, 508], "mkf": [14, 16, 25, 28, 31, 48], "vfat": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "part1": [14, 16, 25, 28, 31, 36, 38], "fmask": [14, 16, 25, 28, 31], "0077": [14, 16, 25, 28, 31], "dmask": [14, 16, 25, 28, 31], "iocharset": [14, 16, 25, 28, 31], "iso8859": [14, 16, 25, 28, 31], "break": [14, 16, 25, 28, 31, 36, 37, 38, 39, 43, 44, 48, 80, 91, 102, 121, 135, 233, 237, 261, 272, 291, 304, 355, 366, 377, 396, 408, 460, 471, 482, 501, 515], "lt": [14, 15, 26, 72, 77, 82, 103, 128, 164, 171, 172, 173, 175, 177, 179, 181, 182, 185, 186, 187, 192, 193, 194, 195, 197, 199, 200, 201, 203, 204, 207, 209, 210, 215, 216, 217, 221, 223, 224, 225, 227, 228, 229, 232, 233, 236, 237, 243, 244, 245, 251, 252, 253, 255, 257, 275, 296, 299, 301, 333, 335, 348, 354, 357, 378, 452, 457, 462, 483, 508, 544], "refind": [14, 16, 25, 31], "loader": [14, 16, 18, 19, 20, 22, 25, 27, 31, 35, 36, 38, 43, 44], "rodsbook": [14, 16, 25, 31], "html": [14, 16, 17, 18, 19, 20, 22, 25, 29, 31, 35, 36, 37, 38, 39, 43, 44, 47, 49], "zip": [14, 16, 25, 31], "curl": [14, 16, 25, 31, 36, 37, 39], "l": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 47, 49, 74, 79, 87, 88, 89, 91, 95, 97, 102, 104, 107, 111, 115, 117, 119, 121, 122, 125, 132, 133, 134, 143, 144, 146, 148, 158, 159, 164, 166, 167, 172, 183, 184, 185, 186, 187, 194, 197, 205, 206, 207, 209, 210, 221, 229, 230, 233, 236, 237, 249, 257, 258, 259, 261, 265, 267, 272, 274, 277, 281, 285, 289, 291, 292, 294, 299, 301, 302, 312, 313, 315, 317, 327, 328, 333, 350, 354, 362, 363, 364, 366, 370, 372, 377, 379, 382, 386, 390, 392, 394, 396, 397, 399, 405, 406, 416, 417, 419, 421, 431, 432, 437, 454, 459, 467, 468, 469, 471, 475, 477, 482, 484, 487, 491, 495, 497, 499, 501, 502, 505, 512, 513, 523, 524, 526, 528, 538, 539, 544, 546, 547], "sourceforg": [14, 16, 25, 31], "net": [14, 16, 25, 31, 36, 37, 47, 49, 79, 128, 169, 185, 190, 207, 213, 233, 241, 296, 299, 354, 401, 459, 508], "bin": [14, 16, 17, 18, 19, 20, 22, 25, 27, 31, 35, 36, 37, 38, 39, 43, 44, 68, 75, 78, 172, 194, 296, 351, 448, 455, 458], "unzip": [14, 16, 25, 31], "refind_x64": [14, 16, 25, 31], "print0": [14, 16, 25, 31, 233, 276], "xarg": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 43, 44], "0i": [14, 16, 25, 31], "mv": [14, 16, 18, 19, 20, 25, 31, 36, 37, 39, 43, 44, 49], "bootx64": [14, 16, 25, 31], "rf": [14, 16, 25, 31, 37, 39, 72, 75, 348, 351, 452, 455], "entri": [14, 16, 18, 19, 20, 22, 25, 27, 31, 33, 35, 36, 38, 43, 44, 48, 49, 58, 72, 79, 80, 87, 88, 105, 106, 111, 115, 140, 146, 176, 177, 181, 183, 185, 198, 199, 203, 205, 207, 210, 222, 223, 227, 229, 232, 233, 237, 250, 251, 255, 257, 275, 276, 281, 285, 299, 315, 348, 354, 362, 380, 381, 386, 390, 413, 419, 452, 459, 467, 468, 485, 486, 491, 495, 520, 526], "tee": [14, 16, 25, 28, 31, 36, 43, 44], "eof": [14, 16, 25, 28, 31, 37, 39, 43, 44], "unmount": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 48, 78, 79, 84, 89, 94, 100, 104, 109, 110, 113, 114, 119, 120, 123, 126, 127, 128, 138, 141, 185, 187, 207, 210, 233, 237, 254, 259, 264, 270, 274, 279, 280, 283, 284, 289, 290, 295, 296, 298, 299, 307, 310, 353, 354, 359, 364, 369, 375, 379, 384, 385, 388, 389, 394, 395, 400, 401, 411, 414, 458, 459, 464, 469, 474, 480, 484, 489, 490, 493, 494, 499, 500, 503, 506, 507, 
508, 518, 521], "snapshot": [14, 16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 54, 58, 67, 72, 75, 78, 79, 80, 81, 82, 84, 85, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 101, 102, 105, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 119, 121, 124, 125, 128, 135, 166, 167, 171, 177, 178, 185, 187, 193, 199, 200, 207, 210, 216, 223, 224, 232, 233, 237, 244, 251, 252, 254, 259, 260, 261, 262, 264, 265, 266, 267, 268, 269, 271, 272, 275, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 289, 291, 293, 294, 296, 298, 299, 304, 334, 335, 343, 348, 351, 353, 354, 355, 356, 357, 359, 360, 364, 365, 366, 367, 369, 370, 371, 372, 373, 374, 376, 377, 380, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 394, 396, 398, 399, 401, 408, 447, 452, 455, 458, 459, 460, 461, 462, 464, 465, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 481, 482, 485, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 499, 501, 504, 505, 508, 515, 546, 547, 559], "umount": [14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 78, 79, 89, 119, 185, 207, 233, 259, 289, 298, 299, 353, 354, 364, 394, 458, 459, 469, 499], "rl": [14, 16, 25, 28, 31], "sync": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 37, 39, 43, 44, 48, 51, 54, 72, 79, 80, 84, 89, 105, 119, 156, 164, 176, 177, 185, 198, 199, 207, 210, 222, 223, 232, 233, 237, 250, 251, 254, 275, 299, 325, 333, 348, 354, 359, 364, 380, 394, 429, 437, 452, 459, 460, 464, 469, 485, 499, 536, 544], "incompat": [16, 17, 19, 25, 31, 38, 80, 144, 187, 210, 237, 252, 313, 355, 417, 460, 524, 559, 564], "alpin": [16, 25, 31, 41, 59, 60], "ship": [16, 25, 29, 31, 36, 38, 47, 49], "extract": [16, 25, 31, 87, 467], "america": 16, "pkgbuild": 16, "iso": [16, 18, 19, 20, 43, 44, 66, 446], "archlinux": [16, 17], "bootstrap": [16, 18, 19], "x86_64": [16, 18, 19, 20, 22, 25, 31, 32, 35, 36, 38, 43, 44, 54, 173], "rootf": [16, 21, 25, 31, 33], "sig": 16, "gnupg": 16, "ln": [16, 19, 20, 22, 25, 35, 36, 37, 43, 44, 103, 231, 273, 378, 483], "af": [16, 25, 31, 54], "edg": [16, 25, 31, 55], "1commun": [16, 25, 31], "fstab": [16, 18, 19, 20, 22, 25, 27, 31, 33, 35, 36, 37, 38, 39, 43, 44, 54, 75, 78, 83, 85, 179, 181, 185, 201, 203, 207, 225, 227, 233, 253, 255, 298, 351, 353, 358, 360, 455, 458, 463, 465], "genfstab": [16, 25, 31], "partuuid": [16, 22, 25, 31, 35, 43, 44, 54], "grep": [16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 47, 48], "rw": [16, 25, 27, 31, 79, 96, 99, 116, 128, 140, 146, 176, 185, 198, 207, 210, 222, 233, 237, 250, 296, 299, 315, 354, 401, 413, 419, 459, 476, 479, 496, 508, 520, 526, 559], "idl": [16, 25, 31, 47, 71, 72, 88, 177, 199, 251, 258, 348, 363, 451, 452, 468], "timeout": [16, 22, 25, 31, 35, 43, 44, 48, 66, 72, 184, 199, 206, 223, 230, 251, 348, 446, 452], "1min": [16, 25, 31, 348], "automount": [16, 18, 19, 20, 25, 31, 36, 38, 43, 44, 48], "nofail": [16, 22, 25, 31, 35, 103, 231, 273, 378, 483], "chroot": [16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 68, 448], "cp": [16, 18, 19, 20, 22, 25, 31, 33, 35, 36, 37, 38, 39, 43, 44, 49, 78, 458], "rbind": [16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44], "usr": [16, 18, 19, 20, 21, 22, 25, 27, 31, 33, 35, 36, 37, 38, 39, 43, 44, 68, 75, 80, 82, 172, 185, 194, 351, 355, 357, 448, 455, 460, 462], "env": [16, 18, 19, 20, 25, 27, 31, 35, 36, 37, 38, 39, 43, 44, 54, 75, 351, 455], "archzf": [16, 17], "pacman": [16, 17], "init": [16, 33, 37, 39, 47, 68, 344, 448], "refresh": [16, 18, 19, 20, 22, 35, 36, 38, 43, 44, 103, 378, 483], "popul": [16, 54, 72, 92, 93, 94, 
108, 113, 118, 128, 185, 207, 223, 233, 251, 296, 348, 401, 452, 472, 473, 474, 488, 493, 498, 508], "gpgdir": 16, "lsign": 16, "ddf7db817396a49b2a2723f7403bd972f75d9d76": 16, "mirrorlist": 16, "franc": 16, "germani": 16, "sum7": 16, "eu": [16, 37, 39], "biocraft": 16, "india": 16, "themindsmaz": 16, "unit": [16, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 68, 77, 103, 146, 156, 161, 172, 173, 194, 195, 210, 217, 231, 237, 245, 273, 315, 344, 378, 419, 429, 448, 457, 483, 526, 536, 541], "zxcvfdsa": 16, "prefix": [16, 23, 36, 72, 74, 88, 131, 140, 176, 184, 197, 198, 206, 221, 222, 230, 249, 250, 258, 300, 350, 363, 404, 413, 452, 454, 468, 511, 520], "workaround": [16, 18, 19, 20, 22, 35, 44, 47, 48, 54], "ci": [16, 63, 169, 190, 213, 241, 340, 443], "noconfirm": 16, "mg": 16, "mandoc": 16, "efibootmgr": [16, 18, 19, 20, 22, 33, 35, 43, 44], "mkinitcpio": 16, "kernel_compatible_with_zf": 16, "si": 16, "awk": [16, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 210, 237, 315], "zst": 16, "physic": [16, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 54, 68, 71, 72, 74, 77, 78, 79, 81, 82, 86, 87, 137, 140, 146, 149, 150, 154, 159, 164, 173, 175, 176, 177, 182, 183, 185, 187, 195, 197, 198, 199, 204, 205, 207, 210, 217, 220, 221, 222, 223, 228, 229, 233, 237, 245, 248, 249, 250, 251, 256, 257, 298, 299, 306, 315, 316, 318, 319, 323, 328, 333, 334, 335, 344, 347, 348, 350, 353, 354, 356, 357, 361, 362, 410, 413, 419, 422, 423, 427, 432, 437, 448, 451, 452, 454, 457, 458, 459, 461, 462, 466, 467, 517, 520, 526, 529, 530, 534, 539, 544, 550, 552], "firmwar": [16, 18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 72, 130, 177, 199, 223, 251, 348, 403, 452, 510], "ucod": 16, "amd": 16, "synchronis": [16, 25], "systemctl": [16, 18, 19, 20, 22, 25, 28, 35, 36, 37, 38, 39, 43, 44, 103, 156, 161, 231, 273, 378, 429, 483, 536, 541], "timesyncd": [16, 18, 19, 25], "zgenhostid": [16, 25, 31, 43, 75, 82, 84, 202, 210, 226, 237, 254, 335, 351, 357, 359, 455, 462, 464], "hostid": [16, 25, 28, 29, 31, 43, 68, 71, 75, 82, 131, 195, 208, 210, 217, 220, 235, 237, 245, 248, 300, 335, 344, 347, 351, 357, 404, 448, 451, 455, 462, 511, 564], "en_u": [16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44], "utf": [16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 79, 185, 207, 233, 299, 354, 459], "gen": 16, "keymap": [16, 25, 31], "timezon": [16, 25, 31], "hostnam": [16, 18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44, 96, 99, 116, 128, 143, 185, 187, 207, 210, 233, 237, 296, 312, 401, 416, 476, 479, 496, 508, 523], "localtim": [16, 25, 31], "firstboot": [16, 25, 31], "utc": [16, 25, 31, 32], "testhost": [16, 25, 31], "passwd": [16, 18, 19, 20, 22, 25, 31, 35, 36, 38, 43, 44, 79, 128, 185, 207, 233, 299, 354, 401, 459, 508], "yourpassword": [16, 25, 31], "chpasswd": [16, 25, 31], "zfs_import_dir": 16, "reach": [17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 50, 51, 54, 72, 177, 199, 223, 251, 348, 452], "irc": [17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44], "libera": [17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44], "chat": [17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44], "howto": [17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44], "mention": [17, 18, 19, 20, 22, 29, 33, 35, 36, 37, 38, 39, 43, 44, 49, 54, 555], "ne9z": [17, 29], "licens": [17, 58, 59, 60, 88, 184, 206, 230, 258, 363, 468], "third": [17, 25, 26, 47, 79, 91, 102, 111, 115, 121, 183, 231, 233, 261, 272, 273, 281, 285, 291, 299, 354, 366, 377, 386, 390, 396, 
459, 471, 482, 491, 495, 501], "parti": [17, 25, 26, 47, 80, 237, 355, 460], "pip": [17, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "pip3": [17, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "doc": [17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44, 550, 551, 552, 553, 554, 555, 556, 557, 559, 560, 561, 562, 563], "txt": [17, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "bashrc": [17, 18, 19, 20, 22, 27, 35, 36, 37, 38, 39, 43, 44], "sensibl": [17, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "browser": [17, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "_build": [17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44], "index": [17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44, 49, 52, 105, 132, 177, 232, 236, 275, 301, 380, 405, 485, 512], "dual": [18, 19, 20, 22, 35, 36, 38, 43, 44], "backup": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 78, 79, 87, 109, 110, 111, 115, 185, 207, 233, 281, 285, 298, 299, 353, 354, 384, 385, 386, 390, 458, 459, 467, 489, 490, 491, 495, 551, 553, 555, 556, 558, 559], "64": [18, 19, 20, 22, 37, 39, 43, 44, 48, 58, 71, 72, 79, 80, 87, 89, 105, 119, 140, 176, 177, 185, 198, 199, 200, 205, 207, 220, 222, 223, 224, 229, 232, 233, 248, 250, 251, 252, 257, 259, 275, 289, 299, 347, 348, 354, 355, 362, 364, 380, 394, 413, 451, 452, 459, 460, 467, 469, 485, 499, 520], "w": [18, 19, 20, 43, 44, 48, 66, 87, 109, 110, 111, 115, 134, 135, 140, 145, 146, 152, 154, 156, 161, 176, 198, 210, 222, 233, 237, 250, 279, 280, 281, 285, 303, 304, 314, 315, 321, 323, 325, 330, 384, 385, 386, 390, 407, 408, 413, 418, 419, 425, 427, 429, 434, 446, 467, 489, 490, 491, 495, 514, 515, 520, 525, 526, 532, 534, 536, 541, 559], "gui": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "gnome": [18, 19, 20, 22, 25, 31, 35, 36, 37, 38, 39, 43, 44], "strongli": [18, 19, 20, 22, 32, 43, 44, 48, 54, 72, 77, 79, 81, 82, 185, 187, 207, 210, 233, 237, 299, 334, 348, 354, 356, 452, 457, 459, 461, 462, 555], "encourag": [18, 19, 20, 22, 32, 41, 43, 44, 45, 48, 54, 58, 71, 79, 185, 207, 220, 233, 248, 299, 347, 354, 451, 459], "kib": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 79, 80, 81, 111, 115, 348, 452, 459, 460, 461, 491, 495], "4kn": [18, 19, 20, 22, 35, 36, 38, 43, 44], "bio": [18, 19, 20, 22, 33, 35, 36, 38, 43, 44, 52, 72, 77, 457], "slowli": [18, 19, 20, 22, 35, 36, 38, 43, 44, 48, 109, 110, 489, 490], "dedupl": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 54, 78, 79, 81, 87, 91, 102, 109, 110, 111, 115, 121, 159, 166, 167, 183, 185, 187, 205, 207, 210, 229, 233, 237, 257, 261, 272, 279, 280, 281, 285, 291, 298, 299, 328, 334, 336, 353, 354, 356, 362, 366, 377, 384, 385, 386, 390, 396, 432, 439, 440, 458, 459, 461, 467, 471, 482, 489, 490, 491, 495, 501, 539, 546, 547], "massiv": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "perman": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 82, 164, 177, 187, 199, 210, 223, 237, 251, 333, 348, 357, 437, 452, 462, 544, 556], "revert": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 72, 87, 96, 99, 114, 116, 128, 149, 150, 185, 187, 207, 210, 229, 233, 237, 251, 257, 266, 269, 284, 286, 296, 318, 319, 348, 362, 371, 374, 389, 391, 401, 422, 423, 451, 452, 467, 476, 479, 494, 496, 508, 529, 530, 555, 559], "rlaager": [18, 19, 20, 22, 35, 36, 37, 38, 39], "luk": [18, 19, 20, 22, 28, 35, 36, 37, 38, 39, 43, 44], "With": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 66, 68, 72, 80, 111, 115, 223, 224, 251, 252, 344, 348, 355, 446, 448, 452, 460, 491, 495], "cours": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 559, 563], "happen": 
[18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54, 72, 109, 110, 199, 223, 251, 348, 452, 489, 490, 553, 555, 561, 562, 563], "natur": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 49, 105, 111, 115, 232, 275, 281, 285, 380, 386, 390, 485, 491, 495], "initrd": [18, 19, 20, 22, 23, 25, 28, 31, 35, 36, 37, 38, 39, 41, 43, 44, 75, 351, 455], "put": [18, 19, 20, 22, 27, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 72, 82, 149, 150, 172, 187, 194, 210, 237, 251, 318, 319, 334, 335, 348, 357, 422, 423, 452, 462, 529, 530], "sensit": [18, 19, 20, 36, 37, 38, 39, 43, 44, 49, 77, 79, 96, 99, 111, 115, 116, 128, 185, 207, 233, 281, 285, 296, 299, 354, 386, 390, 401, 457, 459, 476, 479, 491, 495, 496, 508], "passphras": [18, 19, 20, 22, 28, 35, 36, 37, 38, 39, 43, 44, 79, 91, 102, 109, 110, 121, 233, 261, 272, 279, 280, 291, 299, 354, 366, 377, 384, 385, 396, 459, 471, 482, 489, 490, 501, 559], "consol": [18, 19, 20, 22, 23, 35, 36, 37, 38, 39, 43, 44, 48, 71, 75, 82, 177, 187, 199, 210, 220, 223, 237, 248, 251, 335, 347, 351, 357, 451, 455, 462], "even": [18, 19, 20, 21, 22, 27, 29, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 54, 72, 78, 79, 81, 82, 87, 91, 102, 103, 104, 109, 110, 111, 113, 115, 121, 122, 129, 133, 134, 137, 141, 144, 154, 156, 177, 185, 187, 199, 207, 210, 220, 223, 231, 233, 237, 248, 251, 261, 272, 273, 274, 279, 280, 281, 283, 285, 291, 292, 297, 298, 299, 302, 303, 306, 310, 313, 323, 325, 334, 335, 348, 353, 354, 356, 357, 366, 377, 378, 379, 384, 385, 386, 388, 390, 396, 397, 402, 406, 407, 410, 414, 417, 427, 429, 452, 458, 459, 461, 462, 467, 471, 482, 483, 484, 489, 490, 491, 493, 495, 501, 502, 509, 513, 514, 517, 521, 524, 534, 536, 554, 555], "topologi": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "everyth": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 48, 72, 85, 111, 115, 177, 199, 223, 251, 281, 285, 348, 360, 386, 390, 452, 465, 491, 495], "sit": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "prompt": [18, 19, 20, 22, 28, 35, 36, 37, 38, 39, 43, 44, 54, 66, 75, 79, 91, 102, 103, 104, 109, 110, 111, 115, 117, 121, 122, 144, 158, 231, 233, 237, 261, 272, 273, 274, 279, 280, 281, 285, 291, 292, 299, 313, 327, 351, 354, 366, 377, 378, 379, 384, 385, 386, 390, 392, 396, 397, 417, 431, 446, 455, 459, 471, 482, 483, 484, 489, 490, 491, 495, 497, 501, 502, 524, 538], "usernam": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 66, 111, 115, 281, 285, 386, 390, 446, 491, 495], "join": [18, 19, 20, 22, 35, 36, 38, 43, 44], "termin": [18, 19, 20, 35, 36, 38, 43, 44, 66, 72, 88, 91, 102, 104, 109, 110, 117, 121, 122, 184, 187, 206, 207, 230, 233, 258, 261, 272, 274, 279, 280, 291, 292, 363, 366, 377, 379, 384, 385, 392, 396, 397, 446, 452, 468, 471, 482, 484, 489, 490, 497, 501, 502], "vi": [18, 19, 20, 22, 23, 35, 36, 37, 38, 39, 43, 44], "contrib": [18, 19, 20, 22, 23], "second": [18, 19, 20, 22, 33, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 51, 62, 65, 66, 68, 71, 72, 77, 79, 88, 94, 105, 111, 113, 115, 118, 128, 132, 146, 148, 153, 161, 163, 164, 173, 177, 184, 185, 186, 187, 192, 195, 199, 206, 207, 209, 210, 215, 217, 220, 223, 230, 231, 232, 233, 236, 237, 240, 243, 245, 248, 251, 258, 273, 275, 281, 283, 285, 296, 299, 301, 315, 317, 322, 328, 330, 332, 339, 342, 344, 347, 348, 354, 363, 380, 386, 388, 390, 401, 405, 419, 421, 426, 432, 434, 436, 437, 442, 445, 446, 448, 451, 452, 457, 459, 468, 474, 485, 491, 493, 495, 498, 508, 512, 526, 528, 533, 541, 543, 544, 555, 563], "hint": [18, 19, 20, 
22, 35, 36, 37, 38, 39, 43, 44, 79, 82, 185, 207, 210, 233, 237, 299, 335, 354, 357, 459, 462], "addr": [18, 19, 20, 22, 35, 36, 38, 43, 44], "scope": [18, 19, 20, 22, 35, 36, 38, 43, 44, 49, 128, 185, 207, 233, 296, 401, 508], "inet": [18, 19, 20, 22, 35, 36, 38, 43, 44], "offset": [18, 19, 20, 36, 38, 43, 44, 48, 65, 72, 80, 87, 140, 166, 167, 172, 176, 183, 192, 194, 198, 205, 215, 222, 223, 224, 229, 243, 250, 251, 252, 257, 342, 348, 355, 362, 413, 445, 452, 460, 467, 520, 546, 547], "previou": [18, 19, 20, 33, 36, 37, 38, 39, 41, 43, 44, 48, 72, 79, 91, 102, 105, 108, 121, 128, 140, 144, 149, 150, 176, 185, 187, 198, 199, 207, 210, 222, 223, 232, 233, 237, 250, 251, 261, 272, 275, 278, 284, 288, 291, 296, 299, 309, 313, 318, 319, 348, 354, 366, 377, 380, 383, 393, 396, 401, 413, 417, 422, 423, 451, 452, 459, 471, 482, 485, 488, 498, 501, 508, 520, 524, 529, 530, 559], "gset": [18, 19, 20, 36, 38, 43, 44], "fals": [18, 19, 20, 22, 29, 35, 36, 38, 43, 44, 54, 66, 105, 232, 237, 275, 334, 380, 446, 485], "debootstrap": [18, 19, 20, 22, 35, 36, 38], "gdisk": [18, 19, 20, 22, 35, 36, 38, 43, 44], "zfsutil": [18, 19, 20, 23, 36, 37, 38, 39, 40, 85, 181, 203, 227, 255, 360, 465], "sata_disk1": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "alias": [18, 19, 20, 22, 25, 31, 32, 35, 36, 38, 43, 44, 54, 74, 86, 101, 105, 175, 182, 197, 204, 221, 228, 232, 249, 256, 275, 350, 361, 380, 454, 466, 481, 485], "node": [18, 19, 20, 22, 35, 36, 38, 43, 44, 48, 72, 79, 86, 164, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 256, 299, 333, 348, 354, 361, 437, 452, 459, 466, 544], "sporad": [18, 19, 20, 22, 35, 36, 38, 43, 44], "especi": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 48, 49, 68, 72, 80, 173, 177, 195, 199, 217, 223, 245, 251, 252, 344, 348, 355, 448, 452, 460], "la": [18, 19, 20, 22, 35, 36, 38, 43, 44], "vda": [18, 19, 20, 22, 35, 36, 38, 43, 44], "around": [18, 19, 20, 21, 22, 33, 35, 36, 37, 38, 39, 43, 44, 49, 68, 72, 105, 173, 195, 217, 232, 245, 275, 344, 380, 448, 452, 485], "100m": [18, 19, 20, 35, 36, 38, 50, 54, 72, 156, 177, 199, 223, 251, 348, 429, 452, 536], "low": [18, 19, 20, 35, 36, 38, 48, 54, 56, 71, 79, 81, 185, 187, 207, 210, 220, 222, 233, 237, 248, 250, 299, 334, 347, 354, 356, 413, 451, 459, 461], "regener": [18, 19, 20, 35, 36, 38, 48, 140, 176, 198, 222, 250, 413, 520], "85m": [18, 19, 20, 35, 36, 38], "swapoff": [18, 19, 20, 36, 37, 38, 39], "previous": [18, 19, 20, 21, 22, 32, 35, 36, 38, 43, 44, 47, 48, 79, 81, 82, 91, 102, 111, 115, 121, 144, 185, 187, 207, 210, 233, 237, 261, 272, 281, 285, 291, 299, 313, 334, 335, 354, 356, 357, 366, 377, 386, 390, 396, 417, 451, 459, 461, 462, 471, 482, 491, 495, 501, 524], "cat": [18, 19, 20, 36, 37, 38, 39, 43, 44, 48, 66, 80, 355, 446, 460], "mdstat": [18, 19, 20, 36, 38, 43, 44], "stop": [18, 19, 20, 36, 37, 38, 39, 43, 44, 47, 48, 49, 63, 72, 80, 105, 126, 152, 156, 163, 169, 177, 187, 190, 199, 210, 213, 223, 232, 233, 237, 241, 251, 275, 295, 321, 325, 332, 340, 348, 355, 380, 400, 425, 429, 436, 443, 452, 460, 485, 506, 532, 536, 543], "md0": [18, 19, 20, 36, 38, 43, 44], "superblock": [18, 19, 20, 22, 35, 36, 38, 43, 44], "wipef": [18, 19, 37, 38, 39], "trim": [18, 19, 38, 54, 72, 82, 84, 91, 102, 121, 140, 145, 146, 159, 163, 164, 199, 223, 237, 251, 254, 261, 272, 291, 314, 315, 328, 332, 333, 335, 348, 357, 359, 366, 377, 396, 418, 419, 432, 436, 437, 452, 462, 464, 471, 482, 501, 525, 526, 539, 543, 544], "unmap": [18, 19, 38, 48], "sgdisk": [18, 19, 20, 22, 35, 36, 38, 43, 44], "zap": [18, 19, 20, 22, 
223, 229, 236, 237, 240, 245, 251, 257, 301, 328, 335, 339, 344, 348, 357, 362, 405, 432, 442, 448, 452, 457, 462, 467, 512, 539], "schedul": [46, 48, 50, 52, 59, 60, 71, 72, 77, 155, 177, 199, 220, 223, 237, 248, 251, 324, 347, 348, 428, 451, 452, 457, 535], "threshold": [46, 48, 49, 68, 71, 72, 77, 79, 173, 177, 187, 195, 199, 210, 217, 220, 223, 233, 237, 245, 248, 251, 299, 344, 347, 348, 354, 448, 451, 452, 457, 459], "linearli": [46, 48, 49, 72, 177, 199, 223, 251, 348, 452], "busi": [46, 48, 72, 79, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "stai": [46, 48, 72, 111, 115, 177, 199, 223, 251, 281, 285, 348, 386, 390, 452, 491, 495], "slope": [46, 72, 177, 199, 223, 251, 348, 452], "rate": [46, 47, 48, 49, 50, 54, 71, 72, 161, 173, 177, 195, 199, 217, 220, 223, 237, 245, 248, 251, 330, 347, 348, 434, 451, 452, 541], "incom": [46, 50, 72, 177, 199, 223, 251, 348, 452], "backend": [46, 50, 72, 177, 199, 223, 251, 348, 452], "silent": [47, 81, 156, 187, 210, 237, 325, 356, 429, 461, 536], "industri": 47, "superior": [47, 49], "bui": 47, "adher": 47, "potenti": [47, 48, 71, 72, 78, 80, 81, 87, 141, 144, 177, 185, 187, 199, 200, 207, 210, 223, 224, 233, 237, 251, 252, 257, 298, 310, 313, 334, 348, 353, 355, 356, 362, 414, 417, 452, 458, 460, 461, 467, 521, 524, 557], "reliabl": [47, 54, 58, 72, 452], "serv": [47, 48, 49, 50, 72, 81, 177, 187, 199, 210, 223, 237, 251, 334, 348, 356, 452, 461], "handicap": 47, "compet": [47, 48, 49], "microprocessor": 47, "complex": [47, 54], "errata": [47, 564], "modern": [47, 48, 49, 72, 176, 177, 198, 199, 223, 251, 348, 452], "quasi": 47, "chip": 47, "bundl": [47, 49, 79, 354, 459], "interact": [47, 72, 91, 102, 104, 117, 121, 122, 233, 251, 261, 272, 274, 291, 292, 348, 366, 377, 379, 392, 396, 397, 452, 471, 482, 484, 497, 501, 502], "regist": [47, 69, 80, 200, 218, 224, 246, 252, 345, 355, 449, 460], "flip": [47, 48, 54, 72, 132, 177, 199, 223, 236, 251, 301, 348, 405, 452, 512, 557], "fairli": [47, 48, 72, 177, 199, 223, 251, 348, 452], "dramat": 47, "consequ": [47, 49, 79, 184, 185, 206, 207, 230, 233, 299, 354, 459], "techniqu": 47, "ordinari": 47, "radiat": 47, "randomli": [47, 48, 54, 72, 79, 109, 110, 111, 115, 131, 172, 194, 208, 223, 233, 235, 251, 279, 280, 281, 285, 299, 300, 348, 354, 384, 385, 386, 390, 404, 452, 459, 489, 490, 491, 495, 511], "undefin": [47, 68, 79, 93, 185, 195, 207, 217, 233, 245, 263, 299, 344, 354, 368, 448, 459, 473], "four": [47, 48, 49, 63, 72, 111, 115, 169, 172, 183, 190, 194, 213, 220, 241, 281, 285, 340, 348, 386, 390, 443, 452, 491, 495], "runtim": [47, 65, 72, 192, 199, 215, 223, 243, 251, 342, 348, 445, 452], "alter": [47, 49, 91, 102, 111, 115, 121, 233, 261, 272, 281, 285, 291, 366, 377, 386, 390, 396, 471, 482, 491, 495, 501], "routin": 47, "reload": [47, 48, 72, 103, 231, 251, 273, 348, 378, 452, 483], "realiz": [47, 54], "unimport": [47, 178, 185, 200, 224, 252], "poor": [47, 49], "Such": [47, 48, 94, 140, 176, 185, 198, 207, 222, 233, 250, 264, 369, 413, 474, 520], "extraordinarili": 47, "rare": [47, 48, 49, 54, 71, 72, 80, 177, 178, 199, 200, 220, 223, 224, 248, 251, 252, 347, 348, 355, 451, 452, 460], "interpos": 47, "multipli": [47, 48, 49, 72, 79, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "unrecogn": [47, 80, 355, 460], "smart": 47, "passthrough": [47, 79, 185, 207, 233, 299, 354, 459], "erc": 47, "unreli": [47, 54], "bandwidth": [47, 48, 72, 87, 146, 159, 164, 177, 183, 199, 205, 210, 223, 229, 237, 251, 257, 333, 348, 362, 437, 452, 467, 526, 539, 544], "pci": 
[47, 54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "express": [47, 48, 72, 77, 79, 132, 161, 172, 177, 185, 186, 194, 199, 207, 208, 209, 223, 233, 235, 236, 237, 251, 299, 300, 301, 330, 348, 354, 405, 434, 452, 457, 459, 512, 541], "unnecessari": [47, 48, 49, 72, 80, 178, 200, 223, 224, 251, 252, 348, 355, 452, 460], "marc": 47, "bevand": 47, "he": 47, "opportun": [47, 49], "reconstruct": [47, 48, 72, 80, 81, 87, 134, 154, 223, 229, 251, 252, 257, 303, 323, 348, 355, 356, 362, 407, 427, 452, 460, 461, 467, 514, 534], "necessarili": [47, 79, 81, 87, 91, 102, 121, 159, 185, 207, 233, 237, 257, 261, 272, 291, 299, 328, 354, 362, 366, 377, 396, 432, 459, 461, 467, 471, 482, 501, 539], "overhead": [47, 48, 49, 72, 78, 220, 251, 348, 452, 458], "partial": [47, 48, 49, 72, 79, 105, 109, 110, 111, 115, 153, 177, 199, 207, 223, 232, 233, 237, 251, 275, 279, 280, 281, 285, 299, 322, 348, 354, 380, 384, 385, 386, 390, 426, 452, 459, 485, 489, 490, 491, 495, 533], "certainti": 47, "suffer": [47, 48, 49, 51, 72, 81, 177, 199, 223, 237, 251, 334, 348, 356, 452, 461, 557], "obtain": [47, 49, 79, 91, 102, 121, 185, 207, 233, 261, 272, 291, 299, 354, 366, 377, 396, 459, 471, 482, 501], "misreport": [47, 49], "transit": [47, 72, 80, 200, 224, 252, 348, 355, 452, 460], "xp": [47, 49], "eol": 47, "misalign": 47, "solv": [47, 48, 49], "model": [47, 49, 54, 77, 146, 159, 164, 210, 237, 333, 437, 457, 526, 539, 544], "manufactur": 47, "mitig": [47, 184, 206, 230, 258], "manner": [47, 48, 66, 79, 88, 92, 93, 95, 105, 109, 110, 113, 133, 134, 137, 154, 184, 185, 187, 206, 207, 210, 230, 232, 233, 237, 258, 262, 263, 265, 275, 279, 280, 283, 299, 302, 303, 306, 323, 354, 363, 367, 368, 370, 380, 384, 385, 388, 406, 407, 410, 427, 446, 459, 468, 472, 473, 475, 485, 489, 490, 493, 513, 514, 517, 534], "ineffici": 47, "flight": 47, "weaker": 47, "embed": [47, 87, 91, 102, 121, 205, 229, 233, 257, 261, 272, 291, 362, 366, 377, 396, 467, 471, 482, 501], "lower": [47, 48, 49, 51, 54, 63, 71, 72, 79, 82, 87, 100, 120, 169, 177, 185, 190, 199, 213, 220, 223, 237, 241, 248, 251, 257, 270, 290, 299, 335, 340, 347, 348, 354, 357, 362, 375, 395, 443, 451, 452, 459, 462, 467, 480, 500], "inspect": [47, 63, 87, 143, 169, 190, 213, 231, 241, 273, 340, 416, 443, 467, 523], "anyon": [47, 54, 89, 119, 128, 185, 207, 233, 296, 401, 469, 499, 508], "expos": [47, 48, 72, 78, 79, 80, 178, 185, 200, 207, 224, 233, 251, 252, 298, 299, 348, 353, 354, 355, 452, 458, 459, 460], "di": [47, 62, 442], "behav": [47, 48, 49, 81, 87, 128, 183, 185, 205, 207, 229, 233, 257, 296, 356, 362, 401, 461, 467, 508], "vendor": [47, 49, 54, 146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "inclin": 47, "hba": [47, 54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "histor": [47, 48, 63, 72, 169, 177, 190, 199, 213, 223, 241, 251, 340, 348, 443, 452], "2009": [47, 96, 99, 116, 128, 173, 185, 195, 207, 217, 233, 296, 401, 476, 479, 496, 508, 555], "4096": [47, 48, 49, 72, 82, 172, 177, 187, 194, 199, 210, 223, 237, 251, 335, 348, 357, 452, 462], "2tb": 47, "market": [47, 49, 111, 115, 281, 285, 386, 390, 491, 495], "2013": [47, 171, 172, 177, 178, 179, 181, 184, 185, 186, 193, 194, 201, 203, 206, 209, 216, 225, 227, 230, 236], "believ": 47, "jumper": 47, "proper": [47, 49, 54, 75, 351, 455], "63": [47, 48, 210, 237], "themselv": [47, 48, 91, 102, 111, 115, 121, 233, 261, 272, 281, 285, 291, 366, 377, 386, 390, 396, 471, 482, 491, 495, 501], "behind": 47, "advers": [47, 79, 185, 207, 233, 299, 354, 459], 
"neg": [47, 48, 49, 62, 72, 79, 105, 220, 223, 232, 240, 248, 251, 275, 299, 339, 348, 354, 380, 442, 452, 459, 485], "cheap": [47, 49], "notabl": [47, 48], "western": 47, "polar": 47, "region": [47, 48, 55, 72, 80, 132, 145, 164, 172, 177, 186, 194, 199, 209, 223, 224, 236, 237, 251, 252, 301, 314, 333, 348, 355, 405, 418, 437, 452, 460, 512, 525, 544], "magnet": [47, 49], "surfac": 47, "pose": 47, "imperfect": 47, "vibrat": 47, "compos": 47, "respond": 47, "retri": [47, 48, 72, 452], "conclud": [47, 55], "substanti": [47, 82, 237, 335, 357, 462], "stall": [47, 48, 72, 177, 199, 223, 251, 348, 452], "tler": 47, "seagat": [47, 146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "hitachi": 47, "samsung": [47, 54], "permit": [47, 48, 105, 111, 115, 232, 275, 281, 285, 380, 386, 390, 485, 491, 495], "willing": [47, 49], "spend": [47, 48, 72, 199, 223, 251, 348, 452], "arbitrarili": [47, 81, 461], "minut": [47, 48, 68, 128, 164, 173, 195, 217, 245, 251, 344, 448, 508, 544], "advis": [47, 49, 77, 457], "seek": [47, 48, 233], "sacrific": 47, "densiti": [47, 48], "factor": [47, 48, 49, 54, 72, 79, 82, 177, 185, 199, 207, 223, 233, 237, 251, 299, 335, 348, 354, 357, 452, 459, 462], "counterpart": [47, 105, 232, 275, 380, 485], "15k": 47, "millisecond": [47, 48, 50, 72, 132, 159, 164, 176, 177, 198, 199, 209, 223, 236, 237, 251, 301, 328, 333, 348, 405, 432, 437, 452, 512, 539, 544], "averag": [47, 48, 49, 72, 132, 146, 177, 199, 209, 210, 223, 236, 237, 251, 301, 315, 348, 405, 419, 452, 512, 526], "presum": 47, "awai": 47, "Being": 47, "slower": [47, 48, 49, 71, 72, 80, 199, 220, 223, 248, 251, 252, 347, 348, 355, 451, 452, 460], "7200": [47, 72, 223, 251, 348, 452], "empir": [47, 48], "measur": [47, 48, 65, 72, 80, 88, 140, 144, 146, 176, 177, 184, 187, 192, 198, 199, 206, 210, 215, 222, 223, 230, 237, 243, 250, 251, 258, 313, 315, 342, 348, 355, 363, 413, 417, 419, 445, 452, 460, 468, 520, 524, 526], "5400": 47, "zil": [47, 49, 54, 72, 80, 81, 87, 160, 164, 177, 183, 187, 199, 205, 210, 223, 229, 237, 251, 257, 329, 333, 334, 348, 356, 362, 433, 437, 452, 460, 461, 467, 540, 544], "l2arc": [47, 49, 62, 72, 74, 79, 81, 82, 87, 147, 175, 177, 185, 197, 199, 207, 221, 223, 233, 240, 249, 251, 257, 299, 316, 334, 335, 339, 348, 350, 354, 356, 357, 362, 420, 442, 452, 454, 459, 461, 462, 467, 527], "slog": [47, 49, 54, 72, 199, 223, 251, 348, 452], "higher": [47, 48, 49, 51, 68, 72, 79, 80, 173, 177, 178, 185, 195, 199, 200, 207, 217, 223, 224, 233, 245, 251, 252, 299, 344, 348, 354, 355, 448, 452, 459, 460], "queue": [47, 48, 51, 71, 72, 103, 126, 146, 177, 199, 210, 220, 223, 237, 248, 251, 295, 315, 347, 348, 378, 400, 419, 451, 452, 483, 506, 526], "reorder": 47, "pata": 47, "object": [47, 48, 57, 71, 72, 78, 79, 80, 87, 89, 119, 129, 132, 140, 166, 167, 176, 177, 178, 183, 185, 186, 198, 199, 200, 205, 207, 209, 220, 222, 223, 224, 229, 233, 236, 248, 250, 251, 252, 257, 259, 289, 297, 298, 299, 301, 347, 348, 353, 354, 355, 362, 364, 394, 402, 405, 413, 451, 452, 458, 459, 460, 467, 469, 499, 509, 512, 520, 546, 547], "metaslab": [47, 72, 80, 87, 132, 177, 183, 186, 199, 205, 209, 223, 229, 236, 251, 252, 257, 301, 348, 355, 362, 405, 452, 460, 467, 512], "metastab": 47, "year": [47, 72, 80, 355, 460], "2003": 47, "2004": 47, "emul": [47, 54, 131, 208, 235, 300, 404, 511], "hdparm": [47, 49], "camcontrol": 47, "domin": 47, "focu": [47, 48], "2017": [47, 199, 205, 208, 220, 235], "predominantli": 47, "primarili": [47, 72, 75, 81, 187, 210, 237, 251, 334, 348, 351, 356, 452, 455, 461], "electr": 
47, "buse": 47, "t10": 47, "dif": 47, "crc": 47, "rel_perf": 47, "lba": [47, 48, 49, 72, 80, 177, 199, 223, 251, 252, 348, 355, 452, 460], "smartctl": [47, 146, 210, 237, 315, 419, 526], "device_namespac": 47, "nvme1n1": [47, 49], "plu": [47, 79, 233, 299, 354, 459], "fmt": 47, "field": [47, 48, 49, 62, 77, 80, 87, 96, 97, 99, 101, 107, 116, 125, 140, 142, 146, 148, 157, 163, 185, 187, 207, 210, 222, 233, 237, 240, 250, 252, 257, 266, 267, 269, 271, 277, 286, 294, 309, 311, 315, 317, 326, 332, 339, 355, 362, 371, 372, 374, 376, 382, 391, 399, 413, 415, 419, 421, 430, 436, 442, 457, 460, 467, 476, 477, 479, 481, 487, 496, 505, 520, 522, 526, 528, 537, 543], "tradition": [47, 79, 185, 207, 233, 299, 354, 459], "vulner": [47, 49, 91, 102, 121, 233, 261, 272, 291, 366, 377, 396, 471, 482, 501], "simultan": [47, 48, 67, 72, 137, 156, 184, 206, 230, 237, 258, 306, 410, 447, 452, 517, 536], "conclus": [47, 66, 446], "brick": 47, "vanish": 47, "literatur": 47, "2015": [47, 57, 176, 198], "claim": [47, 82, 132, 140, 176, 186, 198, 209, 222, 236, 250, 301, 335, 357, 405, 413, 462, 512, 520], "robust": [47, 105, 137, 232, 237, 275, 306, 380, 410, 485, 517], "kingston": 47, "concept": [47, 52, 55, 59, 60, 72, 78, 223, 251, 298, 348, 353, 452, 458], "sole": [47, 79, 81, 140, 176, 185, 198, 207, 222, 233, 237, 250, 299, 334, 354, 356, 413, 459, 461, 520], "unflush": [47, 72, 251, 348, 452], "beyond": [47, 48, 49, 54, 72, 82, 156, 166, 167, 177, 199, 223, 237, 251, 325, 335, 348, 357, 429, 452, 462, 536, 546, 547], "hurt": [47, 72, 199, 223, 251, 348, 452], "laptop": [47, 49], "datacent": 47, "ipmi": 47, "experienc": [47, 550, 551, 552, 553, 556, 557, 561, 562, 563], "exhaust": [47, 48, 65, 72, 78, 105, 185, 192, 199, 207, 215, 223, 232, 233, 243, 251, 275, 298, 342, 348, 353, 380, 445, 452, 458, 485], "750": 47, "p3500": 47, "p3600": [47, 54], "p3608": 47, "p3700": [47, 54], "micron": 47, "7300": 47, "7400": 47, "7450": 47, "max": [47, 48, 50, 51, 54, 65, 68, 72, 128, 164, 177, 199, 223, 251, 342, 344, 348, 445, 448, 452, 508, 544], "pm963": 47, "pm1725": 47, "pm1725a": 47, "xs1715": 47, "toshiba": 47, "zd6300": 47, "nytro": 47, "5000": [47, 49, 71, 348, 452], "xp1920le30002": 47, "inexpens": [47, 48, 71, 220, 248, 347, 451], "22110": 47, "mlc": 47, "mostli": [47, 48, 49, 72, 81, 187, 210, 223, 237, 251, 334, 348, 356, 451, 452, 461], "airflow": 47, "suffici": [47, 49, 72, 81, 87, 141, 144, 187, 205, 210, 229, 237, 251, 257, 310, 313, 334, 348, 356, 362, 414, 417, 452, 461, 467, 521, 524, 550, 552, 563], "fan": 47, "overheat": 47, "thermal": 47, "latenc": [47, 48, 51, 72, 79, 81, 132, 146, 165, 177, 185, 187, 199, 207, 209, 210, 223, 233, 236, 237, 251, 299, 301, 315, 334, 348, 354, 356, 405, 419, 438, 452, 459, 461, 512, 526, 545], "hundr": [47, 185], "hotter": 47, "namespac": [47, 49, 72, 77, 78, 79, 82, 89, 93, 98, 112, 119, 123, 127, 128, 185, 207, 233, 259, 263, 268, 282, 289, 296, 298, 299, 353, 354, 364, 368, 373, 387, 394, 401, 452, 457, 458, 459, 462, 469, 473, 478, 492, 499, 503, 507, 508], "eras": [47, 91, 102, 121, 161, 237, 261, 272, 291, 330, 366, 377, 396, 434, 471, 482, 501, 541], "passiv": 47, "heatsink": 47, "sticker": 47, "closest": 47, "capacitor": 47, "undesir": 47, "overh": 47, "allevi": 47, "gigabyt": [47, 48, 96, 99, 116, 185, 207, 233, 266, 269, 286, 371, 374, 391, 476, 479, 496], "cool": 47, "76": 47, "degre": [47, 50, 72, 177, 199, 223, 251, 348, 452], "celsiu": 47, "74": [47, 87, 183, 205, 229, 257, 362, 467], "evalu": [47, 89, 101, 119, 185, 207, 233, 259, 271, 289, 364, 
376, 394, 469, 481, 499], "temperatur": 47, "overcool": 47, "pm1633": 47, "pm1633a": 47, "sm1625": 47, "pm853t": 47, "px05shb": 47, "px04shb": 47, "px04shq": 47, "px05slb": 47, "px04slb": 47, "px04slq": 47, "px05smb": 47, "px04smb": 47, "px04smq": 47, "px05srb": 47, "px04srb": 47, "px04srq": 47, "px05svb": 47, "px04svb": 47, "px04svq": 47, "crucial": [47, 49], "mx100": 47, "mx200": 47, "mx300": 47, "m500": 47, "m550": 47, "m600": 47, "320": [47, 49], "335": [47, 223, 251], "710": 47, "730": 47, "s3500": 47, "s3510": 47, "s3610": [47, 54], "s3700": [47, 54], "s3710": [47, 54], "dc500r": 47, "dc500m": 47, "5210": 47, "ion": 47, "qlc": 47, "pm863": 47, "pm863a": 47, "sm843t": 47, "sm843": 47, "sm863": [47, 54], "sm863a": 47, "845dc": 47, "evo": 47, "hk4e": 47, "hk3e2": 47, "hk4r": 47, "hk3r2": 47, "hk3r": 47, "volunt": 47, "mainli": [47, 49, 82, 187, 210, 237, 335, 357, 462], "richard": 47, "yao": 47, "trustworthi": 47, "neutral": 47, "perceiv": 47, "bia": [47, 48], "toward": [47, 49, 54, 58, 72, 251, 348, 452], "confirm": [47, 109, 110, 233, 279, 280, 384, 385, 489, 490], "presenc": [47, 48, 66, 87, 164, 257, 333, 362, 437, 446, 467, 544, 557], "adequ": 47, "whose": [47, 48, 49, 51, 72, 78, 79, 80, 87, 92, 105, 109, 110, 128, 137, 140, 177, 178, 185, 187, 199, 200, 205, 207, 210, 222, 223, 224, 229, 232, 233, 237, 250, 251, 252, 257, 275, 279, 280, 296, 298, 299, 306, 348, 353, 354, 355, 362, 380, 384, 385, 401, 410, 413, 452, 458, 459, 460, 467, 472, 485, 489, 490, 508, 517, 520], "remain": [47, 48, 49, 50, 66, 72, 77, 79, 80, 81, 82, 88, 94, 105, 109, 110, 113, 118, 128, 156, 158, 163, 177, 178, 184, 185, 187, 199, 200, 206, 207, 210, 223, 224, 230, 232, 233, 237, 251, 252, 258, 275, 279, 280, 296, 299, 325, 327, 332, 334, 335, 348, 354, 355, 356, 357, 363, 380, 384, 385, 401, 429, 431, 436, 446, 451, 452, 457, 459, 460, 461, 462, 468, 474, 485, 489, 490, 493, 498, 508, 536, 538, 543], "unlist": 47, "pictur": 47, "anandtech": [47, 49], "sheet": 47, "accept": [47, 48, 54, 72, 81, 96, 99, 111, 115, 116, 142, 157, 177, 185, 187, 199, 207, 210, 223, 233, 237, 251, 266, 269, 281, 285, 286, 311, 326, 334, 348, 356, 371, 374, 386, 390, 391, 415, 430, 452, 461, 476, 479, 491, 495, 496, 522, 537, 557], "honor": [47, 48, 54, 132, 186, 209, 236, 301, 405, 512], "misstat": 47, "realiti": 47, "honest": 47, "smallest": [47, 49], "incorrectli": [47, 48, 54, 72, 223, 251, 348, 452], "8192": [47, 49, 77, 79, 82, 173, 177, 199, 207, 223, 233, 251, 299, 354, 457, 459, 462], "gbit": 47, "16384": [47, 71, 72, 173, 199, 223, 251, 347, 348, 451, 452], "punch": [47, 82, 237, 335, 357, 462], "conform": [47, 128, 185, 207, 233, 296, 401, 508], "drain": [47, 126, 295, 400, 506], "difficult": [47, 48, 54, 82, 237, 335, 357, 462], "distinguish": [47, 77, 79, 80, 81, 82, 178, 185, 187, 200, 207, 210, 224, 233, 237, 252, 299, 334, 354, 355, 356, 457, 459, 460, 461, 462], "endur": [47, 54], "circuitri": 47, "p4800x": 47, "p4801x": 47, "p1600x": 47, "4gb": [47, 49], "plug": [47, 48], "receptacl": 47, "wire": [47, 72, 251, 348, 452], "voltag": 47, "brownout": 47, "condition": 47, "outright": 47, "exhibit": [47, 48, 159, 187, 210, 237, 328, 432, 539], "undocu": 47, "suppos": [47, 192, 215, 243], "deassert": 47, "deviat": 47, "brown": 47, "strict": [47, 48, 72, 452], "toler": [47, 81, 134, 187, 210, 237, 334, 356, 461, 550, 552], "transfer": [47, 79, 109, 110, 217, 233, 245, 279, 280, 299, 354, 384, 385, 459, 489, 490], "taken": [47, 48, 54, 72, 74, 78, 79, 81, 82, 94, 111, 115, 175, 185, 187, 197, 199, 207, 210, 221, 223, 
233, 237, 249, 251, 264, 281, 285, 288, 299, 334, 335, 348, 350, 354, 356, 357, 369, 386, 390, 393, 452, 454, 458, 459, 461, 462, 474, 491, 495, 498, 549, 551, 553, 554, 555, 556, 559, 560, 561, 562, 563], "suppli": [47, 91, 102, 111, 115, 121, 207, 233, 261, 272, 281, 285, 291, 366, 377, 386, 390, 396, 471, 482, 491, 495, 501], "atx": 47, "invers": [47, 50, 72, 164, 177, 199, 223, 251, 348, 452, 544], "holdup": 47, "ag": [47, 48, 72, 220, 223, 248, 251, 348, 452], "equip": 47, "substandard": 47, "doubt": [47, 54], "hybrid": 47, "94": 47, "acid": 47, "outag": [47, 80, 178, 200, 224, 252, 355, 460], "vari": [47, 48, 72, 82, 172, 194, 199, 223, 237, 251, 335, 348, 357, 452, 462], "footnot": [47, 49], "lkcl": 47, "ssd_analysi": 47, "usenix": 47, "confer": 47, "fast13": 47, "final80": 47, "pdf": [47, 63, 169, 190, 213, 241, 340, 443], "engin": [47, 184, 206, 230, 258], "nordeu": 47, "apc": 47, "fa158934": 47, "sysf": [48, 130, 146, 164, 210, 237, 315, 403, 419, 437, 510, 526, 544], "newvalu": 48, "xzy": 48, "problem_descript": 48, "your_nam": 48, "individu": [48, 66, 71, 72, 81, 82, 83, 87, 114, 137, 140, 145, 146, 148, 164, 176, 179, 185, 187, 198, 199, 201, 207, 210, 220, 222, 223, 225, 229, 233, 237, 240, 248, 250, 251, 253, 257, 284, 306, 314, 315, 317, 333, 334, 335, 347, 348, 356, 357, 358, 362, 389, 410, 413, 418, 419, 421, 437, 446, 451, 452, 461, 462, 463, 467, 494, 517, 520, 525, 526, 528, 544], "icp": 48, "ala": 48, "quick": [48, 54], "captur": 48, "wisdom": 48, "practition": 48, "synopsi": [48, 62, 63, 65, 66, 67, 68, 69, 75, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 169, 171, 172, 173, 179, 181, 182, 183, 184, 185, 186, 187, 188, 190, 192, 193, 194, 195, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 215, 216, 217, 218, 225, 227, 228, 229, 230, 231, 232, 233, 235, 236, 237, 238, 240, 241, 243, 244, 245, 246, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 336, 337, 339, 340, 342, 343, 344, 345, 351, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 447, 448, 449, 455, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547], "modinfo": 48, "resist": 48, "hierarch": 48, 
"represent": [48, 111, 115, 146, 148, 159, 163, 185, 187, 207, 210, 233, 237, 281, 285, 315, 317, 328, 332, 386, 390, 419, 421, 432, 436, 491, 495, 526, 528, 539, 543], "assist": [48, 49], "row": [48, 72, 101, 185, 207, 233, 251, 271, 348, 376, 452, 481], "keyword": [48, 54, 74, 81, 86, 89, 105, 119, 175, 182, 185, 187, 197, 204, 207, 210, 221, 228, 232, 233, 237, 249, 256, 259, 275, 289, 334, 350, 356, 361, 364, 380, 394, 454, 461, 466, 469, 485, 499], "suspect": [48, 72, 177, 199, 223, 251, 348, 452], "boolean": [48, 105, 232, 275, 380, 485], "birth": [48, 80, 178, 200, 224, 252, 355, 460], "tbd": 48, "elig": [48, 72, 145, 161, 164, 177, 199, 223, 237, 251, 314, 330, 333, 348, 418, 434, 437, 452, 525, 541, 544], "turbo": [48, 72, 128, 177, 185, 199, 207, 223, 233, 251, 296, 348, 401, 452, 508], "warm": [48, 72, 199, 223, 251, 348, 452], "cold": [48, 72, 199, 223, 251, 348, 452], "interv": [48, 51, 62, 71, 72, 78, 146, 148, 159, 163, 177, 185, 187, 199, 207, 210, 223, 233, 237, 240, 251, 298, 315, 317, 328, 332, 339, 348, 353, 419, 421, 432, 436, 442, 452, 458, 526, 528, 539, 543], "aggress": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "feed": [48, 72, 177, 199, 223, 251, 348, 452], "wake": 48, "uint64": [48, 105, 222, 232, 250, 275, 380, 413, 485], "1000": [48, 72, 79, 128, 176, 177, 185, 198, 199, 207, 223, 233, 251, 296, 299, 348, 354, 401, 452, 459, 508], "200": [48, 72, 80, 87, 177, 178, 183, 199, 200, 205, 223, 224, 229, 251, 252, 257, 348, 355, 362, 452, 460, 467], "readonli": [48, 79, 80, 82, 89, 96, 99, 103, 105, 116, 119, 128, 144, 178, 185, 187, 200, 207, 210, 224, 231, 232, 233, 237, 252, 259, 273, 275, 289, 296, 299, 313, 335, 354, 355, 357, 364, 378, 380, 394, 401, 417, 459, 460, 462, 469, 476, 479, 483, 485, 496, 499, 508, 524], "onto": 48, "uint64_max": [48, 72, 452], "cacheabl": [48, 72, 81, 199, 223, 251, 334, 348, 356, 452, 461], "overal": [48, 49, 51, 72, 87, 177, 183, 199, 205, 223, 229, 251, 257, 348, 362, 452, 467], "l2a": 48, "rc_headroom_boost": 48, "percent": [48, 49, 66, 72, 77, 94, 177, 185, 199, 207, 223, 233, 251, 264, 348, 369, 446, 452, 457, 474], "headroom": 48, "boost": [48, 80, 200, 224, 252, 355, 460], "v0": [48, 59, 60, 61], "evict": [48, 49, 62, 72, 177, 199, 223, 240, 251, 339, 348, 442, 452], "irrationali": [48, 251], "enorm": 48, "int": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "33": [48, 72, 148, 164, 187, 210, 237, 251, 333, 348, 437, 452, 528, 544], "v2": [48, 49, 59, 60, 61, 80, 224, 252, 355, 460], "mfu": [48, 62, 72, 240, 251, 339, 348, 442, 452], "mru": [48, 62, 72, 240, 251, 339, 348, 442, 452], "dai": 48, "antiqu": 48, "073": [48, 223, 251], "741": [48, 223, 251], "824": [48, 223, 251], "ahead": [48, 54, 72, 223, 251, 348, 452], "characterist": [48, 78, 79, 81, 82, 185, 187, 207, 210, 233, 237, 298, 299, 334, 335, 353, 354, 356, 357, 458, 459, 461, 462], "accommod": [48, 50, 71, 72, 79, 108, 177, 185, 199, 207, 220, 223, 233, 248, 251, 278, 299, 347, 348, 354, 383, 451, 452, 459, 488], "64mb": [48, 251, 348], "effeci": 48, "ulong": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "388": [48, 177, 199, 223, 251], "608": [48, 177, 199, 223, 251], "sec": [48, 49, 79, 173, 185, 195, 199, 207, 217, 223, 233, 245, 251, 299, 354, 459], "granular": [48, 72, 177, 199, 223, 251, 348, 452], "nomin": 48, "roughli": [48, 49, 50, 72, 132, 177, 199, 209, 223, 236, 251, 301, 348, 405, 452, 512], "monitor": [48, 88, 103, 105, 133, 134, 146, 152, 164, 184, 187, 206, 210, 230, 231, 232, 237, 258, 273, 
275, 321, 333, 363, 378, 380, 425, 437, 468, 483, 485, 513, 526, 532, 544, 557], "contigu": [48, 71, 72, 87, 183, 205, 220, 223, 229, 248, 251, 257, 347, 348, 362, 451, 452, 467], "decreas": [48, 51, 54, 71, 72, 79, 82, 101, 164, 177, 185, 187, 199, 207, 210, 220, 223, 233, 237, 248, 251, 271, 299, 333, 335, 347, 348, 354, 357, 376, 437, 451, 452, 459, 462, 481, 544, 555], "524": [48, 177, 199, 223, 251], "288": [48, 177, 199, 223, 251], "bias": [48, 49, 72, 177, 199, 223, 251, 348, 452], "spread": 48, "favor": [48, 49, 63, 72, 169, 190, 213, 241, 340, 443, 452], "largest": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "segment": [48, 72, 199, 223, 251, 348, 452], "_metaslab_segment_weight_en": 48, "bucket": [48, 72, 87, 165, 183, 199, 205, 223, 229, 251, 257, 348, 362, 438, 452, 467, 545], "plenti": 48, "freed": [48, 49, 71, 72, 79, 80, 81, 82, 177, 185, 199, 207, 220, 223, 224, 233, 237, 248, 251, 252, 299, 334, 335, 347, 348, 354, 355, 356, 357, 451, 452, 459, 460, 461, 462], "penalti": [48, 49, 79, 185, 207, 233, 299, 354, 459], "metric": [48, 49, 72, 79, 165, 177, 199, 207, 223, 233, 251, 299, 348, 354, 438, 452, 459, 545], "weight": [48, 72, 78, 177, 185, 199, 207, 223, 233, 251, 298, 348, 353, 452, 458], "meta": [48, 72, 87, 177, 183, 199, 205, 223, 229, 251, 257, 348, 362, 452, 467], "lab_fragmentation_factor_en": 48, "preload": [48, 72, 177, 199, 223, 251, 348, 452], "uniform": 48, "constant": [48, 72, 177, 199, 223, 251, 348, 452], "angular": [48, 72, 177, 199, 223, 251, 348, 452], "veloc": [48, 72, 177, 199, 223, 251, 348, 452], "outer": 48, "record": [48, 55, 72, 79, 80, 83, 87, 105, 111, 115, 131, 132, 143, 166, 167, 178, 179, 183, 185, 186, 187, 199, 200, 201, 205, 207, 208, 209, 210, 223, 224, 225, 229, 232, 233, 235, 236, 237, 251, 252, 253, 257, 275, 281, 285, 299, 300, 301, 312, 336, 348, 354, 355, 358, 362, 380, 386, 390, 404, 405, 416, 439, 440, 452, 459, 460, 463, 467, 485, 491, 495, 511, 512, 523, 546, 547, 563], "zone": [48, 79, 84, 89, 96, 99, 116, 119, 123, 128, 143, 185, 187, 207, 210, 233, 237, 259, 289, 296, 299, 312, 354, 364, 394, 401, 416, 459, 464, 469, 476, 479, 496, 499, 503, 508, 523], "inner": 48, "diamet": 48, "repres": [48, 50, 51, 72, 74, 79, 80, 81, 82, 93, 105, 111, 115, 175, 177, 185, 187, 197, 199, 207, 210, 221, 223, 224, 232, 233, 237, 249, 251, 252, 263, 275, 281, 285, 299, 335, 348, 350, 354, 355, 356, 357, 368, 380, 386, 390, 452, 454, 459, 460, 461, 462, 473, 485, 491, 495], "rotat": [48, 72, 91, 102, 121, 177, 199, 223, 233, 251, 261, 272, 291, 348, 366, 377, 396, 452, 471, 482, 501], "misrepres": 48, "disk_nam": 48, "inconveni": 48, "string": [48, 62, 72, 77, 79, 82, 87, 88, 101, 105, 172, 184, 185, 187, 194, 199, 206, 207, 210, 223, 230, 232, 233, 237, 240, 251, 258, 271, 275, 299, 335, 339, 348, 354, 357, 362, 363, 376, 380, 442, 452, 457, 459, 462, 467, 468, 481, 485], "invoc": [48, 66, 67, 75, 87, 132, 137, 169, 183, 190, 205, 209, 213, 229, 236, 237, 241, 257, 301, 306, 343, 351, 362, 405, 410, 446, 447, 455, 467, 512, 517], "estim": [48, 49, 72, 80, 152, 156, 159, 177, 187, 199, 210, 223, 237, 251, 252, 321, 325, 328, 348, 355, 425, 429, 432, 452, 460, 532, 536, 539], "consumpt": [48, 72, 79, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "valid": [48, 65, 66, 72, 77, 79, 82, 87, 93, 105, 111, 115, 131, 133, 134, 137, 139, 140, 144, 148, 154, 166, 167, 176, 177, 185, 187, 188, 192, 198, 199, 205, 207, 210, 211, 215, 222, 223, 229, 232, 233, 237, 238, 243, 250, 251, 257, 263, 275, 281, 285, 299, 300, 
302, 303, 306, 308, 313, 317, 323, 335, 336, 337, 342, 348, 354, 357, 362, 368, 380, 386, 390, 404, 406, 407, 410, 412, 413, 417, 421, 427, 439, 440, 445, 446, 452, 457, 459, 462, 467, 473, 485, 491, 495, 511, 513, 514, 517, 519, 520, 524, 528, 534, 546, 547, 556, 557, 563], "realist": [48, 72, 177, 199, 223, 251, 348, 452], "inflat": [48, 72, 87, 177, 183, 199, 205, 223, 229, 251, 257, 348, 362, 452, 467], "altogeth": [48, 72, 251, 348, 452], "condit": [48, 72, 80, 81, 82, 177, 187, 199, 210, 223, 237, 251, 334, 335, 348, 356, 357, 452, 460, 461, 462], "optimist": [48, 82, 237, 335, 357, 462], "misbehav": [48, 184, 206, 230], "rewind": [48, 72, 80, 81, 87, 135, 144, 164, 177, 183, 199, 205, 223, 224, 229, 237, 251, 252, 257, 304, 313, 333, 334, 348, 355, 356, 362, 408, 417, 437, 452, 460, 461, 467, 515, 524, 544], "travers": [48, 72, 80, 177, 178, 199, 200, 223, 224, 251, 252, 348, 355, 452, 460], "toggl": [48, 72, 177, 199, 223, 231, 251, 273, 348, 452], "max_int": 48, "000": [48, 72, 177, 199, 223, 251, 348, 452], "unaccount": [48, 72, 177, 199, 223, 251, 348, 452], "mo": [48, 67, 72, 87, 132, 171, 177, 183, 186, 193, 199, 205, 209, 216, 223, 229, 236, 244, 251, 257, 301, 343, 348, 362, 405, 447, 452, 467, 512], "zpl": [48, 72, 87, 172, 177, 183, 194, 199, 205, 223, 229, 251, 257, 348, 362, 452, 467], "enospc": [48, 72, 79, 105, 177, 185, 199, 207, 223, 232, 233, 251, 275, 299, 348, 354, 380, 452, 459, 485], "slop": 48, "shift": [48, 54, 68, 71, 177, 199, 220, 223, 248, 251, 344, 347, 348, 448, 451], "upper": [48, 49, 72, 87, 223, 251, 257, 348, 362, 452, 467], "4tb": 48, "unsign": [48, 68, 87, 205, 223, 229, 257, 344, 362, 448, 467], "max_ulong": 48, "048": [48, 177, 199, 223, 251], "576": [48, 177, 199, 223, 251], "uint": [48, 65, 71, 72, 177, 199, 220, 223, 248, 251, 342, 347, 348, 445, 451, 452], "uint_max": 48, "overlap": 48, "max_uint": 48, "arc_prun": [48, 72, 348, 452], "arc_dnode_s": 48, "arc_dnode_limit": 48, "max_uint64": 48, "zfs_arc_dnode_lim": 48, "it_perc": 48, "zfs_arc_d": 48, "node_limit": 48, "assumpt": [48, 49, 72, 177, 199, 223, 251, 258, 348, 452], "blocksiz": [48, 72, 79, 93, 185, 194, 207, 233, 251, 263, 299, 348, 354, 368, 452, 459, 473], "usag": [48, 54, 68, 72, 79, 80, 82, 85, 86, 87, 97, 101, 107, 125, 132, 146, 148, 164, 165, 172, 173, 181, 182, 185, 186, 187, 194, 195, 200, 203, 204, 205, 207, 209, 210, 217, 223, 224, 227, 228, 229, 233, 236, 237, 245, 251, 252, 255, 256, 257, 267, 271, 277, 294, 299, 301, 315, 317, 333, 335, 344, 348, 354, 355, 357, 360, 361, 362, 372, 376, 382, 399, 405, 419, 421, 437, 438, 448, 452, 459, 460, 462, 465, 466, 467, 477, 481, 487, 505, 512, 526, 528, 544, 545], "777": [48, 199, 223, 251], "216": [48, 199, 223, 251], "sublist": [48, 72, 251, 348, 452], "batch": [48, 72, 177, 199, 223, 251, 348, 452], "multilist": [48, 72, 199, 223, 251, 348, 452], "int_max": 48, "shrunk": 48, "grow": [48, 62, 71, 72, 177, 220, 240, 248, 339, 347, 348, 442, 451, 452], "damper": 48, "oscil": 48, "cycl": [48, 72, 140, 176, 198, 222, 250, 413, 452, 520], "arcstat_memory_throttle_count": 48, "all_system_memori": [48, 72, 348, 452], "caveat": [48, 49, 54, 63, 72, 79, 169, 190, 199, 213, 223, 233, 241, 251, 299, 340, 348, 354, 443, 452, 459], "induc": [48, 72, 199, 223, 251, 348, 452], "108": [48, 177, 199, 251], "864": [48, 177, 199, 251], "column": [48, 79, 82, 95, 96, 99, 101, 116, 142, 146, 157, 159, 164, 169, 177, 185, 187, 190, 207, 210, 213, 233, 237, 241, 265, 266, 269, 271, 286, 299, 311, 315, 326, 328, 333, 335, 340, 354, 357, 370, 371, 
374, 376, 391, 415, 419, 430, 432, 437, 459, 462, 475, 476, 479, 481, 496, 522, 526, 537, 539, 544], "c_max": 48, "096": [48, 54, 177, 199, 223, 251], "reclaim": [48, 62, 71, 72, 80, 82, 161, 164, 177, 178, 187, 199, 200, 210, 220, 223, 224, 237, 240, 248, 251, 252, 330, 333, 335, 339, 347, 348, 355, 357, 434, 437, 442, 451, 452, 460, 462, 541, 544], "explicit": [48, 75, 82, 109, 110, 187, 199, 207, 210, 223, 233, 237, 251, 279, 280, 335, 348, 351, 357, 384, 385, 455, 462, 489, 490], "75": [48, 72, 177, 199, 223, 251, 348, 452], "devot": [48, 49, 177, 199, 223, 251], "metadata_s": 48, "arc_meta_min": 48, "dentri": [48, 177, 199, 223, 251, 348], "znode": [48, 72, 251, 348, 452], "prune": [48, 177, 199, 223, 251, 348], "strategi": [48, 49, 72, 137, 187, 199, 210, 223, 237, 251, 306, 348, 410, 452, 517], "meta_onli": [48, 199, 223, 251, 348], "balanc": [48, 49, 51, 54, 63, 72, 79, 81, 169, 177, 185, 187, 190, 199, 207, 210, 213, 223, 233, 237, 241, 251, 299, 334, 340, 348, 354, 356, 443, 452, 459, 461], "enum": 48, "c_min": 48, "554": 48, "432": [48, 199, 223, 251], "prescient": [48, 72, 199, 223, 251, 348, 452], "meant": [48, 49, 55, 72, 223, 251, 348, 452], "fs_arc_min_prescient_prefetch_m": 48, "6000": [48, 223, 251], "grain": [48, 72, 177, 199, 223, 251, 348, 452], "overflow": [48, 68, 72, 173, 177, 195, 199, 217, 223, 245, 251, 344, 348, 448, 452], "formula": [48, 177, 199, 223, 251], "256th": [48, 177, 199, 223, 251], "arc_p_min_shift": [48, 199, 223, 251, 348], "ghost": [48, 62, 72, 240, 339, 442, 452], "cap": [48, 50, 71, 72, 82, 88, 148, 164, 177, 187, 199, 210, 220, 223, 237, 248, 251, 333, 335, 347, 348, 357, 437, 451, 452, 462, 468, 528, 544], "behaviour": [48, 72, 199, 223, 251, 348, 452], "arc_shrink_shift": [48, 72, 199, 223, 251, 348, 452], "reduct": [48, 49, 166, 167, 546, 547], "shortfal": 48, "shrinkag": 48, "plai": [48, 49, 68, 72, 173, 195, 199, 217, 223, 245, 251, 344, 348, 448, 452], "nice": [48, 54, 72, 199, 223, 251, 348, 452], "lru": [48, 49, 72, 199, 223, 251, 348, 452], "pagecach": [48, 72, 199, 223, 251, 348, 452], "nr_file_pag": [48, 72, 199, 223, 251, 348, 452], "scanner": 48, "512k": [48, 177, 194, 199, 223, 251], "margin": [48, 71, 80, 200, 220, 224, 248, 252, 347, 355, 451, 460], "ulong_max": [48, 251, 348], "lwb": [48, 72, 199, 223, 251, 348, 452], "itx": [48, 72, 199, 223, 251, 348, 452], "facilit": [48, 71, 72, 75, 80, 177, 199, 220, 223, 248, 251, 347, 348, 351, 355, 451, 452, 455, 460], "view": [48, 72, 88, 176, 184, 198, 199, 206, 222, 223, 230, 250, 251, 258, 348, 363, 452, 468], "dbuf": [48, 72, 177, 199, 223, 251, 348, 452], "spa_sync": [48, 177, 199], "haven": [48, 72, 199, 348, 452], "invok": [48, 72, 78, 79, 85, 88, 93, 104, 105, 109, 110, 117, 122, 172, 181, 184, 185, 194, 203, 206, 207, 223, 227, 230, 232, 233, 251, 255, 258, 263, 274, 275, 279, 280, 287, 292, 298, 299, 348, 353, 354, 360, 363, 368, 379, 380, 384, 385, 392, 397, 452, 458, 459, 465, 468, 473, 484, 485, 489, 490, 497, 502], "300": [48, 49, 68, 72, 173, 177, 195, 199, 217, 223, 245, 251, 344, 348, 448, 452], "spa_deadman": [48, 177], "fire": [48, 177], "txg": [48, 51, 55, 72, 79, 80, 87, 132, 144, 177, 183, 186, 187, 199, 205, 207, 209, 210, 223, 229, 233, 236, 237, 251, 252, 257, 299, 301, 313, 348, 354, 355, 362, 405, 417, 452, 459, 460, 467, 512, 524], "600": [48, 77, 128, 164, 223, 251, 457, 508, 544], "recov": [48, 72, 82, 140, 144, 176, 177, 187, 198, 199, 210, 222, 223, 237, 250, 251, 313, 335, 348, 357, 413, 417, 452, 462, 520, 524, 549, 557, 563], "ddt": [48, 49, 72, 87, 
183, 205, 223, 229, 251, 257, 348, 362, 452, 467], "spent": [48, 72, 80, 146, 178, 199, 200, 210, 223, 224, 237, 251, 252, 315, 348, 355, 419, 452, 460, 526], "ultim": 48, "480": [48, 199, 223, 251], "infin": [48, 72, 177, 199, 223, 251, 348, 452], "smoothest": [48, 72, 177, 199, 223, 251, 348, 452], "billion": [48, 72, 177, 199, 223, 251, 348, 452], "smoothli": [48, 72, 177, 199, 223, 251, 348, 452], "10x": [48, 177, 199, 223, 251], "10th": [48, 177, 199, 223, 251], "scalar": [48, 72, 199, 223, 251, 348, 452], "nanosecond": [48, 88, 140, 146, 184, 206, 210, 222, 230, 237, 250, 258, 315, 348, 363, 413, 419, 452, 468, 520, 526], "exceed": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452, 557], "preced": [48, 72, 74, 75, 80, 87, 103, 175, 177, 197, 199, 221, 223, 249, 251, 257, 348, 350, 351, 355, 362, 378, 452, 454, 455, 460, 467, 483], "zfs_d": 48, "irty_data_max_max": 48, "physical_ram": [48, 72, 348, 452], "min": [48, 50, 51, 72, 177, 199, 223, 251, 348, 452], "1gib": [48, 72, 452], "zfs_vdev_async_write_ac": 48, "tive_min_dirty_perc": 48, "zfs_dirt": 48, "y_data_sync": 48, "selector": [48, 72, 199, 223, 251, 348, 452], "endian": [48, 68, 87, 141, 187, 205, 210, 222, 229, 237, 250, 257, 310, 344, 362, 413, 414, 448, 467, 521], "big": [48, 222, 250, 413], "transform": [48, 134, 187, 210, 237, 303, 407, 514], "superscalar": 48, "superscalar4": 48, "sse2": [48, 72, 199, 223, 251, 348, 452], "ssse3": [48, 72, 199, 223, 251, 348, 452], "avx2": [48, 72, 199, 223, 251, 348, 452], "avx512f": [48, 72, 199, 223, 251, 348, 452], "aarch64_neon": [48, 72, 199, 223, 251, 348, 452], "free_bpobj": [48, 72, 199, 223, 251, 348, 452], "uint32": 48, "zfs_vdev_ma": 48, "x_activ": 48, "async": [48, 51, 52, 59, 60, 72, 177, 199, 223, 251, 348, 452], "interpol": [48, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_asyn": 48, "c_write_active_max_dirty_perc": 48, "io_schedul": 48, "sch": 48, "edul": 48, "zfs_dirty_d": 48, "ata_max": 48, "c_write_active_min_dirty_perc": 48, "fs_vdev_async_write_active_max_d": 48, "irty_perc": 48, "zio": [48, 52, 59, 60, 65, 72, 177, 192, 199, 210, 215, 223, 243, 251, 342, 348, 445, 452], "chedul": 48, "zfs_vdev_max": 48, "_activ": 48, "poorer": [48, 72, 177, 199, 223, 251, 348, 452], "compromis": [48, 72, 91, 102, 121, 177, 187, 199, 210, 223, 237, 251, 261, 272, 291, 334, 348, 366, 377, 396, 452, 471, 482, 501, 550, 552], "_vdev_async_write_max_act": 48, "sum": [48, 49, 51, 72, 79, 165, 177, 185, 199, 207, 223, 233, 251, 299, 348, 354, 438, 452, 459, 545], "max_act": [48, 51, 72, 177, 199, 223, 251, 348, 452], "priorit": [48, 51, 72, 177, 199, 223, 251, 348, 452], "min_act": [48, 177, 199, 223], "uint32_max": 48, "zfs_vd": 48, "ev_max_act": 48, "zfs_vdev_scrub_max": 48, "zfs_vdev_m": 48, "ax_act": 48, "imbalanc": [48, 72, 177, 199, 223, 251, 348, 452], "fuller": [48, 72, 199, 223, 251, 348, 452], "tend": [48, 72, 199, 223, 251, 348, 452], "subdirectori": [48, 106, 233, 276, 381, 486], "no_root_squash": [48, 72, 177, 185, 199, 207, 223, 251, 348, 452], "manipul": [48, 79, 82, 166, 167, 185, 207, 233, 299, 336, 354, 439, 440, 459, 462, 546, 547], "0x1": 48, "zfs_debug_dprintf": [48, 72, 177, 199, 223, 251, 348, 452], "dprintf": [48, 72, 177, 199, 223, 251, 348, 452], "0x2": 48, "zfs_debug_dbuf_verifi": [48, 72, 177, 199, 223, 251, 348, 452], "0x4": 48, "zfs_debug_dnode_verifi": [48, 72, 177, 199, 223, 251, 348, 452], "0x8": 48, "zfs_debug_snapnam": [48, 72, 177, 199, 223, 251, 348, 452], "0x10": 48, "zfs_debug_modifi": [48, 72, 177, 199, 223, 251, 348, 452], "illeg": [48, 
72, 177, 185, 199, 223, 251, 348, 452], "zfs_debug_spa": [48, 177, 199], "spa_dbgmsg": [48, 177, 199], "0x40": 48, "zfs_debug_zio_fre": [48, 72, 177, 199, 223, 251, 348, 452], "0x80": 48, "fs_debug_histogram_verifi": 48, "spacemap": [48, 72, 80, 87, 132, 177, 183, 186, 199, 205, 209, 223, 229, 236, 251, 252, 257, 301, 348, 355, 362, 405, 452, 460, 467, 512], "histogram": [48, 72, 87, 146, 159, 165, 176, 177, 183, 187, 198, 199, 205, 210, 223, 229, 237, 251, 257, 315, 328, 348, 362, 419, 432, 438, 452, 467, 526, 539, 545], "0x100": 48, "zfs_debug_metaslab_verifi": [48, 72, 199, 223, 251, 348, 452], "range_tre": [48, 72, 199, 223, 251, 348, 452], "0x200": 48, "zfs_debug_set_error": [48, 72, 199, 223, 251, 348, 452], "set_error": [48, 72, 199, 223, 251, 348, 452], "eio": [48, 72, 82, 105, 132, 177, 186, 187, 199, 209, 210, 223, 232, 236, 237, 251, 275, 301, 335, 348, 357, 380, 405, 452, 462, 485, 512, 556], "indirect": [48, 49, 50, 72, 80, 81, 87, 140, 177, 183, 199, 205, 222, 223, 224, 229, 237, 250, 251, 252, 257, 334, 348, 355, 356, 362, 413, 452, 460, 461, 467, 520], "perhap": [48, 54, 72, 177, 199, 223, 251, 348, 452], "suspend": [48, 72, 136, 145, 161, 177, 199, 223, 237, 251, 305, 314, 330, 348, 409, 418, 434, 452, 516, 525, 541], "terminologi": 48, "768": [48, 177, 199, 220, 223, 248, 251], "zil_itx_indirect_count": 48, "weigh": [48, 72, 177, 199, 223, 251, 348, 452], "bound": [48, 49, 72, 87, 109, 110, 140, 207, 222, 233, 250, 251, 257, 279, 280, 348, 362, 384, 385, 413, 452, 467, 489, 490, 520], "pipelin": [48, 72, 132, 140, 176, 198, 199, 209, 222, 223, 236, 250, 251, 301, 348, 405, 413, 452, 512, 520], "zio_buf_": 48, "zio_data_buf_": 48, "zdb": [48, 49, 54, 55, 68, 84, 88, 129, 136, 173, 180, 184, 195, 202, 206, 217, 226, 230, 245, 254, 258, 297, 305, 344, 359, 363, 402, 409, 448, 464, 468, 509, 516], "mm": [48, 62, 87, 205, 229, 240, 257, 339, 362, 442, 467], "zfs_metaslab_fragmentation_thresh": 48, "fr": 48, "agment": 48, "70": [48, 68, 72, 173, 177, 195, 199, 217, 223, 245, 251, 344, 348, 448, 452], "85": [48, 54, 74, 175, 177, 197, 199, 221, 249, 350, 454], "heavili": [48, 54, 72, 78, 79, 80, 177, 185, 199, 207, 223, 233, 251, 252, 298, 299, 348, 353, 354, 355, 452, 458, 459, 460], "lesser": [48, 72, 177, 199, 223, 251, 348, 452], "acquir": [48, 72, 177, 185, 199, 223, 251, 348, 452], "zfs_mg_alloc_failur": [48, 72, 177, 199, 223, 251, 348, 452], "multihost": [48, 72, 82, 136, 137, 199, 210, 223, 237, 251, 305, 306, 335, 348, 357, 409, 410, 452, 462, 516, 517], "multimodifi": 48, "subsystem": [48, 137, 141, 187, 210, 220, 237, 248, 306, 310, 347, 410, 414, 517, 521, 550, 551, 552, 553], "frequenc": [48, 72, 132, 186, 199, 209, 223, 236, 251, 301, 348, 405, 452, 512], "leaf": [48, 51, 72, 77, 145, 146, 159, 161, 177, 199, 210, 223, 237, 251, 314, 315, 328, 330, 348, 418, 419, 432, 434, 452, 457, 525, 526, 539, 541], "uberblock": [48, 72, 87, 183, 199, 205, 223, 229, 251, 257, 348, 362, 452, 467], "overwhelm": 48, "serd": 48, "checksum_n": [48, 77, 457], "checksum_t": [48, 77, 457], "crawl": [48, 72, 199, 223, 251, 348, 452], "barrier": 48, "volatil": [48, 72, 187, 210, 223, 237, 251, 348, 452], "nonvolatil": 48, "op": [48, 49, 87, 91, 93, 94, 102, 111, 115, 121, 152, 158, 185, 207, 233, 237, 261, 263, 264, 272, 281, 285, 291, 321, 366, 368, 369, 377, 386, 390, 396, 425, 431, 467, 471, 473, 474, 482, 491, 495, 501, 532, 538], "occasion": [48, 49], "nop": [48, 177, 199, 223, 251], "crytograph": 48, "seek_hol": [48, 72, 199, 223, 251, 348, 452], "seek_data": [48, 72, 199, 223, 
251, 348, 452], "exchang": 48, "int32": 48, "int32_max": 48, "52": [48, 177, 199, 223, 251], "428": [48, 177, 199, 223, 251], "800": [48, 177, 199, 223, 251], "consecut": [48, 72, 251, 348, 452], "side": [48, 49, 54, 55, 80, 109, 110, 111, 115, 178, 185, 200, 207, 224, 233, 252, 279, 280, 281, 285, 355, 384, 385, 386, 390, 460, 489, 490, 491, 495, 559], "otim": 48, "poolnam": [48, 49, 80, 87, 103, 178, 183, 200, 205, 224, 229, 231, 252, 257, 273, 355, 362, 378, 460, 467, 483], "pipe": [48, 95, 128, 185, 207, 233, 265, 370, 401, 475, 508], "intact": [48, 72, 199, 223, 251, 348, 452], "efficaci": 48, "statist": [48, 62, 72, 77, 79, 82, 87, 128, 146, 148, 159, 164, 165, 177, 183, 185, 187, 188, 199, 205, 207, 210, 211, 223, 229, 233, 237, 238, 240, 251, 257, 296, 299, 315, 317, 328, 333, 335, 337, 339, 348, 354, 357, 362, 401, 419, 421, 432, 437, 438, 442, 452, 457, 459, 462, 467, 508, 526, 528, 539, 544, 545], "fatal": [48, 72, 83, 87, 105, 177, 179, 183, 199, 201, 205, 223, 225, 229, 232, 251, 253, 257, 275, 348, 358, 362, 380, 452, 463, 467, 485], "zfs_panic_recov": 48, "resort": [48, 72, 144, 177, 187, 199, 210, 223, 237, 251, 313, 348, 417, 452, 524], "wors": [48, 49, 72, 177, 199, 223, 251, 348, 452], "context": [48, 49, 72, 79, 89, 105, 119, 181, 185, 203, 207, 227, 232, 233, 251, 255, 275, 299, 348, 354, 364, 380, 394, 452, 459, 469, 485, 499], "extent": [48, 72, 184, 206, 223, 230, 251, 258, 348, 452], "gap": [48, 72, 140, 176, 177, 198, 199, 222, 223, 250, 251, 348, 413, 452, 520], "defer": [48, 72, 79, 80, 94, 105, 155, 177, 185, 199, 207, 223, 224, 232, 233, 237, 251, 252, 264, 275, 299, 324, 348, 354, 355, 369, 380, 428, 452, 459, 460, 474, 485, 535], "adjac": [48, 72, 79, 140, 207, 222, 223, 233, 250, 251, 299, 348, 354, 413, 452, 459, 520], "coalesc": [48, 72, 223, 251, 348, 452], "sort": [48, 72, 97, 101, 107, 125, 156, 185, 207, 223, 233, 251, 267, 271, 277, 294, 348, 372, 376, 382, 399, 429, 452, 477, 481, 487, 505, 536], "gather": [48, 72, 105, 223, 232, 233, 251, 275, 348, 380, 452, 485], "soon": [48, 72, 80, 178, 200, 223, 224, 251, 252, 348, 355, 451, 452, 460, 552], "097": 48, "sio_cach": 48, "procf": 48, "slabinfo": 48, "slab": [48, 71, 220, 248, 347, 451], "slabtop": 48, "divisor": 48, "hard": [48, 49, 52, 72, 79, 88, 185, 207, 223, 233, 251, 299, 348, 354, 363, 452, 459, 468], "soft": [48, 72, 223, 251, 348, 452], "zfs_scan_mem": 48, "_lim_fact": 48, "strike": 48, "194": 48, "304": 48, "unread": [48, 87, 183, 205, 229, 257, 362, 467], "0x2f5baddb10c": 48, "cooki": 48, "gang": [48, 68, 72, 87, 173, 183, 195, 205, 217, 223, 229, 245, 251, 257, 344, 348, 362, 448, 452, 467], "dsl": 48, "dp_sync_taskq": [48, 223, 251, 348, 452], "shorter": 48, "intens": [48, 49, 78, 156, 185, 187, 207, 210, 233, 237, 298, 325, 353, 429, 458, 536], "aggreg": [48, 51, 72, 82, 146, 165, 177, 199, 210, 223, 237, 251, 315, 335, 348, 357, 419, 438, 452, 462, 526, 545], "131": [48, 177, 199, 223, 251], "072": [48, 177, 199, 223, 251], "iostat": [48, 84, 133, 155, 156, 159, 160, 164, 165, 187, 210, 237, 254, 324, 325, 328, 329, 333, 359, 428, 429, 432, 433, 437, 438, 464, 513, 535, 536, 539, 540, 544, 545], "thusit": 48, "vdev_cache_stat": 48, "inop": 48, "65": 48, "536": 48, "384": [48, 177, 199, 220, 223, 248, 251], "nonrot": 48, "distanc": [48, 72, 199, 223, 251, 348, 452], "fs_vdev_mirror_rotating_seek_inc": 48, "zfs_vdev_mirror_rotating_seek_off": 48, "fewer": [48, 49, 72, 132, 186, 209, 236, 251, 301, 348, 405, 452, 512], "zfs_v": 48, "dev_mirror_non_rotating_seek_inc": 48, "noop": [48, 
177, 199], "cfq": [48, 199], "bfq": [48, 199], "deadlin": [48, 199], "changeabl": 48, "scsi_mq": 48, "unchang": [48, 57, 106, 109, 110, 207, 233, 276, 279, 280, 381, 384, 385, 486, 489, 490], "clearli": 48, "enclos": [48, 72, 199, 223, 251, 348, 452], "vdev_raidz_bench": 48, "x86": [48, 49, 54, 72, 199, 223, 251, 348, 452], "avx512bw": [48, 72, 199, 223, 251, 348, 452], "aarch64": [48, 54, 72, 199, 223, 251, 348, 452], "armv8": [48, 72, 199, 223, 251, 348, 452], "neon": [48, 72, 199, 223, 251, 348, 452], "aarch64_neonx2": [48, 72, 199, 223, 251, 348, 452], "unrol": [48, 72, 199, 223, 251, 348, 452], "80": [48, 54, 72, 80, 87, 177, 178, 183, 199, 200, 205, 223, 224, 229, 251, 252, 257, 348, 355, 362, 452, 460, 467], "itxg": 48, "clean": [48, 49, 63, 72, 169, 190, 213, 223, 241, 251, 340, 348, 443, 452], "dispatch": [48, 72, 223, 251, 348, 452], "dp_zil_clean_taskq": [48, 72, 223, 251, 348, 452], "zil_clean": 48, "zfs_zil_clean_taskq_minallo": 48, "zfs_zil_clean_taskq_maxallo": 48, "024": 48, "brought": [48, 54, 136, 165, 187, 210, 237, 409, 438, 516, 545], "replai": [48, 72, 140, 176, 177, 198, 199, 222, 223, 250, 251, 348, 413, 452, 520, 563], "abus": [48, 72, 199, 223, 251, 348, 452], "786": [48, 199, 223, 251], "worker": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "z_wr_iss": 48, "instanc": [48, 49, 72, 81, 82, 109, 110, 187, 210, 223, 233, 237, 251, 279, 280, 334, 335, 356, 357, 384, 385, 461, 462, 489, 490], "recompil": 48, "multiprocessor": 48, "inhibit": 48, "230": [48, 72, 177, 199, 223, 251, 348, 452], "aka": [48, 57, 199, 223, 251], "uncommon": [48, 54], "unfortun": [48, 49, 54, 55, 556], "8kb": [48, 49, 251, 348, 354], "heavi": [48, 54, 79, 80, 81, 88, 184, 187, 206, 207, 210, 230, 233, 237, 252, 258, 299, 334, 354, 355, 356, 363, 459, 460, 461, 468], "discard_max_byt": 48, "discard_max_hw_byt": 48, "volume_inst": 48, "submitt": [48, 72, 199, 223, 251, 348, 452], "similarli": [48, 54, 83, 109, 110, 132, 179, 201, 209, 225, 233, 236, 253, 279, 280, 301, 358, 384, 385, 405, 463, 489, 490, 512], "avgqu": 48, "sz": 48, "aqu": [48, 87, 205, 229, 257, 362, 467], "volmod": [48, 72, 79, 89, 119, 199, 207, 223, 233, 251, 299, 348, 354, 364, 394, 452, 459, 469, 499], "bsd": [48, 54], "geom": [48, 79, 144, 207, 233, 299, 313, 354, 417, 459, 524], "synonym": 48, "hide": [48, 79, 207, 233, 299, 354, 459], "outsid": [48, 79, 94, 100, 120, 123, 127, 128, 140, 176, 185, 198, 207, 222, 233, 250, 264, 270, 290, 296, 299, 354, 369, 375, 395, 401, 413, 459, 474, 480, 500, 503, 507, 508, 520], "zfs_qat_": 48, "compress_dis": 48, "hiwat": 48, "lowat": 48, "dbug": 48, "fall": [48, 54], "held": [48, 71, 72, 220, 248, 251, 347, 348, 451, 452], "104": [48, 223, 251], "857": [48, 223, 251], "experiment": [48, 72, 81, 137, 164, 187, 210, 237, 333, 334, 356, 437, 452, 461, 517, 544], "lowest": [48, 49, 51, 79, 140, 222, 250, 299, 354, 413, 459, 520], "scatter": [48, 72, 80, 223, 251, 252, 348, 355, 452, 460], "zio_": 48, "data_": 48, "buf_": 48, "abd_chunk_cach": 48, "kmem_cach": 48, "abdstat": 48, "buddi": 48, "incres": 48, "collis": [48, 72, 80, 105, 137, 187, 200, 210, 224, 232, 237, 251, 252, 275, 306, 348, 355, 380, 410, 452, 460, 485, 517], "birthdai": 48, "400": [48, 72, 251, 348, 452], "trillion": 48, "resiz": [48, 82, 187, 210, 237, 335, 357, 462], "therein": 48, "finer": [48, 251], "arc_min_prefetch_lifespan": 48, "tick": [48, 176, 177, 198, 199], "dtl": [48, 72, 132, 186, 199, 209, 223, 236, 251, 301, 348, 405, 452, 512], "treatment": 48, "highest": [48, 51, 79, 87, 183, 185, 
205, 207, 229, 233, 257, 299, 354, 362, 459, 467], "bracket": [48, 72, 348, 452], "sub": [48, 72, 177, 199, 223, 251, 348, 452], "2kb": [48, 251, 348], "1kb": [48, 223, 251, 348], "buf": [48, 62, 339, 442], "spill": [48, 72, 80, 81, 200, 223, 224, 237, 251, 252, 334, 348, 355, 356, 452, 460, 461], "5kb": [48, 348], "1536": [48, 223, 251], "polici": [48, 181, 203, 220, 227, 248, 255], "remount": [48, 72, 75, 79, 80, 113, 132, 185, 186, 200, 207, 209, 223, 224, 233, 236, 251, 252, 283, 299, 301, 348, 351, 354, 355, 388, 405, 452, 455, 459, 460, 493, 512], "inflight": [48, 87, 183, 205, 229, 257, 362, 467], "maxinflight": 48, "inevit": 48, "failmod": [48, 82, 140, 176, 187, 198, 210, 222, 237, 250, 335, 357, 413, 462, 520, 561, 562], "accident": [48, 80, 87, 205, 229, 257, 355, 362, 460, 467, 557], "recover": [48, 72, 223, 251, 348, 452, 554], "chanc": [48, 77, 79, 82, 185, 207, 233, 299, 354, 457, 459, 462, 555], "inadvert": [48, 550], "dbuf_metadata_cache_sh": 48, "ift": 48, "node_export": 48, "prometheu": [48, 165, 438, 545], "telegraf": [48, 165, 438, 545], "plugin": [48, 165, 438, 545], "channel": [48, 54, 72, 74, 86, 105, 128, 175, 182, 197, 204, 221, 223, 228, 232, 233, 249, 251, 256, 275, 296, 348, 350, 361, 380, 401, 452, 454, 466, 485, 508], "spa_minblocks": 48, "spa_maxblocks": 48, "217": [48, 223, 251], "span": [48, 72, 223, 251, 348, 452], "cancel": [48, 72, 81, 132, 134, 145, 152, 154, 161, 163, 186, 187, 209, 210, 223, 236, 237, 251, 301, 303, 314, 321, 323, 330, 332, 334, 348, 356, 405, 407, 418, 425, 427, 434, 436, 452, 461, 512, 514, 525, 532, 534, 541, 543], "inceas": 48, "sleep": [48, 72, 251, 348, 452], "zfs_conden": 48, "e_indirect_commit_entry_delay_m": 48, "condens": [48, 72, 223, 251, 348, 452], "obsolet": [48, 72, 80, 87, 223, 224, 251, 252, 257, 348, 355, 362, 452, 460, 467], "s_condense_indirect_vdevs_en": 48, "zfs_vdev_max_": 48, "zfs_vde": 48, "v_initializing_max_act": 48, "zfs_vdev": 48, "_max_act": [48, 72, 251, 348, 452], "iv": [48, 109, 110, 233, 279, 280, 384, 385, 489, 490], "dev_max_act": 48, "zfs_vdev_trim_m": 48, "0xdeadbeef": [48, 131, 208, 235, 300, 404, 511], "0xdeadbeefdeadbee": [48, 72, 223, 251, 348, 452], "lua": [48, 72, 105, 128, 223, 232, 233, 251, 275, 296, 348, 380, 401, 452, 485, 508], "nest": [48, 72, 81, 128, 187, 210, 223, 233, 237, 251, 296, 334, 348, 356, 401, 452, 461, 508], "deepli": 48, "impract": 48, "predefin": [48, 72, 223, 251, 348, 452], "resid": [48, 79, 81, 103, 185, 207, 233, 299, 354, 356, 378, 459, 461, 483], "computation": [48, 72, 79, 223, 233, 251, 299, 348, 354, 452, 459], "particip": [48, 72, 223, 251, 348, 452], "zfs_recon": 48, "struct_indirect_combinations_max": 48, "unmodifi": [48, 72, 79, 185, 207, 223, 233, 251, 299, 348, 354, 452, 459], "backward": [48, 49, 72, 82, 111, 115, 128, 187, 210, 223, 237, 251, 281, 285, 296, 335, 348, 357, 386, 390, 401, 452, 462, 491, 495, 508], "recreat": [48, 49, 72, 82, 109, 110, 185, 187, 207, 210, 223, 233, 237, 251, 279, 280, 335, 348, 357, 384, 385, 452, 462, 489, 490, 553, 559], "zfs_trim_extent_bi": 48, "tes_min": 48, "134": [48, 223, 251], "728": [48, 223, 251], "unalloc": [48, 145, 164, 223, 237, 251, 314, 333, 418, 437, 525, 544], "max_": 48, "uniniti": [48, 49, 72, 82, 140, 176, 187, 198, 210, 222, 223, 237, 250, 251, 335, 348, 357, 413, 452, 462, 520], "thinli": [48, 72, 161, 164, 223, 237, 251, 330, 333, 348, 434, 437, 452, 541, 544], "provis": [48, 72, 79, 81, 161, 164, 185, 207, 223, 233, 237, 251, 299, 330, 333, 334, 348, 354, 356, 434, 437, 452, 459, 461, 541, 544], 
"q": [48, 66, 87, 132, 146, 172, 186, 194, 205, 209, 210, 229, 236, 237, 257, 301, 315, 362, 405, 419, 446, 467, 512, 526], "opposit": [48, 72, 223, 251, 348, 452], "stride": 48, "blkdev": 48, "v_aggregation_limit_non_rot": 48, "diagnost": [48, 72, 223, 251, 348, 452], "denomin": [48, 223, 251], "zevent": [48, 72, 88, 177, 184, 199, 206, 223, 230, 251, 258, 348, 363, 452, 468], "inappropri": [48, 101, 185, 207, 233, 271, 376, 481], "ivset": [48, 72, 251, 348, 452], "crypt_keydata": 48, "to_ivset_guid": 48, "heurist": [48, 72, 169, 190, 213, 241, 251, 340, 348, 452], "16mb": [48, 177, 199, 223, 251, 348], "postpon": [48, 80, 224, 252, 355, 460], "constraint": 48, "freez": [48, 68, 72, 251, 344, 348, 448, 452], "paus": [48, 72, 134, 140, 156, 163, 164, 210, 222, 223, 237, 250, 251, 325, 332, 333, 348, 413, 429, 436, 437, 452, 520, 536, 543, 544], "s_count_limit": 48, "_min_ms_count": 48, "assign": [48, 54, 66, 72, 79, 80, 81, 185, 207, 224, 233, 237, 252, 299, 334, 354, 355, 356, 446, 452, 459, 460, 461], "factori": 48, "294": 48, "967": 48, "295": 48, "0xffffffff": 48, "zfs_hostid": [48, 68, 132, 195, 217, 245, 344, 405, 448, 512], "kmem_alloc": [48, 71, 220, 248, 347, 451], "kmalloc_max_s": [48, 71, 220, 248, 347, 451], "4x": [48, 71, 220, 248, 251, 347, 451], "vmem_alloc": [48, 71, 220, 248, 347, 451], "kmalloc": [48, 71, 220, 248, 347, 451], "vmalloc": [48, 54, 71, 220, 248, 347, 451], "eight": [48, 71, 220, 248, 347, 451], "seriou": [48, 71, 220, 248, 347, 451], "concern": [48, 71, 220, 248, 347, 451], "largish": [48, 71, 220, 248, 347, 451], "caught": [48, 71, 220, 248, 347, 451], "magazin": [48, 71, 220, 248, 347, 451], "notifi": [48, 220, 248], "bitmask": 48, "0x01": [48, 220, 248], "0x02": [48, 220, 248], "increasingli": [48, 220], "cutoff": [48, 71, 220, 248, 347, 451], "quarter": [48, 72, 220, 348, 452], "page_s": [48, 220], "footprint": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "likelihood": [48, 220, 248, 347], "task": [48, 54, 71, 86, 88, 140, 164, 182, 184, 204, 206, 210, 220, 228, 230, 237, 248, 256, 258, 309, 333, 347, 361, 363, 413, 437, 451, 466, 468, 520, 544], "halt": [48, 71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "spawn": [48, 71, 220, 248, 347, 451], "taskq_dynam": [48, 71, 220, 248, 347, 451], "promptli": [48, 71, 220, 248, 347, 451], "item": [48, 71, 146, 210, 220, 237, 248, 315, 347, 419, 451, 526], "interrupt": [48, 71, 80, 109, 110, 111, 115, 178, 200, 207, 220, 224, 233, 248, 252, 279, 280, 281, 285, 347, 355, 384, 385, 386, 390, 451, 460, 489, 490, 491, 495], "ramp": [48, 71, 220, 248, 347, 451], "spl_kmem_cach": [48, 71, 220, 248, 347, 451], "realloc": [48, 71, 220, 248, 347, 451], "contend": [48, 71, 220, 248, 347, 451], "decad": 49, "necess": [49, 75, 351, 455], "evicit": 49, "outperform": 49, "dedic": [49, 54, 68, 72, 80, 81, 137, 187, 210, 224, 237, 252, 306, 334, 348, 355, 356, 410, 452, 460, 461, 517], "devicenam": 49, "oracl": [49, 53, 185], "contrast": [49, 82, 111, 115, 185, 207, 233, 237, 281, 285, 335, 357, 386, 390, 462, 491, 495], "stand": [49, 79, 185, 207, 233, 299, 354, 459], "immut": 49, "logarithm": [49, 72, 348, 452], "accord": [49, 51, 66, 67, 72, 79, 81, 82, 85, 92, 93, 113, 117, 132, 134, 177, 181, 185, 186, 187, 199, 203, 207, 209, 210, 223, 227, 233, 236, 237, 251, 255, 262, 263, 283, 287, 299, 301, 334, 335, 348, 354, 356, 357, 360, 367, 368, 388, 392, 405, 446, 447, 452, 459, 461, 462, 465, 472, 473, 493, 497, 512, 552], "incur": [49, 91, 102, 121, 233, 261, 272, 291, 366, 377, 396, 471, 482, 
501], "implicit": [49, 78, 79, 111, 115, 185, 207, 233, 281, 285, 298, 299, 353, 354, 386, 390, 458, 459, 491, 495], "world": 49, "2007": [49, 560], "nand": [49, 52], "gnop": 49, "2011": 49, "maczf": 49, "osx": 49, "flaw": 49, "reli": [49, 54, 79, 185, 207, 233, 299, 354, 459], "compens": 49, "ambigu": 49, "lun": [49, 74, 82, 175, 187, 197, 210, 221, 237, 249, 335, 350, 357, 454, 462], "speak": [49, 50, 72, 177, 199, 223, 251, 348, 452], "belong": [49, 77, 82, 111, 115, 187, 210, 237, 335, 357, 386, 390, 457, 462, 491, 495], "difficulti": [49, 79, 233, 299, 354, 459], "necessit": 49, "4kb": [49, 178, 200, 224, 252, 348, 354, 355, 356], "128kb": [49, 178, 185, 200, 207, 223, 224, 233, 251, 252, 281, 285, 348, 354, 355, 386, 390], "16kb": [49, 348, 355], "lzjb": [49, 79, 80, 166, 167, 178, 185, 200, 207, 224, 233, 252, 299, 354, 355, 459, 460, 546, 547], "satisfi": [49, 51, 72, 81, 128, 177, 187, 199, 210, 223, 231, 237, 251, 273, 334, 348, 356, 452, 461, 508], "fair": [49, 54], "incompress": [49, 72, 80, 178, 200, 224, 252, 355, 452, 460], "lempel": 49, "ziv": 49, "encod": [49, 54, 79, 80, 207, 224, 233, 252, 299, 354, 355, 459, 460], "zstandard": 49, "offer": [49, 80, 111, 115, 252, 281, 285, 355, 386, 390, 460, 491, 495], "decod": [49, 87, 205, 229, 257, 362, 467], "uncertain": 49, "figur": 49, "silesia": 49, "corpu": 49, "worthwhil": [49, 79, 233, 299, 354, 459], "megabyt": [49, 96, 99, 116, 172, 185, 194, 199, 207, 223, 233, 251, 266, 269, 286, 371, 374, 391, 476, 479, 496], "recv": [49, 54, 57, 84, 91, 102, 109, 121, 166, 167, 185, 207, 233, 254, 261, 272, 279, 291, 359, 366, 377, 384, 396, 464, 471, 482, 489, 501, 546, 547], "16m": [49, 146, 194, 237, 315, 419, 526], "zfs_max_records": [49, 72, 177, 199, 223, 251, 348, 452], "analog": 49, "decent": [49, 79, 185, 207, 233, 299, 354, 459], "amplif": 49, "fse": 49, "meaningless": 49, "fragment": [49, 54, 72, 77, 80, 82, 87, 148, 177, 187, 199, 210, 223, 237, 251, 252, 317, 335, 348, 355, 357, 421, 452, 457, 460, 462, 467, 528], "insuffici": [49, 72, 81, 93, 133, 137, 187, 210, 237, 263, 302, 306, 334, 348, 356, 368, 406, 410, 452, 461, 473, 513, 517, 551, 553, 561, 562], "7200rpm": 49, "uncach": [49, 62, 442], "400kb": 49, "simul": [49, 68, 87, 132, 173, 183, 186, 205, 209, 229, 236, 257, 301, 344, 362, 405, 448, 467, 512], "mac": [49, 91, 102, 121, 233, 261, 272, 291, 366, 377, 396, 471, 482, 501], "spin": 49, "metaslab_lba_weighting_en": [49, 72, 177, 199, 223, 251, 348, 452], "tuanbl": 49, "fit": 49, "tell": [49, 128, 401, 508], "mmm": [49, 87, 205, 229, 257, 362, 467], "compani": [49, 54, 58], "elev": 49, "whole_disk": 49, "precaut": 49, "flow": 49, "determinist": [49, 105, 232, 275, 380, 485], "cope": [49, 78, 185, 207, 233, 298, 353, 458], "ephemer": [49, 97, 107, 125, 185, 207, 233, 267, 277, 294, 372, 382, 399, 477, 487, 505], "amazon": 49, "ec2": 49, "stamp": [49, 146, 148, 159, 163, 187, 210, 237, 315, 317, 328, 332, 419, 421, 432, 436, 526, 528, 539, 543], "inherit": [49, 75, 77, 78, 79, 80, 84, 91, 92, 93, 96, 102, 103, 105, 106, 109, 110, 113, 116, 121, 128, 185, 187, 207, 224, 233, 252, 254, 261, 262, 263, 266, 272, 275, 276, 279, 280, 283, 286, 291, 296, 298, 299, 351, 353, 354, 355, 359, 366, 367, 368, 371, 377, 378, 380, 381, 384, 385, 388, 391, 396, 401, 455, 457, 458, 459, 460, 464, 471, 472, 473, 476, 482, 483, 485, 486, 489, 490, 493, 496, 501, 508], "10gb": [49, 187, 210, 237, 333, 437], "bottleneck": 49, "o_sync": [49, 79, 185, 207, 233, 299, 354, 459], "optan": [49, 52], "3d": [49, 52], "xpoint": [49, 52], 
"overprovison": 49, "somewhat": [49, 185], "alright": 49, "mix": [49, 51, 54, 79, 109, 110, 137, 185, 187, 207, 210, 233, 237, 279, 280, 299, 306, 354, 384, 385, 410, 459, 489, 490, 517, 559], "unpartit": [49, 54], "sanit": 49, "explain": [49, 128, 508], "rewrit": [49, 72, 134, 177, 199, 223, 251, 348, 452], "defrag": 49, "redundant_metadata": [49, 79, 89, 119, 185, 207, 233, 299, 354, 364, 394, 459, 469, 499], "16k": [49, 54, 68, 71, 79, 87, 183, 195, 205, 207, 217, 220, 229, 233, 245, 248, 257, 299, 344, 347, 354, 362, 448, 451, 459, 467], "innodb_doublewrit": 49, "cnf": 49, "percona": 49, "advoc": 49, "recant": 49, "advic": 49, "aio": 49, "bare": 49, "codepath": 49, "innodb_use_native_aio": 49, "innodb_use_atomic_writ": 49, "wal": 49, "64k": [49, 79, 185, 199, 207, 223, 233, 251, 299, 354, 459], "full_page_writ": 49, "65536": [49, 54, 172, 194, 199, 223, 251], "merit": 49, "casesensit": [49, 79, 89, 96, 99, 109, 110, 116, 119, 128, 185, 207, 233, 259, 279, 280, 289, 296, 299, 354, 364, 384, 385, 394, 401, 459, 469, 476, 479, 489, 490, 496, 499, 508], "insensit": [49, 79, 185, 207, 233, 299, 354, 459], "smb": [49, 79, 89, 97, 107, 117, 119, 125, 128, 185, 207, 233, 259, 267, 277, 287, 289, 294, 296, 299, 354, 364, 372, 382, 392, 394, 399, 401, 459, 469, 477, 487, 497, 499, 505, 508], "despit": [49, 80, 178, 200, 224, 252, 355, 460, 550, 552], "humbl": 49, "saw": 49, "asset": 49, "tab": [49, 63, 93, 95, 96, 97, 98, 99, 101, 107, 112, 116, 125, 140, 142, 146, 148, 157, 163, 169, 185, 187, 190, 207, 210, 213, 233, 237, 241, 263, 265, 266, 267, 268, 269, 271, 277, 282, 286, 294, 309, 311, 315, 317, 326, 332, 340, 368, 370, 371, 372, 373, 374, 376, 382, 387, 391, 399, 413, 415, 419, 421, 430, 436, 443, 473, 475, 476, 477, 478, 479, 481, 487, 492, 496, 505, 520, 522, 526, 528, 537, 543], "dialogu": 49, "proton": 49, "maxim": [49, 79, 207, 233, 299, 354, 459], "6489": 49, "patpro": 49, "php": 49, "2617": 49, "pragma": 49, "pragma_page_s": 49, "pgszchng2016": 49, "13790": 49, "patchwork": 49, "20190626121943": 49, "131390": 49, "glider": 49, "googl": 49, "22731857": 49, "12406": 49, "waiter": [50, 72, 177, 199, 223, 251, 348, 452], "credit": [50, 72, 177, 199, 223, 251, 348, 452], "min_tim": [50, 72, 177, 199, 223, 251, 348, 452], "zfs_delay_scal": [50, 72, 177, 199, 223, 251, 348, 452], "zfs_delay_min_dirty_perc": [50, 72, 177, 199, 223, 251, 348, 452], "curv": [50, 72, 177, 199, 223, 251, 348, 452], "midpoint": [50, 72, 177, 199, 223, 251, 348, 452], "10m": [50, 72, 177, 199, 223, 251, 348, 452], "9m": [50, 72, 87, 177, 183, 199, 205, 223, 229, 251, 257, 348, 362, 452, 467], "8m": [50, 72, 177, 199, 223, 251, 348, 452], "7m": [50, 72, 177, 199, 210, 223, 237, 251, 348, 452], "6m": [50, 72, 177, 199, 210, 223, 237, 251, 348, 452], "5m": [50, 72, 177, 199, 223, 251, 348, 452], "4m": [50, 72, 156, 172, 177, 194, 199, 223, 251, 348, 429, 452, 536], "3m": [50, 72, 177, 199, 223, 251, 348, 452], "2m": [50, 72, 177, 199, 223, 251, 348, 452], "microsecond": 50, "2000": [50, 72, 177, 199, 223, 251, 348, 452], "shape": [50, 72, 177, 199, 223, 251, 348, 452], "accumul": [50, 72, 165, 177, 199, 223, 251, 348, 438, 452, 545], "yield": [50, 72, 177, 199, 223, 251, 348, 452], "100u": [50, 72, 177, 199, 223, 251, 348, 452], "10u": [50, 72, 177, 199, 223, 251, 348, 452], "steep": [50, 72, 177, 199, 223, 251, 348, 452], "five": [51, 72, 177, 199, 223, 251, 348, 452, 563], "prefetch": [51, 62, 72, 79, 177, 199, 223, 240, 251, 339, 348, 442, 452], "zfs_vdev_max_act": [51, 72, 177, 199, 223, 251, 348, 452], "met": 
[51, 72, 94, 177, 185, 199, 207, 223, 233, 251, 264, 348, 369, 452, 474], "zfs_vdev_sync_read_min_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_sync_read_max_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_sync_write_min_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_sync_write_max_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_async_read_min_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_async_read_max_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_scrub_min_act": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_vdev_scrub_max_act": [51, 72, 177, 199, 223, 251, 348, 452], "stage": [51, 72, 140, 176, 177, 198, 199, 222, 223, 250, 251, 348, 413, 452, 520], "burst": [51, 72, 177, 199, 223, 251, 348, 452], "zfs_txg_timeout": [51, 72, 177, 199, 223, 251, 348, 452], "bursti": [51, 72, 177, 199, 223, 251, 348, 452], "broad": [51, 72, 177, 199, 223, 251, 348, 452], "stroke": [51, 72, 177, 199, 223, 251, 348, 452], "microcod": 52, "ecc": [52, 58], "torrent": 52, "wine": 52, "encompass": 54, "wikipedia": 54, "afford": [54, 68, 173, 195, 217, 245, 344, 448], "numer": [54, 62, 77, 78, 79, 87, 96, 97, 99, 101, 105, 107, 116, 125, 144, 164, 175, 185, 187, 197, 207, 210, 221, 232, 233, 237, 240, 266, 267, 269, 271, 275, 277, 286, 294, 298, 299, 313, 333, 339, 353, 354, 362, 371, 372, 374, 376, 380, 382, 391, 399, 417, 437, 442, 457, 458, 459, 467, 476, 477, 479, 481, 485, 487, 496, 505, 524, 544, 549, 552, 560], "opensourc": 54, "umbrella": 54, "8gb": 54, "2gb": 54, "strongest": 54, "cosmic": 54, "rai": 54, "undetect": 54, "grade": 54, "justifi": 54, "arm": 54, "ppc": 54, "ppc64": 54, "oldest": [54, 94, 113, 118, 128, 185, 207, 233, 264, 296, 369, 401, 474, 493, 498, 508], "promin": 54, "importantli": 54, "discourag": [54, 79, 81, 185, 187, 207, 210, 233, 237, 299, 334, 354, 356, 459, 461], "bump": 54, "vmap": 54, "4198400": 54, "conting": 54, "wean": 54, "tighter": 54, "drawback": 54, "hdx": 54, "human": [54, 67, 77, 79, 96, 99, 116, 165, 171, 172, 185, 193, 194, 207, 216, 233, 244, 266, 269, 286, 299, 343, 354, 371, 374, 391, 438, 447, 457, 459, 476, 479, 496, 545], "friendli": [54, 86, 182, 204, 228, 256, 361, 466], "desk": 54, "prone": [54, 93, 473], "sata_hitachi_hts7220071201dp1d10dgg6hmrp": 54, "cabl": [54, 563], "ti": [54, 78, 79, 185, 207, 233, 298, 299, 353, 354, 458, 459], "cumbersom": 54, "0000": 54, "1f": 54, "jbod": [54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "pick": [54, 156, 237, 325, 429, 536], "meaning": [54, 77, 79, 82, 128, 185, 207, 233, 296, 299, 354, 401, 457, 459, 462, 508], "clarifi": 54, "emploi": 54, "deriv": [54, 74, 175, 197, 221, 249, 350, 454], "wwn": [54, 74, 175, 197, 221, 249, 350, 454], "b1": 54, "a2": 54, "b2": 54, "think": [54, 79, 207, 233, 299, 354, 459], "partlabel": 54, "sas_direct": [54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "phys_per_port": [54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "pci_slot": [54, 74, 175, 197, 221, 249, 350, 454], "sas_switch": [54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "definit": [54, 62, 74, 86, 175, 182, 197, 204, 221, 228, 240, 249, 256, 339, 350, 361, 442, 454, 466], "86": [54, 74, 175, 197, 221, 249, 350, 454], "qualifi": [54, 74, 94, 96, 99, 116, 128, 175, 185, 197, 207, 221, 233, 249, 264, 296, 350, 369, 401, 454, 474, 476, 479, 496, 508], "d1": [54, 74, 175, 197, 221, 249, 350, 454], "0x5000c5002de3b9ca": [54, 74, 175, 197, 221, 249, 350, 454], "d2": [54, 74, 
175, 197, 221, 249, 350, 454], "0x5000c5002def789": [54, 74, 175, 197, 221, 249, 350, 454], "udevadm": [54, 75, 351, 455], "a0": 54, "b0": 54, "a3": 54, "b3": 54, "a4": 54, "b4": 54, "a5": [54, 59, 60, 564], "b5": 54, "a6": 54, "b6": 54, "a7": 54, "b7": 54, "stale": [54, 231, 273], "failov": [54, 82, 210, 237, 335, 357, 462], "rc1": [54, 55], "sender": [54, 55, 111, 115, 281, 285, 386, 390, 491, 495], "unaffect": [54, 79, 109, 110, 185, 207, 233, 279, 280, 299, 354, 384, 385, 459, 489, 490, 557], "6224": 54, "filestor": 54, "rbd": 54, "cephf": 54, "objectstor": 54, "s3": 54, "rado": 54, "xf": 54, "osd": 54, "gear": 54, "filestore_max_inline_xattr": 54, "filestore_max_inline_xattr_s": 54, "filestore_max_xattr_value_s": 54, "journal": [54, 75, 351, 455], "colloc": 54, "terribl": 54, "upfront": 54, "dsync": 54, "qualiti": [54, 58], "WILL": 54, "NOT": [54, 79, 185, 207, 233, 299, 354, 459, 559], "830": 54, "840": 54, "850": 54, "sm853": 54, "200gb": 54, "4x10gb": 54, "4x20gb": 54, "disappoint": 54, "interoper": 54, "wholedisk": 54, "untouch": 54, "volsiz": [54, 79, 89, 93, 109, 110, 119, 185, 207, 233, 259, 263, 279, 280, 289, 299, 354, 364, 368, 384, 385, 394, 459, 469, 473, 489, 490, 499], "reus": [54, 72, 79, 233, 251, 299, 348, 354, 452, 459], "aris": 54, "ex": [54, 236, 301, 405], "fstrim": 54, "dom0_mem": 54, "16384m": 54, "zfs_arc_max": [54, 72, 177, 199, 223, 251, 348, 452], "6442450944": 54, "balloon": 54, "xl": 54, "watch": 54, "id_part_entry_schem": 54, "id_fs_typ": 54, "zfs_member": 54, "id_part_entry_typ": 54, "6a898cc3": 54, "1dd2": 54, "11b2": 54, "99a6": 54, "080020736631": 54, "udisks_ignor": 54, "tracker": [54, 58, 59, 60], "quicker": [54, 72, 251, 348, 452], "exception": 54, "vmm": 54, "trace": [54, 105, 183, 205, 232, 275, 380, 485], "technic": [55, 56, 60, 111, 115, 281, 285, 386, 390, 491, 495], "scrape": 55, "combinator": 55, "nightmar": 55, "feasibli": 55, "birth_tim": 55, "wonder": 55, "knowledg": [55, 87, 183, 205, 229, 257, 362, 467], "surround": 55, "oh": 55, "ignore_hole_birth": [55, 72, 177, 199, 223, 251, 348, 452], "send_holes_without_birth_tim": [55, 72, 80, 223, 224, 251, 252, 348, 355, 452, 460], "announc": 56, "traffic": [56, 79, 185, 207, 233, 299, 354, 459], "ned": 57, "bass": 57, "c77b9667": 57, "29d5": 57, "610e": 57, "ae29": 57, "41e3": 57, "55a2": 57, "fe8a": 57, "b974": 57, "67aa": 57, "c77b": 57, "9667": 57, "toni": 57, "hutter": 57, "d4598027": 57, "4f3b": 57, "a9ab": 57, "6d1f": 57, "8d68": 57, "3dc2": 57, "dfb5": 57, "6ad8": 57, "60ee": 57, "d459": 57, "8027": 57, "brian": [57, 172, 181, 194, 203, 227, 255], "behlendorf": [57, 172, 181, 194, 203, 227, 255], "c6af658b": 57, "c33d": 57, "f142": 57, "657e": 57, "d1f7": 57, "c328": 57, "a296": 57, "0ab9": 57, "e991": 57, "c6af": 57, "658b": 57, "ring": 57, "behlendorf1": [57, 172, 181, 194, 203, 227, 255], "llnl": [57, 172, 181, 184, 194, 203, 206, 227, 230, 255, 258], "gov": [57, 172, 181, 194, 203, 227, 255], "7a27ad00ae142b38d4aef8cc0af7a72b4c0e44f": 57, "tagger": 57, "1441996302": 57, "0700": 57, "fri": [57, 560], "sep": [57, 555], "42": [57, 148, 164, 187, 210, 237, 333, 437, 528, 544, 560], "pdt": 57, "dsa": 57, "bring": [58, 81, 82, 130, 144, 149, 150, 158, 164, 187, 210, 237, 313, 318, 319, 327, 333, 335, 357, 403, 417, 422, 423, 431, 437, 461, 462, 510, 524, 529, 530, 538, 544, 550], "togeth": [58, 72, 80, 109, 110, 111, 113, 115, 172, 177, 194, 199, 223, 233, 251, 276, 279, 280, 281, 283, 285, 348, 355, 384, 385, 386, 388, 390, 452, 460, 489, 490, 491, 493, 495], "annual": 58, "rais": [58, 
79, 233, 299, 354, 459], "ongo": [58, 156, 429, 536], "admin": [58, 59, 60], "vdev_id": [58, 73, 75, 82, 84, 174, 180, 196, 202, 210, 219, 226, 237, 247, 254, 335, 349, 351, 357, 359, 453, 455, 462, 464], "ceph": 58, "xen": 58, "hypervisor": 58, "dom0": 58, "udisks2": 58, "conduct": 58, "roadmap": [58, 59, 60], "8000": [59, 60, 564], "2q": [59, 60, 563, 564], "3c": [59, 60, 564], "4j": [59, 60, 564], "5e": [59, 60, 564], "6x": [59, 60, 564], "9p": [59, 60, 564], "er": [59, 60, 564], "hc": [59, 60, 562, 564], "jq": [59, 60, 564], "k4": [59, 60, 564], "favorit": 60, "x2014": [62, 63, 65, 66, 67, 68, 69, 71, 72, 74, 75, 77, 78, 79, 80, 81, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 205, 207, 208, 210, 218, 229, 232, 233, 235, 237, 246, 249, 256, 257, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 339, 340, 342, 343, 344, 345, 347, 348, 350, 351, 353, 354, 355, 356, 357, 358, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 442, 443, 445, 446, 447, 448, 449, 451, 452, 454, 455, 457, 458, 459, 460, 461, 462, 463, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547], "havxp": [62, 240, 339, 442], "x2026": [62, 63, 66, 75, 79, 80, 82, 83, 87, 88, 89, 92, 93, 94, 96, 97, 98, 99, 101, 103, 106, 107, 111, 112, 115, 116, 118, 119, 125, 126, 133, 136, 137, 141, 142, 143, 144, 145, 146, 148, 149, 150, 152, 153, 155, 156, 157, 158, 159, 160, 161, 162, 163, 165, 166, 167, 339, 340, 351, 355, 357, 358, 362, 364, 367, 368, 369, 371, 372, 373, 374, 376, 378, 381, 382, 386, 387, 390, 391, 393, 394, 399, 400, 406, 409, 410, 414, 415, 416, 417, 418, 419, 421, 422, 423, 425, 426, 428, 429, 430, 431, 432, 433, 434, 435, 436, 438, 439, 440, 442, 443, 446, 455, 459, 460, 462, 463, 467, 468, 469, 472, 473, 474, 476, 477, 478, 479, 481, 483, 486, 487, 491, 492, 495, 496, 498, 499, 505, 506, 513, 516, 517, 521, 522, 523, 524, 525, 526, 528, 529, 530, 532, 533, 535, 536, 537, 538, 539, 540, 541, 542, 543, 545, 546, 547], "vmstat": [62, 240, 339, 442], "ddh": [62, 442], "ddi": [62, 442], "ddm": [62, 442], "dmh": [62, 442], "dmi": [62, 240, 339, 442], "dmm": [62, 442], "mh": [62, 240, 339, 442], "mi": [62, 442], "ph": [62, 187, 
240, 339, 442], "pm": [62, 240, 339, 442], "pdh": [62, 442], "pdi": [62, 442], "pdm": [62, 442], "pmh": [62, 442], "pmi": [62, 240, 339, 442], "pmm": [62, 442], "dhit": [62, 240, 339, 442], "dioh": [62, 442], "ddhit": [62, 442], "ddioh": [62, 442], "ddmi": [62, 442], "dmhit": [62, 442], "dmioh": [62, 442], "dmmi": [62, 442], "ioh": [62, 442], "mfug": [62, 240, 339, 442], "mhit": [62, 240, 339, 442], "mioh": [62, 442], "mmi": [62, 240, 339, 442], "mrug": [62, 240, 339, 442], "phit": [62, 240, 339, 442], "pioh": [62, 442], "pdhit": [62, 442], "pdioh": [62, 442], "pdmi": [62, 442], "pmhit": [62, 442], "pmioh": [62, 442], "pmmi": [62, 442], "arcsz": [62, 240, 339, 442], "unc": [62, 442], "dread": [62, 240, 339, 442], "ddread": [62, 442], "dmread": [62, 442], "eskip": [62, 240, 339, 442], "evict_skip": [62, 240, 339, 442], "mread": [62, 240, 339, 442], "pread": [62, 240, 339, 442], "pdread": [62, 442], "pmread": [62, 442], "l2hit": [62, 240, 339, 442], "l2miss": [62, 240, 339, 442], "l2read": [62, 240, 339, 442], "l2pref": [62, 339, 442], "l2mfu": [62, 339, 442], "l2mru": [62, 339, 442], "l2data": [62, 339, 442], "l2meta": [62, 339, 442], "l2size": [62, 240, 339, 442], "mtxmi": [62, 240, 339, 442], "mutex_miss": [62, 240, 339, 442], "l2byte": [62, 240, 339, 442], "l2asiz": [62, 240, 339, 442], "stat": [62, 67, 87, 97, 107, 125, 128, 146, 171, 185, 193, 205, 207, 210, 216, 229, 233, 237, 240, 244, 257, 267, 277, 294, 296, 315, 339, 343, 362, 372, 382, 399, 401, 419, 442, 447, 467, 477, 487, 505, 508, 526], "parsabl": [62, 93, 94, 95, 96, 97, 99, 101, 107, 111, 115, 116, 125, 142, 146, 148, 152, 157, 159, 163, 185, 207, 210, 233, 237, 240, 263, 264, 265, 266, 267, 269, 271, 277, 281, 285, 286, 294, 311, 315, 317, 321, 326, 328, 332, 339, 368, 369, 370, 371, 372, 374, 376, 382, 386, 390, 391, 399, 415, 419, 421, 425, 430, 432, 436, 442, 473, 474, 475, 476, 477, 479, 481, 487, 491, 495, 496, 505, 522, 526, 528, 532, 537, 539, 543], "operand": [62, 240, 339, 442], "sampl": [62, 146, 165, 210, 237, 240, 315, 339, 419, 438, 442, 526, 545], "decemb": [62, 187, 270, 290, 368, 442], "23": [62, 68, 80, 128, 148, 164, 173, 185, 187, 195, 207, 210, 217, 233, 237, 245, 296, 333, 344, 401, 437, 442, 448, 460, 508, 528, 544], "2022": [62, 77, 80, 89, 90, 92, 93, 94, 95, 96, 99, 108, 113, 114, 116, 118, 119, 123, 127, 128, 133, 137, 138, 141, 144, 146, 148, 152, 159, 162, 164, 166, 167, 354, 442, 457, 460, 469, 470, 472, 473, 474, 475, 476, 479, 488, 493, 494, 496, 498, 499, 503, 507, 508, 513, 517, 518, 521, 524, 526, 528, 532, 539, 542, 544, 546, 547], "stylist": [63, 169, 190, 213, 241, 340, 443], "chpvcp": [63, 169, 190, 213, 241, 340, 443], "upenn": [63, 169, 190, 213, 241, 340, 443], "lee": [63, 169, 190, 213, 241, 340, 443], "06cse480": [63, 169, 190, 213, 241, 340, 443], "emptor": [63, 169, 190, 213, 241, 340, 443], "indent": [63, 169, 190, 213, 241, 340, 443], "picki": [63, 169, 190, 213, 241, 340, 443], "ansi": [63, 128, 164, 169, 190, 213, 241, 333, 340, 401, 437, 443, 508, 544], "endif": [63, 169, 190, 213, 241, 340, 443], "cast": [63, 169, 190, 213, 241, 340, 443], "putback": [63, 169, 190, 213, 241, 340, 443], "u_int": [63, 169, 190, 213, 241, 340, 443], "u_long": [63, 169, 190, 213, 241, 340, 443], "uint_t": [63, 169, 190, 213, 241, 340, 443], "ulong_t": [63, 169, 190, 213, 241, 340, 443], "nonempti": [63, 443], "parenthesi": [63, 105, 169, 190, 213, 232, 241, 275, 340, 380, 443, 485], "preprocessor": [63, 169, 190, 213, 241, 340, 443], "unmatch": [63, 169, 190, 213, 241, 340, 443], "cpp": [63, 
169, 190, 213, 241, 340, 443], "all_cap": [63, 169, 190, 213, 241, 340, 443], "deserv": [63, 169, 190, 213, 241, 340, 443], "this_is_a_long_vari": [63, 169, 190, 213, 241, 340, 443], "another_vari": [63, 169, 190, 213, 241, 340, 443], "Will": [63, 105, 144, 169, 187, 190, 210, 213, 232, 237, 241, 275, 313, 340, 380, 417, 443, 485, 524], "do_someth": [63, 169, 190, 213, 241, 340, 443], "amp": [63, 68, 72, 75, 88, 128, 140, 164, 169, 184, 190, 199, 206, 213, 222, 223, 230, 241, 250, 251, 258, 296, 333, 340, 344, 348, 351, 363, 401, 413, 437, 443, 448, 452, 455, 468, 508, 520, 544], "26": [63, 65, 66, 67, 68, 74, 83, 86, 87, 88, 131, 132, 165, 183, 205, 210, 229, 232, 237, 249, 256, 257, 275, 339, 340, 342, 343, 344, 350, 358, 361, 362, 363, 404, 405, 438, 443, 445, 446, 447, 448, 454, 463, 466, 467, 468, 511, 512, 545], "2021": [63, 65, 66, 67, 68, 69, 74, 83, 85, 86, 88, 100, 103, 105, 106, 117, 120, 126, 131, 132, 135, 136, 145, 147, 151, 153, 154, 155, 156, 158, 161, 163, 165, 249, 251, 256, 275, 299, 300, 339, 340, 342, 343, 344, 345, 350, 355, 356, 357, 358, 360, 361, 363, 364, 365, 367, 370, 371, 374, 375, 376, 378, 380, 381, 389, 391, 392, 393, 394, 395, 400, 404, 405, 406, 408, 409, 410, 411, 413, 418, 419, 420, 424, 426, 427, 428, 429, 431, 432, 434, 436, 437, 438, 439, 440, 443, 445, 446, 447, 448, 449, 454, 463, 465, 466, 468, 480, 483, 485, 486, 497, 500, 506, 511, 512, 515, 516, 525, 527, 531, 533, 534, 535, 536, 538, 541, 543, 545], "raidz_test": [64, 191, 214, 242, 341, 444], "zhack": [64, 170, 191, 214, 242, 341, 444], "zvol_wait": [64, 214, 242, 341, 444], "benchmark": [65, 72, 192, 199, 215, 223, 243, 251, 342, 348, 445, 452], "stbevtd": [65, 342, 445], "zio_off_shift": [65, 192, 215, 243, 342, 445], "raidz_data_disk": [65, 192, 215, 243, 342, 445], "zio_size_shift": [65, 192, 215, 243, 342, 445], "reflow_offset": [65, 342, 445], "sweep": [65, 192, 215, 243, 342, 445], "19": [65, 79, 128, 148, 164, 185, 187, 192, 207, 210, 215, 231, 233, 237, 243, 296, 299, 333, 342, 354, 401, 437, 445, 459, 508, 528, 544], "expans": [65, 68, 72, 82, 134, 172, 187, 194, 210, 237, 335, 342, 357, 445, 462], "weep": [65, 192, 215, 243, 342, 445], "aod": [65, 342, 445], "imeout": [65, 192, 215, 243, 342, 445], "wall": [65, 192, 215, 243, 342, 445], "enchmark": [65, 192, 215, 243, 342, 445], "xpansion": [65, 342, 445], "erbos": [65, 173, 192, 195, 215, 217, 243, 245, 342, 445], "est": [65, 192, 215, 243, 342, 445], "ebug": [65, 192, 215, 243, 342, 445], "gdb": [65, 192, 215, 243, 342, 445], "sigsegv": [65, 192, 215, 243, 342, 445], "sigabrt": [65, 192, 215, 243, 342, 445], "dgq": [66, 446], "outputdir": [66, 446], "pp": [66, 169, 190, 213, 241, 446], "uxx": [66, 446], "pathnam": [66, 95, 101, 185, 207, 233, 265, 271, 370, 376, 446, 475, 481], "gq": [66, 446], "dq": [66, 446], "nor": [66, 75, 79, 80, 97, 107, 125, 156, 183, 185, 205, 207, 233, 267, 277, 294, 299, 351, 354, 372, 382, 399, 446, 455, 459, 460, 477, 487, 505, 536], "descend": [66, 75, 79, 89, 91, 94, 98, 101, 102, 106, 109, 110, 111, 112, 113, 115, 118, 119, 121, 124, 128, 185, 207, 233, 259, 261, 264, 268, 271, 272, 276, 279, 280, 281, 282, 283, 285, 288, 289, 291, 293, 296, 299, 351, 354, 364, 366, 369, 373, 376, 377, 381, 384, 385, 386, 387, 388, 390, 393, 394, 396, 398, 401, 446, 455, 459, 469, 471, 474, 478, 481, 482, 486, 489, 490, 491, 492, 493, 495, 498, 499, 501, 504, 508], "compris": [66, 80, 88, 140, 148, 164, 176, 184, 187, 198, 200, 206, 210, 222, 224, 230, 237, 250, 252, 258, 333, 355, 363, 413, 437, 446, 460, 468, 520, 
528, 544], "pertain": [66, 446], "elaps": [66, 140, 222, 250, 413, 446, 520], "timestamp": [66, 98, 105, 112, 275, 380, 446, 478, 485, 492], "test_result": [66, 446], "ini": [66, 446], "pre_us": [66, 446], "post_us": [66, 446], "quot": [66, 68, 71, 72, 75, 77, 78, 79, 80, 81, 82, 87, 89, 91, 93, 94, 96, 99, 102, 103, 105, 109, 110, 111, 115, 116, 119, 121, 128, 131, 132, 134, 137, 140, 146, 152, 158, 164, 169, 172, 175, 176, 177, 178, 182, 184, 185, 187, 190, 194, 197, 198, 199, 200, 204, 205, 206, 207, 209, 210, 213, 220, 221, 222, 223, 224, 228, 229, 230, 231, 232, 233, 236, 237, 241, 248, 250, 251, 252, 257, 258, 259, 261, 263, 264, 270, 272, 273, 275, 276, 279, 280, 281, 285, 289, 290, 291, 296, 298, 299, 300, 301, 306, 315, 321, 334, 335, 347, 348, 351, 353, 354, 355, 356, 357, 362, 364, 366, 368, 369, 377, 378, 380, 384, 385, 386, 390, 394, 396, 401, 404, 405, 410, 413, 419, 425, 431, 437, 446, 448, 451, 452, 455, 457, 458, 459, 460, 461, 462, 467, 469, 471, 473, 474, 476, 479, 482, 483, 485, 489, 490, 491, 495, 496, 499, 501, 508, 511, 512, 517, 520, 526, 532, 538, 544], "dry": [66, 91, 93, 94, 102, 105, 111, 115, 121, 158, 185, 187, 207, 210, 232, 233, 237, 261, 263, 264, 272, 275, 281, 285, 291, 327, 366, 368, 369, 377, 380, 386, 390, 396, 431, 446, 471, 473, 474, 482, 485, 491, 495, 501, 538, 555], "kmemleak": [66, 446], "failsaf": [66, 446], "hoc": [66, 446], "demonstr": [66, 446], "jkennedi": [66, 446], "07": [66, 156, 429, 446, 536, 555], "20120923t180654": [66, 446], "libzpool": [67, 68, 87, 171, 193, 195, 205, 216, 217, 229, 244, 245, 257, 343, 344, 362, 447, 448, 467], "poke": [67, 171, 193, 216, 244, 343, 447], "danger": [67, 79, 132, 171, 185, 186, 193, 207, 209, 216, 233, 236, 244, 299, 301, 343, 354, 405, 447, 459, 512], "decrement": [67, 171, 193, 216, 244, 343, 447], "cu": [67, 447], "detach": [67, 81, 84, 100, 120, 123, 127, 128, 133, 134, 135, 140, 146, 147, 148, 149, 150, 152, 154, 158, 159, 164, 176, 187, 198, 210, 222, 237, 250, 254, 270, 290, 296, 302, 303, 304, 315, 316, 317, 318, 319, 321, 323, 327, 328, 333, 334, 356, 359, 375, 395, 401, 406, 407, 408, 413, 419, 420, 421, 422, 423, 425, 427, 431, 432, 437, 447, 461, 464, 480, 500, 503, 507, 508, 513, 514, 515, 520, 526, 527, 528, 529, 530, 532, 534, 538, 539, 544, 551, 552], "undetach": [67, 447], "subcommand": [67, 78, 80, 89, 91, 97, 102, 107, 108, 109, 110, 119, 121, 125, 128, 133, 135, 146, 164, 171, 178, 185, 187, 193, 200, 207, 210, 216, 224, 233, 237, 244, 252, 259, 261, 267, 270, 272, 277, 278, 279, 280, 289, 290, 291, 294, 296, 298, 299, 302, 333, 343, 353, 364, 366, 372, 377, 382, 383, 384, 385, 394, 396, 399, 401, 406, 408, 437, 447, 458, 469, 471, 477, 482, 487, 488, 489, 490, 499, 501, 505, 508, 513, 515, 526, 544], "dir": [67, 80, 144, 171, 187, 193, 210, 216, 237, 244, 313, 343, 417, 447, 460, 524], "for_read_obj": [67, 171, 193, 216, 244, 343, 447], "for_write_obj": [67, 171, 193, 216, 244, 343, 447], "descriptions_obj": [67, 171, 193, 216, 244, 343, 447], "clairvoy": [67, 171, 193, 216, 244, 343, 447], "veg": [68, 344, 448], "size_of_each_vdev": [68, 173, 195, 217, 245, 344, 448], "alignment_shift": [68, 173, 195, 217, 245, 344, 448], "mirror_copi": [68, 173, 195, 217, 245, 344, 448], "raidz_disk": [68, 173, 195, 217, 245, 344, 448], "draid_disk": [68, 344, 448], "raid_par": [68, 344, 448], "raid_kind": [68, 344, 448], "draid_data": [68, 344, 448], "draid_spar": [68, 344, 448], "vdev_class_st": [68, 344, 448], "gang_block_threshold": [68, 173, 195, 217, 245, 344, 448], 
"initialize_pool_i_tim": [68, 173, 195, 217, 245, 344, 448], "kill_percentag": [68, 173, 195, 217, 245, 344, 448], "zil_failure_r": [68, 173, 195, 217, 245, 344, 448], "vg": 68, "tandem": [68, 173, 195, 217, 245, 344, 448], "nightli": [68, 173, 195, 217, 245, 344, 448], "daili": [68, 173, 195, 217, 245, 344, 448], "team": [68, 111, 115, 173, 195, 217, 245, 281, 285, 344, 386, 390, 448, 491, 495], "wrote": [68, 173, 195, 217, 245, 344, 448, 557], "ten": [68, 72, 173, 195, 217, 245, 251, 258, 344, 348, 448, 452], "quietli": [68, 173, 195, 217, 245, 344, 448], "chatti": [68, 173, 195, 217, 245, 344, 448], "ly": [68, 173, 195, 217, 245, 344, 448], "shouldn": [68, 173, 195, 217, 245, 344, 448], "64m": [68, 173, 195, 217, 245, 344, 448], "eraidz": 68, "spa_freez": [68, 344, 448], "initialis": [68, 173, 195, 217, 245, 448], "stochast": [68, 448], "prepend": [68, 82, 187, 210, 237, 335, 357, 448, 462], "ld_library_path": [68, 448], "lenni": [68, 448], "integ": [68, 79, 87, 88, 105, 183, 184, 185, 186, 205, 206, 207, 229, 230, 232, 233, 257, 258, 275, 299, 344, 354, 362, 363, 380, 448, 459, 467, 468, 485], "zfs_dbgmsg": [68, 72, 87, 105, 205, 217, 229, 232, 245, 251, 257, 275, 344, 348, 362, 380, 448, 452, 467, 485], "vvv": [68, 173, 195, 217, 245, 344, 448], "mayb": [68, 173, 195, 217, 245, 344, 448], "runlength": [68, 173, 195, 217, 245, 344, 448], "120": [68, 173, 195, 217, 245, 344, 448], "unawar": [68, 195, 217, 245, 344, 448], "zfs_stack_siz": [68, 173, 195, 217, 245, 344, 448], "stacksiz": [68, 173, 195, 217, 245, 344, 448], "spuriou": [68, 173, 195, 217, 245, 344, 448], "pthread_stack_min": [68, 173, 195, 217, 245, 344, 448], "procedur": [68, 81, 173, 195, 217, 237, 245, 334, 344, 356, 448, 461, 552], "256k": [68, 173, 194, 195, 217, 245, 251, 344, 448], "27": [69, 87, 100, 105, 106, 111, 115, 120, 135, 136, 145, 155, 161, 163, 178, 183, 205, 210, 229, 232, 257, 275, 345, 357, 362, 364, 365, 367, 375, 376, 380, 381, 389, 393, 394, 395, 406, 408, 409, 413, 418, 419, 428, 434, 436, 449, 467, 480, 485, 486, 491, 495, 500, 515, 516, 525, 535, 541, 543, 555], "spl_kmem_cache_kmem_thread": [71, 220, 248, 347, 451], "spl_kmem_cache_obj_per_slab": [71, 220, 248, 347, 451], "spl_kmem_cache_max_s": [71, 220, 248, 347, 451], "spl_kmem_cache_slab_limit": [71, 220, 248, 347, 451], "spl_kmem_alloc_warn": [71, 220, 248, 347, 451], "32768": [71, 347, 451], "spl_kmem_alloc_max": [71, 220, 248, 347, 451], "spl_kmem_cache_magazine_s": [71, 220, 248, 347, 451], "spl_hostid": [71, 75, 220, 248, 347, 351, 451, 455], "spl_hostid_path": [71, 220, 248, 347, 451], "charp": [71, 72, 177, 199, 220, 223, 248, 251, 347, 348, 451, 452], "spl_panic_halt": [71, 220, 248, 347, 451], "spl_taskq_kick": [71, 220, 248, 347, 451], "kick": [71, 140, 176, 198, 220, 222, 248, 250, 347, 413, 451, 520], "taskq": [71, 72, 220, 223, 248, 251, 347, 348, 451, 452], "didn": [71, 72, 140, 159, 176, 198, 220, 222, 237, 248, 250, 328, 347, 348, 413, 432, 451, 452, 520, 539], "spl_taskq_thread_bind": [71, 220, 248, 347, 451], "spl_taskq_thread_dynam": [71, 220, 248, 347, 451], "spl_taskq_thread_prior": [71, 220, 248, 347, 451], "spl_taskq_thread_sequenti": [71, 220, 248, 347, 451], "spl_max_show_task": [71, 220, 248, 347, 451], "spl_taskq_thread_timeout_m": [71, 451], "respawn": 71, "churn": [71, 451], "august": [71, 79, 130, 139, 142, 143, 149, 150, 157, 160, 178, 241, 243, 244, 245, 248, 250, 252, 253, 255, 258, 273, 295, 301, 302, 304, 305, 306, 307, 308, 309, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 324, 325, 326, 327, 
329, 331, 333, 334, 335, 337, 347, 403, 412, 415, 416, 417, 421, 422, 423, 425, 430, 433, 435, 451, 459, 510, 519, 522, 523, 529, 530, 537, 540], "dbuf_cache_max_byt": [72, 223, 251, 348, 452], "uint64_maxb": [72, 452], "u64": [72, 348, 452], "versu": [72, 251, 348, 452], "dbuf_cache_shift": [72, 223, 251, 348, 452], "32nd": [72, 348, 452], "dbuf_metadata_cache_max_byt": [72, 223, 251, 348, 452], "dbuf_metadata_cache_shift": [72, 223, 251, 348, 452], "64th": [72, 348, 452], "dbuf_cache_hiwater_pct": [72, 223, 251, 348, 452], "dbuf_cache_lowater_pct": [72, 223, 251, 348, 452], "log2": [72, 177, 223, 251, 348, 452], "dbuf_mutex_cache_shift": [72, 452], "dmu_object_alloc_chunk_shift": [72, 251, 348, 452], "bulk": [72, 251, 348, 452], "dmu_prefetch_max": [72, 223, 251, 348, 452], "134217728b": [72, 348, 452], "l2arc_feed_again": [72, 177, 199, 223, 251, 348, 452], "l2arc_feed_min_m": [72, 177, 199, 223, 251, 348, 452], "l2arc_feed_sec": [72, 177, 199, 223, 251, 348, 452], "l2arc_headroom": [72, 81, 177, 199, 223, 251, 334, 348, 356, 452, 461], "l2arc_write_max": [72, 177, 199, 223, 251, 348, 452], "l2arc_headroom_boost": [72, 177, 199, 223, 251, 348, 452], "l2arc_exclude_speci": [72, 348, 452], "l2arc_mfuonli": [72, 251, 348, 452], "l2arc_noprefetch": [72, 177, 199, 223, 251, 348, 452], "l2arc_mru_as": [72, 348, 452], "l2arc_mfu_as": [72, 348, 452], "l2arc_prefetch_as": [72, 348, 452], "evict_l2_eligible_mru": [72, 348, 452], "evict_l2_eligible_m": [72, 348, 452], "l2arc_meta_perc": [72, 251, 348, 452], "irration": [72, 348, 452], "l2arc_trim_ahead": [72, 82, 251, 335, 348, 357, 452, 462], "benefici": [72, 348, 452], "l2arc_norw": [72, 177, 199, 223, 251, 348, 452], "l2arc_write_boost": [72, 177, 199, 223, 251, 348, 452], "33554432b": 72, "l2arc_rebuild_en": [72, 81, 251, 334, 348, 356, 452, 461], "somehow": [72, 251, 348, 452], "l2arc_rebuild_blocks_min_l2s": [72, 81, 251, 334, 348, 356, 452, 461], "1073741824b": [72, 348, 452], "mininum": [72, 348, 452], "l2arc_evict": [72, 251, 348, 452], "compar": [72, 75, 78, 79, 80, 82, 87, 97, 106, 107, 125, 166, 167, 185, 207, 233, 237, 251, 252, 257, 267, 276, 277, 294, 298, 299, 335, 348, 351, 353, 354, 355, 357, 362, 372, 381, 382, 399, 452, 455, 458, 459, 460, 462, 467, 477, 486, 487, 505, 546, 547], "metaslab_aliquot": [72, 177, 199, 223, 251, 348, 452], "1048576b": [72, 348, 452], "metaslab_bias_en": [72, 177, 199, 223, 251, 348, 452], "metaslab_force_gang": [72, 223, 251, 348, 452], "16777217b": [72, 452], "metaslab_force_ganging_pct": [72, 452], "brt_zap_prefetch": 72, "brt": [72, 78, 87, 458, 467], "brt_zap_default_b": 72, "brt_zap_default_ib": 72, "ddt_zap_default_b": 72, "ddt_zap_default_ib": 72, "zfs_default_b": [72, 348, 452], "zfs_default_ib": [72, 348, 452], "zfs_history_output_max": [72, 251, 348, 452], "dmu_max_access": [72, 251, 348, 452], "zfs_ioc_channel_program": [72, 251, 348, 452], "zfs_keep_log_spacemaps_at_export": [72, 251, 348, 452], "zfs_metaslab_segment_weight_en": [72, 199, 223, 251, 348, 452], "zfs_metaslab_switch_threshold": [72, 199, 223, 251, 348, 452], "metaslab_debug_load": [72, 177, 199, 223, 251, 348, 452], "metaslab_debug_unload": [72, 177, 199, 223, 251, 348, 452], "metaslab_fragmentation_factor_en": [72, 177, 199, 223, 251, 348, 452], "metaslab_df_max_search": [72, 223, 251, 348, 452], "16777216b": [72, 348, 452], "gt": [72, 77, 82, 95, 103, 105, 128, 140, 171, 172, 173, 175, 177, 179, 181, 182, 185, 186, 192, 193, 194, 195, 197, 199, 200, 201, 203, 204, 207, 209, 215, 216, 217, 221, 222, 223, 224, 225, 227, 
228, 229, 231, 232, 233, 236, 243, 244, 245, 250, 251, 252, 253, 255, 257, 265, 273, 275, 296, 299, 301, 333, 335, 348, 354, 357, 370, 378, 380, 401, 413, 452, 457, 462, 475, 483, 485, 508, 520], "metaslab_block_pick": [72, 223, 251, 348, 452], "1024": [72, 146, 185, 210, 223, 237, 251, 315, 348, 419, 452, 526], "metaslab_df_use_largest_seg": [72, 223, 251, 348, 452], "metaslab_df_free_pct": [72, 223, 251, 348, 452], "metaslab_df_alloc_threshold": [72, 223, 251, 348, 452], "zfs_metaslab_max_size_cache_sec": [72, 251, 348, 452], "3600": [72, 251, 348, 452], "zfs_metaslab_mem_limit": [72, 251, 348, 452], "clog": [72, 251, 348, 452], "zfs_metaslab_try_hard_before_gang": [72, 348, 452], "zfs_metaslab_find_max_tri": [72, 348, 452], "zfs_vdev_default_ms_count": [72, 223, 251, 348, 452], "zfs_vdev_default_ms_shift": [72, 251, 348, 452], "29": [72, 154, 188, 211, 238, 251, 348, 370, 427, 452, 534], "zfs_vdev_max_ms_shift": [72, 452], "zfs_vdev_max_auto_ashift": [72, 251, 348, 452], "x2192": [72, 75, 351, 452, 455], "ashift_max": [72, 251, 348, 452], "zfs_vdev_min_auto_ashift": [72, 251, 348, 452], "ashift_min": [72, 251, 348, 452], "zfs_vdev_min_ms_count": [72, 223, 251, 348, 452], "vdev_validate_skip": [72, 251, 348, 452], "zfs_vdev_ms_count_limit": [72, 251, 348, 452], "131072": [72, 223, 251, 348, 452], "metaslab_preload_en": [72, 177, 199, 223, 251, 348, 452], "metaslab_preload_limit": [72, 452], "metaslab_preload_pct": [72, 452], "metaslab_unload_delai": [72, 251, 348, 452], "metaslab_unload_delay_m": [72, 251, 348, 452], "600000m": [72, 348, 452], "reference_histori": [72, 348, 452], "holder": [72, 348, 452], "reference_tracking_en": [72, 348, 452], "raidz_expand_max_copy_byt": 72, "160mb": [72, 251, 348], "raidz_expand_max_reflow_byt": 72, "reflow": 72, "raidz_io_aggregate_row": 72, "refcount_t": [72, 348, 452], "spa_config_path": [72, 177, 183, 199, 223, 251, 348, 452], "spa": [72, 87, 177, 199, 223, 251, 257, 348, 362, 452, 467], "spa_asize_infl": [72, 177, 199, 223, 251, 348, 452], "spa_load_print_vdev_tre": [72, 223, 251, 348, 452], "spa_load_verify_data": [72, 177, 199, 223, 251, 348, 452], "spa_load_verify_metadata": [72, 177, 199, 223, 251, 348, 452], "spa_load_verify_shift": [72, 223, 251, 348, 452], "16th": [72, 222, 250, 348, 413, 452], "spa_slop_shift": [72, 82, 177, 199, 223, 237, 251, 335, 348, 357, 452, 462], "spa_num_alloc": 72, "alloct": 72, "degred": 72, "spa_upgrade_errlog_limit": [72, 452], "head_errlog": [72, 80, 156, 159, 452, 460, 536, 539], "vdev_removal_max_span": [72, 223, 251, 348, 452], "32768b": [72, 348, 452], "zfs_vdev_read_gap_limit": [72, 177, 199, 223, 251, 348, 452], "vdev_file_logical_ashift": [72, 251, 348, 452], "vdev_file_physical_ashift": [72, 251, 348, 452], "zap_iterate_prefetch": [72, 223, 251, 348, 452], "zap_micro_max_s": [72, 452], "131072b": [72, 348, 452], "micro": [72, 199, 223, 251, 348, 452], "zfetch_min_dist": [72, 348, 452], "4194304b": [72, 348, 452], "got": [72, 348, 452], "satur": [72, 348, 452], "zfetch_max_dist": [72, 199, 223, 251, 348, 452], "67108864b": [72, 348, 452], "zfetch_max_idist": [72, 251, 348, 452], "zfetch_max_stream": [72, 177, 199, 223, 251, 348, 452], "zfetch": [72, 177, 199, 223, 251, 348, 452], "zfetch_min_sec_reap": [72, 177, 199, 223, 251, 348, 452], "inact": [72, 80, 147, 178, 187, 200, 210, 224, 237, 252, 316, 348, 355, 420, 452, 460, 527], "zfetch_max_sec_reap": [72, 348, 452], "zfs_abd_scatter_en": [72, 251, 348, 452], "zfs_abd_scatter_max_ord": [72, 251, 348, 452], "max_ord": [72, 348, 452], 
"zfs_abd_scatter_min_s": [72, 223, 251, 348, 452], "1536b": [72, 348, 452], "abd": [72, 223, 251, 348, 452], "zfs_arc_dnode_limit": [72, 199, 223, 251, 348, 452], "0b": [72, 156, 348, 429, 452, 536], "unpin": [72, 199, 223, 251, 348, 452], "ceil": [72, 199, 223, 251, 348, 452], "zfs_arc_dnode_limit_perc": [72, 199, 223, 251, 348, 452], "nonzero": [72, 81, 103, 105, 152, 187, 199, 210, 223, 232, 237, 251, 275, 321, 348, 356, 378, 380, 425, 451, 452, 461, 483, 485, 532], "zfs_arc_dnode_reduce_perc": [72, 199, 223, 251, 348, 452], "zfs_arc_average_blocks": [72, 177, 199, 223, 251, 348, 452], "8192b": [72, 348, 452], "zfs_arc_eviction_pct": [72, 251, 348, 452], "arc_is_overflow": [72, 251, 348, 452], "arc_get_data_impl": [72, 251, 348, 452], "arc_siz": [72, 251, 348, 452], "arc_c": [72, 177, 199, 223, 251, 348, 452], "finit": [72, 251, 348, 452], "zfs_arc_evict_batch_limit": [72, 177, 199, 223, 251, 348, 452], "zfs_arc_grow_retri": [72, 177, 199, 223, 251, 348, 452], "arc_grow_retri": [72, 199, 223, 251, 348, 452], "growth": [72, 88, 177, 199, 223, 251, 348, 452, 468], "zfs_arc_lotsfree_perc": [72, 177, 199, 223, 251, 348, 452], "x00d7": [72, 87, 452, 467], "zfs_arc_meta_bal": [72, 452], "proportion": [72, 452], "zfs_arc_min": [72, 177, 199, 223, 251, 348, 452], "arc_c_min": [72, 199, 223, 251, 348, 452], "zfs_arc_min_prefetch_m": [72, 223, 251, 348, 452], "0m": [72, 156, 348, 429, 452, 536], "x2261": [72, 348, 452], "zfs_arc_min_prescient_prefetch_m": [72, 223, 251, 348, 452], "zfs_arc_prune_task_thread": [72, 348, 452], "theoret": [72, 348, 452], "proven": [72, 348, 452], "zfs_max_missing_tvd": [72, 223, 251, 348, 452], "zfs_max_nvlist_src_s": [72, 251, 348, 452], "zc_nvlist_src_siz": [72, 251, 348, 452], "einval": [72, 105, 232, 251, 275, 348, 380, 452, 485], "zfs_multilist_num_sublist": [72, 199, 223, 251, 348, 452], "zfs_arc_overflow_shift": [72, 177, 199, 223, 251, 348, 452], "reclam": [72, 348, 452], "till": [72, 348, 452], "zfs_arc_shrink_shift": [72, 177, 199, 223, 251, 348, 452], "zfs_arc_pc_perc": [72, 199, 223, 251, 348, 452], "zfs_arc_shrinker_limit": [72, 251, 348, 452], "10000": [72, 177, 199, 348, 451, 452], "shrinker": [72, 251, 348, 452], "160": [72, 452], "zfs_arc_sys_fre": [72, 177, 199, 223, 251, 348, 452], "bigger": [72, 348, 452], "zfs_autoimport_dis": [72, 177, 199, 223, 251, 348, 452], "zfs_checksum_events_per_second": [72, 251, 348, 452], "zfs_commit_timeout_pct": [72, 199, 223, 251, 348, 452], "zfs_condense_indirect_commit_entry_delay_m": [72, 251, 348, 452], "zfs_condense_indirect_obsolete_pct": [72, 251, 348, 452], "zfs_condense_indirect_vdevs_en": [72, 223, 251, 348, 452], "zfs_condense_min_mapping_byt": [72, 223, 251, 348, 452], "zfs_condense_max_obsolete_byt": [72, 223, 251, 348, 452], "influenc": [72, 231, 273, 348, 452], "zfs_flag": [72, 177, 199, 223, 251, 348, 452], "zfs_dbgmsg_maxsiz": [72, 177, 199, 223, 251, 348, 452], "zfs_dbuf_state_index": [72, 177, 199, 223, 251, 348, 452], "zfs_deadman_en": [72, 177, 199, 223, 251, 348, 452], "zfs_deadman_synctime_m": [72, 177, 199, 223, 251, 348, 452], "zfs_deadman_ziotime_m": [72, 223, 251, 348, 452], "zfs_deadman_failmod": [72, 140, 222, 223, 250, 251, 348, 413, 452, 520], "partner": [72, 223, 251, 348, 452], "zfs_deadman_checktime_m": [72, 199, 223, 251, 348, 452], "60000m": [72, 348, 452], "300000m": [72, 348, 452], "zfs_dedup_prefetch": [72, 177, 199, 223, 251, 348, 452], "ed": [72, 177, 199, 223, 251, 348, 452], "500000": [72, 348, 452], "tenth": [72, 348, 452], "zfs_disable_ivset_guid_check": [72, 251, 348, 
452, 559], "zfs_key_max_salt_us": [72, 251, 348, 452], "400000000": [72, 348, 452], "zfs_object_mutex_s": [72, 251, 348, 452], "hashtabl": [72, 251, 348, 452], "zfs_slow_io_events_per_second": [72, 140, 222, 223, 250, 251, 348, 413, 452, 520], "zfs_unflushed_max_mem_amt": [72, 251, 348, 452], "zfs_unflushed_max_mem_ppm": [72, 251, 348, 452], "1000ppm": [72, 348, 452], "millionth": [72, 348, 452], "zfs_unflushed_log_block_max": [72, 251, 348, 452], "ditto": [72, 187, 210, 237, 251, 348, 452], "unclean": [72, 109, 110, 207, 233, 251, 279, 280, 348, 384, 385, 452, 489, 490], "zfs_unflushed_log_block_min": [72, 251, 348, 452], "zfs_unflushed_log_block_pct": [72, 251, 348, 452], "zfs_unflushed_log_txg_max": [72, 348, 452], "zfs_unlink_suspend_progress": [72, 223, 251, 348, 452], "zfs_delete_block": [72, 199, 223, 251, 348, 452], "20480": [72, 173, 348, 452], "zfs_dirty_data_max_perc": [72, 177, 199, 223, 251, 348, 452], "zfs_dirty_data_max_max": [72, 177, 199, 223, 251, 348, 452], "zfs_dirty_data_max_max_perc": [72, 177, 199, 223, 251, 348, 452], "zfs_dirty_data_sync_perc": [72, 223, 251, 348, 452], "zfs_wrlog_data_max": [72, 348, 452], "zfs_fallocate_reserve_perc": [72, 251, 348, 452], "prealloc": [72, 251, 348, 452], "falloc": [72, 251, 348, 452], "eopnotsupp": [72, 251, 348, 452], "vector": [72, 109, 110, 199, 223, 233, 251, 279, 280, 348, 384, 385, 452, 489, 490], "zfs_bclone_en": [72, 452], "block_clon": [72, 80, 452, 460], "zfs_bclone_wait_dirti": [72, 452], "ficlon": [72, 452], "ficlonerang": [72, 452], "sse41": [72, 452], "avx512": [72, 452], "zfs_free_bpobj_en": [72, 199, 223, 251, 348, 452], "zfs_async_block_max_block": [72, 223, 251, 348, 452], "unlimit": [72, 251, 348, 452], "zfs_max_async_dedup_fre": [72, 251, 348, 452], "100000": [72, 79, 233, 299, 348, 354, 452, 459], "zfs_vdev_initializing_max_act": [72, 223, 251, 348, 452], "zfs_vdev_initializing_min_act": [72, 223, 251, 348, 452], "zfs_vdev_open_timeout_m": [72, 348, 452], "briefli": [72, 173, 192, 195, 215, 217, 243, 245, 348, 452], "zfs_vdev_rebuild_max_act": [72, 251, 348, 452], "zfs_vdev_rebuild_min_act": [72, 251, 348, 452], "zfs_vdev_removal_max_act": [72, 223, 251, 348, 452], "zfs_vdev_removal_min_act": [72, 223, 251, 348, 452], "zfs_vdev_trim_max_act": [72, 223, 251, 348, 452], "zfs_vdev_trim_min_act": [72, 223, 251, 348, 452], "zfs_vdev_nia_delai": [72, 251, 348, 452], "zfs_": [72, 128, 348, 452, 508], "_min_act": [72, 251, 348, 452], "zfs_vdev_nia_credit": [72, 251, 348, 452], "monopol": [72, 251, 348, 452], "incomplet": [72, 88, 111, 115, 184, 206, 230, 251, 258, 281, 285, 348, 363, 386, 390, 452, 468, 491, 495], "zfs_vdev_queue_depth_pct": [72, 199, 223, 251, 348, 452], "zio_dva_throttle_en": [72, 199, 223, 251, 348, 452], "zfs_vdev_def_queue_depth": [72, 452], "zfs_vdev_failfast_mask": [72, 452], "bitwis": [72, 177, 199, 223, 251, 348, 452], "ored": [72, 83, 348, 358, 452, 463], "zfs_vdev_disk_max_seg": 72, "clamp": [72, 452], "zfs_vdev_disk_class": 72, "submiss": 72, "confid": 72, "zfs_expire_snapshot": [72, 177, 199, 223, 251, 348, 452], "zfs_admin_snapshot": [72, 177, 199, 223, 251, 348, 452], "zfs_debug_histogram_verifi": [72, 177, 199, 223, 251, 348, 452], "zfs_debug_indirect_remap": [72, 223, 251, 348, 452], "zfs_debug_trim": [72, 223, 251, 348, 452], "allocat": [72, 223, 251, 348, 452], "zfs_debug_log_spacemap": [72, 251, 348, 452], "zfs_btree_verify_intens": [72, 348, 452], "btree": [72, 348, 452], "culmin": [72, 348, 452], "height": [72, 348, 452], "element": [72, 74, 105, 109, 110, 140, 175, 185, 197, 
207, 221, 222, 232, 233, 249, 250, 275, 279, 280, 348, 350, 380, 384, 385, 413, 452, 454, 485, 489, 490, 520], "poison": [72, 348, 452], "zfs_free_leak_on_eio": [72, 177, 199, 223, 251, 348, 452], "zfs_free_min_time_m": [72, 177, 199, 223, 251, 348, 452], "1000m": [72, 348, 452], "zfs_obsolete_min_time_m": [72, 251, 348, 452], "zfs_immediate_write_sz": [72, 177, 199, 223, 251, 348, 452], "s64": [72, 452], "zfs_initialize_valu": [72, 223, 251, 348, 452], "16045690984833335022": [72, 348, 452], "zfs_initialize_chunk_s": [72, 251, 348, 452], "zfs_livelist_max_entri": [72, 251, 348, 452], "costli": [72, 251, 348, 452], "perspect": [72, 251, 348, 452, 563], "zfs_livelist_min_percent_shar": [72, 251, 348, 452], "overwritten": [72, 81, 251, 334, 348, 356, 452, 461], "zfs_livelist_condense_new_alloc": [72, 251, 348, 452], "blkptr": [72, 251, 348, 452], "zfs_livelist_condense_sync_cancel": [72, 251, 348, 452], "spa_livelist_condense_sync": [72, 251, 348, 452], "zfs_livelist_condense_sync_paus": [72, 251, 348, 452], "synctask": [72, 251, 348, 452], "zfs_livelist_condense_zthr_cancel": [72, 251, 348, 452], "spa_livelist_condense_cb": [72, 251, 348, 452], "zfs_livelist_condense_zthr_paus": [72, 251, 348, 452], "zfs_lua_max_instrlimit": [72, 223, 251, 348, 452], "100000000": [72, 348, 452], "zfs_lua_max_memlimit": [72, 223, 251, 348, 452], "104857600": [72, 348, 452], "zfs_max_dataset_nest": [72, 128, 223, 251, 348, 452, 508], "zfs_max_log_walk": [72, 251, 348, 452], "zfs_max_logsm_summary_length": [72, 251, 348, 452], "16777216": [72, 452], "cow": [72, 177, 199, 223, 251, 348, 452], "giant": [72, 177, 199, 223, 251, 348, 452], "formerli": [72, 452], "forbad": [72, 452], "zfs_allow_redacted_dataset_mount": [72, 251, 348, 452], "redact": [72, 79, 80, 84, 90, 104, 115, 122, 128, 251, 252, 254, 260, 274, 285, 292, 296, 299, 348, 354, 355, 359, 365, 379, 390, 397, 401, 452, 459, 460, 464, 470, 484, 495, 502, 508], "zfs_min_metaslabs_to_flush": [72, 251, 348, 452], "zfs_metaslab_fragmentation_threshold": [72, 177, 199, 223, 251, 348, 452], "zfs_mg_fragmentation_threshold": [72, 177, 199, 223, 251, 348, 452], "95": [72, 223, 251, 348, 452], "zfs_mg_noalloc_threshold": [72, 177, 199, 223, 251, 348, 452], "zfs_ddt_data_is_speci": [72, 81, 223, 237, 251, 334, 348, 356, 452, 461], "zfs_user_indirect_is_speci": [72, 223, 251, 348, 452], "zfs_multihost_histori": [72, 199, 223, 251, 348, 452], "x27e8": [72, 74, 79, 87, 103, 105, 249, 257, 348, 350, 354, 362, 378, 380, 452, 454, 459, 467, 483, 485], "x27e9": [72, 74, 79, 87, 103, 105, 249, 257, 348, 350, 354, 362, 378, 380, 452, 454, 459, 467, 483, 485], "zfs_multihost_interv": [72, 82, 199, 210, 223, 237, 251, 335, 348, 357, 452, 462], "zfs_multihost_import_interv": [72, 199, 223, 251, 348, 452], "whichev": [72, 177, 199, 223, 251, 348, 452], "mmp": [72, 199, 223, 251, 348, 452], "zfs_multihost_fail_interv": [72, 199, 223, 251, 348, 452], "zfs_no_scrub_io": [72, 177, 199, 223, 251, 348, 452], "zfs_no_scrub_prefetch": [72, 177, 199, 223, 251, 348, 452], "zfs_nocacheflush": [72, 177, 199, 223, 251, 348, 452], "zfs_nopwrite_en": [72, 177, 199, 223, 251, 348, 452], "occurr": [72, 87, 183, 205, 229, 257, 348, 362, 452, 467], "zfs_dmu_offset_next_sync": [72, 199, 223, 251, 348, 452], "zfs_pd_bytes_max": [72, 177, 199, 223, 251, 348, 452], "52428800b": [72, 348, 452], "zfs_traverse_indirect_prefetch_limit": [72, 348, 452], "l0": [72, 74, 197, 221, 249, 348, 350, 452, 454], "zfs_per_txg_dirty_frees_perc": [72, 199, 223, 251, 348, 452], "zfs_prefetch_dis": [72, 177, 199, 
223, 251, 348, 452], "zfs_qat_checksum_dis": [72, 223, 251, 348, 452], "zfs_qat_compress_dis": [72, 223, 251, 348, 452], "zfs_qat_encrypt_dis": [72, 223, 251, 348, 452], "zfs_vnops_read_chunk_s": [72, 348, 452], "zfs_read_histori": [72, 177, 199, 223, 251, 348, 452], "zfs_read_history_hit": [72, 177, 199, 223, 251, 348, 452], "zfs_rebuild_max_seg": [72, 251, 348, 452], "zfs_rebuild_scrub_en": [72, 348, 452], "zfs_rebuild_vdev_limit": [72, 348, 452], "zfs_reconstruct_indirect_combinations_max": [72, 223, 251, 348, 452], "zfs_recov": [72, 177, 199, 223, 251, 348, 452], "zfs_removal_ignore_error": [72, 223, 251, 348, 452], "henc": [72, 111, 115, 348, 452, 491, 495], "zfs_removal_suspend_progress": [72, 223, 251, 348, 452], "zfs_remove_max_seg": [72, 223, 251, 348, 452], "zfs_resilver_disable_def": [72, 251, 348, 452], "zfs_resilver_min_time_m": [72, 177, 199, 223, 251, 348, 452], "3000m": [72, 348, 452], "zfs_scan_ignore_error": [72, 199, 223, 251, 348, 452], "unrepair": [72, 156, 199, 223, 251, 348, 452, 536], "zfs_scrub_after_expand": 72, "zfs_scrub_min_time_m": [72, 223, 251, 348, 452], "zfs_scrub_error_blocks_per_txg": [72, 452], "zfs_scan_checkpoint_intv": [72, 223, 251, 348, 452], "zfs_scan_fill_weight": [72, 223, 251, 348, 452], "zfs_scan_issue_strategi": [72, 223, 251, 348, 452], "zfs_scan_mem_lim_fact": [72, 223, 251, 348, 452], "checkpoint": [72, 80, 81, 84, 87, 143, 144, 148, 156, 163, 164, 210, 223, 224, 229, 237, 251, 252, 254, 257, 312, 313, 317, 325, 332, 333, 334, 348, 355, 356, 359, 362, 416, 417, 421, 429, 436, 437, 452, 460, 461, 464, 467, 523, 524, 528, 536, 543, 544], "zfs_scan_legaci": [72, 223, 251, 348, 452], "zfs_scan_max_ext_gap": [72, 223, 251, 348, 452], "2097152b": [72, 348, 452], "zfs_scan_mem_lim_soft_fact": [72, 223, 251, 348, 452], "zfs_scan_report_txg": [72, 348, 452], "zfs_scan_strict_mem_lim": [72, 251, 348, 452], "tight": [72, 251, 348, 452], "zfs_scan_suspend_progress": [72, 251, 348, 452], "zfs_scan_vdev_limit": [72, 223, 251, 348, 452], "zfs_send_corrupt_data": [72, 177, 199, 223, 251, 348, 452], "zfs_send_unmodified_spill_block": [72, 223, 251, 348, 452], "zfs_send_no_prefetch_queue_ff": [72, 251, 348, 452], "woken": [72, 251, 348, 452], "zfs_send_no_prefetch_queue_length": [72, 251, 348, 452], "zfs_send_queue_ff": [72, 251, 348, 452], "zfs_send_queue_length": [72, 199, 223, 251, 348, 452], "zfs_recv_queue_ff": [72, 251, 348, 452], "zfs_recv_queue_length": [72, 199, 223, 251, 348, 452], "zfs_recv_write_batch_s": [72, 251, 348, 452], "dmu": [72, 87, 172, 183, 194, 205, 229, 251, 257, 348, 362, 452, 467], "zfs_recv_best_effort_correct": [72, 452], "zfs_override_estimate_records": [72, 223, 251, 348, 452], "zfs_sync_pass_deferred_fre": [72, 177, 199, 223, 251, 348, 452], "zfs_spa_discard_memory_limit": [72, 223, 251, 348, 452], "zfs_special_class_metadata_reserve_pct": [72, 223, 251, 348, 452], "zfs_sync_pass_dont_compress": [72, 177, 199, 223, 251, 348, 452], "converg": [72, 223, 251, 348, 452], "overwrit": [72, 79, 80, 91, 102, 111, 115, 121, 131, 185, 200, 207, 223, 224, 233, 251, 252, 261, 272, 276, 281, 285, 291, 299, 300, 348, 354, 355, 366, 377, 386, 390, 396, 404, 452, 459, 460, 471, 482, 491, 495, 501, 511], "detriment": [72, 223, 251, 348, 452], "zfs_sync_pass_rewrit": [72, 177, 199, 223, 251, 348, 452], "zfs_trim_extent_bytes_max": [72, 223, 251, 348, 452], "zfs_trim_extent_bytes_min": [72, 223, 251, 348, 452], "zfs_trim_metaslab_skip": [72, 223, 251, 348, 452], "zfs_trim_queue_limit": [72, 223, 251, 348, 452], "zfs_trim_txg_batch": [72, 223, 
251, 348, 452], "zfs_txg_histori": [72, 177, 199, 223, 251, 348, 452], "zfs_vdev_aggregation_limit": [72, 177, 199, 223, 251, 348, 452], "zfs_vdev_aggregation_limit_non_rot": [72, 223, 251, 348, 452], "zfs_vdev_mirror_rotating_inc": [72, 199, 223, 251, 348, 452], "predecessor": [72, 199, 223, 251, 348, 452], "decis": [72, 199, 223, 251, 348, 452, 563], "zfs_vdev_mirror_rotating_seek_inc": [72, 199, 223, 251, 348, 452], "zfs_vdev_mirror_rotating_seek_offset": [72, 199, 223, 251, 348, 452], "zfs_vdev_mirror_non_rotating_inc": [72, 199, 223, 251, 348, 452], "zfs_vdev_mirror_non_rotating_seek_inc": [72, 199, 223, 251, 348, 452], "zfs_vdev_write_gap_limit": [72, 177, 199, 223, 251, 348, 452], "4096b": [72, 348, 452], "zfs_vdev_raidz_impl": [72, 199, 223, 251, 348, 452], "squar": [72, 348, 452], "powerpc_altivec": [72, 251, 348, 452], "altivec": [72, 251, 348, 452], "powerpc": [72, 251, 348, 452], "zfs_vdev_schedul": [72, 177, 199, 251, 348, 452], "zfs_zevent_len_max": [72, 177, 199, 223, 251, 348, 452], "zfs_zevent_retain_max": [72, 251, 348, 452], "zfs_zevent_retain_expire_sec": [72, 251, 348, 452], "900": [72, 251, 348, 452], "lifespan": [72, 251, 348, 452], "zfs_zil_clean_taskq_maxalloc": [72, 223, 251, 348, 452], "1048576": [72, 79, 88, 199, 223, 251, 348, 452, 459, 468], "zfs_zil_clean_taskq_minalloc": [72, 223, 251, 348, 452], "zfs_zil_clean_taskq_nthr_pct": [72, 223, 251, 348, 452], "zil_maxblocks": [72, 223, 251, 348, 452], "zil_maxcopi": [72, 452], "7680b": [72, 452], "wr_copi": [72, 452], "tradeoff": [72, 452], "zil_nocacheflush": [72, 223, 251, 348, 452], "zil_replay_dis": [72, 177, 199, 223, 251, 348, 452], "zil_slog_bulk": [72, 199, 223, 251, 348, 452], "zfs_zil_saxattr": [72, 452], "zilsaxattr": [72, 80, 452, 460], "zfs_embedded_slog_min_m": [72, 348, 452], "asid": [72, 82, 237, 335, 348, 357, 452, 462], "unreason": [72, 159, 237, 328, 348, 432, 452, 539], "zstd_earlyabort_pass": [72, 452], "zstd_abort_s": [72, 452], "zio_deadman_log_al": [72, 223, 251, 348, 452], "possess": [72, 223, 251, 348, 452], "zio_slow_io_m": [72, 140, 159, 222, 223, 237, 250, 251, 328, 348, 413, 432, 452, 520, 539], "30000m": [72, 164, 348, 437, 452, 544], "zfs_xattr_compat": [72, 452], "scheme": [72, 74, 94, 113, 118, 128, 185, 197, 207, 221, 233, 249, 296, 350, 401, 452, 454, 474, 493, 498, 508], "zio_requeue_io_start_cut_in_lin": [72, 177, 199, 223, 251, 348, 452], "requeu": [72, 177, 199, 223, 251, 348, 452], "zio_taskq_batch_pct": [72, 177, 199, 223, 251, 348, 452], "zio_taskq_batch_tpq": [72, 348, 452], "zio_taskq_wr_iss_ncpu": 72, "zio_taskq_read": [72, 348, 452], "zio_taskq_writ": [72, 348, 452], "zvol_inhibit_dev": [72, 177, 199, 223, 251, 348, 452], "zvol_major": [72, 177, 199, 223, 251, 348, 452], "zvol_prefetch_byt": [72, 177, 199, 223, 251, 348, 452], "partition": [72, 348, 452], "zvol_request_sync": [72, 79, 199, 223, 251, 348, 452], "zvol_thread": [72, 199, 223, 251, 348, 452], "multiqueu": [72, 452], "blk": [72, 177, 199, 223, 251, 452], "mq": [72, 452], "zvol_blk_mq_thread": [72, 452], "zvol_use_blk_mq": [72, 452], "api": [72, 105, 232, 275, 380, 452, 485], "zvol_blk_mq_blocks_per_thread": [72, 452], "zvol_blk_mq_queue_depth": [72, 452], "queue_depth": [72, 452], "blkdev_min_rq": [72, 452], "blkdev_max_rq": [72, 452], "blkdev_default_rq": [72, 452], "zvol_volmod": [72, 79, 199, 207, 223, 233, 251, 299, 348, 354, 452, 459], "zvol_enforce_quota": [72, 452], "minima": [72, 348, 452], "maxima": [72, 348, 452], "bleed": [72, 177, 199, 223, 251, 348, 452], "januari": [72, 82, 91, 102, 121, 207, 
261, 272, 275, 291, 348, 366, 377, 386, 390, 396, 471, 482, 501], "devlink": [74, 175, 197, 221, 249, 350, 454], "hierarchi": [74, 78, 86, 91, 92, 93, 94, 96, 99, 102, 111, 113, 115, 116, 121, 128, 175, 182, 185, 197, 204, 207, 221, 228, 233, 249, 256, 261, 262, 264, 272, 283, 291, 296, 298, 350, 353, 361, 366, 367, 369, 377, 386, 388, 390, 396, 401, 454, 458, 466, 471, 472, 473, 474, 476, 479, 482, 491, 493, 495, 496, 501, 508], "coexist": [74, 175, 197, 221, 249, 350, 454], "enclosure_symlink": [74, 197, 221, 249, 350, 454], "sg": [74, 197, 221, 249, 350, 454], "enclosure_symlinks_prefix": [74, 197, 221, 249, 350, 454], "num": [74, 175, 197, 221, 249, 350, 454], "x201c": [74, 80, 88, 187, 249, 350, 355, 363, 454, 460, 468], "enc": [74, 197, 221, 249, 350, 454], "x201d": [74, 80, 88, 187, 249, 350, 355, 363, 454, 460, 468], "examin": [74, 82, 86, 87, 156, 175, 182, 187, 197, 204, 210, 221, 228, 229, 237, 249, 256, 257, 325, 335, 350, 357, 361, 362, 429, 454, 462, 466, 467, 536], "govern": [74, 86, 140, 175, 176, 182, 197, 198, 204, 221, 222, 228, 249, 250, 256, 350, 361, 413, 454, 466, 520], "phy": [74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "bai": [74, 175, 197, 221, 249, 350, 454], "sg_se": [74, 197, 221, 249, 350, 454], "unsupport": [74, 80, 82, 178, 187, 197, 200, 210, 221, 224, 237, 249, 252, 335, 350, 355, 357, 454, 460, 462], "pci_id": [74, 197, 221, 249, 350, 454], "06": [74, 197, 221, 249, 350, 454], "l1": [74, 197, 221, 249, 350, 454], "u0": [74, 197, 221, 249, 350, 454], "u1": [74, 146, 159, 164, 197, 210, 221, 237, 249, 333, 350, 437, 454, 526, 539, 544], "miscellan": [75, 77, 78, 79, 80, 81, 82, 168, 351, 353, 354, 355, 356, 357, 441, 455, 457, 458, 459, 460, 461, 462, 548], "hook": [75, 351, 455], "x2193": [75, 351, 455], "initqueu": [75, 351, 455], "sysinit": [75, 351, 455], "__________________": [75, 351, 455], "x2191": [75, 351, 455], "_____________________": [75, 351, 455], "sysroot": [75, 351, 455], "x2190": [75, 351, 455], "nonroot": [75, 351, 455], "______________________": [75, 351, 455], "needshutdown": [75, 351, 455], "bootup": [75, 351, 455], "flowchart": [75, 351, 455], "90zf": [75, 351, 455], "henceforth": [75, 351, 455], "libx32": [75, 351, 455], "glob": [75, 351, 455], "deem": [75, 87, 183, 205, 229, 257, 351, 362, 455, 467], "pluse": [75, 351, 455], "x2018": [75, 80, 264, 275, 351, 355, 455, 460], "x2019": [75, 80, 264, 275, 351, 355, 455, 460], "x00a0": [75, 89, 91, 101, 102, 105, 109, 110, 113, 119, 121, 128, 137, 140, 164, 184, 206, 207, 230, 232, 233, 237, 258, 259, 261, 272, 275, 279, 280, 283, 289, 291, 306, 351, 364, 366, 376, 377, 380, 384, 385, 388, 394, 396, 401, 410, 413, 437, 455, 469, 471, 481, 482, 485, 489, 490, 493, 499, 501, 508, 517, 520, 544], "rootflag": [75, 351, 455], "zfsprop": [75, 76, 81, 85, 91, 93, 96, 97, 99, 100, 101, 102, 104, 107, 116, 117, 120, 121, 122, 123, 125, 127, 128, 137, 140, 226, 254, 261, 263, 266, 267, 269, 270, 271, 272, 274, 277, 286, 287, 290, 291, 292, 294, 296, 306, 351, 352, 356, 360, 366, 368, 371, 372, 374, 375, 376, 377, 379, 382, 391, 392, 395, 396, 397, 399, 401, 410, 413, 455, 456, 461, 465, 471, 473, 476, 477, 479, 480, 481, 482, 484, 487, 496, 497, 500, 501, 502, 503, 505, 507, 508, 517, 520], "pivot": [75, 351, 455], "zfsforc": [75, 351, 455], "conjunct": [75, 93, 94, 109, 110, 111, 115, 133, 146, 148, 152, 158, 159, 185, 187, 207, 210, 233, 237, 263, 264, 279, 280, 281, 285, 302, 315, 317, 321, 327, 328, 351, 368, 369, 384, 385, 386, 390, 406, 419, 421, 425, 431, 432, 455, 
473, 474, 489, 490, 491, 495, 513, 526, 528, 532, 538, 539], "zpool_import_opt": [75, 351, 455], "thrice": [75, 351, 455], "settl": [75, 351, 455], "signal": [75, 88, 111, 115, 184, 206, 230, 258, 351, 363, 455, 468, 491, 495], "forcibli": [75, 94, 351, 369, 455, 474, 560], "hostonli": [75, 351, 455], "succeed": [75, 105, 232, 275, 351, 380, 455, 485], "plymouth": [75, 351, 455], "zpoolprop": [75, 76, 77, 101, 128, 133, 134, 137, 140, 142, 144, 148, 154, 157, 158, 161, 162, 164, 254, 302, 303, 306, 311, 313, 317, 323, 326, 327, 330, 331, 333, 351, 352, 401, 406, 407, 410, 413, 415, 417, 421, 427, 430, 431, 434, 435, 437, 455, 456, 457, 481, 508, 513, 514, 517, 520, 522, 524, 528, 534, 537, 538, 541, 542, 544, 561, 562], "march": [75, 89, 92, 93, 94, 95, 96, 99, 108, 109, 110, 113, 114, 116, 118, 119, 133, 137, 138, 141, 144, 146, 148, 152, 159, 162, 164, 169, 190, 213, 251, 300, 336, 351, 455, 469, 472, 473, 474, 475, 476, 479, 488, 489, 490, 493, 494, 496, 498, 499, 513, 517, 518, 521, 524, 526, 528, 532, 539, 542, 544], "vdevprop": [76, 142, 157, 456, 522, 537], "zfsconcept": [76, 79, 92, 118, 128, 254, 262, 288, 296, 299, 352, 354, 367, 393, 401, 456, 459, 472, 498, 508], "zpoolconcept": [76, 79, 133, 137, 140, 159, 160, 162, 164, 254, 299, 302, 306, 328, 329, 331, 333, 352, 354, 406, 410, 413, 432, 433, 435, 437, 456, 459, 513, 517, 520, 539, 540, 542, 544], "annot": [77, 79, 82, 128, 185, 207, 233, 296, 299, 354, 401, 457, 459, 462, 508], "io_n": [77, 457], "io_t": [77, 457], "slow_io_n": 77, "slow_io_t": 77, "kb": [77, 79, 185, 207, 233, 299, 354, 457, 459], "forth": [77, 79, 82, 185, 207, 233, 299, 354, 457, 459, 462], "zettabyt": [77, 79, 96, 99, 116, 185, 207, 233, 266, 269, 286, 299, 354, 371, 374, 391, 457, 459, 476, 479, 496], "1536m": [77, 79, 185, 207, 233, 299, 354, 457, 459], "5g": [77, 79, 148, 164, 185, 187, 207, 210, 233, 237, 299, 333, 354, 437, 457, 459, 528, 544], "50gb": [77, 79, 185, 207, 233, 299, 354, 457, 459], "asiz": [77, 457], "psize": [77, 87, 183, 205, 229, 257, 362, 457, 467], "expands": [77, 82, 148, 187, 210, 237, 317, 335, 357, 421, 457, 462, 528], "physpath": [77, 457], "encpath": [77, 457], "fru": [77, 140, 176, 198, 222, 250, 413, 457, 520], "numchildren": [77, 457], "read_error": [77, 457], "write_error": [77, 457], "checksum_error": [77, 457], "initialize_error": [77, 457], "null_op": [77, 457], "read_op": [77, 457], "write_op": [77, 457], "free_op": [77, 457], "claim_op": [77, 457], "trim_op": [77, 457], "null_byt": [77, 457], "read_byt": [77, 457], "write_byt": [77, 457], "free_byt": [77, 457], "claim_byt": [77, 457], "trim_byt": [77, 457], "cumul": [77, 457], "bootsiz": [77, 457], "failfast": [77, 457], "propag": [77, 457], "punctuat": [77, 79, 82, 185, 207, 233, 299, 354, 457, 459, 462], "dash": [77, 79, 82, 137, 185, 187, 207, 210, 233, 237, 299, 306, 354, 410, 457, 459, 462, 517], "underscor": [77, 79, 82, 88, 137, 184, 185, 187, 206, 207, 210, 230, 233, 237, 258, 299, 306, 354, 363, 410, 457, 459, 462, 468, 517], "programmat": [77, 79, 82, 105, 128, 185, 207, 232, 233, 275, 296, 299, 354, 380, 401, 457, 459, 462, 485, 508], "revers": [77, 78, 79, 80, 82, 97, 103, 105, 107, 108, 125, 178, 185, 200, 207, 224, 232, 233, 252, 267, 275, 277, 278, 294, 298, 299, 353, 354, 355, 372, 378, 380, 382, 383, 399, 457, 458, 459, 460, 462, 477, 483, 485, 487, 488, 505], "octob": [77, 78, 166, 167, 199, 220, 222, 240, 362, 457, 458, 546, 547], "administ": [78, 185, 207, 233, 298, 353, 458], "snapdev": [78, 79, 89, 119, 185, 207, 233, 298, 299, 353, 354, 364, 
394, 458, 459, 469, 499], "snapdir": [78, 79, 89, 96, 99, 116, 119, 128, 185, 207, 233, 259, 289, 296, 298, 299, 353, 354, 364, 394, 401, 458, 459, 469, 476, 479, 496, 499, 508], "standpoint": [78, 185, 207, 233, 298, 353, 458], "distinct": [78, 185, 207, 233, 298, 353, 458], "light": [78, 185, 207, 233, 298, 353, 458], "incent": [78, 185, 207, 233, 298, 353, 458], "instantan": [78, 146, 185, 207, 210, 233, 237, 298, 315, 353, 419, 458, 526], "relationship": [78, 91, 102, 108, 121, 185, 207, 233, 261, 272, 278, 291, 298, 353, 366, 377, 383, 396, 458, 471, 482, 488, 501], "tib": [78, 207, 233, 298, 353, 458], "improperli": [78, 184, 185, 206, 207, 230, 233, 258, 298, 353, 458], "shallow": [78, 458], "reflink": [78, 458], "sharenf": [79, 89, 96, 99, 116, 117, 119, 128, 185, 207, 233, 259, 287, 289, 296, 299, 354, 364, 392, 394, 401, 459, 469, 476, 479, 496, 497, 499, 508], "sharesmb": [79, 89, 96, 99, 116, 117, 119, 128, 185, 207, 233, 259, 287, 289, 296, 299, 354, 364, 392, 394, 401, 459, 469, 476, 479, 496, 497, 499, 508], "shorten": [79, 82, 185, 187, 207, 210, 233, 237, 299, 335, 354, 357, 459, 462], "compressratio": [79, 96, 99, 116, 128, 185, 207, 233, 296, 299, 354, 401, 459, 476, 479, 496, 508], "refcompressratio": [79, 185, 207, 233, 299, 354, 459], "createtxg": [79, 207, 233, 299, 354, 459], "role": [79, 207, 233, 299, 354, 459], "defer_destroi": [79, 185, 207, 233, 299, 354, 459], "encryptionroot": [79, 91, 102, 121, 233, 261, 272, 291, 299, 354, 366, 377, 396, 459, 471, 482, 501], "implicitli": [79, 91, 102, 121, 187, 233, 261, 272, 291, 299, 354, 366, 377, 396, 459, 471, 482, 501], "filesystem_count": [79, 185, 207, 233, 299, 354, 459], "keystatu": [79, 91, 102, 121, 233, 261, 272, 291, 299, 354, 366, 377, 396, 459, 471, 482, 501], "lifetim": [79, 88, 184, 206, 207, 230, 233, 258, 299, 354, 363, 459, 468], "logicalreferenc": [79, 185, 207, 233, 299, 354, 459], "quantiti": [79, 185, 207, 233, 299, 354, 459], "lrefer": [79, 185, 207, 233, 299, 354, 459], "logicalus": [79, 185, 207, 233, 299, 354, 459], "luse": [79, 185, 207, 233, 299, 354, 459], "objsetid": [79, 233, 299, 354, 459], "receive_resume_token": [79, 109, 110, 111, 115, 207, 233, 279, 280, 281, 285, 299, 354, 384, 385, 386, 390, 459, 489, 490, 491, 495], "opaqu": [79, 207, 233, 299, 354, 459], "token": [79, 109, 110, 166, 167, 207, 233, 279, 280, 299, 336, 354, 384, 385, 439, 440, 459, 489, 490, 546, 547], "redact_snap": [79, 299, 354, 459], "snapshot_count": [79, 185, 207, 233, 299, 354, 459], "snapshot_limit": [79, 89, 119, 185, 207, 233, 259, 289, 299, 354, 364, 394, 459, 469, 499], "subset": [79, 111, 115, 207, 233, 281, 285, 299, 354, 386, 390, 459, 491, 495, 554], "usedbi": [79, 185, 207, 233, 299, 354, 459], "decompos": [79, 185, 207, 233, 299, 354, 459], "usedbychildren": [79, 96, 99, 116, 128, 185, 207, 233, 296, 299, 354, 401, 459, 476, 479, 496, 508], "usedbydataset": [79, 96, 99, 116, 128, 185, 207, 233, 296, 299, 354, 401, 459, 476, 479, 496, 508], "usedbyrefreserv": [79, 96, 99, 116, 128, 185, 207, 233, 296, 299, 354, 401, 459, 476, 479, 496, 508], "usedbysnapshot": [79, 96, 99, 116, 128, 185, 207, 233, 296, 299, 354, 401, 459, 476, 479, 496, 508], "refreserv": [79, 81, 82, 89, 93, 96, 99, 116, 119, 128, 185, 207, 233, 237, 259, 263, 289, 296, 299, 334, 335, 354, 356, 357, 364, 368, 394, 401, 459, 461, 462, 469, 473, 476, 479, 496, 499, 508], "userus": [79, 89, 91, 97, 102, 107, 119, 121, 125, 128, 185, 207, 233, 259, 261, 267, 272, 277, 289, 291, 294, 296, 299, 354, 364, 366, 372, 377, 382, 394, 396, 399, 
401, 459, 469, 471, 477, 482, 487, 499, 501, 505, 508], "charg": [79, 185, 207, 233, 299, 354, 459], "owner": [79, 80, 111, 115, 185, 207, 224, 233, 252, 281, 285, 299, 354, 355, 386, 390, 459, 460, 491, 495], "unprivileg": [79, 89, 119, 164, 184, 185, 206, 207, 210, 230, 233, 237, 258, 299, 333, 354, 364, 394, 437, 459, 469, 499, 544], "grant": [79, 82, 89, 119, 128, 185, 187, 207, 210, 233, 237, 259, 289, 296, 299, 335, 354, 357, 364, 394, 401, 459, 462, 469, 499, 508], "privileg": [79, 80, 82, 88, 89, 93, 105, 119, 133, 137, 146, 164, 184, 185, 187, 206, 207, 210, 224, 230, 232, 233, 237, 252, 258, 259, 263, 275, 289, 299, 302, 306, 315, 333, 335, 354, 355, 357, 363, 364, 368, 380, 394, 406, 410, 419, 437, 459, 460, 462, 468, 469, 473, 485, 499, 513, 517, 526, 544], "everyon": [79, 89, 119, 185, 207, 233, 259, 289, 299, 354, 364, 394, 459, 469, 499], "joe": [79, 185, 207, 233, 299, 354, 459], "789": [79, 185, 207, 233, 299, 354, 459], "sid": [79, 97, 107, 125, 185, 207, 233, 267, 277, 294, 299, 354, 372, 382, 399, 459, 477, 487, 505], "smith": [79, 185, 207, 233, 299, 354, 459], "mydomain": [79, 185, 207, 233, 299, 354, 459], "456": [79, 185, 207, 233, 299, 354, 459], "userobjus": [79, 89, 97, 107, 119, 125, 207, 233, 267, 277, 294, 299, 354, 364, 372, 382, 394, 399, 459, 469, 477, 487, 499, 505], "behalf": [79, 207, 233, 299, 354, 459], "df": [79, 134, 207, 233, 299, 354, 459], "userobjquota": [79, 89, 97, 107, 119, 125, 207, 233, 267, 277, 294, 299, 354, 364, 372, 382, 394, 399, 459, 469, 477, 487, 499, 505], "userref": [79, 185, 207, 233, 299, 354, 459], "groupus": [79, 89, 91, 102, 119, 121, 128, 185, 207, 233, 259, 261, 272, 289, 291, 296, 299, 354, 364, 366, 377, 394, 396, 401, 459, 469, 471, 482, 499, 501, 508], "groupobjus": [79, 89, 119, 207, 233, 299, 354, 364, 394, 459, 469, 499], "projectus": [79, 89, 119, 128, 233, 259, 289, 299, 354, 364, 394, 401, 459, 469, 499, 508], "chattr": [79, 80, 224, 233, 252, 299, 354, 355, 459, 460], "anytim": [79, 224, 233, 252, 299, 354, 459], "lsattr": [79, 233, 299, 354, 459], "projectobjus": [79, 89, 119, 233, 259, 289, 299, 354, 364, 394, 459, 469, 499], "fileset": [79, 233, 299, 354, 459], "projectobjquota": [79, 89, 119, 233, 259, 289, 299, 354, 364, 394, 459, 469, 499], "snapshots_chang": [79, 459], "kbyte": [79, 185, 207, 233, 299, 354, 459], "volblock": [79, 185, 207, 233, 299, 354, 459], "interpret": [79, 87, 89, 105, 119, 183, 185, 205, 207, 229, 232, 233, 257, 259, 275, 289, 299, 354, 362, 364, 380, 394, 459, 467, 469, 485, 499], "aclinherit": [79, 89, 96, 99, 116, 119, 128, 185, 207, 233, 259, 289, 296, 299, 354, 364, 394, 401, 459, 469, 476, 479, 496, 499, 508], "noallow": [79, 185, 207, 233, 299, 354, 459], "ac": [79, 89, 119, 128, 185, 207, 233, 296, 299, 354, 401, 459, 469, 499, 508], "write_acl": [79, 185, 207, 233, 299, 354, 459], "write_own": [79, 185, 207, 233, 299, 354, 459], "aclmod": [79, 89, 96, 99, 116, 119, 128, 296, 299, 354, 364, 394, 401, 459, 469, 476, 479, 496, 499, 508], "groupmask": [79, 299, 354, 459], "sticki": [79, 299, 354, 459], "noacl": [79, 185, 207, 233, 299, 354, 459], "getfacl": [79, 299, 354, 459], "setfacl": [79, 299, 354, 459], "mailer": [79, 185, 207, 233, 299, 354, 459], "noatim": [79, 185, 207, 233, 299, 354, 459], "moder": [79, 185, 207, 233, 299, 354, 459, 559], "nul": [79, 106, 233, 276, 299, 354, 381, 459, 486], "fscontext": [79, 89, 119, 181, 185, 203, 207, 227, 233, 255, 299, 354, 364, 394, 459, 469, 499], "defcontext": [79, 89, 119, 181, 203, 207, 227, 233, 255, 299, 354, 364, 394, 459, 
469, 499], "unlabel": [79, 181, 185, 203, 207, 227, 233, 255, 299, 354, 459], "rootcontext": [79, 89, 119, 181, 185, 203, 207, 227, 233, 255, 299, 354, 364, 394, 459, 469, 499], "inod": [79, 95, 177, 181, 185, 199, 203, 207, 223, 227, 233, 251, 255, 265, 299, 348, 354, 370, 459, 475], "1k": [79, 207, 233, 299, 354, 459], "2k": [79, 207, 210, 233, 237, 299, 354, 459], "liter": [79, 101, 185, 207, 233, 271, 299, 354, 376, 459, 481], "lustr": [79, 207, 233, 299, 354, 459], "dnsize": [79, 207, 233, 299, 354, 459], "hex": [79, 87, 183, 205, 229, 233, 257, 299, 354, 362, 459, 467], "pbkdf2": [79, 233, 299, 354, 459], "pbkdf2iter": [79, 89, 91, 102, 119, 121, 233, 261, 272, 291, 299, 354, 364, 366, 377, 394, 396, 459, 469, 471, 482, 499, 501], "secret": [79, 80, 200, 224, 233, 252, 299, 354, 355, 459, 460], "uri": [79, 233, 299, 354, 459], "ssl_ca_cert_fil": [79, 354, 459], "concaten": [79, 354, 459], "ssl_ca_cert_path": [79, 354, 459], "ssl_client_cert_fil": [79, 354, 459], "ssl_client_key_fil": [79, 354, 459], "brute": [79, 233, 299, 354, 459], "attack": [79, 80, 91, 102, 121, 200, 224, 233, 252, 261, 272, 291, 299, 354, 355, 366, 377, 396, 459, 460, 471, 482, 501], "arriv": [79, 233, 299, 354, 459], "350000": [79, 233, 299, 354, 459], "noexec": [79, 185, 207, 233, 299, 354, 459], "volthread": 79, "ancestor": [79, 89, 96, 99, 105, 116, 119, 128, 185, 207, 233, 259, 266, 269, 275, 286, 289, 296, 299, 354, 364, 371, 374, 380, 391, 394, 401, 459, 469, 476, 479, 485, 496, 499, 508], "impos": [79, 185, 207, 233, 299, 354, 459], "special_small_block": [79, 81, 89, 119, 233, 237, 299, 334, 354, 356, 364, 394, 459, 461, 469, 499], "unshar": [79, 94, 117, 128, 185, 207, 233, 264, 287, 296, 299, 354, 369, 392, 401, 459, 474, 497, 508], "nbmand": [79, 89, 96, 99, 103, 116, 119, 128, 185, 207, 231, 233, 259, 273, 289, 296, 299, 354, 364, 378, 394, 401, 459, 469, 476, 479, 483, 496, 499, 508], "buggi": [79, 459], "overlai": [79, 89, 104, 119, 122, 185, 207, 233, 274, 292, 299, 354, 364, 379, 394, 397, 459, 469, 484, 499, 502], "userquota": [79, 89, 97, 107, 119, 125, 185, 207, 233, 259, 267, 277, 289, 294, 299, 354, 364, 372, 382, 394, 399, 459, 469, 477, 487, 499, 505], "edquot": [79, 105, 185, 207, 232, 233, 275, 299, 354, 380, 459, 485], "groupquota": [79, 89, 119, 185, 207, 233, 259, 289, 299, 354, 364, 394, 459, 469, 499], "groupobjquota": [79, 89, 119, 207, 233, 299, 354, 364, 394, 459, 469, 499], "projectquota": [79, 89, 119, 233, 259, 289, 299, 354, 364, 394, 459, 469, 499], "rdonli": [79, 82, 185, 187, 207, 210, 233, 237, 299, 335, 354, 357, 459, 462], "suboptim": [79, 185, 207, 233, 299, 354, 459], "recsiz": [79, 185, 207, 233, 299, 354, 459], "redundantli": [79, 81, 185, 207, 233, 299, 354, 356, 459, 461], "refquota": [79, 82, 89, 96, 99, 116, 119, 128, 185, 207, 233, 237, 259, 289, 296, 299, 335, 354, 357, 364, 394, 401, 459, 462, 469, 476, 479, 496, 499, 508], "thick": [79, 233, 299, 354, 459], "hasn": [79, 185, 207, 233, 299, 354, 459], "norelatim": [79, 185, 207, 233, 299, 354, 459], "secondari": [79, 185, 207, 233, 299, 354, 459], "specul": 79, "zfs_disable_prefetch": 79, "bypass": 79, "setuid": [79, 89, 96, 99, 103, 116, 119, 128, 185, 207, 231, 233, 259, 273, 289, 296, 299, 354, 364, 378, 394, 401, 459, 469, 476, 479, 483, 496, 499, 508], "suid": [79, 207, 233, 299, 354, 459], "nosuid": [79, 185, 207, 233, 299, 354, 459], "usershar": [79, 128, 185, 207, 233, 296, 299, 354, 401, 459, 508], "ldap": [79, 128, 185, 207, 233, 296, 299, 354, 401, 459, 508], "smbpasswd": [79, 128, 185, 207, 233, 
296, 299, 354, 401, 459, 508], "disallow": [79, 207, 233, 299, 354, 459], "reshar": [79, 459], "exportf": [79, 128, 185, 207, 233, 296, 299, 354, 401, 459, 508], "crossmnt": [79, 207, 233, 299, 354, 459], "no_subtree_check": [79, 185, 207, 233, 299, 354, 459], "negat": [79, 87, 257, 354, 362, 459, 467], "o_dsync": [79, 185, 207, 233, 299, 354, 459], "understood": [79, 185, 207, 233, 299, 354, 459], "unexpect": [79, 94, 185, 207, 233, 264, 299, 354, 369, 459, 474], "fledg": [79, 207, 233, 299, 354, 459], "exposit": [79, 207, 233, 299, 354, 459], "vscan": [79, 89, 96, 99, 116, 119, 128, 185, 207, 233, 259, 289, 296, 299, 354, 364, 394, 401, 459, 469, 476, 479, 496, 499, 508], "virus": [79, 185, 207, 233, 299, 354, 459], "viru": [79, 185, 207, 233, 299, 354, 459], "getxattr": [79, 185, 207, 233, 299, 354, 459], "setxattr": [79, 185, 207, 233, 299, 354, 459], "noxattr": [79, 181, 185, 203, 207, 227, 233, 255, 299, 354, 459], "jail": [79, 84, 120, 128, 254, 290, 296, 299, 354, 359, 395, 401, 459, 464, 500, 508], "unix": [79, 98, 112, 185, 207, 233, 299, 354, 459, 478, 492], "formc": [79, 185, 207, 233, 299, 354, 459], "formkc": [79, 185, 207, 233, 299, 354, 459], "formkd": [79, 185, 207, 233, 299, 354, 459], "unicod": [79, 185, 207, 233, 299, 354, 459], "mand": [79, 354, 459], "nomand": [79, 354, 459], "nodevic": [79, 185, 207, 233, 299, 354, 459], "nosetuid": [79, 185, 207, 233, 299, 354, 459], "whitespac": [80, 81, 82, 187, 210, 237, 334, 355, 356, 357, 460, 461, 462], "saniti": [80, 165, 355, 438, 460, 545], "newlin": [80, 106, 165, 233, 276, 355, 381, 438, 460, 486, 545], "bootpool": [80, 355, 460], "dset": [80, 103, 355, 378, 460, 483], "copy_file_rang": [80, 460], "nontrivi": [80, 451, 460], "bookmark_v2": [80, 224, 252, 355, 460, 559], "bookmark_written": [80, 252, 355, 460], "phase": [80, 252, 355, 460], "device_remov": [80, 152, 224, 237, 252, 321, 355, 425, 460, 532], "nist": [80, 200, 224, 252, 355, 460], "sha": [80, 200, 224, 252, 355, 460], "competit": [80, 200, 224, 252, 355, 460], "350": [80, 200, 224, 252, 355, 460], "seed": [80, 200, 224, 252, 355, 460], "fed": [80, 200, 224, 252, 355, 460], "112": [80, 178, 200, 224, 252, 355, 460], "misnom": [80, 178, 200, 224, 252, 355, 460], "bpobj": [80, 132, 178, 186, 200, 209, 224, 236, 252, 301, 355, 405, 460, 512], "errlog": [80, 132, 186, 209, 236, 301, 405, 460, 512], "Its": [80, 134, 224, 252, 355, 460], "bonu": [80, 200, 224, 252, 355, 460], "multi_vdev_crash_dump": [80, 200, 224, 252, 355, 460], "arrang": [80, 200, 224, 252, 355, 460], "dumpadm": [80, 200, 224, 252, 355, 460], "obsolete_count": [80, 224, 252, 355, 460], "x2013": [80, 82, 128, 335, 348, 355, 357, 401, 460, 462, 508], "prjid": [80, 224, 252, 355, 460], "raidz_expans": 80, "redaction_bookmark": [80, 111, 115, 252, 281, 285, 355, 386, 390, 460, 491, 495], "redacted_dataset": [80, 252, 355, 460], "redaction_list_spil": 80, "fip": [80, 200, 224, 252, 355, 460], "arithmet": [80, 200, 224, 252, 355, 460], "candid": [80, 200, 224, 252, 355, 460], "finalist": [80, 200, 224, 252, 355, 460], "vdev_zaps_v2": [80, 460], "reguid": [80, 82, 84, 135, 140, 164, 176, 187, 198, 210, 222, 237, 250, 254, 304, 333, 335, 357, 359, 408, 413, 437, 460, 462, 464, 515, 520, 544], "xattrdir": [80, 460], "durabl": [80, 460], "rewound": [80, 224, 252, 355, 460], "zstd_compress": [80, 252, 355, 460], "modestli": [80, 252, 355, 460], "250": [80, 252, 355, 460], "mb": [80, 223, 232, 233, 251, 252, 275, 355, 380, 460], "june": [80, 97, 98, 107, 112, 123, 124, 125, 127, 134, 153, 156, 158, 176, 198, 
200, 224, 259, 260, 262, 263, 264, 265, 266, 267, 268, 269, 271, 276, 277, 278, 281, 282, 284, 285, 286, 287, 288, 289, 293, 294, 296, 298, 353, 356, 369, 371, 372, 373, 374, 382, 383, 387, 391, 398, 399, 401, 410, 426, 431, 432, 437, 460, 477, 478, 487, 492, 503, 504, 505, 507, 533, 536, 538], "slice": [81, 210, 237, 334, 356, 461], "shorthand": [81, 187, 210, 237, 334, 356, 461], "draid2": [81, 356, 461], "draid3": [81, 356, 461], "single_drive_iop": [81, 356, 461], "datad": [81, 356, 461], "childrenc": [81, 356, 461], "sparess": [81, 356, 461], "mypool": [81, 187, 210, 237, 334, 356, 461], "rich": [81, 187, 210, 237, 334, 356, 461], "health": [81, 82, 130, 134, 144, 148, 159, 164, 187, 210, 237, 313, 317, 328, 333, 334, 335, 356, 357, 403, 417, 421, 432, 437, 461, 462, 510, 524, 528, 539, 544], "replica": [81, 139, 140, 158, 176, 187, 198, 210, 222, 237, 250, 308, 327, 334, 356, 412, 413, 431, 461, 519, 520, 538, 550, 551, 552, 553, 556, 561, 562, 563], "greatest": [81, 187, 210, 237, 334, 356, 461], "reissu": [81, 187, 210, 237, 334, 356, 461], "mistak": [81, 237, 334, 356, 461], "thought": [81, 237, 334, 356, 461], "unenforc": [81, 237, 334, 356, 461], "april": [81, 129, 205, 210, 229, 233, 257, 297, 402, 461, 462, 509], "bcloneratio": [82, 462], "bclonesav": [82, 462], "bcloneus": [82, 462], "autoexpand": [82, 140, 176, 187, 198, 210, 222, 237, 250, 335, 357, 413, 462, 520], "unfrag": [82, 237, 335, 357, 462], "discrep": [82, 187, 210, 237, 335, 357, 462], "load_guid": [82, 237, 335, 357, 462], "invis": [82, 187, 210, 237, 335, 357, 462], "altroot": [82, 137, 144, 148, 158, 164, 187, 210, 237, 306, 313, 317, 327, 333, 335, 357, 410, 417, 421, 431, 437, 462, 517, 524, 528, 538, 544], "unknown": [82, 85, 140, 176, 187, 198, 210, 222, 237, 250, 335, 357, 360, 413, 462, 465, 520, 554], "expon": [82, 187, 210, 237, 335, 357, 462], "dy": [82, 210, 237, 335, 357, 462], "grown": [82, 88, 187, 210, 237, 335, 357, 462, 468], "autoreplac": [82, 130, 187, 210, 237, 335, 357, 403, 462, 510], "autoonlin": [82, 210, 237, 335, 357, 462], "printabl": [82, 187, 210, 237, 335, 357, 462], "ascii": [82, 95, 128, 187, 210, 237, 335, 357, 370, 462, 475, 508], "dedupditto": [82, 187, 210, 237, 335, 357, 462], "catastroph": [82, 187, 210, 237, 335, 357, 462], "feature_nam": [82, 178, 187, 200, 210, 224, 237, 252, 335, 357, 462], "listsnapshot": [82, 101, 210, 237, 271, 335, 357, 376, 462, 481], "listsnap": [82, 101, 128, 185, 187, 207, 210, 233, 237, 296, 335, 357, 401, 462, 481, 508], "dummi": [83, 111, 115, 179, 201, 225, 253, 281, 285, 358, 386, 390, 463, 491, 495], "uncorrect": [83, 144, 179, 187, 201, 210, 225, 237, 253, 313, 358, 417, 463, 524, 557], "fsck": [84, 87, 180, 183, 202, 205, 226, 229, 254, 257, 359, 362, 464, 467], "groupspac": [84, 107, 125, 128, 185, 207, 233, 254, 277, 294, 296, 359, 382, 399, 401, 464, 487, 505, 508], "projectspac": [84, 97, 106, 125, 128, 233, 254, 267, 276, 294, 296, 359, 372, 381, 399, 401, 464, 477, 486, 505, 508], "unallow": [84, 89, 128, 185, 207, 233, 254, 259, 296, 359, 364, 401, 464, 469, 508], "unjail": [84, 100, 128, 254, 270, 296, 359, 375, 401, 464, 480, 508], "unzon": [84, 127, 464, 507], "zfs_ids_to_path": [84, 254, 359, 464], "zfs_prepare_disk": [84, 359, 464], "zinject": [84, 88, 180, 184, 202, 206, 226, 230, 254, 258, 359, 363, 464, 468], "reopen": [84, 132, 136, 149, 150, 155, 164, 186, 187, 209, 210, 236, 237, 254, 301, 305, 318, 319, 324, 333, 359, 405, 409, 422, 423, 428, 437, 464, 512, 516, 529, 530, 535, 544], "zpool_influxdb": [84, 359, 464], 
"zstream": [84, 109, 110, 167, 254, 279, 280, 359, 384, 385, 440, 464, 489, 490, 547], "zstreamdump": [84, 166, 180, 202, 226, 254, 359, 439, 464, 546], "sfnvh": [85, 181, 203, 227, 255, 360, 465], "zfs_mount_help": [85, 128, 296, 360, 401, 465, 508], "libmount": [85, 360, 465], "sloppi": [85, 181, 203, 227, 255, 360, 465], "parser": [85, 360, 465], "config_fil": [86, 182, 204, 228, 256, 361, 466], "unsatisfactori": [86, 182, 204, 228, 256, 361, 466], "mpath": [86, 182, 204, 228, 256, 361, 466], "classifi": [86, 256, 361, 466], "abcddfghiklmnpstvxyi": [87, 467], "dumpdir": [87, 205, 229, 257, 362, 467], "objset": [87, 129, 132, 186, 209, 236, 257, 297, 301, 362, 402, 405, 467, 509, 512], "adipv": [87, 205, 229, 257, 362, 467], "word0": [87, 205, 229, 257, 362, 467], "word1": [87, 205, 229, 257, 362, 467], "word15": [87, 205, 229, 257, 362, 467], "aflpxi": [87, 229, 257, 362, 467], "lsize": [87, 183, 205, 229, 257, 362, 467], "ap": [87, 183, 205, 229, 257, 362, 467], "inher": [87, 109, 110, 183, 205, 229, 233, 257, 279, 280, 362, 384, 385, 467, 489, 490], "precis": [87, 183, 205, 229, 257, 362, 467], "errat": [87, 183, 205, 229, 257, 362, 467], "tupl": [87, 132, 183, 186, 205, 209, 229, 236, 257, 301, 362, 405, 467, 512], "ddd": [87, 205, 229, 257, 362, 467], "dddd": [87, 205, 229, 257, 362, 467], "ddddd": [87, 205, 229, 257, 362, 467], "sequenc": [87, 257, 362, 467], "l2arc_dev_hdr_mag": [87, 257, 362, 467], "lll": [87, 205, 229, 257, 362, 467], "mmmm": [87, 205, 229, 257, 362, 467], "lookup": [87, 89, 119, 185, 207, 233, 259, 289, 364, 394, 467, 469, 499], "zdb_no_zl": [87, 229, 257, 362, 467], "uninterpret": [87, 183, 205, 229, 257, 362, 467], "tt": [87, 467], "ttt": [87, 467], "aa": [87, 183, 205, 229, 257, 362, 467], "demot": [87, 183, 205, 229, 257, 362, 467], "aaa": [87, 183, 205, 229, 257, 362, 467], "bbc": [87, 205, 229, 257, 362, 467], "msg": [87, 105, 232, 275, 380, 467, 485, 550, 551, 552, 553, 554, 555, 556, 557, 559, 560, 561, 562, 563], "decrypt": [87, 109, 110, 132, 233, 236, 279, 280, 301, 384, 385, 405, 467, 489, 490, 512], "uncontrol": [87, 467], "parseabl": [87, 187, 467], "unscal": [87, 183, 205, 229, 257, 362, 467], "amen": [87, 183, 205, 229, 257, 362, 467], "1000000": [87, 183, 205, 229, 251, 257, 362, 467], "mimic": [87, 183, 205, 229, 257, 362, 467], "cr_txg": [87, 183, 205, 229, 257, 362, 467], "1051": [87, 183, 205, 229, 257, 362, 467], "356": [87, 183, 205, 229, 257, 362, 467], "486m": [87, 183, 205, 229, 257, 362, 467], "137": [87, 183, 205, 229, 257, 362, 467], "1546": [87, 183, 205, 229, 257, 362, 467], "lvl": [87, 183, 205, 229, 257, 362, 467], "iblk": [87, 183, 205, 229, 257, 362, 467], "dblk": [87, 183, 205, 229, 257, 362, 467], "dsize": [87, 183, 205, 229, 257, 362, 467], "0k": [87, 183, 205, 229, 257, 362, 467], "______________________________": [87, 183, 205, 229, 257, 362, 467], "refcnt": [87, 183, 205, 229, 257, 362, 467], "694k": [87, 183, 205, 229, 257, 362, 467], "0g": [87, 96, 99, 116, 128, 148, 164, 183, 185, 187, 205, 207, 210, 229, 233, 237, 257, 296, 333, 362, 401, 437, 467, 476, 479, 496, 508, 528, 544], "35": [87, 183, 205, 229, 257, 362, 467], "33g": [87, 183, 205, 229, 257, 362, 467], "699m": [87, 183, 205, 229, 257, 362, 467], "7k": [87, 183, 205, 229, 257, 362, 467], "79g": [87, 183, 205, 229, 257, 362, 467], "45g": [87, 183, 205, 229, 257, 362, 467], "novemb": [87, 177, 185, 467], "ffhilmvvz": [88, 363, 468], "zedletdir": [88, 184, 206, 230, 258, 363, 468], "pidfil": [88, 184, 206, 230, 258, 363, 468], "statefil": [88, 184, 206, 230, 
258, 363, 468], "job": [88, 363, 468], "buflen": [88, 468], "zedlet": [88, 103, 184, 206, 230, 231, 258, 273, 363, 378, 468, 483], "linkag": [88, 184, 206, 230, 258, 363, 468], "throw": [88, 105, 111, 115, 184, 206, 230, 232, 258, 275, 363, 380, 386, 390, 468, 485, 491, 495], "wind": [88, 184, 206, 230, 258, 363, 468], "daemonis": [88, 363, 468], "therebi": [88, 184, 206, 230, 258, 363, 468], "reprocess": [88, 184, 206, 230, 258, 363, 468], "hardcod": [88, 206, 230, 258, 363, 468], "unreclaim": [88, 468], "nvpair": [88, 184, 206, 230, 258, 363, 468], "pair": [88, 105, 140, 184, 206, 222, 230, 232, 250, 258, 275, 363, 380, 413, 468, 485, 520], "eid": [88, 184, 206, 230, 258, 363, 468], "monoton": [88, 184, 206, 230, 258, 363, 468], "breviti": [88, 184, 206, 230, 258, 363, 468], "subclass": [88, 140, 164, 176, 184, 187, 198, 206, 210, 222, 230, 237, 250, 258, 309, 333, 363, 413, 437, 468, 520, 544], "wherea": [88, 105, 156, 184, 187, 206, 210, 230, 232, 237, 258, 275, 325, 363, 380, 429, 468, 485, 536], "ownership": [88, 184, 206, 230, 258, 363, 468], "dotfil": [88, 184, 206, 230, 258, 363, 468], "alphabet": [88, 101, 184, 185, 206, 207, 230, 233, 258, 271, 363, 376, 468, 481], "presumpt": [88, 184, 206, 230, 258, 363, 468], "rc": [88, 100, 120, 184, 206, 230, 258, 270, 290, 363, 375, 395, 468, 480, 500], "zed_": [88, 184, 206, 230, 258, 363, 468], "zevent_": [88, 140, 176, 184, 198, 206, 222, 230, 250, 258, 363, 413, 468, 520], "alphanumer": [88, 137, 184, 187, 206, 210, 230, 237, 258, 306, 363, 410, 468, 517], "zevent_eid": [88, 184, 206, 230, 258, 363, 468], "zevent_class": [88, 184, 206, 230, 258, 363, 468], "zevent_subclass": [88, 184, 206, 230, 258, 363, 468], "zevent_tim": [88, 184, 206, 230, 258, 363, 468], "epoch": [88, 98, 105, 112, 184, 206, 230, 258, 275, 363, 380, 468, 478, 485, 492], "zevent_time_sec": [88, 184, 206, 230, 258, 363, 468], "zevent_time_nsec": [88, 184, 206, 230, 258, 363, 468], "zevent_time_str": [88, 184, 206, 230, 258, 363, 468], "rfc3339": [88, 184, 206, 230, 258, 363, 468], "compliant": [88, 128, 184, 185, 206, 207, 230, 233, 258, 296, 363, 401, 468, 508], "zed_pid": [88, 184, 206, 230, 258, 363, 468], "zed_zedlet_dir": [88, 184, 206, 230, 258, 363, 468], "zfs_alia": [88, 184, 206, 230, 258, 363, 468], "zfs_version": [88, 184, 206, 230, 258, 363, 468], "zfs_releas": [88, 184, 206, 230, 258, 363, 468], "sysconfdir": [88, 103, 184, 206, 230, 231, 258, 273, 363, 378, 468, 483], "zfsexecdir": [88, 103, 130, 230, 231, 258, 273, 363, 378, 403, 468, 483, 510], "runstatedir": [88, 184, 206, 230, 258, 363, 468], "pid": [88, 184, 206, 230, 258, 363, 468], "sighup": [88, 363, 468], "rescan": [88, 184, 206, 230, 258, 363, 468], "sigterm": [88, 363, 468], "sigint": [88, 363, 468], "taunt": [88, 363, 468], "internation": [88, 184, 206, 230, 258, 363, 468], "gettext": [88, 184, 206, 230, 258, 363, 468], "dglu": [89, 119, 207, 233, 259, 289, 364, 394, 469, 499], "perm": [89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "setnam": [89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "dglru": [89, 119, 207, 233, 259, 289, 364, 394, 469, 499], "dlr": [89, 119, 207, 233, 259, 289, 364, 394, 469, 499], "whom": [89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "entiti": [89, 97, 107, 119, 125, 185, 207, 233, 259, 267, 277, 289, 294, 364, 372, 382, 394, 399, 469, 477, 487, 499, 505], "gu": [89, 119, 207, 233, 259, 289, 364, 394, 469, 499], "protocol": [89, 119, 165, 185, 207, 233, 259, 289, 364, 394, 438, 469, 499, 545], "userprop": [89, 119, 185, 207, 233, 259, 289, 
364, 394, 469, 499], "mlslabel": [89, 119, 185, 364, 394, 469, 499], "creator": [89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "ldugec": [89, 119, 185, 207, 233, 259, 289, 364, 394, 469, 499], "cindi": [89, 119, 128, 185, 207, 233, 296, 401, 469, 499, 508], "755": [89, 119, 128, 185, 207, 233, 296, 401, 469, 499, 508], "add_subdirectori": [89, 119, 128, 185, 207, 233, 296, 401, 469, 499, 508], "staff": [89, 119, 128, 185, 207, 233, 296, 401, 469, 499, 508], "pset": [89, 119, 128, 185, 207, 233, 296, 401, 469, 499, 508], "10g": [89, 119, 128, 148, 164, 185, 187, 207, 210, 233, 237, 296, 333, 401, 437, 469, 499, 508, 528, 544], "newbookmark": [90, 105, 260, 275, 365, 380, 470, 485], "forens": [91, 102, 121, 261, 272, 291, 366, 377, 396, 471, 482, 501], "indetermin": [91, 102, 121, 261, 272, 291, 366, 377, 396, 471, 482, 501], "fuid": [91, 102, 121, 128, 233, 261, 272, 291, 296, 366, 377, 396, 401, 471, 482, 501, 508], "malici": [91, 102, 121, 233, 261, 272, 291, 366, 377, 396, 471, 482, 501], "crime": [91, 102, 121, 233, 261, 272, 291, 366, 377, 396, 471, 482, 501], "bob": [92, 93, 96, 99, 101, 116, 118, 128, 185, 207, 233, 296, 401, 472, 473, 476, 479, 481, 496, 498, 508], "yesterdai": [92, 94, 113, 114, 118, 128, 185, 207, 233, 296, 401, 472, 474, 493, 494, 498, 508], "pnpuv": [93, 368, 473], "create_ancestor": [93, 263, 368, 473], "nearest": [93, 185, 207, 233, 263, 368, 473], "rfnprv": [94, 207, 233, 264, 369, 474], "rdnprv": [94, 207, 233, 264, 369, 474], "precondit": [94, 185, 207, 233, 264, 369, 474], "newest": [94, 185, 207, 233, 264, 369, 474], "week": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "7daysago": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "6daysago": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "5daysago": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "4daysago": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "3daysago": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "2daysago": [94, 113, 118, 128, 185, 207, 233, 296, 401, 474, 493, 498, 508], "fhth": [95, 370, 475], "door": [95, 185, 207, 233, 265, 370, 475], "socket": [95, 185, 207, 233, 265, 370, 475], "arrow": [95, 185, 207, 233, 265, 370, 475], "0ooo": [95, 370, 475], "escap": [95, 165, 370, 438, 475, 545], "oldnam": [95, 128, 185, 207, 233, 296, 401, 475, 508], "newnam": [95, 128, 185, 207, 233, 296, 401, 475, 508], "hp": [96, 97, 99, 101, 107, 116, 125, 142, 157, 163, 185, 207, 210, 233, 237, 266, 267, 269, 271, 277, 286, 294, 311, 326, 332, 371, 372, 374, 376, 382, 391, 399, 415, 430, 436, 476, 477, 479, 481, 487, 496, 505, 522, 537, 543], "kilobyt": [96, 99, 116, 185, 207, 233, 266, 269, 286, 371, 374, 391, 476, 479, 496], "terabyt": [96, 99, 116, 185, 207, 233, 266, 269, 286, 371, 374, 391, 476, 479, 496], "petabyt": [96, 99, 116, 185, 207, 233, 266, 269, 286, 371, 374, 391, 476, 479, 496], "exabyt": [96, 99, 116, 185, 207, 233, 266, 269, 286, 371, 374, 391, 476, 479, 496], "ann": [96, 99, 101, 114, 116, 128, 185, 207, 233, 296, 401, 476, 479, 481, 494, 496, 508], "gbyte": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "50g": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "jul": [96, 99, 116, 128, 156, 185, 207, 233, 296, 401, 429, 476, 479, 496, 508, 536], "53": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508, 555], "21k": [96, 99, 101, 116, 128, 185, 207, 233, 296, 401, 476, 479, 481, 496, 508], "00x": [96, 99, 116, 128, 148, 
164, 185, 187, 207, 210, 233, 237, 296, 333, 401, 437, 476, 479, 496, 508, 528, 544], "20g": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "depart": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "12345": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "neo": [96, 99, 116, 128, 185, 207, 233, 296, 401, 476, 479, 496, 508], "resolut": [96, 99, 116, 128, 164, 185, 207, 233, 296, 333, 401, 437, 476, 479, 496, 508, 544], "hinp": [97, 107, 125, 185, 207, 233, 267, 277, 294, 372, 382, 399, 477, 487, 505], "posixus": [97, 107, 125, 185, 207, 233, 267, 277, 294, 372, 382, 399, 477, 487, 505], "smbuser": [97, 107, 125, 185, 207, 233, 267, 277, 294, 372, 382, 399, 477, 487, 505], "posixgroup": [97, 107, 125, 185, 207, 233, 267, 277, 294, 372, 382, 399, 477, 487, 505], "smbgroup": [97, 107, 125, 185, 207, 233, 267, 277, 294, 372, 382, 399, 477, 487, 505], "2019": [97, 98, 104, 107, 112, 122, 124, 125, 139, 142, 143, 149, 150, 157, 160, 207, 218, 223, 229, 232, 233, 237, 246, 257, 259, 260, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 274, 276, 277, 278, 281, 282, 284, 285, 286, 287, 288, 289, 290, 292, 293, 294, 295, 296, 298, 302, 304, 305, 306, 307, 308, 309, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 324, 325, 326, 327, 329, 331, 333, 334, 335, 353, 369, 372, 373, 379, 382, 383, 387, 397, 398, 399, 401, 412, 415, 416, 417, 421, 422, 423, 425, 430, 433, 435, 477, 478, 484, 487, 492, 502, 504, 505, 519, 522, 523, 529, 530, 537, 540], "rhp": [98, 112, 478, 492], "ebusi": [98, 105, 112, 128, 185, 207, 232, 233, 268, 275, 282, 296, 373, 380, 387, 401, 478, 485, 492, 508], "jailid": [100, 120, 270, 290, 375, 395, 480, 500], "jailnam": [100, 120, 270, 290, 375, 395, 480, 500], "jid": [100, 120, 270, 290, 375, 395, 480, 500], "enforce_statf": [100, 120, 270, 290, 375, 395, 480, 500], "unaccept": [100, 120, 123, 127, 270, 290, 375, 395, 480, 500, 503, 507], "white": [101, 185, 207, 233, 271, 376, 481], "shortcut": [101, 185, 207, 233, 271, 376, 481], "usedsnap": [101, 185, 207, 233, 271, 376, 481], "usedd": [101, 185, 207, 233, 271, 376, 481], "usedrefreserv": [101, 185, 207, 233, 271, 376, 481], "usedchild": [101, 185, 207, 233, 271, 376, 481], "ascend": [101, 185, 207, 233, 271, 376, 481], "criteria": [101, 185, 207, 233, 271, 376, 481], "bottom": [101, 185, 207, 233, 271, 376, 481], "vol": [101, 481], "450k": [101, 128, 185, 207, 233, 296, 401, 481, 508], "457g": [101, 128, 185, 207, 233, 296, 401, 481, 508], "18k": [101, 128, 185, 207, 233, 296, 401, 481, 508], "315k": [101, 128, 185, 207, 233, 296, 401, 481, 508], "276k": [101, 128, 185, 207, 233, 296, 401, 481, 508], "februari": [101, 104, 122, 140, 183, 223, 232, 274, 279, 280, 292, 310, 330, 332, 379, 384, 385, 397, 414, 481, 484, 502], "systemdgeneratordir": [103, 231, 273, 378, 483], "encroot": [103, 231, 273, 378, 483], "et": [103, 378, 483], "al": [103, 378, 483], "requiredbi": [103, 231, 273, 378, 483], "wax": [103, 378, 483], "wane": [103, 378, 483], "strength": [103, 378, 483], "ho": [103, 378, 483], "writeabl": [103, 231, 273, 378, 483], "inject": [103, 132, 173, 186, 195, 209, 217, 236, 245, 301, 378, 405, 483, 512], "satisfactori": [103, 378, 483], "oflv": [104, 122, 274, 292, 379, 397, 484, 502], "fu": [104, 122, 274, 292, 379, 397, 484, 502], "forcefulli": [104, 122, 138, 141, 185, 187, 207, 210, 233, 237, 274, 292, 310, 379, 397, 411, 414, 484, 502, 518, 521], "jn": [105, 232, 233, 275, 380, 485], "atom": [105, 118, 185, 207, 232, 233, 258, 275, 288, 380, 393, 
485, 498], "json": [105, 232, 233, 275, 380, 485], "submodul": [105, 232, 233, 275, 380, 485], "million": [105, 232, 233, 275, 380, 485], "lzc_channel_program": [105, 232, 275, 380, 485], "argv": [105, 232, 275, 380, 485], "arg1": [105, 232, 233, 275, 380, 485], "arg2": [105, 232, 275, 380, 485], "baz": [105, 232, 275, 380, 485], "arr": [105, 232, 275, 380, 485], "ret0": [105, 232, 275, 380, 485], "ret1": [105, 232, 275, 380, 485], "ret2": [105, 232, 275, 380, 485], "singleton": [105, 232, 275, 380, 485], "vice": [105, 232, 275, 380, 485], "versa": [105, 232, 275, 380, 485], "int64": [105, 232, 275, 380, 485], "boolean_valu": [105, 232, 275, 380, 485], "nil": [105, 232, 275, 380, 485], "likewis": [105, 232, 275, 380, 485], "decim": [105, 232, 275, 380, 485], "lld": [105, 232, 275, 380, 485], "rawlen": [105, 232, 275, 380, 485], "collectgarbag": [105, 232, 275, 380, 485], "rawget": [105, 232, 275, 380, 485], "rawset": [105, 232, 275, 380, 485], "getmetat": [105, 232, 275, 380, 485], "ipair": [105, 232, 275, 380, 485], "setmetat": [105, 232, 275, 380, 485], "tonumb": [105, 232, 275, 380, 485], "tostr": [105, 232, 275, 380, 485], "rawequ": [105, 232, 275, 380, 485], "coroutin": [105, 232, 275, 380, 485], "dofil": [105, 232, 275, 380, 485], "loadfil": [105, 232, 275, 380, 485], "pcall": [105, 232, 275, 380, 485], "xpcall": [105, 232, 275, 380, 485], "posit": [105, 232, 275, 380, 485], "parenthes": [105, 232, 275, 380, 485], "curli": [105, 232, 275, 380, 485], "brace": [105, 232, 275, 380, 485], "syntact": [105, 232, 275, 380, 485], "sugar": [105, 232, 275, 380, 485], "unrecover": [105, 232, 275, 380, 485, 555, 557, 560, 561, 562], "errno": [105, 140, 176, 198, 222, 232, 250, 275, 380, 413, 485, 520], "eexist": [105, 232, 275, 380, 485], "list_of_conflicting_snapshot": [105, 232, 275, 380, 485], "eperm": [105, 232, 275, 380, 485], "echild": [105, 132, 186, 209, 232, 236, 275, 301, 380, 405, 485, 512], "enodev": [105, 232, 275, 380, 485], "enoent": [105, 232, 275, 380, 485], "eagain": [105, 232, 275, 380, 485], "enotdir": [105, 232, 275, 380, 485], "espip": [105, 232, 275, 380, 485], "esrch": [105, 232, 275, 380, 485], "enomem": [105, 232, 275, 380, 485], "eisdir": [105, 232, 275, 380, 485], "erof": [105, 232, 275, 380, 485], "eintr": [105, 232, 275, 380, 485], "eacc": [105, 232, 275, 380, 485], "emlink": [105, 232, 275, 380, 485], "efault": [105, 232, 275, 380, 485], "enfil": [105, 232, 275, 380, 485], "epip": [105, 232, 275, 380, 485], "enxio": [105, 132, 186, 209, 232, 236, 275, 301, 380, 405, 485, 512], "enotblk": [105, 232, 275, 380, 485], "emfil": [105, 232, 275, 380, 485], "edom": [105, 232, 275, 380, 485], "e2big": [105, 232, 275, 380, 485], "enotti": [105, 232, 275, 380, 485], "erang": [105, 232, 275, 380, 485], "enoexec": [105, 232, 275, 380, 485], "etxtbsi": [105, 232, 275, 380, 485], "ebadf": [105, 232, 275, 380, 485], "exdev": [105, 232, 275, 380, 485], "efbig": [105, 232, 275, 380, 485], "mdb": [105, 232, 275, 380, 485], "stringof": [105, 232, 275, 380, 485], "arg0": [105, 232, 275, 380, 485], "thrown": [105, 232, 275, 380, 485], "nonexistent_f": [105, 232, 275, 380, 485], "somepool": [105, 232, 275, 380, 485], "fs_that_may_exist": [105, 232, 275, 380, 485], "get_prop": [105, 232, 275, 380, 485], "iscsiopt": [105, 232, 275, 380, 485], "collid": [105, 232, 275, 380, 485], "set_prop": [105, 275, 380, 485], "zcp": [105, 232, 275, 380, 485], "rename_snapshot": [105, 485], "oldsnapnam": [105, 485], "newsnapnam": [105, 485], "set_properti": [105, 275, 380, 485], "user_properti": [105, 
275, 380, 485], "system_properti": [105, 232, 275, 380, 485], "naiv": [105, 232, 275, 380, 485], "destroy_recurs": [105, 232, 275, 380, 485], "somef": [105, 232, 275, 380, 485], "err": [105, 232, 275, 380, 485], "force_promot": [105, 232, 275, 380, 485], "elseif": [105, 232, 275, 380, 485], "kr": [106, 233, 276, 381, 486], "diagnos": [106, 486], "fhmnsuv": [109, 110, 279, 280, 384, 385, 489, 490], "vn": [109, 110, 489, 490], "redup": [109, 110, 166, 167, 279, 280, 336, 384, 385, 439, 440, 489, 490, 546, 547], "subtre": [109, 110, 207, 233, 279, 280, 384, 385, 489, 490], "topmost": [109, 110, 207, 233, 279, 280, 384, 385, 489, 490], "x2010": [109, 110, 207, 233, 279, 280, 384, 385, 489, 490], "spite": [109, 110, 207, 233, 279, 280, 384, 385, 489, 490], "recompress": [109, 110, 111, 115, 166, 167, 233, 279, 280, 281, 285, 384, 385, 386, 390, 489, 490, 491, 495, 546, 547], "stem": [109, 110, 233, 279, 280, 384, 385, 489, 490], "aead": [109, 110, 233, 279, 280, 384, 385, 489, 490], "intermedi": [109, 110, 114, 128, 185, 207, 233, 279, 280, 284, 296, 384, 385, 389, 401, 489, 490, 494, 508], "paragraph": [109, 110, 185, 207, 233, 279, 280, 384, 385, 489, 490], "settabl": [109, 110, 207, 233, 279, 280, 384, 385, 489, 490], "snap1": [109, 110, 233, 279, 280, 384, 385, 489, 490, 559], "keyfil": [109, 110, 233, 279, 280, 384, 385, 489, 490, 559], "fact": [109, 110, 233, 279, 280, 384, 385, 489, 490], "prematur": [109, 110, 207, 233, 279, 280, 384, 385, 489, 490], "poolb": [109, 110, 111, 115, 128, 185, 207, 233, 296, 401, 489, 490, 491, 495, 508], "poola": [109, 110, 111, 115, 128, 185, 207, 233, 296, 401, 489, 490, 491, 495, 508], "fsa": [109, 110, 111, 115, 128, 185, 207, 233, 296, 401, 489, 490, 491, 495, 508], "fsb": [109, 110, 111, 115, 128, 185, 207, 233, 296, 401, 489, 490, 491, 495, 508], "dlpvbcehnpsvw": [111, 115, 491, 495], "dlpvcensvw": [111, 115, 386, 390, 491, 495], "dlpvcenpv": [111, 115, 386, 390, 491, 495], "pvenv": [111, 115, 386, 390, 491, 495], "pvnv": [111, 115, 386, 390, 491, 495], "redaction_snapshot": [111, 115, 281, 285, 386, 390, 491, 495], "redirect": [111, 115, 185, 207, 233, 281, 285, 386, 390, 491, 495], "intermediari": [111, 115, 185, 207, 233, 281, 285, 386, 390, 491, 495], "proctitl": [111, 115, 386, 390, 491, 495], "compact": [111, 115, 185, 207, 233, 281, 285, 386, 390, 491, 495], "write_embed": [111, 115, 185, 207, 233, 281, 285, 386, 390, 491, 495], "compactli": [111, 115, 185, 207, 233, 281, 285, 386, 390, 491, 495], "untrust": [111, 115, 233, 281, 285, 386, 390, 491, 495], "lec": [111, 115, 233, 281, 285, 386, 390, 491, 495], "dryrun": [111, 115, 207, 233, 281, 285, 386, 390, 491, 495], "prop": [111, 115, 187, 207, 233, 281, 285, 386, 390, 491, 495], "siginfo": [111, 115, 491, 495], "sigusr1": [111, 115, 491, 495], "dlpvcenvw": [111, 115, 386, 390, 491, 495], "definition": [111, 115, 281, 285, 386, 390, 491, 495], "partwai": [111, 115, 281, 285, 386, 390, 491, 495], "rerun": [111, 115, 281, 285, 386, 390, 491, 495], "truli": [111, 115, 281, 285, 386, 390, 491, 495], "shop": [111, 115, 281, 285, 386, 390, 491, 495], "purchas": [111, 115, 281, 285, 386, 390, 491, 495], "fourth": [111, 115, 183, 281, 285, 386, 390, 491, 495], "transmit": [111, 115, 281, 285, 386, 390, 491, 495], "fake": [111, 115, 181, 203, 227, 255, 281, 285, 386, 390, 491, 495], "juli": [111, 115, 208, 218, 246, 354, 429, 452, 491, 495, 520], "nonexist": [113, 185, 207, 233, 283, 388, 493], "rfr": [114, 207, 233, 284, 389, 494], "rr": [114, 185, 207, 233, 284, 389, 494], "snapnam": [118, 185, 
207, 233, 288, 393, 498], "nsfile": [123, 127, 503, 507], "1234": [123, 127, 503, 507], "interrel": [124, 185, 207, 233, 293, 398, 504], "ceas": [126, 163, 164, 295, 332, 333, 400, 436, 437, 506, 543, 544], "deleteq": [126, 295, 400, 506], "lsof": [126, 295, 400, 506], "zfs_max_dataset_name_len": [128, 508], "za": [128, 508], "z_": [128, 508], "deep": [128, 233, 296, 401, 508], "complianc": [128, 185, 207, 233, 296, 401, 508], "tabular": [128, 185, 207, 233, 271, 296, 401, 508], "smbmount": [128, 185, 207, 233, 296, 401, 508], "share_tmp": [128, 185, 207, 233, 296, 401, 508], "workgroup": [128, 185, 207, 233, 296, 401, 508], "obrut": [128, 185, 207, 233, 296, 401, 508], "uid": [128, 185, 207, 233, 296, 401, 508], "zfs_color": [128, 164, 333, 401, 437, 508, 544], "color": [128, 164, 333, 401, 437, 508, 544], "zfs_set_pipe_max": [128, 401, 508], "reciev": [128, 401, 508], "unfix": [128, 401, 508], "zfs_module_timeout": [128, 164, 508, 544], "forev": [128, 164, 508, 544], "vdev_path": [130, 140, 146, 176, 198, 210, 222, 237, 250, 315, 403, 413, 419, 510, 520, 526], "vdev_prepar": [130, 403, 510], "vdev_upath": [130, 146, 210, 237, 315, 403, 419, 510, 526], "vdev_enc_sysfs_path": [130, 146, 210, 237, 315, 403, 419, 510, 526], "hexadecim": [131, 132, 208, 209, 235, 236, 300, 301, 404, 405, 511, 512], "0x": [131, 300, 404, 511], "libc": [131, 208, 235, 300, 404, 511], "deadbeef": [131, 208, 235, 300, 404, 511], "0x01234567": [131, 300, 404, 511], "sethostid": [131, 300, 404, 511], "injector": [132, 186, 209, 236, 301, 405, 512], "artifici": [132, 186, 209, 236, 301, 405, 512], "amu": [132, 186, 209, 236, 301, 405, 512], "lane": [132, 209, 236, 301, 405, 512], "device_error": [132, 186, 209, 236, 301, 405, 512], "label_error": [132, 186, 209, 236, 301, 405, 512], "dva": [132, 236, 301, 405, 512], "amq": [132, 186, 209, 236, 301, 405, 512], "metadnod": [132, 186, 209, 236, 301, 405, 512], "mos_typ": [132, 186, 209, 236, 301, 405, 512], "amqu": [132, 186, 209, 236, 301, 405, 512], "ecksum": [132, 186, 209, 236, 301, 405, 512], "nxio": [132, 186, 209, 236, 301, 405, 512], "0001": [132, 209, 236, 301, 405, 512], "pad1": [132, 186, 209, 236, 301, 405, 512], "pad2": [132, 186, 209, 236, 301, 405, 512], "uber": [132, 186, 209, 236, 301, 405, 512], "mosdir": [132, 186, 209, 236, 301, 405, 512], "fglnp": [133, 187, 210, 237, 302, 406, 513], "gradual": [133, 146, 164, 187, 210, 237, 333, 437, 513, 526, 544], "fsw": [134, 154, 303, 323, 407, 427, 514, 534], "new_devic": [134, 187, 210, 237, 303, 323, 407, 514], "entail": 134, "z2": 134, "zpool_auto_power_on_slot": [136, 149, 150, 164, 409, 422, 423, 437, 516, 529, 530, 544], "dfn": [137, 210, 237, 306, 410, 517], "tname": [137, 187, 210, 237, 306, 410, 517], "preexist": [137, 187, 210, 237, 306, 410, 517], "six": [137, 164, 187, 210, 237, 333, 437, 517, 544], "sdb2": [137, 164, 187, 210, 237, 333, 437, 517, 544], "vhf": [140, 237, 309, 413, 520], "payload": [140, 164, 176, 187, 198, 210, 222, 237, 250, 309, 333, 413, 437, 520, 544], "flaki": [140, 222, 250, 413, 520], "ratelimit": [140, 222, 250, 413, 520], "open_fail": [140, 176, 198, 222, 250, 413, 520], "corrupt_data": [140, 176, 198, 222, 250, 413, 520], "no_replica": [140, 176, 198, 222, 250, 413, 520], "bad_guid_sum": [140, 176, 198, 222, 250, 413, 520], "too_smal": [140, 176, 198, 222, 250, 413, 520], "probe_failur": [140, 176, 198, 222, 250, 413, 520], "bad_label": [140, 176, 198, 222, 250, 413, 520], "bad_ashift": [140, 176, 198, 222, 250, 413, 520], "io_failur": [140, 176, 198, 222, 250, 413, 
520], "log_replai": [140, 176, 198, 222, 250, 413, 520], "accompani": [140, 176, 198, 222, 250, 413, 520], "pool_failmod": [140, 176, 198, 222, 250, 413, 520], "pool_guid": [140, 176, 198, 222, 250, 413, 520], "pool_context": [140, 176, 198, 222, 250, 413, 520], "tryimport": [140, 176, 198, 222, 250, 413, 520], "vdev_guid": [140, 176, 198, 222, 250, 413, 520], "question": [140, 176, 198, 222, 250, 413, 520], "vdev_typ": [140, 176, 198, 222, 250, 413, 520], "partx": [140, 176, 198, 222, 250, 413, 520], "vdev_devid": [140, 176, 198, 222, 250, 413, 520], "vdev_fru": [140, 176, 198, 222, 250, 413, 520], "vdev_stat": [140, 176, 198, 222, 250, 413, 520], "vdev_ashift": [140, 176, 198, 222, 250, 413, 520], "vdev_complete_t": [140, 176, 198, 222, 250, 413, 520], "vdev_delta_t": [140, 176, 198, 222, 250, 413, 520], "vdev_spare_path": [140, 176, 198, 222, 250, 413, 520], "vdev_spare_guid": [140, 176, 198, 222, 250, 413, 520], "vdev_read_error": [140, 176, 198, 222, 250, 413, 520], "vdev_write_error": [140, 176, 198, 222, 250, 413, 520], "vdev_cksum_error": [140, 176, 198, 222, 250, 413, 520], "parent_guid": [140, 176, 198, 222, 250, 413, 520], "parent_typ": [140, 176, 198, 222, 250, 413, 520], "parent_path": [140, 176, 198, 222, 250, 413, 520], "parent_devid": [140, 176, 198, 222, 250, 413, 520], "zio_objset": [140, 176, 198, 222, 250, 413, 520], "zio_object": [140, 176, 198, 222, 250, 413, 520], "zio_level": [140, 176, 198, 222, 250, 413, 520], "zio_blkid": [140, 176, 198, 222, 250, 413, 520], "zio_err": [140, 176, 198, 222, 250, 413, 520], "ebad": [140, 222, 250, 413, 520], "zio_offset": [140, 176, 198, 222, 250, 413, 520], "zio_siz": [140, 176, 198, 222, 250, 413, 520], "zio_flag": [140, 176, 198, 222, 250, 413, 520], "zio_stag": [140, 176, 198, 222, 250, 413, 520], "zio_pipelin": [140, 176, 198, 222, 250, 413, 520], "zio_delai": [140, 176, 198, 222, 250, 413, 520], "zio_delta": [140, 176, 198, 222, 250, 413, 520], "zio_timestamp": [140, 176, 198, 222, 250, 413, 520], "prev_stat": [140, 176, 198, 222, 250, 413, 520], "cksum_algorithm": [140, 176, 198, 222, 250, 413, 520], "cksum_byteswap": [140, 176, 198, 222, 250, 413, 520], "byteswap": [140, 222, 250, 413, 520], "bad_rang": [140, 176, 198, 222, 250, 413, 520], "bad_ranges_min_gap": [140, 176, 198, 222, 250, 413, 520], "bad_range_set": [140, 176, 198, 222, 250, 413, 520], "bad_range_clear": [140, 176, 198, 222, 250, 413, 520], "bad_set_bit": [140, 176, 198, 222, 250, 413, 520], "bad_cleared_bit": [140, 176, 198, 222, 250, 413, 520], "zio_stage_open": [140, 176, 198, 222, 250, 413, 520], "0x00000001": [140, 176, 198, 222, 250, 413, 520], "rwfcit": 140, "zio_stage_read_bp_init": [140, 176, 198, 222, 250, 413, 520], "0x00000002": [140, 176, 198, 222, 250, 413, 520], "zio_stage_write_bp_init": [140, 176, 198, 222, 250, 413, 520], "0x00000004": [140, 176, 198, 222, 250, 413, 520], "zio_stage_free_bp_init": [140, 176, 198, 222, 250, 413, 520], "0x00000008": [140, 176, 198, 222, 250, 413, 520], "zio_stage_issue_async": [140, 176, 198, 222, 250, 413, 520], "0x00000010": [140, 176, 198, 222, 250, 413, 520], "wf": 140, "zio_stage_write_compress": [140, 222, 250, 413, 520], "0x00000020": [140, 176, 198, 222, 250, 413, 520], "zio_stage_encrypt": [140, 222, 250, 413, 520], "0x00000040": [140, 176, 198, 222, 250, 413, 520], "zio_stage_checksum_gener": [140, 176, 198, 222, 250, 413, 520], "0x00000080": [140, 176, 198, 222, 250, 413, 520], "zio_stage_nop_writ": [140, 176, 198, 222, 250, 413, 520], "0x00000100": [140, 176, 198, 222, 250, 413, 520], 
"zio_stage_brt_fre": [140, 520], "0x00000200": [140, 176, 198, 222, 250, 413, 520], "zio_stage_ddt_read_start": [140, 176, 198, 222, 250, 413, 520], "0x00000400": [140, 176, 198, 222, 250, 413, 520], "zio_stage_ddt_read_don": [140, 176, 198, 222, 250, 413, 520], "0x00000800": [140, 176, 198, 222, 250, 413, 520], "zio_stage_ddt_writ": [140, 176, 198, 222, 250, 413, 520], "0x00001000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_ddt_fre": [140, 176, 198, 222, 250, 413, 520], "0x00002000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_gang_assembl": [140, 176, 198, 222, 250, 413, 520], "0x00004000": [140, 176, 198, 222, 250, 413, 520], "rwfc": [140, 176, 198, 222, 250, 413, 520], "zio_stage_gang_issu": [140, 176, 198, 222, 250, 413, 520], "0x00008000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_dva_throttl": [140, 222, 250, 413, 520], "0x00010000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_dva_alloc": [140, 176, 198, 222, 250, 413, 520], "0x00020000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_dva_fre": [140, 176, 198, 222, 250, 413, 520], "0x00040000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_dva_claim": [140, 176, 198, 222, 250, 413, 520], "0x00080000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_readi": [140, 176, 198, 222, 250, 413, 520], "0x00100000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_vdev_io_start": [140, 176, 198, 222, 250, 413, 520], "0x00200000": [140, 176, 198, 222, 250, 413, 520], "IT": [140, 185], "zio_stage_vdev_io_don": [140, 176, 198, 222, 250, 413, 520], "0x00400000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_vdev_io_assess": [140, 176, 198, 222, 250, 413, 520], "0x00800000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_checksum_verifi": [140, 222, 250, 413, 520], "0x01000000": [140, 176, 198, 222, 250, 413, 520], "zio_stage_don": [140, 176, 198, 222, 250, 413, 520], "0x02000000": [140, 176, 198, 222, 250, 413, 520], "zio_flag_dont_aggreg": [140, 176, 198, 222, 250, 413, 520], "zio_flag_io_repair": [140, 176, 198, 222, 250, 413, 520], "zio_flag_self_h": [140, 176, 198, 222, 250, 413, 520], "zio_flag_resilv": [140, 176, 198, 222, 250, 413, 520], "zio_flag_scrub": [140, 176, 198, 222, 250, 413, 520], "zio_flag_scan_thread": [140, 176, 198, 222, 250, 413, 520], "zio_flag_phys": [140, 176, 198, 222, 250, 413, 520], "zio_flag_canfail": [140, 176, 198, 222, 250, 413, 520], "zio_flag_specul": [140, 176, 198, 222, 250, 413, 520], "zio_flag_config_writ": [140, 176, 198, 222, 250, 413, 520], "zio_flag_dont_retri": [140, 176, 198, 222, 250, 413, 520], "zio_flag_nodata": [140, 176, 198, 222, 250, 413, 520], "zio_flag_induce_damag": [140, 176, 198, 222, 250, 413, 520], "zio_flag_io_alloc": [140, 222, 250, 413, 520], "zio_flag_io_retri": [140, 176, 198, 222, 250, 413, 520], "zio_flag_prob": [140, 176, 198, 222, 250, 413, 520], "zio_flag_tryhard": [140, 176, 198, 222, 250, 413, 520], "zio_flag_opt": [140, 176, 198, 222, 250, 413, 520], "zio_flag_dont_queu": [140, 176, 198, 222, 250, 413, 520], "zio_flag_dont_propag": [140, 176, 198, 222, 250, 413, 520], "zio_flag_io_bypass": [140, 176, 198, 222, 250, 413, 520], "zio_flag_io_rewrit": [140, 176, 198, 222, 250, 413, 520], "zio_flag_raw_compress": [140, 222, 250, 413, 520], "zio_flag_raw_encrypt": [140, 222, 250, 413, 520], "zio_flag_gang_child": [140, 176, 198, 222, 250, 413, 520], "zio_flag_ddt_child": [140, 176, 198, 222, 250, 413, 520], "0x04000000": [140, 176, 198, 222, 250, 413, 520], "zio_flag_godfath": [140, 176, 198, 222, 250, 413, 520], "0x08000000": [140, 176, 198, 222, 250, 413, 
520], "zio_flag_nopwrit": [140, 176, 198, 222, 250, 413, 520], "0x10000000": [140, 176, 198, 222, 250, 413, 520], "zio_flag_reexecut": [140, 176, 198, 222, 250, 413, 520], "0x20000000": [140, 176, 198, 222, 250, 413, 520], "zio_flag_deleg": [140, 176, 198, 222, 250, 413, 520], "0x40000000": [140, 222, 250, 413, 520], "zio_flag_fastwrit": [140, 176, 198, 222, 250, 413, 520], "0x80000000": [140, 222, 250, 413, 520], "il": [143, 187, 210, 237, 312, 416, 523], "dflmn": [144, 237, 313, 417, 524], "ntx": [144, 417, 524], "mntopt": [144, 187, 210, 237, 313, 417, 524], "dflmt": [144, 417, 524], "newpool": [144, 158, 187, 210, 237, 313, 327, 417, 431, 524, 538], "irretriev": [144, 187, 210, 237, 313, 417, 524], "zpool_import_path": [144, 164, 187, 210, 237, 313, 333, 417, 437, 524, 544], "hazard": [144, 187, 210, 237, 313, 417, 524], "fx": [144, 187, 210, 237, 313, 417, 524], "15451357997522795478": [144, 164, 187, 210, 237, 333, 437, 524, 544], "suspens": [145, 161, 237, 314, 330, 418, 434, 525, 541], "uninit": [145, 418, 525], "unalloco": [145, 418, 525], "lq": [146, 210, 237, 315, 419, 526], "ghhlnppvy": [146, 237, 315, 419, 526], "nearbi": [146, 237, 315, 419, 526], "suppress": [146, 164, 166, 167, 187, 188, 210, 211, 237, 238, 315, 333, 336, 337, 419, 437, 439, 440, 526, 544, 546, 547], "script1": [146, 159, 210, 237, 315, 328, 419, 432, 526, 539], "script2": [146, 159, 210, 237, 315, 328, 419, 432, 526, 539], "slash": [146, 181, 183, 203, 205, 210, 227, 229, 237, 255, 257, 315, 419, 526], "zpool_scripts_path": [146, 164, 210, 237, 315, 333, 419, 437, 526, 544], "zpool_scripts_as_root": [146, 164, 210, 237, 315, 333, 419, 437, 526, 544], "sudoer": [146, 210, 237, 315, 419, 526], "awkabl": [146, 419, 526], "ind": [146, 210, 237, 315, 419, 526], "agg": [146, 210, 237, 315, 419, 526], "total_wait": [146, 210, 237, 315, 419, 526], "disk_wait": [146, 210, 237, 315, 419, 526], "syncq_wait": [146, 210, 237, 315, 419, 526], "asyncq_wait": [146, 210, 237, 315, 419, 526], "syncq_read": [146, 210, 237, 315, 419, 526], "asyncq_read": [146, 210, 237, 315, 419, 526], "scrubq_read": [146, 210, 237, 315, 419, 526], "trimq_writ": [146, 237, 315, 419, 526], "rebuildq_writ": [146, 526], "st8000nm0075": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "3t": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "u10": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "u11": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "u12": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "u13": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "u14": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "vc": [146, 159, 164, 210, 237, 333, 437, 526, 539, 544], "6g": [146, 148, 159, 164, 187, 210, 237, 333, 437, 526, 528, 539, 544], "9g": [146, 148, 159, 164, 187, 210, 237, 333, 437, 526, 528, 539, 544], "250k": [146, 159, 164, 333, 437, 526, 539, 544], "69m": [146, 159, 164, 333, 437, 526, 539, 544], "70g": [146, 159, 164, 333, 437, 526, 539, 544], "foreign": [147, 187, 210, 237, 316, 420, 527], "hglppv": [148, 210, 237, 317, 421, 528], "dedupratio": [148, 187, 210, 237, 317, 421, 528], "zion": [148, 164, 187, 210, 237, 333, 437, 528, 544], "expandsz": [148, 164, 187, 210, 237, 333, 437, 528, 544], "frag": [148, 164, 187, 210, 237, 333, 437, 528, 544], "43g": [148, 164, 187, 210, 237, 333, 437, 528, 544], "61": [148, 164, 187, 210, 237, 333, 437, 528, 544], "41": [148, 164, 187, 210, 237, 333, 437, 528, 544], "48": [148, 164, 187, 210, 237, 333, 437, 528, 544], "30g": [148, 164, 187, 210, 237, 333, 437, 528, 544], 
"ft": [149, 150, 422, 423, 529, 530], "npw": [152, 321, 425, 532], "49": [156, 429, 536], "403m": [156, 429, 536], "405m": [156, 429, 536], "68": [156, 429, 536], "weekli": [156, 161, 429, 536, 541], "monthli": [156, 161, 429, 536, 541], "otherpool": [156, 161, 429, 536, 541], "gllnp": [158, 237, 327, 431, 538], "degilppstvx": 159, "unhealthi": [159, 432, 539], "30000": [159, 539], "took": [159, 237, 328, 432, 539], "dw": [161, 330, 434, 541], "irrespect": [161, 237, 330, 434, 541], "raidz_expand": 163, "zfs_abort": [164, 185, 187, 210, 237, 333, 437, 544], "findleak": [164, 185, 187, 237, 333, 437, 544], "zpool_power_on_slot_timeout_m": [164, 437, 544], "power_control": [164, 437, 544], "zpool_import_udev_timeout_m": [164, 237, 333, 437, 544], "zpool_status_non_native_ashift_ignor": [164, 333, 437, 544], "absenc": [164, 333, 437, 544], "zpool_vdev_name_guid": [164, 187, 210, 237, 333, 437, 544], "zpool_vdev_name_follow_link": [164, 187, 210, 237, 333, 437, 544], "zfs_vdev_devid_opt_out": [164, 210, 237, 333, 437, 544], "nvp": [164, 210, 237, 333, 437, 544], "strip": [164, 210, 237, 333, 437, 544], "zpool_scripts_en": [164, 210, 237, 333, 437, 544], "evolv": [164, 210, 237, 333, 437, 544], "execd": [165, 438, 545], "collector": [165, 438, 545], "insight": [165, 438, 545], "grafana": [165, 438, 545], "heatmap": [165, 438, 545], "cvd": [166, 167, 336, 439, 440, 546, 547], "resume_token": [166, 167, 336, 439, 440, 546, 547], "incorrect": [166, 167, 546, 547], "insist": [166, 167, 546, 547], "dedup_stream_fil": [166, 167, 336, 439, 440, 546, 547], "12762": [166, 167, 546, 547], "sylist": [169, 190], "contin": 169, "seper": 169, "doxygen": [169, 190, 213, 241, 340], "splint": [169, 190, 213, 241, 340], "consolid": [169, 190, 213, 241], "gave": [169, 190, 213, 241], "reson": 169, "lai": 169, "boundri": 169, "2005": [169, 190, 213], "zpio": [170, 171, 191, 193], "darik": [171, 172, 179, 181, 186, 193, 194, 201, 203, 209, 216, 225, 227, 236, 244, 253, 255, 301], "horn": [171, 172, 179, 181, 186, 193, 194, 201, 203, 209, 216, 225, 227, 236, 244, 253, 255, 301], "dajhorn": [171, 172, 179, 181, 186, 193, 194, 201, 203, 209, 216, 225, 227, 236, 244, 253, 255, 301], "vanadac": [171, 172, 179, 181, 186, 193, 194, 201, 203, 209, 216, 225, 227, 236, 244, 253, 255, 301], "mar": [171, 179, 193, 201, 216, 225, 560], "regex": [172, 194], "threadcount": [172, 194], "threadcount_": [172, 194], "regex_low": [172, 194], "threadcount_low": [172, 194], "regex_high": [172, 194], "threadcount_high": [172, 194], "regex_incr": [172, 194], "threadcount_incr": [172, 194], "regioncount": [172, 194], "regioncount_": [172, 194], "regioncount_low": [172, 194], "regioncount_high": [172, 194], "regioncount_incr": [172, 194], "offset_": [172, 194], "size_low": [172, 194], "offset_low": [172, 194], "size_high": [172, 194], "offset_high": [172, 194], "size_incr": [172, 194], "offset_incr": [172, 194], "chunksiz": [172, 194], "chunksize_": [172, 194], "chunksize_low": [172, 194], "chunksize_high": [172, 194], "chunksize_incr": [172, 194], "dmu_flag": [172, 194], "dmuio": [172, 194], "dmu_io": [172, 194], "ssf": [172, 194], "fpp": [172, 194], "dmu_remov": [172, 194], "prerun": [172, 194], "postrun": [172, 194], "regionnois": [172, 194], "regions": [172, 194], "modulo": [172, 194], "chunknois": [172, 194], "threaddelai": [172, 194], "jiffi": [172, 194, 199], "dmu_verifi": [172, 194], "zerocopi": [172, 194], "dmu_read_zc": [172, 194], "dmu_write_zc": [172, 194], "nowait": [172, 194], "dmu_write_nowait": [172, 194], "noprefetch": 
[172, 194], "dmu_read_nopf": [172, 194], "inc": [172, 194], "feb": [172, 181, 186, 194, 203, 209, 227, 236], "raidz_par": [173, 195, 217, 245], "blather": [173, 195, 217, 245], "xist": [173, 195, 217, 245], "transver": [173, 195], "asciidoc": [173, 195, 217, 245], "michael": [173, 195, 217, 245], "gebetsroith": [173, 195, 217, 245], "gebi": [173, 195, 217, 245], "grml": [173, 195, 217, 245], "opensolari": [173, 195, 217, 245], "lspci": [175, 197, 221], "publicli": [176, 198, 222, 250], "zio_delay_max": [176, 177, 198, 199], "vdeev": 176, "healti": 176, "checkum": [176, 198], "hz": [176, 198], "cksum_expect": [176, 198, 222, 250, 413], "cksum_actu": [176, 198, 222, 250, 413], "bad_set_histogram": [176, 198, 222, 250, 413], "bad_cleared_histogram": [176, 198, 222, 250, 413], "rwfci": [176, 198, 222, 250, 413, 520], "rwf": [176, 198, 222, 250, 413, 520], "zio_stage_checksum_verify0": [176, 198], "zio_flag_dont_cach": [176, 198, 222, 250, 413], "zio_flag_raw": [176, 198], "warmup": 177, "precach": 177, "l2arc_nocompress": 177, "metaslabs_per_vdev": [177, 199], "spa_load_verify_maxinflight": [177, 199], "zfetch_array_rd_sz": [177, 199, 223, 251, 348], "zfetch_block_cap": 177, "1mb": [177, 199, 223, 251, 348, 354], "zfs_arc_meta_limit": [177, 199, 223, 251, 348], "arc_c_max": [177, 199, 223, 251, 348], "zfs_arc_meta_min": [177, 199, 223, 251, 348], "zfs_arc_meta_prun": [177, 199, 223, 251, 348], "zfs_arc_meta_adjust_restart": [177, 199, 223, 251, 348], "zfs_arc_min_prefetch_lifespan": [177, 199], "zfs_arc_num_sublists_per_st": 177, "zfs_arc_p_min_shift": [177, 199, 223, 251, 348], "calc": 177, "arc_p": [177, 199, 223, 251, 348], "zfs_arc_p_aggressive_dis": 177, "zfs_arc_p_dampener_dis": [177, 199, 223, 251, 348], "dampen": [177, 199, 223, 251, 348], "secondli": 177, "zfs_dirty_data_sync": [177, 199], "zfs_free_max_block": [177, 199], "maxium": 177, "zfs_disable_dup_evict": [177, 199], "millisec": 177, "zfs_mdcomp_dis": [177, 199], "fragmen": 177, "zfs_read_chunk_s": [177, 199, 223, 251], "zfs_resilver_delai": [177, 199], "zfs_scan_idl": [177, 199], "zfs_scrub_delai": [177, 199], "zfs_scan_min_time_m": [177, 199], "bp": 177, "zfs_top_maxinflight": [177, 199], "zfs_vdev_cache_bshift": [177, 199, 223, 251, 348], "zfs_vdev_cache_max": [177, 199, 223, 251, 348], "zfs_vdev_cache_s": [177, 199, 223, 251, 348], "zfs_vdev_mirror_switch_u": 177, "usec": 177, "zfs_zevent_col": [177, 199, 223, 251], "zfs_zevent_consol": [177, 199, 223, 251], "zil_slog_limit": 177, "500u": [177, 199, 223, 251, 348], "short_nam": [178, 200, 224, 252], "feature_guid": [178, 187, 200, 210, 237, 335], "deactiv": [178, 200, 224, 252, 355], "compresse": 178, "stub": [179, 201, 225, 253], "filesytem": [181, 185], "saxattr": [181, 203, 227, 255], "dirxattr": [181, 203, 227, 255], "convention": [181, 203, 227, 255], "cumdibcsdvhlmxfpa": 183, "divpa": 183, "mlxfpa": 183, "ua": 183, "fsdb": [183, 205], "fifth": 183, "x00b4": 183, "foreground": [184, 206, 230, 258], "libexecdir": [184, 206], "hup": [184, 206, 230, 258], "_not_implemented_": [184, 206, 230, 258], "diagnosi": [184, 206, 230, 258, 557], "lawrenc": [184, 206, 230, 258], "livermor": [184, 206, 230, 258], "nation": [184, 206, 230, 258], "laboratori": [184, 206, 230, 258], "403049": [184, 206, 230, 258], "octemb": [184, 206, 230], "fnprrv": 185, "dnprrv": 185, "rrf": 185, "vo": 185, "dnpprvel": 185, "ii": 185, "vnfu": 185, "ldug": 185, "ld": 185, "rldug": 185, "rld": 185, "fht": [185, 207, 233, 265], "maxnamelen": [185, 207, 233, 296, 401], "nonstandard": 185, "AND": [185, 
231, 273], "tb": 185, "requiren": 185, "affair": 185, "algorthm": 185, "priv_file_upgrade_sl": 185, "priv_file_downgrade_sl": 185, "cif": 185, "shareiscsi": 185, "tape": 185, "ie": [185, 207, 233, 299], "dissalow": 185, "underlai": 185, "hi": [185, 207, 233, 296], "her": [185, 207, 233, 296], "communit": 185, "selinux_us": [185, 207, 233, 299], "selinux_rol": [185, 207, 233, 299], "selinux_typ": [185, 207, 233, 299], "sensitivity_level": [185, 207, 233, 299], "defntext": 185, "nonbmand": 185, "corpor": 185, "microsystem": 185, "thousand": 185, "ug": 185, "aand": 185, "vol1": 185, "iscsitadm": 185, "iqn": 185, "1986": 185, "7b4b02a6": 185, "3277": 185, "eb1b": 185, "e686": 185, "a24762c52a8c": 185, "blkd": [186, 209, 236, 301], "hexidecim": 186, "zinject_debug": [186, 209, 236, 301], "excerpt": [186, 209, 236, 301], "fnd": 187, "vhfc": [187, 210], "glpvy": 187, "hglpv": 187, "glnp": [187, 210], "glpvxd": [187, 210], "miniumum": 187, "quorum": 187, "old_devic": [187, 210, 237, 323], "automaticali": 187, "innplac": 187, "unmirror": [187, 210, 237, 333, 437], "c1t1d0": 187, "c1t2d0": 187, "c1t3d0": 187, "aug": [188, 211, 238], "bencmark": 192, "vdev_raidz": [192, 215, 243], "gvozden": [192, 215, 243], "ne": [192, 215, 243], "x0161": [192, 215, 243], "kovi": [192, 215, 243], "x0107": [192, 215, 243], "neskov": [192, 215, 243], "gmail": [192, 215, 243], "2016": [192, 215], "regionsize_": 194, "regionsize_low": 194, "regionsize_high": 194, "regionsize_incr": 194, "8mb": [199, 223, 251, 348], "67108864": [199, 223, 251], "zfs_arc_meta_strategi": [199, 223, 251, 348], "32m": [199, 223, 251], "zfs_checksums_per_second": [199, 223], "zfs_delays_per_second": 199, "64bit": [199, 223, 251], "zfs_qat_dis": 199, "2018": [200, 210, 222, 224], "smm": [205, 207, 208, 210, 218, 229, 233, 235, 237, 246, 257, 260, 300], "abcddfghilmpsvx": 205, "aflpx": 205, "zbd_no_zl": 205, "ov": 207, "dlprcenpv": 207, "lce": 207, "penv": [207, 233, 281, 285], "fnsuv": 207, "mbyte": [207, 233, 299], "dfmn": 210, "dfm": 210, "ghhlppvy": 210, "cfhv": 210, "slave": [210, 237], "23t": [210, 237], "46k": [210, 237], "sdff": [210, 237], "77k": [210, 237], "3k": [210, 237], "sdgw": [210, 237], "288k": [210, 237], "sdat": [210, 237], "sdgx": [210, 237], "78": [210, 237], "sdau": [210, 237], "sdgy": [210, 237], "sdav": [210, 237], "sdgz": [210, 237], "sdfk": [210, 237], "spl_kmem_cache_expir": [220, 248], "spl_kmem_cache_reclaim": [220, 248, 347], "spl_kmem_cache_obj_per_slab_min": [220, 248], "spl_kmem_cache_kmem_limit": 220, "0x34": [222, 250], "errant": [222, 250, 413], "9th": [222, 250, 413], "8th": [222, 250, 413], "128mb": [223, 251, 348], "vdev_ms_count_limit": 223, "fbzfs_condense_indirect_vdevs_en": [223, 251], "045": [223, 251], "690": [223, 251], "984": [223, 251], "833": [223, 251], "022": [223, 251], "2097152": [223, 251], "41943040": [223, 251], "zfs_sync_taskq_batch_pct": [223, 251, 348, 452], "zfs_vdev_aggregate_trim": [223, 251, 348], "36kb": [223, 251, 348], "zio_decompress_fail_fract": [223, 251], "abcddfghiklmpsvxi": 229, "olv": 233, "dlprbcehnpvw": [233, 281, 285], "lpcenvw": 233, "fhnsuv": 233, "rh": [233, 268, 282, 373, 387], "datatset": 233, "compars": 233, "stdin": [233, 279, 280, 299], "trail": [233, 276], "septemb": [235, 283, 388], "dflm": [237, 313], "np": 237, "diglppstvx": [237, 328], "perl": 240, "neelakanth": 240, "nadgir": 240, "mike": 240, "harsch": 240, "john": 240, "hixson": 240, "freena": 240, "beer": 240, "rational": 251, "512mb": [251, 348], "600000": 251, "vaul": 251, "iunt": 251, "262144": 251, 
"32gb": 251, "32mb": [251, 348], "slighti": 252, "abcddfghiklmpsvxyi": 257, "violat": 258, "pnpv": 263, "functuin": [270, 290], "dlprcenpvw": [281, 285], "dlpcenpv": [281, 285], "pnv": [281, 285], "ovewrit": 300, "verfi": [303, 323], "ing": 333, "ulong_maxb": 348, "eligibil": 348, "8388608b": [348, 452], "16777217bb": 348, "1b": [348, 355], "1048576bb": 348, "1h": 348, "10min": 348, "32kb": [348, 356], "fs_arc_meta_limit": 348, "minumum": 348, "512kb": 348, "4mb": 348, "5min": 348, "16gb": 348, "nonzer": 348, "operatinon": 348, "100mb": 348, "50mb": 348, "2h": 348, "2mb": 348, "64kb": 348, "16384b": 348, "15min": 348, "zil_min_commit_timeout": [348, 452], "786432b": 348, "768kb": 348, "scarc": 354, "sendstream": 355, "raidzm": 356, "legacyno": [357, 462], "abcddfghiklmnpsvxyi": 362, "dlpvrbcehnpsvw": [386, 390], "dlpvrbcehnpvw": [386, 390], "256b": 401, "deiglppstvx": [432, 539], "argumentss": 437, "tear": 451, "led": 451, "anew": 451, "zfs_ddt_zap_default_b": 452, "zfs_ddt_zap_default_ib": 452, "12743384782310107047": 549, "sda9": [549, 558, 562], "c0t0d0": [550, 551, 552, 553, 554, 555, 556, 557, 560, 561], "c0t0d1": [550, 551, 552, 553, 555, 556, 557], "c0t0d2": [550, 552, 557], "10121266328238932306": [550, 551], "reattach": 551, "5187963178597328409": 552, "irrevoc": 553, "13783646421373024673": [554, 555], "toplevel": 554, "unrepl": 555, "irreversibli": 555, "affirm": 555, "xv": 556, "0h0m": 557, "58": 557, "5k": 557, "203768": 557, "sdb9": [558, 562], "erratum": 559, "vdev0": 559, "vdev1": 559, "vdev2": 559, "vdev3": 559, "1165955789558693437": 559, "crypt1": 559, "newcrypt1": 559, "snap5": 559, "reimport": 559, "alert": 559, "cryptograph": 559, "14702934086626715962": 560, "0x1435718c": 560, "47": 560, "intervent": [561, 562, 563], "c0t1d0": 561, "shortli": 563, "adminstr": 563, "c3t2d0": 563, "c5t3d0": 563}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"checksum": [1, 48], "Their": 1, "us": [1, 18, 19, 20, 21, 22, 35, 36, 38, 43, 44, 54], "zf": [1, 4, 7, 8, 12, 14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 48, 50, 51, 54, 57, 72, 75, 83, 85, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 176, 177, 179, 181, 185, 198, 199, 201, 203, 207, 222, 223, 225, 227, 231, 232, 233, 250, 251, 253, 255, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 348, 351, 358, 360, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 452, 455, 463, 465, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564], "algorithm": 1, "acceler": 1, "microbenchmark": 1, "disabl": 1, "featur": [2, 80, 178, 200, 224, 252, 355, 460], "flag": 2, "compat": 2, "refer": 2, "materi": 2, "implement": 2, "per": 2, "o": [2, 49, 51, 561, 562], "raidz": [3, 48], "introduct": [3, 5, 47], "space": [3, 33, 49, 54], "effici": 3, "perform": [3, 47, 52, 54], "consider": [3, 33, 54], "write": [3, 46], "troubleshoot": [4, 18, 19, 
20, 22, 35, 36, 38, 43, 44], "todo": 4, "about": 4, "log": [4, 563], "file": [4, 7, 21, 49, 54, 70, 73, 174, 196, 219, 247, 346, 349, 450, 453], "gener": [4, 49, 54, 103, 231, 273, 378, 483], "kernel": [4, 7, 33, 43, 44, 54], "modul": [4, 48, 177, 199, 220, 223, 248, 251], "debug": [4, 48], "messag": [4, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564], "unkil": 4, "process": [4, 47], "event": [4, 140, 176, 198, 222, 250, 309, 413, 520], "draid": 5, "creat": [5, 10, 54, 93, 137, 263, 306, 368, 410, 473, 517], "vdev": [5, 48], "rebuild": [5, 33], "distribut": 5, "spare": 5, "rebalanc": 5, "basic": [6, 49], "concept": [6, 49], "content": [6, 13, 15, 17, 18, 19, 20, 22, 23, 26, 29, 32, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 47, 49, 52, 54, 58, 60, 564], "buildbot": 7, "option": [7, 8, 18, 19, 20, 22, 33, 35, 43, 44], "choos": 7, "builder": 7, "prevent": 7, "commit": [7, 10], "from": [7, 8, 21, 54], "being": 7, "built": 7, "test": [7, 8, 10, 12, 26, 32, 66, 446], "submit": 7, "style": 7, "onli": 7, "requir": [7, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 54], "spl": [7, 57, 71, 220, 248, 347, 451], "version": [7, 558], "build": [7, 8], "specif": [7, 49], "pull": [7, 10], "request": [7, 10], "branch": [7, 9, 57], "name": [7, 54], "zfsonlinux": 7, "repositori": [7, 8, 10, 32], "linux": [7, 12, 14, 15, 16, 17, 21, 31, 49, 54], "4": [7, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 70, 71, 72, 346, 347, 348, 450, 451, 452, 559], "14": [7, 549], "step": [7, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "overrid": 7, "skip": 7, "lustr": 7, "without": 7, "ldiskf": 7, "configur": [7, 8, 14, 16, 18, 19, 20, 22, 25, 28, 31, 33, 35, 36, 37, 38, 39, 43, 44, 54, 550, 551, 552, 553, 557], "github": [8, 10], "instal": [8, 14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 27, 28, 29, 31, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 54], "depend": 8, "develop": [8, 13, 27], "In": 8, "tree": 8, "clone": [8, 10, 92, 262, 367, 472], "run": 8, "zloop": 8, "sh": 8, "custom": 9, "packag": 9, "rhel": [9, 30, 32], "cento": [9, 30], "fedora": [9, 24, 25, 26], "dkm": [9, 32], "kmod": [9, 32], "kabi": [9, 32], "track": [9, 32], "debian": [9, 18, 19, 20, 21, 22, 23], "ubuntu": [9, 35, 36, 37, 38, 39, 40], "get": [9, 41, 96, 142, 266, 311, 371, 415, 476, 522], "sourc": 9, "code": [9, 54], "releas": [9, 19, 20, 22, 32, 35, 36, 37, 57, 112, 282, 387, 492], "tarbal": 9, "git": [9, 10, 57], "master": [9, 57, 168], "beginn": 10, "zol": 10, "edit": 10, "first": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "time": [10, 47], "setup": [10, 12, 37, 39], "initi": [10, 145, 314, 418, 525], "prepar": [10, 14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 38, 43, 44], "make": 10, "chang": [10, 12, 54, 91, 261, 366, 471], "your": 10, "patch": [10, 12, 33], "befor": [10, 33], "push": 10, "correct": 10, "issu": [10, 47], "maintain": [10, 57], "final": [10, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "word": 10, "openzf": [11, 12, 54, 60], "except": 11, "format": [11, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 47, 49, 54, 73, 174, 196, 219, 247, 349, 453], "port": 12, "environ": [12, 18, 19, 20, 22, 33, 35, 36, 38, 43, 44], "pick": 12, "cherri": 12, "manual": [12, 48], "merg": 12, "resourc": 13, "alpin": [14, 15], "root": [14, 15, 16, 17, 18, 19, 20, 22, 23, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44], "system": [14, 16, 18, 19, 20, 22, 25, 28, 31, 35, 36, 37, 38, 39, 43, 44, 54, 84, 180, 202, 226, 254, 359, 464], "arch": [16, 17], "bootload": [16, 25, 31], "support": [17, 18, 19, 20, 21, 22, 29, 
33, 35, 36, 37, 38, 39, 43, 44, 54], "overview": [17, 18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "contribut": [17, 18, 19, 20, 22, 29, 35, 36, 37, 38, 39, 43, 44], "bookworm": 18, "tabl": [18, 19, 20, 22, 23, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 47, 49, 54, 60], "caution": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], "encrypt": [18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 43, 44, 48], "1": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 62, 63, 64, 65, 66, 67, 68, 69, 169, 170, 171, 172, 173, 190, 191, 192, 193, 194, 195, 213, 214, 215, 216, 217, 218, 240, 241, 242, 243, 244, 245, 246, 339, 340, 341, 342, 343, 344, 345, 441, 442, 443, 444, 445, 446, 447, 448, 449, 559], "The": [18, 19, 20, 22, 33, 35, 36, 38, 43, 44, 54], "2": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 548, 559], "disk": [18, 19, 20, 21, 22, 35, 36, 37, 38, 39, 43, 44, 48, 49, 54], "3": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 559], "5": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 73, 74, 174, 175, 176, 177, 178, 196, 197, 198, 199, 200, 219, 220, 221, 222, 223, 224, 247, 248, 249, 250, 251, 252, 349, 350, 453, 454], "grub": [18, 19, 20, 22, 35, 36, 38], "6": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 189], "boot": [18, 19, 20, 21, 22, 33, 35, 36, 37, 38, 39, 43, 44, 54], "7": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44, 75, 76, 77, 78, 79, 80, 81, 82, 212, 351, 352, 353, 354, 355, 356, 357, 455, 456, 457, 458, 459, 460, 461, 462], "swap": [18, 19, 20, 22, 35, 43, 44, 54], "8": [18, 19, 20, 22, 35, 36, 38, 43, 44, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547], "full": [18, 19, 20, 22, 35, 36, 37, 38, 39], "softwar": [18, 19, 20, 22, 35, 36, 37, 38, 39], "9": [18, 19, 20, 22, 35, 43, 44], "cleanup": [18, 19, 20, 22, 35, 36, 37, 38, 39, 43, 44], 
"rescu": [18, 19, 20, 22, 35, 36, 38, 43, 44], "live": [18, 19, 20, 22, 35, 36, 38, 43, 44], "cd": [18, 19, 20, 22, 35, 36, 38, 43, 44], "areca": [18, 19, 20, 22, 35, 36, 38, 43, 44], "mpt2sa": [18, 19, 20, 22, 35, 36, 38, 43, 44], "qemu": [18, 19, 20, 22, 35, 36, 38, 43, 44, 49], "kvm": [18, 19, 20, 22, 35, 36, 38, 43, 44, 49], "xen": [18, 19, 20, 22, 35, 36, 38, 43, 44, 49, 54], "vmware": [18, 19, 20, 22, 35, 36, 38, 43, 44], "bullsey": 19, "newer": [19, 20, 22, 35, 36, 37], "avail": [19, 20, 22, 35, 36, 37], "buster": 20, "gnu": 21, "initrd": [21, 33], "document": [21, 53, 60], "paramet": [21, 48, 177, 199, 220, 223, 248, 251], "pool": [21, 49, 54, 55, 555, 561, 562], "import": [21, 48, 144, 313, 417, 524], "dev": [21, 54], "cach": [21, 49, 54, 549], "last": 21, "ditch": 21, "attempt": 21, "snapshot": [21, 33, 48, 55, 118, 288, 393, 498], "rollback": [21, 114, 284, 389, 494], "select": [21, 54], "dynam": 21, "nativ": 21, "filesystem": [21, 43, 44, 48], "separ": 21, "descend": 21, "stretch": 22, "relat": 23, "topic": 23, "note": [25, 43], "post": [25, 31], "installaion": [25, 31], "repo": 26, "freebsd": 27, "nixo": [28, 29], "rocki": 31, "base": 32, "distro": [32, 49], "previou": 32, "minor": 32, "el": 32, "slackwar": [33, 34], "problem": [33, 54], "partit": [33, 49], "layout": 33, "loader": 33, "reboot": 33, "other": [33, 54], "18": 35, "04": [35, 36, 37, 38, 39], "20": [36, 37], "errata": [36, 559], "Not": 36, "mount": [36, 85, 103, 104, 181, 203, 227, 231, 255, 273, 274, 360, 378, 379, 465, 483, 484], "accountsservic": 36, "raspberri": [36, 37, 38, 39], "pi": [36, 37, 38, 39], "usb": [37, 39, 47], "22": [38, 39], "start": 41, "opensus": [42, 43, 44], "extern": [42, 43, 44], "link": [42, 43, 44], "leap": 43, "grub2": [43, 44], "systemd": [43, 44], "10": [43, 44], "11": [43, 44], "12": [43, 44], "tumblewe": 44, "licens": [45, 54], "async": 46, "hardwar": [47, 54], "bio": 47, "cpu": [47, 48], "microcod": 47, "updat": [47, 49], "background": 47, "ecc": [47, 54], "memori": [47, 48, 54], "drive": 47, "interfac": 47, "sa": 47, "versu": [47, 49], "sata": 47, "hard": 47, "adapt": [47, 49], "control": 47, "raid": [47, 49], "sector": 47, "size": [47, 49], "error": 47, "recoveri": 47, "rpm": 47, "speed": 47, "command": [47, 64, 84, 170, 180, 191, 202, 214, 226, 242, 254, 341, 359, 444, 464], "queu": 47, "nand": 47, "flash": 47, "ssd": [47, 48], "nvme": [47, 49], "low": [47, 49], "level": [47, 49, 554], "power": 47, "failur": [47, 561, 562, 563], "protect": 47, "criteria": 47, "inclus": 47, "list": [47, 56, 101, 148, 271, 317, 376, 421, 481, 528], "page": [47, 48, 61], "ata": 47, "trim": [47, 48, 161, 330, 434, 541], "scsi": 47, "unmap": 47, "optan": 47, "3d": 47, "xpoint": 47, "pwr_ok": 47, "signal": 47, "psu": 47, "hold": [47, 98, 268, 373, 478], "up": [47, 54], "batteri": 47, "tag": [48, 57], "abd": 48, "alloc": [48, 49], "arc": 48, "channel_program": 48, "checkpoint": [48, 135, 304, 408, 515], "compress": [48, 49], "dataset": [48, 49], "dbuf_cach": 48, "dedup": 48, "delai": [48, 50], "delet": 48, "discard": 48, "dmu": 48, "fragment": 48, "hdd": 48, "hostid": [48, 560], "l2arc": 48, "metadata": [48, 555], "metaslab": [48, 49], "mirror": 48, "mmp": 48, "panic": 48, "prefetch": 48, "qat": 48, "receiv": [48, 54, 55, 109, 279, 384, 489], "remov": [48, 152, 321, 425, 532], "resilv": [48, 155, 324, 428, 535], "scrub": [48, 156, 325, 429, 536], "send": [48, 54, 115, 285, 390, 495], "spa": 48, "special_vdev": 48, "taskq": 48, "vdev_cach": 48, "vdev_initi": 48, "vdev_remov": 48, "volum": 48, 
"write_throttl": 48, "zed": [48, 88, 184, 206, 230, 258, 363, 468], "zil": 48, "zio_schedul": 48, "index": 48, "ignore_hole_birth": 48, "l2arc_exclude_speci": 48, "l2arc_feed_again": 48, "l2arc_feed_min_m": 48, "l2arc_feed_sec": 48, "l2arc_headroom": 48, "l2arc_headroom_boost": 48, "l2arc_nocompress": 48, "l2arc_meta_perc": 48, "l2arc_mfuonli": 48, "l2arc_noprefetch": 48, "l2arc_norw": 48, "l2arc_rebuild_blocks_min_l2s": 48, "l2arc_rebuild_en": 48, "l2arc_trim_ahead": 48, "l2arc_write_boost": 48, "l2arc_write_max": 48, "metaslab_aliquot": 48, "metaslab_bias_en": 48, "zfs_metaslab_segment_weight_en": 48, "zfs_metaslab_switch_threshold": 48, "metaslab_debug_load": 48, "metaslab_debug_unload": 48, "metaslab_fragmentation_factor_en": 48, "metaslabs_per_vdev": 48, "metaslab_preload_en": 48, "metaslab_lba_weighting_en": 48, "spa_config_path": 48, "spa_asize_infl": 48, "spa_load_verify_data": 48, "spa_load_verify_metadata": 48, "spa_load_verify_maxinflight": 48, "spa_slop_shift": 48, "zfetch_array_rd_sz": 48, "zfetch_max_dist": 48, "zfetch_max_stream": 48, "zfetch_min_sec_reap": 48, "zfs_arc_dnode_limit_perc": 48, "zfs_arc_dnode_limit": 48, "zfs_arc_dnode_reduce_perc": 48, "zfs_arc_average_blocks": 48, "zfs_arc_evict_batch_limit": 48, "zfs_arc_grow_retri": 48, "zfs_arc_lotsfree_perc": 48, "zfs_arc_max": 48, "zfs_arc_meta_adjust_restart": 48, "zfs_arc_meta_limit": 48, "zfs_arc_meta_limit_perc": 48, "zfs_arc_meta_min": 48, "zfs_arc_meta_prun": 48, "zfs_arc_meta_strategi": 48, "zfs_arc_min": 48, "zfs_arc_min_prefetch_m": 48, "zfs_arc_min_prescient_prefetch_m": 48, "zfs_multilist_num_sublist": 48, "zfs_arc_overflow_shift": 48, "zfs_arc_p_min_shift": 48, "zfs_arc_p_dampener_dis": 48, "zfs_arc_shrink_shift": 48, "zfs_arc_pc_perc": 48, "zfs_arc_sys_fre": 48, "zfs_autoimport_dis": 48, "zfs_commit_timeout_pct": 48, "zfs_dbgmsg_enabl": 48, "zfs_dbgmsg_maxsiz": 48, "zfs_dbuf_state_index": 48, "zfs_deadman_en": 48, "zfs_deadman_checktime_m": 48, "zfs_deadman_ziotime_m": 48, "zfs_deadman_synctime_m": 48, "zfs_deadman_failmod": 48, "zfs_dedup_prefetch": 48, "zfs_delete_block": 48, "zfs_delay_min_dirty_perc": 48, "zfs_delay_scal": 48, "zfs_dirty_data_max": 48, "zfs_dirty_data_max_perc": 48, "zfs_dirty_data_max_max": 48, "zfs_dirty_data_max_max_perc": 48, "zfs_dirty_data_sync": 48, "zfs_dirty_data_sync_perc": 48, "zfs_fletcher_4_impl": 48, "zfs_free_bpobj_en": 48, "zfs_free_max_block": 48, "zfs_vdev_async_read_max_act": 48, "zfs_vdev_async_read_min_act": 48, "zfs_vdev_async_write_active_max_dirty_perc": 48, "zfs_vdev_async_write_active_min_dirty_perc": 48, "zfs_vdev_async_write_max_act": 48, "zfs_vdev_async_write_min_act": 48, "zfs_vdev_max_act": 48, "zfs_vdev_scrub_max_act": 48, "zfs_vdev_scrub_min_act": 48, "zfs_vdev_sync_read_max_act": 48, "zfs_vdev_sync_read_min_act": 48, "zfs_vdev_sync_write_max_act": 48, "zfs_vdev_sync_write_min_act": 48, "zfs_vdev_queue_depth_pct": 48, "zfs_disable_dup_evict": 48, "zfs_expire_snapshot": 48, "zfs_admin_snapshot": 48, "zfs_flag": 48, "zfs_free_leak_on_eio": 48, "zfs_free_min_time_m": 48, "zfs_immediate_write_sz": 48, "zfs_max_records": 48, "zfs_mdcomp_dis": 48, "zfs_metaslab_fragmentation_threshold": 48, "zfs_mg_fragmentation_threshold": 48, "zfs_mg_noalloc_threshold": 48, "zfs_multihost_histori": 48, "zfs_multihost_interv": 48, "zfs_multihost_import_interv": 48, "zfs_multihost_fail_interv": 48, "zfs_delays_per_second": 48, "zfs_checksums_per_second": 48, "zfs_no_scrub_io": 48, "zfs_no_scrub_prefetch": 48, "zfs_nocacheflush": 48, "zfs_nopwrite_en": 48, 
"zfs_dmu_offset_next_sync": 48, "zfs_pd_bytes_max": 48, "zfs_per_txg_dirty_frees_perc": 48, "zfs_prefetch_dis": 48, "zfs_read_chunk_s": 48, "zfs_read_histori": 48, "zfs_read_history_hit": 48, "zfs_recov": 48, "zfs_resilver_min_time_m": 48, "zfs_scan_min_time_m": 48, "zfs_scan_checkpoint_intv": 48, "zfs_scan_fill_weight": 48, "zfs_scan_issue_strategi": 48, "zfs_scan_legaci": 48, "zfs_scan_max_ext_gap": 48, "zfs_scan_mem_lim_fact": 48, "zfs_scan_mem_lim_soft_fact": 48, "zfs_scan_vdev_limit": 48, "zfs_send_corrupt_data": 48, "zfs_sync_pass_deferred_fre": 48, "zfs_sync_pass_dont_compress": 48, "zfs_sync_pass_rewrit": 48, "zfs_sync_taskq_batch_pct": 48, "zfs_txg_histori": 48, "zfs_txg_timeout": 48, "zfs_vdev_aggregation_limit": 48, "zfs_vdev_cache_s": 48, "zfs_vdev_cache_bshift": 48, "zfs_vdev_cache_max": 48, "zfs_vdev_mirror_rotating_inc": 48, "zfs_vdev_mirror_non_rotating_inc": 48, "zfs_vdev_mirror_rotating_seek_inc": 48, "zfs_vdev_mirror_rotating_seek_offset": 48, "zfs_vdev_mirror_non_rotating_seek_inc": 48, "zfs_vdev_read_gap_limit": 48, "zfs_vdev_write_gap_limit": 48, "zfs_vdev_schedul": 48, "zfs_vdev_raidz_impl": 48, "zfs_zevent_col": 48, "zfs_zevent_consol": 48, "zfs_zevent_len_max": 48, "zfs_zil_clean_taskq_maxalloc": 48, "zfs_zil_clean_taskq_minalloc": 48, "zfs_zil_clean_taskq_nthr_pct": 48, "zil_replay_dis": 48, "zil_slog_bulk": 48, "zio_delay_max": 48, "zio_dva_throttle_en": 48, "zio_requeue_io_start_cut_in_lin": 48, "zio_taskq_batch_pct": 48, "zvol_inhibit_dev": 48, "zvol_major": 48, "zvol_max_discard_block": 48, "zvol_prefetch_byt": 48, "zvol_request_sync": 48, "zvol_thread": 48, "zvol_volmod": 48, "zfs_qat_dis": 48, "zfs_qat_checksum_dis": 48, "zfs_qat_compress_dis": 48, "zfs_qat_encrypt_dis": 48, "dbuf_cache_hiwater_pct": 48, "dbuf_cache_lowater_pct": 48, "dbuf_cache_max_byt": 48, "dbuf_cache_max_shift": 48, "dmu_object_alloc_chunk_shift": 48, "send_holes_without_birth_tim": 48, "zfs_abd_scatter_en": 48, "zfs_abd_scatter_max_ord": 48, "zfs_compressed_arc_en": 48, "zfs_key_max_salt_us": 48, "zfs_object_mutex_s": 48, "zfs_scan_strict_mem_lim": 48, "zfs_send_queue_length": 48, "zfs_recv_queue_length": 48, "zfs_arc_min_prefetch_lifespan": 48, "zfs_scan_ignore_error": 48, "zfs_top_maxinflight": 48, "zfs_resilver_delai": 48, "zfs_scrub_delai": 48, "zfs_scan_idl": 48, "icp_aes_impl": 48, "icp_gcm_impl": 48, "zfs_abd_scatter_min_s": 48, "zfs_unlink_suspend_progress": 48, "spa_load_verify_shift": 48, "spa_load_print_vdev_tre": 48, "zfs_max_missing_tvd": 48, "dbuf_metadata_cache_shift": 48, "dbuf_metadata_cache_max_byt": 48, "dbuf_cache_shift": 48, "metaslab_force_gang": 48, "zfs_vdev_default_ms_count": 48, "vdev_removal_max_span": 48, "zfs_removal_ignore_error": 48, "zfs_removal_suspend_progress": 48, "zfs_condense_indirect_commit_entry_delay_m": 48, "zfs_condense_indirect_vdevs_en": 48, "zfs_condense_max_obsolete_byt": 48, "zfs_condense_min_mapping_byt": 48, "zfs_vdev_initializing_max_act": 48, "zfs_vdev_initializing_min_act": 48, "zfs_vdev_removal_max_act": 48, "zfs_vdev_removal_min_act": 48, "zfs_vdev_trim_max_act": 48, "zfs_vdev_trim_min_act": 48, "zfs_initialize_valu": 48, "zfs_lua_max_instrlimit": 48, "zfs_lua_max_memlimit": 48, "zfs_max_dataset_nest": 48, "zfs_ddt_data_is_speci": 48, "zfs_user_indirect_is_speci": 48, "zfs_reconstruct_indirect_combinations_max": 48, "zfs_send_unmodified_spill_block": 48, "zfs_spa_discard_memory_limit": 48, "zfs_special_class_metadata_reserve_pct": 48, "zfs_trim_extent_bytes_max": 48, "zfs_trim_extent_bytes_min": 48, "zfs_trim_metaslab_skip": 48, 
"zfs_trim_queue_limit": 48, "zfs_trim_txg_batch": 48, "zfs_vdev_aggregate_trim": 48, "zfs_vdev_aggregation_limit_non_rot": 48, "zil_nocacheflush": 48, "zio_deadman_log_al": 48, "zio_decompress_fail_fract": 48, "zio_slow_io_m": 48, "vdev_validate_skip": 48, "zfs_async_block_max_block": 48, "zfs_checksum_events_per_second": 48, "zfs_disable_ivset_guid_check": 48, "zfs_obsolete_min_time_m": 48, "zfs_override_estimate_records": 48, "zfs_remove_max_seg": 48, "zfs_resilver_disable_def": 48, "zfs_scan_suspend_progress": 48, "zfs_scrub_min_time_m": 48, "zfs_slow_io_events_per_second": 48, "zfs_vdev_min_ms_count": 48, "zfs_vdev_ms_count_limit": 48, "spl_hostid": 48, "spl_hostid_path": 48, "spl_kmem_alloc_max": 48, "spl_kmem_alloc_warn": 48, "spl_kmem_cache_expir": 48, "spl_kmem_cache_kmem_limit": 48, "spl_kmem_cache_max_s": 48, "spl_kmem_cache_obj_per_slab": 48, "spl_kmem_cache_obj_per_slab_min": 48, "spl_kmem_cache_reclaim": 48, "spl_kmem_cache_slab_limit": 48, "spl_max_show_task": 48, "spl_panic_halt": 48, "spl_taskq_kick": 48, "spl_taskq_thread_bind": 48, "spl_taskq_thread_dynam": 48, "spl_taskq_thread_prior": 48, "spl_taskq_thread_sequenti": 48, "spl_kmem_cache_kmem_thread": 48, "spl_kmem_cache_magazine_s": 48, "workload": 49, "tune": [49, 52], "replac": [49, 154, 323, 427, 534], "align": 49, "shift": 49, "ashift": 49, "z": 49, "stripe": 49, "width": 49, "records": 49, "larger": [49, 54], "record": 49, "zvol": [49, 54], "volblocks": 49, "dedupl": 49, "geometri": 49, "whole": 49, "recommend": 49, "init_on_alloc": 49, "atim": 49, "free": 49, "lz4": 49, "synchron": 49, "i": [49, 51, 54, 55, 561, 562], "overprovis": 49, "secur": 49, "eras": 49, "trick": 49, "bit": [49, 54], "torrent": 49, "databas": 49, "mysql": 49, "innodb": 49, "postgresql": 49, "sqlite": 49, "server": 49, "samba": 49, "sequenti": 49, "video": 49, "game": 49, "directori": 49, "lutri": 49, "steam": 49, "wine": 49, "virtual": 49, "machin": 49, "transact": 50, "zio": 51, "schedul": 51, "admin": 53, "faq": [54, 55], "what": 54, "do": [54, 55], "have": [54, 55], "architectur": 54, "32": 54, "v": 54, "64": 54, "when": 54, "set": [54, 116, 157, 286, 326, 391, 430, 496, 537], "etc": 54, "vdev_id": [54, 74, 86, 175, 182, 197, 204, 221, 228, 249, 256, 350, 361, 454, 466], "conf": [54, 74, 175, 197, 221, 249, 350, 454], "an": [54, 55], "exist": 54, "zpool": [54, 80, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 178, 187, 200, 210, 224, 237, 252, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 355, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 460, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544], "new": 54, "stream": 54, "hole_birth": [54, 55], "bug": 54, "larg": 54, "block": 54, "ceph": 54, "guidelin": 54, "advanc": 54, "than": 54, "expect": 54, "devic": [54, 70, 346, 450, 550, 551, 552, 553, 554, 557], "hypervisor": 54, "dom0": 54, "udisks2": 54, "mapper": 54, "entri": 54, "report": 54, "doe": 54, "conduct": 54, "hole": 55, "birth": 55, "short": 55, "explan": 55, "enabl": 55, "how": 55, "know": 55, "am": 55, "affect": 55, "ani": 55, "less": 55, "pain": 55, "wai": 55, "fix": 55, "thi": 55, "we": 
55, "alreadi": 55, "long": 55, "mail": 56, "sign": 57, "kei": [57, 91, 102, 121, 261, 272, 291, 366, 377, 396, 471, 482, 501], "check": 57, "signatur": 57, "project": [58, 106, 276, 381, 486], "commun": 58, "man": 61, "arcstat": [62, 240, 339, 442], "cstyle": [63, 169, 190, 213, 241, 340, 443], "user": [64, 170, 191, 214, 242, 341, 444], "raidz_test": [65, 192, 215, 243, 342, 445], "runner": [66, 446], "zhack": [67, 171, 193, 216, 244, 343, 447], "ztest": [68, 173, 195, 217, 245, 344, 448], "zvol_wait": [69, 218, 246, 345, 449], "special": [70, 346, 450], "convent": [73, 174, 196, 219, 247, 349, 453], "dracut": [75, 351, 455], "miscellan": [76, 352, 456], "vdevprop": [77, 457], "zfsconcept": [78, 298, 353, 458], "zfsprop": [79, 234, 299, 354, 459], "zpoolconcept": [81, 334, 356, 461], "zpoolprop": [82, 335, 357, 462], "fsck": [83, 179, 201, 225, 253, 358, 463], "administr": [84, 180, 202, 226, 254, 359, 464], "zdb": [87, 183, 205, 229, 257, 362, 467], "allow": [89, 259, 364, 469], "bookmark": [90, 260, 365, 470], "destroi": [94, 138, 264, 307, 369, 411, 474, 518], "diff": [95, 265, 370, 475], "groupspac": [97, 267, 372, 477], "inherit": [99, 269, 374, 479], "jail": [100, 270, 375, 480], "load": [102, 272, 377, 482], "program": [105, 232, 275, 380, 485], "projectspac": [107, 277, 382, 487], "promot": [108, 278, 383, 488], "recv": [110, 280, 385, 490], "redact": [111, 281, 386, 491], "renam": [113, 283, 388, 493], "share": [117, 287, 392, 497], "unallow": [119, 289, 394, 499], "unjail": [120, 290, 395, 500], "unload": [121, 291, 396, 501], "unmount": [122, 292, 397, 502], "unzon": [123, 503], "upgrad": [124, 162, 293, 331, 398, 435, 504, 542], "userspac": [125, 294, 399, 505], "wait": [126, 163, 295, 332, 400, 436, 506, 543], "zone": [127, 507], "zfs_ids_to_path": [129, 297, 402, 509], "zfs_prepare_disk": [130, 403, 510], "zgenhostid": [131, 208, 235, 300, 404, 511], "zinject": [132, 186, 209, 236, 301, 405, 512], "add": [133, 302, 406, 513], "attach": [134, 303, 407, 514], "clear": [136, 305, 409, 516], "detach": [139, 308, 412, 519], "export": [141, 310, 414, 521], "histori": [143, 312, 416, 523], "iostat": [146, 315, 419, 526], "labelclear": [147, 316, 420, 527], "offlin": [149, 318, 422, 529], "onlin": [150, 319, 423, 530], "reguid": [151, 320, 424, 531], "reopen": [153, 322, 426, 533], "split": [158, 327, 431, 538], "statu": [159, 328, 432, 539], "sync": [160, 329, 433, 540], "zpool_influxdb": [165, 438, 545], "zstream": [166, 336, 439, 546], "zstreamdump": [167, 188, 211, 238, 337, 440, 547], "zpio": [172, 194], "v0": [189, 212, 239], "v2": [338, 441, 548], "0": 338, "id": [549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "8000": [549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563], "corrupt": [549, 552, 553, 555, 556], "2q": 550, "miss": [550, 551, 554], "replic": [550, 551, 552, 553, 557], "3c": 551, "non": [551, 553], "4j": 552, "label": [552, 553, 560], "5e": 553, "6x": 554, "top": 554, "72": 555, "8a": 556, "data": 556, "9p": 557, "fail": 557, "a5": 558, "incompat": 558, "er": 559, "ei": 560, "mismatch": 560, "hc": 561, "jq": 562, "k4": 563, "intent": 563, "read": 563}, "envversion": {"sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx.ext.todo": 2, "sphinx.ext.intersphinx": 1, "sphinx": 58}, "alltitles": 
{"Checksums and Their Use in ZFS": [[1, "checksums-and-their-use-in-zfs"]], "Checksum Algorithms": [[1, "checksum-algorithms"]], "Checksum Accelerators": [[1, "checksum-accelerators"]], "Checksum Microbenchmarks": [[1, "checksum-microbenchmarks"]], "Disabling Checksums": [[1, "disabling-checksums"]], "Feature Flags": [[2, "feature-flags"]], "Compatibility": [[2, "compatibility"]], "Reference materials": [[2, "reference-materials"]], "Feature flags implementation per OS": [[2, "feature-flags-implementation-per-os"]], "RAIDZ": [[3, "raidz"]], "Introduction": [[3, "introduction"], [5, "introduction"], [47, "introduction"]], "Space efficiency": [[3, "space-efficiency"]], "Performance considerations": [[3, "performance-considerations"]], "Write": [[3, "write"]], "Troubleshooting": [[4, "troubleshooting"], [18, "troubleshooting"], [19, "troubleshooting"], [20, "troubleshooting"], [22, "troubleshooting"], [35, "troubleshooting"], [36, "troubleshooting"], [38, "troubleshooting"], [43, "troubleshooting"], [44, "troubleshooting"]], "Todo": [[4, "id1"]], "About Log Files": [[4, "about-log-files"]], "Generic Kernel Log": [[4, "generic-kernel-log"]], "ZFS Kernel Module Debug Messages": [[4, "zfs-kernel-module-debug-messages"]], "Unkillable Process": [[4, "unkillable-process"]], "ZFS Events": [[4, "zfs-events"]], "dRAID": [[5, "draid"]], "Create a dRAID vdev": [[5, "create-a-draid-vdev"]], "Rebuilding to a Distributed Spare": [[5, "rebuilding-to-a-distributed-spare"]], "Rebalancing": [[5, "rebalancing"]], "Basic Concepts": [[6, "basic-concepts"]], "Contents:": [[6, null], [13, null], [52, null], [58, null], [564, null]], "Buildbot Options": [[7, "buildbot-options"]], "Choosing Builders": [[7, "choosing-builders"]], "Preventing a commit from being built and tested.": [[7, "preventing-a-commit-from-being-built-and-tested"]], "Submitting a commit to STYLE and TEST builders only.": [[7, "submitting-a-commit-to-style-and-test-builders-only"]], "Requiring SPL Versions": [[7, "requiring-spl-versions"]], "Build SPL from a specific pull request": [[7, "build-spl-from-a-specific-pull-request"]], "Build SPL branch spl-branch-name from zfsonlinux/spl repository": [[7, "build-spl-branch-spl-branch-name-from-zfsonlinux-spl-repository"]], "Requiring Kernel Version": [[7, "requiring-kernel-version"]], "Build Linux Kernel Version 4.14": [[7, "build-linux-kernel-version-4-14"]], "Build Steps Overrides": [[7, "build-steps-overrides"]], "Skip building the SPL and build Lustre without ldiskfs": [[7, "skip-building-the-spl-and-build-lustre-without-ldiskfs"]], "Build ZFS Only": [[7, "build-zfs-only"]], "Configuring Tests with the TEST File": [[7, "configuring-tests-with-the-test-file"]], "Building ZFS": [[8, "building-zfs"]], "GitHub Repositories": [[8, "github-repositories"]], "Installing Dependencies": [[8, "installing-dependencies"]], "Build Options": [[8, "build-options"]], "Developing In-Tree": [[8, "developing-in-tree"]], "Clone from GitHub": [[8, "clone-from-github"]], "Configure and Build": [[8, "configure-and-build"]], "Install": [[8, "install"]], "Running zloop.sh and zfs-tests.sh": [[8, "running-zloop-sh-and-zfs-tests-sh"]], "Custom Packages": [[9, "custom-packages"]], "RHEL, CentOS and Fedora": [[9, "rhel-centos-and-fedora"]], "DKMS": [[9, "dkms"], [9, "dkms-1"], [32, "dkms"]], "kmod": [[9, "kmod"], [9, "kmod-1"]], "kABI-tracking kmod": [[9, "kabi-tracking-kmod"], [32, "kabi-tracking-kmod"]], "Debian and Ubuntu": [[9, "debian-and-ubuntu"]], "Get the Source Code": [[9, "get-the-source-code"]], "Released Tarball": 
[[9, "released-tarball"]], "Git Master Branch": [[9, "git-master-branch"]], "Git and GitHub for beginners (ZoL edition)": [[10, "git-and-github-for-beginners-zol-edition"]], "First time setup": [[10, "first-time-setup"]], "Cloning the initial repository": [[10, "cloning-the-initial-repository"]], "Preparing and making changes": [[10, "preparing-and-making-changes"]], "Testing your patches before pushing": [[10, "testing-your-patches-before-pushing"]], "Committing your changes to be pushed": [[10, "committing-your-changes-to-be-pushed"]], "Pushing and creating the pull request": [[10, "pushing-and-creating-the-pull-request"]], "Correcting issues with your pull request": [[10, "correcting-issues-with-your-pull-request"]], "Maintaining your repository": [[10, "maintaining-your-repository"]], "Final words": [[10, "final-words"]], "OpenZFS Exceptions": [[11, "openzfs-exceptions"]], "Format:": [[11, "format"]], "OpenZFS Patches": [[12, "openzfs-patches"]], "Porting OpenZFS changes to ZFS on Linux": [[12, "porting-openzfs-changes-to-zfs-on-linux"]], "Setup the Environment": [[12, "setup-the-environment"]], "Pick a patch": [[12, "pick-a-patch"]], "Porting a Patch": [[12, "porting-a-patch"]], "Cherry-pick": [[12, "cherry-pick"]], "Manual merge": [[12, "manual-merge"]], "Testing a Patch": [[12, "testing-a-patch"]], "Merging the Patch": [[12, "merging-the-patch"]], "Porting ZFS on Linux changes to OpenZFS": [[12, "porting-zfs-on-linux-changes-to-openzfs"]], "Developer Resources": [[13, "developer-resources"]], "Alpine Linux Root on ZFS": [[14, "alpine-linux-root-on-zfs"]], "Preparation": [[14, "preparation"], [16, "preparation"], [25, "preparation"], [28, "preparation"], [31, "preparation"]], "System Installation": [[14, "system-installation"], [16, "system-installation"], [25, "system-installation"], [28, "system-installation"], [31, "system-installation"]], "System Configuration": [[14, "system-configuration"], [16, "system-configuration"], [25, "system-configuration"], [28, "system-configuration"], [31, "system-configuration"]], "Alpine Linux": [[15, "alpine-linux"]], "Contents": [[15, "contents"], [17, "contents"], [26, "contents"], [29, "contents"], [32, "contents"]], "Installation": [[15, "installation"], [17, "installation"], [23, "installation"], [26, "installation"], [29, "installation"], [34, "installation"], [40, "installation"], [42, "installation"], [54, "installation"]], "Root on ZFS": [[15, "root-on-zfs"], [17, "root-on-zfs"], [23, "root-on-zfs"], [26, "root-on-zfs"], [29, "root-on-zfs"], [32, "root-on-zfs"], [34, "root-on-zfs"], [40, "root-on-zfs"], [42, "root-on-zfs"]], "Arch Linux Root on ZFS": [[16, "arch-linux-root-on-zfs"]], "Bootloader": [[16, "bootloader"], [25, "bootloader"], [31, "bootloader"]], "Arch Linux": [[17, "arch-linux"]], "Support": [[17, "support"], [18, "support"], [19, "support"], [20, "support"], [22, "support"], [29, "support"], [33, "support"], [35, "support"], [36, "support"], [37, "support"], [38, "support"], [39, "support"], [43, "support"], [44, "support"]], "Overview": [[17, "overview"], [18, "overview"], [19, "overview"], [20, "overview"], [22, "overview"], [35, "overview"], [36, "overview"], [37, "overview"], [38, "overview"], [39, "overview"], [43, "overview"], [44, "overview"]], "Contribute": [[17, "contribute"], [29, "contribute"]], "Debian Bookworm Root on ZFS": [[18, "debian-bookworm-root-on-zfs"]], "Table of Contents": [[18, "table-of-contents"], [19, "table-of-contents"], [20, "table-of-contents"], [22, "table-of-contents"], [23, 
"table-of-contents"], [34, "table-of-contents"], [35, "table-of-contents"], [36, "table-of-contents"], [37, "table-of-contents"], [38, "table-of-contents"], [39, "table-of-contents"], [40, "table-of-contents"], [42, "table-of-contents"], [43, "table-of-contents"], [44, "table-of-contents"], [47, "table-of-contents"], [49, "table-of-contents"], [54, "table-of-contents"]], "Caution": [[18, "caution"], [19, "caution"], [20, "caution"], [22, "caution"], [35, "caution"], [36, "caution"], [37, "caution"], [38, "caution"], [39, "caution"], [43, "caution"], [44, "caution"]], "System Requirements": [[18, "system-requirements"], [19, "system-requirements"], [20, "system-requirements"], [22, "system-requirements"], [35, "system-requirements"], [36, "system-requirements"], [37, "system-requirements"], [38, "system-requirements"], [39, "system-requirements"], [43, "system-requirements"], [44, "system-requirements"]], "Contributing": [[18, "contributing"], [19, "contributing"], [20, "contributing"], [22, "contributing"], [35, "contributing"], [36, "contributing"], [37, "contributing"], [38, "contributing"], [39, "contributing"], [43, "contributing"], [44, "contributing"]], "Encryption": [[18, "encryption"], [19, "encryption"], [20, "encryption"], [22, "encryption"], [35, "encryption"], [36, "encryption"], [37, "encryption"], [38, "encryption"], [39, "encryption"], [43, "encryption"], [44, "encryption"]], "Step 1: Prepare The Install Environment": [[18, "step-1-prepare-the-install-environment"], [19, "step-1-prepare-the-install-environment"], [20, "step-1-prepare-the-install-environment"], [22, "step-1-prepare-the-install-environment"], [35, "step-1-prepare-the-install-environment"], [36, "step-1-prepare-the-install-environment"], [38, "step-1-prepare-the-install-environment"], [43, "step-1-prepare-the-install-environment"], [44, "step-1-prepare-the-install-environment"]], "Step 2: Disk Formatting": [[18, "step-2-disk-formatting"], [19, "step-2-disk-formatting"], [20, "step-2-disk-formatting"], [22, "step-2-disk-formatting"], [35, "step-2-disk-formatting"], [36, "step-2-disk-formatting"], [38, "step-2-disk-formatting"], [43, "step-2-disk-formatting"], [44, "step-2-disk-formatting"]], "Step 3: System Installation": [[18, "step-3-system-installation"], [19, "step-3-system-installation"], [20, "step-3-system-installation"], [22, "step-3-system-installation"], [35, "step-3-system-installation"], [36, "step-3-system-installation"], [37, "step-3-system-installation"], [38, "step-3-system-installation"], [39, "step-3-system-installation"], [43, "step-3-system-installation"], [44, "step-3-system-installation"]], "Step 4: System Configuration": [[18, "step-4-system-configuration"], [19, "step-4-system-configuration"], [20, "step-4-system-configuration"], [22, "step-4-system-configuration"], [35, "step-4-system-configuration"], [36, "step-4-system-configuration"], [37, "step-4-system-configuration"], [38, "step-4-system-configuration"], [39, "step-4-system-configuration"]], "Step 5: GRUB Installation": [[18, "step-5-grub-installation"], [19, "step-5-grub-installation"], [20, "step-5-grub-installation"], [22, "step-5-grub-installation"], [35, "step-5-grub-installation"], [36, "step-5-grub-installation"], [38, "step-5-grub-installation"]], "Step 6: First Boot": [[18, "step-6-first-boot"], [19, "step-6-first-boot"], [20, "step-6-first-boot"], [22, "step-6-first-boot"], [35, "step-6-first-boot"], [36, "step-6-first-boot"], [38, "step-6-first-boot"]], "Step 7: Optional: Configure Swap": [[18, 
"step-7-optional-configure-swap"], [19, "step-7-optional-configure-swap"], [20, "step-7-optional-configure-swap"]], "Step 8: Full Software Installation": [[18, "step-8-full-software-installation"], [19, "step-8-full-software-installation"], [20, "step-8-full-software-installation"], [22, "step-8-full-software-installation"], [35, "step-8-full-software-installation"]], "Step 9: Final Cleanup": [[18, "step-9-final-cleanup"], [19, "step-9-final-cleanup"], [20, "step-9-final-cleanup"], [22, "step-9-final-cleanup"], [35, "step-9-final-cleanup"]], "Rescuing using a Live CD": [[18, "rescuing-using-a-live-cd"], [19, "rescuing-using-a-live-cd"], [20, "rescuing-using-a-live-cd"], [22, "rescuing-using-a-live-cd"], [35, "rescuing-using-a-live-cd"], [36, "rescuing-using-a-live-cd"], [38, "rescuing-using-a-live-cd"], [43, "rescuing-using-a-live-cd"], [44, "rescuing-using-a-live-cd"]], "Areca": [[18, "areca"], [19, "areca"], [20, "areca"], [22, "areca"], [35, "areca"], [36, "areca"], [38, "areca"], [43, "areca"], [44, "areca"]], "MPT2SAS": [[18, "mpt2sas"], [19, "mpt2sas"], [20, "mpt2sas"], [22, "mpt2sas"], [35, "mpt2sas"], [36, "mpt2sas"], [38, "mpt2sas"], [43, "mpt2sas"], [44, "mpt2sas"]], "QEMU/KVM/XEN": [[18, "qemu-kvm-xen"], [19, "qemu-kvm-xen"], [20, "qemu-kvm-xen"], [22, "qemu-kvm-xen"], [35, "qemu-kvm-xen"], [36, "qemu-kvm-xen"], [38, "qemu-kvm-xen"], [43, "qemu-kvm-xen"], [44, "qemu-kvm-xen"]], "VMware": [[18, "vmware"], [19, "vmware"], [20, "vmware"], [22, "vmware"], [35, "vmware"], [36, "vmware"], [38, "vmware"], [43, "vmware"], [44, "vmware"]], "Debian Bullseye Root on ZFS": [[19, "debian-bullseye-root-on-zfs"]], "Newer release available": [[19, "newer-release-available"], [20, "newer-release-available"], [22, "newer-release-available"], [35, "newer-release-available"], [36, "newer-release-available"], [37, "newer-release-available"]], "Debian Buster Root on ZFS": [[20, "debian-buster-root-on-zfs"]], "Debian GNU Linux initrd documentation": [[21, "debian-gnu-linux-initrd-documentation"]], "Supported boot parameters": [[21, "supported-boot-parameters"]], "Pool imports": [[21, "pool-imports"]], "Import using /dev/disk/by-*": [[21, "import-using-dev-disk-by"]], "Import using cache file": [[21, "import-using-cache-file"]], "Last ditch attempt at importing": [[21, "last-ditch-attempt-at-importing"]], "Booting": [[21, "booting"]], "Booting from snapshot:": [[21, "booting-from-snapshot"]], "Snapshot rollback": [[21, "snapshot-rollback"]], "Select snapshot dynamically": [[21, "select-snapshot-dynamically"]], "Booting from native encrypted filesystem": [[21, "booting-from-native-encrypted-filesystem"]], "Separated filesystems": [[21, "separated-filesystems"]], "Descended filesystems": [[21, "descended-filesystems"]], "Debian Stretch Root on ZFS": [[22, "debian-stretch-root-on-zfs"]], "Step 7: (Optional) Configure Swap": [[22, "step-7-optional-configure-swap"], [35, "step-7-optional-configure-swap"]], "Debian": [[23, "debian"]], "Related topics": [[23, "related-topics"]], "Fedora": [[24, "fedora"], [26, "fedora"]], "Fedora Root on ZFS": [[25, "fedora-root-on-zfs"]], "Notes": [[25, "notes"], [43, "notes"]], "Post installaion": [[25, "post-installaion"], [31, "post-installaion"]], "Testing Repo": [[26, "testing-repo"]], "FreeBSD": [[27, "freebsd"]], "Installation on FreeBSD": [[27, "installation-on-freebsd"]], "Development on FreeBSD": [[27, "development-on-freebsd"]], "NixOS Root on ZFS": [[28, "nixos-root-on-zfs"]], "NixOS": [[29, "nixos"]], "RHEL and CentOS": [[30, "rhel-and-centos"]], "Rocky Linux 
Root on ZFS": [[31, "rocky-linux-root-on-zfs"]], "RHEL-based distro": [[32, "rhel-based-distro"]], "Previous minor EL releases": [[32, "previous-minor-el-releases"]], "Testing Repositories": [[32, "testing-repositories"]], "Slackware Root on ZFS": [[33, "slackware-root-on-zfs"]], "Kernel considerations": [[33, "kernel-considerations"]], "The problem space": [[33, "the-problem-space"]], "Partition layout": [[33, "partition-layout"]], "Patch and rebuild the initrd": [[33, "patch-and-rebuild-the-initrd"]], "Configure the boot loader": [[33, "configure-the-boot-loader"]], "Before rebooting": [[33, "before-rebooting"]], "Other options": [[33, "other-options"]], "Snapshots and boot environments": [[33, "snapshots-and-boot-environments"]], "Slackware": [[34, "slackware"]], "Ubuntu 18.04 Root on ZFS": [[35, "ubuntu-18-04-root-on-zfs"]], "Ubuntu 20.04 Root on ZFS": [[36, "ubuntu-20-04-root-on-zfs"]], "Errata": [[36, "errata"]], "/boot/grub Not Mounted": [[36, "boot-grub-not-mounted"]], "AccountsService Not Mounted": [[36, "accountsservice-not-mounted"]], "Ubuntu Installer": [[36, "ubuntu-installer"], [38, "ubuntu-installer"]], "Raspberry Pi": [[36, "raspberry-pi"], [38, "raspberry-pi"]], "Step 7: Full Software Installation": [[36, "step-7-full-software-installation"], [38, "step-7-full-software-installation"]], "Step 8: Final Cleanup": [[36, "step-8-final-cleanup"], [38, "step-8-final-cleanup"]], "Ubuntu 20.04 Root on ZFS for Raspberry Pi": [[37, "ubuntu-20-04-root-on-zfs-for-raspberry-pi"]], "USB Disks": [[37, "usb-disks"], [39, "usb-disks"]], "Step 1: Disk Formatting": [[37, "step-1-disk-formatting"], [39, "step-1-disk-formatting"]], "Step 2: Setup ZFS": [[37, "step-2-setup-zfs"], [39, "step-2-setup-zfs"]], "Step 5: First Boot": [[37, "step-5-first-boot"], [39, "step-5-first-boot"]], "Step 6: Full Software Installation": [[37, "step-6-full-software-installation"], [39, "step-6-full-software-installation"]], "Step 7: Final Cleanup": [[37, "step-7-final-cleanup"], [39, "step-7-final-cleanup"]], "Ubuntu 22.04 Root on ZFS": [[38, "ubuntu-22-04-root-on-zfs"]], "Ubuntu 22.04 Root on ZFS for Raspberry Pi": [[39, "ubuntu-22-04-root-on-zfs-for-raspberry-pi"]], "Ubuntu": [[40, "ubuntu"]], "Getting Started": [[41, "getting-started"]], "openSUSE": [[42, "opensuse"]], "External Links": [[42, "external-links"], [43, "external-links"], [44, "external-links"]], "openSUSE Leap Root on ZFS": [[43, "opensuse-leap-root-on-zfs"]], "Step 4. 
Install System": [[43, "step-4-install-system"], [44, "step-4-install-system"]], "Step 5: System Configuration": [[43, "step-5-system-configuration"], [44, "step-5-system-configuration"]], "Step 6: Kernel Installation": [[43, "step-6-kernel-installation"], [44, "step-6-kernel-installation"]], "Step 7: Grub2 Installation": [[43, "step-7-grub2-installation"], [44, "step-7-grub2-installation"]], "Step 8: Systemd-Boot Installation": [[43, "step-8-systemd-boot-installation"], [44, "step-8-systemd-boot-installation"]], "Step 9: Filesystem Configuration": [[43, "step-9-filesystem-configuration"], [44, "step-9-filesystem-configuration"]], "Step 10: First Boot": [[43, "step-10-first-boot"], [44, "step-10-first-boot"]], "Step 11: Optional: Configure Swap": [[43, "step-11-optional-configure-swap"], [44, "step-11-optional-configure-swap"]], "Step 12: Final Cleanup": [[43, "step-12-final-cleanup"], [44, "step-12-final-cleanup"]], "openSUSE Tumbleweed Root on ZFS": [[44, "opensuse-tumbleweed-root-on-zfs"]], "License": [[45, "license"]], "Async Writes": [[46, "async-writes"]], "Hardware": [[47, "hardware"]], "BIOS / CPU microcode updates": [[47, "bios-cpu-microcode-updates"]], "Background": [[47, "background"], [47, "background-1"], [47, "background-2"], [47, "background-3"]], "ECC Memory": [[47, "ecc-memory"]], "Drive Interfaces": [[47, "drive-interfaces"]], "SAS versus SATA": [[47, "sas-versus-sata"]], "USB Hard Drives and/or Adapters": [[47, "usb-hard-drives-and-or-adapters"]], "Controllers": [[47, "controllers"]], "Hardware RAID controllers": [[47, "hardware-raid-controllers"]], "Hard drives": [[47, "hard-drives"]], "Sector Size": [[47, "sector-size"]], "Error recovery control": [[47, "error-recovery-control"]], "RPM Speeds": [[47, "rpm-speeds"]], "Command Queuing": [[47, "command-queuing"]], "NAND Flash SSDs": [[47, "nand-flash-ssds"]], "NVMe low level formatting": [[47, "nvme-low-level-formatting"], [49, "nvme-low-level-formatting"]], "Power Failure Protection": [[47, "power-failure-protection"]], "NVMe drives with power failure protection": [[47, "nvme-drives-with-power-failure-protection"]], "SAS drives with power failure protection": [[47, "sas-drives-with-power-failure-protection"]], "SATA drives with power failure protection": [[47, "sata-drives-with-power-failure-protection"]], "Criteria/process for inclusion into these lists": [[47, "criteria-process-for-inclusion-into-these-lists"]], "Flash pages": [[47, "flash-pages"]], "ATA TRIM / SCSI UNMAP": [[47, "ata-trim-scsi-unmap"]], "ATA TRIM Performance Issues": [[47, "ata-trim-performance-issues"]], "Optane / 3D XPoint SSDs": [[47, "optane-3d-xpoint-ssds"]], "Power": [[47, "power"]], "PWR_OK signal": [[47, "pwr-ok-signal"]], "PSU Hold-up Times": [[47, "psu-hold-up-times"]], "UPS batteries": [[47, "ups-batteries"]], "Module Parameters": [[48, "module-parameters"], [48, "zfs-module-parameters-1"]], "Manual Pages": [[48, "manual-pages"]], "ZFS Module Parameters": [[48, "zfs-module-parameters"]], "Tags": [[48, "tags"]], "ABD": [[48, "abd"]], "allocation": [[48, "allocation"]], "ARC": [[48, "arc"]], "channel_programs": [[48, "channel-programs"]], "checkpoint": [[48, "checkpoint"]], "checksum": [[48, "checksum"]], "compression": [[48, "compression"]], "CPU": [[48, "cpu"]], "dataset": [[48, "dataset"]], "dbuf_cache": [[48, "dbuf-cache"]], "debug": [[48, "debug"]], "dedup": [[48, "dedup"]], "delay": [[48, "delay"]], "delete": [[48, "delete"]], "discard": [[48, "discard"]], "disks": [[48, "disks"]], "DMU": [[48, "dmu"]], "encryption": [[48, 
"encryption"]], "filesystem": [[48, "filesystem"]], "fragmentation": [[48, "fragmentation"]], "HDD": [[48, "hdd"]], "hostid": [[48, "hostid"]], "import": [[48, "import"]], "L2ARC": [[48, "l2arc"]], "memory": [[48, "memory"]], "metadata": [[48, "metadata"]], "metaslab": [[48, "metaslab"]], "mirror": [[48, "mirror"]], "MMP": [[48, "mmp"]], "panic": [[48, "panic"]], "prefetch": [[48, "prefetch"]], "QAT": [[48, "qat"]], "raidz": [[48, "raidz"]], "receive": [[48, "receive"]], "remove": [[48, "remove"]], "resilver": [[48, "resilver"]], "scrub": [[48, "scrub"]], "send": [[48, "send"]], "snapshot": [[48, "snapshot"]], "SPA": [[48, "spa"]], "special_vdev": [[48, "special-vdev"]], "SSD": [[48, "ssd"]], "taskq": [[48, "taskq"]], "trim": [[48, "trim"]], "vdev": [[48, "vdev"]], "vdev_cache": [[48, "vdev-cache"]], "vdev_initialize": [[48, "vdev-initialize"]], "vdev_removal": [[48, "vdev-removal"]], "volume": [[48, "volume"]], "write_throttle": [[48, "write-throttle"]], "zed": [[48, "zed"]], "ZIL": [[48, "zil"]], "ZIO_scheduler": [[48, "zio-scheduler"]], "Index": [[48, "index"]], "ignore_hole_birth": [[48, "ignore-hole-birth"]], "l2arc_exclude_special": [[48, "l2arc-exclude-special"]], "l2arc_feed_again": [[48, "l2arc-feed-again"]], "l2arc_feed_min_ms": [[48, "l2arc-feed-min-ms"]], "l2arc_feed_secs": [[48, "l2arc-feed-secs"]], "l2arc_headroom": [[48, "l2arc-headroom"]], "l2arc_headroom_boost": [[48, "l2arc-headroom-boost"]], "l2arc_nocompress": [[48, "l2arc-nocompress"]], "l2arc_meta_percent": [[48, "l2arc-meta-percent"]], "l2arc_mfuonly": [[48, "l2arc-mfuonly"]], "l2arc_noprefetch": [[48, "l2arc-noprefetch"]], "l2arc_norw": [[48, "l2arc-norw"]], "l2arc_rebuild_blocks_min_l2size": [[48, "l2arc-rebuild-blocks-min-l2size"]], "l2arc_rebuild_enabled": [[48, "l2arc-rebuild-enabled"]], "l2arc_trim_ahead": [[48, "l2arc-trim-ahead"]], "l2arc_write_boost": [[48, "l2arc-write-boost"]], "l2arc_write_max": [[48, "l2arc-write-max"]], "metaslab_aliquot": [[48, "metaslab-aliquot"]], "metaslab_bias_enabled": [[48, "metaslab-bias-enabled"]], "zfs_metaslab_segment_weight_enabled": [[48, "zfs-metaslab-segment-weight-enabled"]], "zfs_metaslab_switch_threshold": [[48, "zfs-metaslab-switch-threshold"]], "metaslab_debug_load": [[48, "metaslab-debug-load"]], "metaslab_debug_unload": [[48, "metaslab-debug-unload"]], "metaslab_fragmentation_factor_enabled": [[48, "metaslab-fragmentation-factor-enabled"]], "metaslabs_per_vdev": [[48, "metaslabs-per-vdev"]], "metaslab_preload_enabled": [[48, "metaslab-preload-enabled"]], "metaslab_lba_weighting_enabled": [[48, "metaslab-lba-weighting-enabled"]], "spa_config_path": [[48, "spa-config-path"]], "spa_asize_inflation": [[48, "spa-asize-inflation"]], "spa_load_verify_data": [[48, "spa-load-verify-data"]], "spa_load_verify_metadata": [[48, "spa-load-verify-metadata"]], "spa_load_verify_maxinflight": [[48, "spa-load-verify-maxinflight"]], "spa_slop_shift": [[48, "spa-slop-shift"]], "zfetch_array_rd_sz": [[48, "zfetch-array-rd-sz"]], "zfetch_max_distance": [[48, "zfetch-max-distance"]], "zfetch_max_streams": [[48, "zfetch-max-streams"]], "zfetch_min_sec_reap": [[48, "zfetch-min-sec-reap"]], "zfs_arc_dnode_limit_percent": [[48, "zfs-arc-dnode-limit-percent"]], "zfs_arc_dnode_limit": [[48, "zfs-arc-dnode-limit"]], "zfs_arc_dnode_reduce_percent": [[48, "zfs-arc-dnode-reduce-percent"]], "zfs_arc_average_blocksize": [[48, "zfs-arc-average-blocksize"]], "zfs_arc_evict_batch_limit": [[48, "zfs-arc-evict-batch-limit"]], "zfs_arc_grow_retry": [[48, "zfs-arc-grow-retry"]], "zfs_arc_lotsfree_percent": 
[[48, "zfs-arc-lotsfree-percent"]], "zfs_arc_max": [[48, "zfs-arc-max"]], "zfs_arc_meta_adjust_restarts": [[48, "zfs-arc-meta-adjust-restarts"]], "zfs_arc_meta_limit": [[48, "zfs-arc-meta-limit"]], "zfs_arc_meta_limit_percent": [[48, "zfs-arc-meta-limit-percent"]], "zfs_arc_meta_min": [[48, "zfs-arc-meta-min"]], "zfs_arc_meta_prune": [[48, "zfs-arc-meta-prune"]], "zfs_arc_meta_strategy": [[48, "zfs-arc-meta-strategy"]], "zfs_arc_min": [[48, "zfs-arc-min"]], "zfs_arc_min_prefetch_ms": [[48, "zfs-arc-min-prefetch-ms"]], "zfs_arc_min_prescient_prefetch_ms": [[48, "zfs-arc-min-prescient-prefetch-ms"]], "zfs_multilist_num_sublists": [[48, "zfs-multilist-num-sublists"]], "zfs_arc_overflow_shift": [[48, "zfs-arc-overflow-shift"]], "zfs_arc_p_min_shift": [[48, "zfs-arc-p-min-shift"]], "zfs_arc_p_dampener_disable": [[48, "zfs-arc-p-dampener-disable"]], "zfs_arc_shrink_shift": [[48, "zfs-arc-shrink-shift"]], "zfs_arc_pc_percent": [[48, "zfs-arc-pc-percent"]], "zfs_arc_sys_free": [[48, "zfs-arc-sys-free"]], "zfs_autoimport_disable": [[48, "zfs-autoimport-disable"]], "zfs_commit_timeout_pct": [[48, "zfs-commit-timeout-pct"]], "zfs_dbgmsg_enable": [[48, "zfs-dbgmsg-enable"]], "zfs_dbgmsg_maxsize": [[48, "zfs-dbgmsg-maxsize"]], "zfs_dbuf_state_index": [[48, "zfs-dbuf-state-index"]], "zfs_deadman_enabled": [[48, "zfs-deadman-enabled"]], "zfs_deadman_checktime_ms": [[48, "zfs-deadman-checktime-ms"]], "zfs_deadman_ziotime_ms": [[48, "zfs-deadman-ziotime-ms"]], "zfs_deadman_synctime_ms": [[48, "zfs-deadman-synctime-ms"]], "zfs_deadman_failmode": [[48, "zfs-deadman-failmode"]], "zfs_dedup_prefetch": [[48, "zfs-dedup-prefetch"]], "zfs_delete_blocks": [[48, "zfs-delete-blocks"]], "zfs_delay_min_dirty_percent": [[48, "zfs-delay-min-dirty-percent"]], "zfs_delay_scale": [[48, "zfs-delay-scale"]], "zfs_dirty_data_max": [[48, "zfs-dirty-data-max"]], "zfs_dirty_data_max_percent": [[48, "zfs-dirty-data-max-percent"]], "zfs_dirty_data_max_max": [[48, "zfs-dirty-data-max-max"]], "zfs_dirty_data_max_max_percent": [[48, "zfs-dirty-data-max-max-percent"]], "zfs_dirty_data_sync": [[48, "zfs-dirty-data-sync"]], "zfs_dirty_data_sync_percent": [[48, "zfs-dirty-data-sync-percent"]], "zfs_fletcher_4_impl": [[48, "zfs-fletcher-4-impl"]], "zfs_free_bpobj_enabled": [[48, "zfs-free-bpobj-enabled"]], "zfs_free_max_blocks": [[48, "zfs-free-max-blocks"]], "zfs_vdev_async_read_max_active": [[48, "zfs-vdev-async-read-max-active"]], "zfs_vdev_async_read_min_active": [[48, "zfs-vdev-async-read-min-active"]], "zfs_vdev_async_write_active_max_dirty_percent": [[48, "zfs-vdev-async-write-active-max-dirty-percent"]], "zfs_vdev_async_write_active_min_dirty_percent": [[48, "zfs-vdev-async-write-active-min-dirty-percent"]], "zfs_vdev_async_write_max_active": [[48, "zfs-vdev-async-write-max-active"]], "zfs_vdev_async_write_min_active": [[48, "zfs-vdev-async-write-min-active"]], "zfs_vdev_max_active": [[48, "zfs-vdev-max-active"]], "zfs_vdev_scrub_max_active": [[48, "zfs-vdev-scrub-max-active"]], "zfs_vdev_scrub_min_active": [[48, "zfs-vdev-scrub-min-active"]], "zfs_vdev_sync_read_max_active": [[48, "zfs-vdev-sync-read-max-active"]], "zfs_vdev_sync_read_min_active": [[48, "zfs-vdev-sync-read-min-active"]], "zfs_vdev_sync_write_max_active": [[48, "zfs-vdev-sync-write-max-active"]], "zfs_vdev_sync_write_min_active": [[48, "zfs-vdev-sync-write-min-active"]], "zfs_vdev_queue_depth_pct": [[48, "zfs-vdev-queue-depth-pct"]], "zfs_disable_dup_eviction": [[48, "zfs-disable-dup-eviction"]], "zfs_expire_snapshot": [[48, "zfs-expire-snapshot"]], 
"zfs_admin_snapshot": [[48, "zfs-admin-snapshot"]], "zfs_flags": [[48, "zfs-flags"]], "zfs_free_leak_on_eio": [[48, "zfs-free-leak-on-eio"]], "zfs_free_min_time_ms": [[48, "zfs-free-min-time-ms"]], "zfs_immediate_write_sz": [[48, "zfs-immediate-write-sz"]], "zfs_max_recordsize": [[48, "zfs-max-recordsize"]], "zfs_mdcomp_disable": [[48, "zfs-mdcomp-disable"]], "zfs_metaslab_fragmentation_threshold": [[48, "zfs-metaslab-fragmentation-threshold"]], "zfs_mg_fragmentation_threshold": [[48, "zfs-mg-fragmentation-threshold"]], "zfs_mg_noalloc_threshold": [[48, "zfs-mg-noalloc-threshold"]], "zfs_multihost_history": [[48, "zfs-multihost-history"]], "zfs_multihost_interval": [[48, "zfs-multihost-interval"]], "zfs_multihost_import_intervals": [[48, "zfs-multihost-import-intervals"]], "zfs_multihost_fail_intervals": [[48, "zfs-multihost-fail-intervals"]], "zfs_delays_per_second": [[48, "zfs-delays-per-second"]], "zfs_checksums_per_second": [[48, "zfs-checksums-per-second"]], "zfs_no_scrub_io": [[48, "zfs-no-scrub-io"]], "zfs_no_scrub_prefetch": [[48, "zfs-no-scrub-prefetch"]], "zfs_nocacheflush": [[48, "zfs-nocacheflush"]], "zfs_nopwrite_enabled": [[48, "zfs-nopwrite-enabled"]], "zfs_dmu_offset_next_sync": [[48, "zfs-dmu-offset-next-sync"]], "zfs_pd_bytes_max": [[48, "zfs-pd-bytes-max"]], "zfs_per_txg_dirty_frees_percent": [[48, "zfs-per-txg-dirty-frees-percent"]], "zfs_prefetch_disable": [[48, "zfs-prefetch-disable"]], "zfs_read_chunk_size": [[48, "zfs-read-chunk-size"]], "zfs_read_history": [[48, "zfs-read-history"]], "zfs_read_history_hits": [[48, "zfs-read-history-hits"]], "zfs_recover": [[48, "zfs-recover"]], "zfs_resilver_min_time_ms": [[48, "zfs-resilver-min-time-ms"]], "zfs_scan_min_time_ms": [[48, "zfs-scan-min-time-ms"]], "zfs_scan_checkpoint_intval": [[48, "zfs-scan-checkpoint-intval"]], "zfs_scan_fill_weight": [[48, "zfs-scan-fill-weight"]], "zfs_scan_issue_strategy": [[48, "zfs-scan-issue-strategy"]], "zfs_scan_legacy": [[48, "zfs-scan-legacy"]], "zfs_scan_max_ext_gap": [[48, "zfs-scan-max-ext-gap"]], "zfs_scan_mem_lim_fact": [[48, "zfs-scan-mem-lim-fact"]], "zfs_scan_mem_lim_soft_fact": [[48, "zfs-scan-mem-lim-soft-fact"]], "zfs_scan_vdev_limit": [[48, "zfs-scan-vdev-limit"]], "zfs_send_corrupt_data": [[48, "zfs-send-corrupt-data"]], "zfs_sync_pass_deferred_free": [[48, "zfs-sync-pass-deferred-free"]], "zfs_sync_pass_dont_compress": [[48, "zfs-sync-pass-dont-compress"]], "zfs_sync_pass_rewrite": [[48, "zfs-sync-pass-rewrite"]], "zfs_sync_taskq_batch_pct": [[48, "zfs-sync-taskq-batch-pct"]], "zfs_txg_history": [[48, "zfs-txg-history"]], "zfs_txg_timeout": [[48, "zfs-txg-timeout"]], "zfs_vdev_aggregation_limit": [[48, "zfs-vdev-aggregation-limit"]], "zfs_vdev_cache_size": [[48, "zfs-vdev-cache-size"]], "zfs_vdev_cache_bshift": [[48, "zfs-vdev-cache-bshift"]], "zfs_vdev_cache_max": [[48, "zfs-vdev-cache-max"]], "zfs_vdev_mirror_rotating_inc": [[48, "zfs-vdev-mirror-rotating-inc"]], "zfs_vdev_mirror_non_rotating_inc": [[48, "zfs-vdev-mirror-non-rotating-inc"]], "zfs_vdev_mirror_rotating_seek_inc": [[48, "zfs-vdev-mirror-rotating-seek-inc"]], "zfs_vdev_mirror_rotating_seek_offset": [[48, "zfs-vdev-mirror-rotating-seek-offset"]], "zfs_vdev_mirror_non_rotating_seek_inc": [[48, "zfs-vdev-mirror-non-rotating-seek-inc"]], "zfs_vdev_read_gap_limit": [[48, "zfs-vdev-read-gap-limit"]], "zfs_vdev_write_gap_limit": [[48, "zfs-vdev-write-gap-limit"]], "zfs_vdev_scheduler": [[48, "zfs-vdev-scheduler"]], "zfs_vdev_raidz_impl": [[48, "zfs-vdev-raidz-impl"]], "zfs_zevent_cols": [[48, "zfs-zevent-cols"]], 
"zfs_zevent_console": [[48, "zfs-zevent-console"]], "zfs_zevent_len_max": [[48, "zfs-zevent-len-max"]], "zfs_zil_clean_taskq_maxalloc": [[48, "zfs-zil-clean-taskq-maxalloc"]], "zfs_zil_clean_taskq_minalloc": [[48, "zfs-zil-clean-taskq-minalloc"]], "zfs_zil_clean_taskq_nthr_pct": [[48, "zfs-zil-clean-taskq-nthr-pct"]], "zil_replay_disable": [[48, "zil-replay-disable"]], "zil_slog_bulk": [[48, "zil-slog-bulk"]], "zio_delay_max": [[48, "zio-delay-max"]], "zio_dva_throttle_enabled": [[48, "zio-dva-throttle-enabled"]], "zio_requeue_io_start_cut_in_line": [[48, "zio-requeue-io-start-cut-in-line"]], "zio_taskq_batch_pct": [[48, "zio-taskq-batch-pct"]], "zvol_inhibit_dev": [[48, "zvol-inhibit-dev"]], "zvol_major": [[48, "zvol-major"]], "zvol_max_discard_blocks": [[48, "zvol-max-discard-blocks"]], "zvol_prefetch_bytes": [[48, "zvol-prefetch-bytes"]], "zvol_request_sync": [[48, "zvol-request-sync"]], "zvol_threads": [[48, "zvol-threads"]], "zvol_volmode": [[48, "zvol-volmode"]], "zfs_qat_disable": [[48, "zfs-qat-disable"]], "zfs_qat_checksum_disable": [[48, "zfs-qat-checksum-disable"]], "zfs_qat_compress_disable": [[48, "zfs-qat-compress-disable"]], "zfs_qat_encrypt_disable": [[48, "zfs-qat-encrypt-disable"]], "dbuf_cache_hiwater_pct": [[48, "dbuf-cache-hiwater-pct"]], "dbuf_cache_lowater_pct": [[48, "dbuf-cache-lowater-pct"]], "dbuf_cache_max_bytes": [[48, "dbuf-cache-max-bytes"], [48, "dbuf-cache-max-bytes-1"]], "dbuf_cache_max_shift": [[48, "dbuf-cache-max-shift"]], "dmu_object_alloc_chunk_shift": [[48, "dmu-object-alloc-chunk-shift"]], "send_holes_without_birth_time": [[48, "send-holes-without-birth-time"]], "zfs_abd_scatter_enabled": [[48, "zfs-abd-scatter-enabled"]], "zfs_abd_scatter_max_order": [[48, "zfs-abd-scatter-max-order"]], "zfs_compressed_arc_enabled": [[48, "zfs-compressed-arc-enabled"]], "zfs_key_max_salt_uses": [[48, "zfs-key-max-salt-uses"]], "zfs_object_mutex_size": [[48, "zfs-object-mutex-size"]], "zfs_scan_strict_mem_lim": [[48, "zfs-scan-strict-mem-lim"]], "zfs_send_queue_length": [[48, "zfs-send-queue-length"]], "zfs_recv_queue_length": [[48, "zfs-recv-queue-length"]], "zfs_arc_min_prefetch_lifespan": [[48, "zfs-arc-min-prefetch-lifespan"]], "zfs_scan_ignore_errors": [[48, "zfs-scan-ignore-errors"]], "zfs_top_maxinflight": [[48, "zfs-top-maxinflight"]], "zfs_resilver_delay": [[48, "zfs-resilver-delay"]], "zfs_scrub_delay": [[48, "zfs-scrub-delay"]], "zfs_scan_idle": [[48, "zfs-scan-idle"]], "icp_aes_impl": [[48, "icp-aes-impl"]], "icp_gcm_impl": [[48, "icp-gcm-impl"]], "zfs_abd_scatter_min_size": [[48, "zfs-abd-scatter-min-size"]], "zfs_unlink_suspend_progress": [[48, "zfs-unlink-suspend-progress"]], "spa_load_verify_shift": [[48, "spa-load-verify-shift"]], "spa_load_print_vdev_tree": [[48, "spa-load-print-vdev-tree"]], "zfs_max_missing_tvds": [[48, "zfs-max-missing-tvds"]], "dbuf_metadata_cache_shift": [[48, "dbuf-metadata-cache-shift"]], "dbuf_metadata_cache_max_bytes": [[48, "dbuf-metadata-cache-max-bytes"]], "dbuf_cache_shift": [[48, "dbuf-cache-shift"]], "metaslab_force_ganging": [[48, "metaslab-force-ganging"]], "zfs_vdev_default_ms_count": [[48, "zfs-vdev-default-ms-count"]], "vdev_removal_max_span": [[48, "vdev-removal-max-span"]], "zfs_removal_ignore_errors": [[48, "zfs-removal-ignore-errors"]], "zfs_removal_suspend_progress": [[48, "zfs-removal-suspend-progress"]], "zfs_condense_indirect_commit_entry_delay_ms": [[48, "zfs-condense-indirect-commit-entry-delay-ms"]], "zfs_condense_indirect_vdevs_enable": [[48, "zfs-condense-indirect-vdevs-enable"]], 
"zfs_condense_max_obsolete_bytes": [[48, "zfs-condense-max-obsolete-bytes"]], "zfs_condense_min_mapping_bytes": [[48, "zfs-condense-min-mapping-bytes"]], "zfs_vdev_initializing_max_active": [[48, "zfs-vdev-initializing-max-active"]], "zfs_vdev_initializing_min_active": [[48, "zfs-vdev-initializing-min-active"]], "zfs_vdev_removal_max_active": [[48, "zfs-vdev-removal-max-active"]], "zfs_vdev_removal_min_active": [[48, "zfs-vdev-removal-min-active"]], "zfs_vdev_trim_max_active": [[48, "zfs-vdev-trim-max-active"]], "zfs_vdev_trim_min_active": [[48, "zfs-vdev-trim-min-active"]], "zfs_initialize_value": [[48, "zfs-initialize-value"]], "zfs_lua_max_instrlimit": [[48, "zfs-lua-max-instrlimit"]], "zfs_lua_max_memlimit": [[48, "zfs-lua-max-memlimit"]], "zfs_max_dataset_nesting": [[48, "zfs-max-dataset-nesting"]], "zfs_ddt_data_is_special": [[48, "zfs-ddt-data-is-special"]], "zfs_user_indirect_is_special": [[48, "zfs-user-indirect-is-special"]], "zfs_reconstruct_indirect_combinations_max": [[48, "zfs-reconstruct-indirect-combinations-max"]], "zfs_send_unmodified_spill_blocks": [[48, "zfs-send-unmodified-spill-blocks"]], "zfs_spa_discard_memory_limit": [[48, "zfs-spa-discard-memory-limit"]], "zfs_special_class_metadata_reserve_pct": [[48, "zfs-special-class-metadata-reserve-pct"]], "zfs_trim_extent_bytes_max": [[48, "zfs-trim-extent-bytes-max"]], "zfs_trim_extent_bytes_min": [[48, "zfs-trim-extent-bytes-min"]], "zfs_trim_metaslab_skip": [[48, "zfs-trim-metaslab-skip"]], "zfs_trim_queue_limit": [[48, "zfs-trim-queue-limit"]], "zfs_trim_txg_batch": [[48, "zfs-trim-txg-batch"]], "zfs_vdev_aggregate_trim": [[48, "zfs-vdev-aggregate-trim"]], "zfs_vdev_aggregation_limit_non_rotating": [[48, "zfs-vdev-aggregation-limit-non-rotating"]], "zil_nocacheflush": [[48, "zil-nocacheflush"]], "zio_deadman_log_all": [[48, "zio-deadman-log-all"]], "zio_decompress_fail_fraction": [[48, "zio-decompress-fail-fraction"]], "zio_slow_io_ms": [[48, "zio-slow-io-ms"]], "vdev_validate_skip": [[48, "vdev-validate-skip"]], "zfs_async_block_max_blocks": [[48, "zfs-async-block-max-blocks"]], "zfs_checksum_events_per_second": [[48, "zfs-checksum-events-per-second"]], "zfs_disable_ivset_guid_check": [[48, "zfs-disable-ivset-guid-check"]], "zfs_obsolete_min_time_ms": [[48, "zfs-obsolete-min-time-ms"]], "zfs_override_estimate_recordsize": [[48, "zfs-override-estimate-recordsize"]], "zfs_remove_max_segment": [[48, "zfs-remove-max-segment"]], "zfs_resilver_disable_defer": [[48, "zfs-resilver-disable-defer"]], "zfs_scan_suspend_progress": [[48, "zfs-scan-suspend-progress"]], "zfs_scrub_min_time_ms": [[48, "zfs-scrub-min-time-ms"]], "zfs_slow_io_events_per_second": [[48, "zfs-slow-io-events-per-second"]], "zfs_vdev_min_ms_count": [[48, "zfs-vdev-min-ms-count"]], "zfs_vdev_ms_count_limit": [[48, "zfs-vdev-ms-count-limit"]], "spl_hostid": [[48, "spl-hostid"]], "spl_hostid_path": [[48, "spl-hostid-path"]], "spl_kmem_alloc_max": [[48, "spl-kmem-alloc-max"]], "spl_kmem_alloc_warn": [[48, "spl-kmem-alloc-warn"]], "spl_kmem_cache_expire": [[48, "spl-kmem-cache-expire"]], "spl_kmem_cache_kmem_limit": [[48, "spl-kmem-cache-kmem-limit"]], "spl_kmem_cache_max_size": [[48, "spl-kmem-cache-max-size"]], "spl_kmem_cache_obj_per_slab": [[48, "spl-kmem-cache-obj-per-slab"]], "spl_kmem_cache_obj_per_slab_min": [[48, "spl-kmem-cache-obj-per-slab-min"]], "spl_kmem_cache_reclaim": [[48, "spl-kmem-cache-reclaim"]], "spl_kmem_cache_slab_limit": [[48, "spl-kmem-cache-slab-limit"]], "spl_max_show_tasks": [[48, "spl-max-show-tasks"]], "spl_panic_halt": [[48, 
"spl-panic-halt"]], "spl_taskq_kick": [[48, "spl-taskq-kick"]], "spl_taskq_thread_bind": [[48, "spl-taskq-thread-bind"]], "spl_taskq_thread_dynamic": [[48, "spl-taskq-thread-dynamic"]], "spl_taskq_thread_priority": [[48, "spl-taskq-thread-priority"]], "spl_taskq_thread_sequential": [[48, "spl-taskq-thread-sequential"]], "spl_kmem_cache_kmem_threads": [[48, "spl-kmem-cache-kmem-threads"]], "spl_kmem_cache_magazine_size": [[48, "spl-kmem-cache-magazine-size"]], "Workload Tuning": [[49, "workload-tuning"]], "Basic concepts": [[49, "basic-concepts"]], "Adaptive Replacement Cache": [[49, "adaptive-replacement-cache"]], "Alignment Shift (ashift)": [[49, "alignment-shift-ashift"]], "Compression": [[49, "compression"]], "RAID-Z stripe width": [[49, "raid-z-stripe-width"]], "Dataset recordsize": [[49, "dataset-recordsize"]], "Larger record sizes": [[49, "larger-record-sizes"]], "zvol volblocksize": [[49, "zvol-volblocksize"]], "Deduplication": [[49, "deduplication"]], "Metaslab Allocator": [[49, "metaslab-allocator"]], "Pool Geometry": [[49, "pool-geometry"], [49, "pool-geometry-1"]], "Whole Disks versus Partitions": [[49, "whole-disks-versus-partitions"]], "OS/distro-specific recommendations": [[49, "os-distro-specific-recommendations"]], "Linux": [[49, "linux"]], "init_on_alloc": [[49, "init-on-alloc"]], "General recommendations": [[49, "general-recommendations"]], "Alignment shift": [[49, "alignment-shift"]], "Atime Updates": [[49, "atime-updates"]], "Free Space": [[49, "free-space"]], "LZ4 compression": [[49, "lz4-compression"]], "Synchronous I/O": [[49, "synchronous-i-o"]], "Overprovisioning by secure erase and partition table trick": [[49, "overprovisioning-by-secure-erase-and-partition-table-trick"]], "NVMe overprovisioning": [[49, "nvme-overprovisioning"]], "Whole disks": [[49, "whole-disks"]], "Bit Torrent": [[49, "bit-torrent"]], "Database workloads": [[49, "database-workloads"]], "MySQL": [[49, "mysql"]], "InnoDB": [[49, "innodb"]], "PostgreSQL": [[49, "postgresql"]], "SQLite": [[49, "sqlite"]], "File servers": [[49, "file-servers"]], "Samba": [[49, "samba"]], "Sequential workloads": [[49, "sequential-workloads"]], "Video games directories": [[49, "video-games-directories"]], "Lutris": [[49, "lutris"]], "Steam": [[49, "steam"]], "Wine": [[49, "wine"]], "Virtual machines": [[49, "virtual-machines"]], "QEMU / KVM / Xen": [[49, "qemu-kvm-xen"]], "ZFS Transaction Delay": [[50, "zfs-transaction-delay"]], "ZFS I/O (ZIO) Scheduler": [[51, "zfs-i-o-zio-scheduler"]], "Performance and Tuning": [[52, "performance-and-tuning"]], "Admin Documentation": [[53, "admin-documentation"]], "FAQ": [[54, "faq"], [55, "faq"]], "What is OpenZFS": [[54, "what-is-openzfs"]], "Hardware Requirements": [[54, "hardware-requirements"]], "Do I have to use ECC memory for ZFS?": [[54, "do-i-have-to-use-ecc-memory-for-zfs"]], "Supported Architectures": [[54, "supported-architectures"]], "Supported Linux Kernels": [[54, "supported-linux-kernels"]], "32-bit vs 64-bit Systems": [[54, "bit-vs-64-bit-systems"]], "Booting from ZFS": [[54, "booting-from-zfs"]], "Selecting /dev/ names when creating a pool (Linux)": [[54, "selecting-dev-names-when-creating-a-pool-linux"]], "Setting up the /etc/zfs/vdev_id.conf file": [[54, "setting-up-the-etc-zfs-vdev-id-conf-file"]], "Changing /dev/ names on an existing pool": [[54, "changing-dev-names-on-an-existing-pool"]], "The /etc/zfs/zpool.cache file": [[54, "the-etc-zfs-zpool-cache-file"]], "Generating a new /etc/zfs/zpool.cache file": [[54, "generating-a-new-etc-zfs-zpool-cache-file"]], 
"Sending and Receiving Streams": [[54, "sending-and-receiving-streams"]], "hole_birth Bugs": [[54, "hole-birth-bugs"]], "Sending Large Blocks": [[54, "sending-large-blocks"]], "CEPH/ZFS": [[54, "ceph-zfs"]], "ZFS Configuration": [[54, "zfs-configuration"]], "CEPH Configuration (ceph.conf)": [[54, "ceph-configuration-ceph-conf"]], "Other General Guidelines": [[54, "other-general-guidelines"]], "Performance Considerations": [[54, "performance-considerations"]], "Advanced Format Disks": [[54, "advanced-format-disks"]], "ZVOL used space larger than expected": [[54, "zvol-used-space-larger-than-expected"]], "Using a zvol for a swap device on Linux": [[54, "using-a-zvol-for-a-swap-device-on-linux"]], "Using ZFS on Xen Hypervisor or Xen Dom0 (Linux)": [[54, "using-zfs-on-xen-hypervisor-or-xen-dom0-linux"]], "udisks2 creating /dev/mapper/ entries for zvol (Linux)": [[54, "udisks2-creating-dev-mapper-entries-for-zvol-linux"]], "Licensing": [[54, "licensing"]], "Reporting a problem": [[54, "reporting-a-problem"]], "Does OpenZFS have a Code of Conduct?": [[54, "does-openzfs-have-a-code-of-conduct"]], "FAQ Hole birth": [[55, "faq-hole-birth"]], "Short explanation": [[55, "short-explanation"]], "I have a pool with hole_birth enabled, how do I know if I am affected?": [[55, "i-have-a-pool-with-hole-birth-enabled-how-do-i-know-if-i-am-affected"]], "Is there any less painful way to fix this if we have already received an affected snapshot?": [[55, "is-there-any-less-painful-way-to-fix-this-if-we-have-already-received-an-affected-snapshot"]], "Long explanation": [[55, "long-explanation"]], "Mailing Lists": [[56, "mailing-lists"]], "Signing Keys": [[57, "signing-keys"]], "Maintainers": [[57, "maintainers"]], "Release branch (spl/zfs-*-release)": [[57, "release-branch-spl-zfs-release"]], "Master branch (master)": [[57, "master-branch-master"]], "Checking the Signature of a Git Tag": [[57, "checking-the-signature-of-a-git-tag"]], "Project and Community": [[58, "project-and-community"]], "OpenZFS Documentation": [[60, "openzfs-documentation"]], "Table of Contents:": [[60, "table-of-contents"]], "Man Pages": [[61, "man-pages"]], "arcstat.1": [[62, "arcstat-1"], [240, "arcstat-1"], [339, "arcstat-1"], [442, "arcstat-1"]], "cstyle.1": [[63, "cstyle-1"], [169, "cstyle-1"], [190, "cstyle-1"], [213, "cstyle-1"], [241, "cstyle-1"], [340, "cstyle-1"], [443, "cstyle-1"]], "User Commands (1)": [[64, "user-commands-1"], [170, "user-commands-1"], [191, "user-commands-1"], [214, "user-commands-1"], [242, "user-commands-1"], [341, "user-commands-1"], [444, "user-commands-1"]], "raidz_test.1": [[65, "raidz-test-1"], [192, "raidz-test-1"], [215, "raidz-test-1"], [243, "raidz-test-1"], [342, "raidz-test-1"], [445, "raidz-test-1"]], "test-runner.1": [[66, "test-runner-1"], [446, "test-runner-1"]], "zhack.1": [[67, "zhack-1"], [171, "zhack-1"], [193, "zhack-1"], [216, "zhack-1"], [244, "zhack-1"], [343, "zhack-1"], [447, "zhack-1"]], "ztest.1": [[68, "ztest-1"], [173, "ztest-1"], [195, "ztest-1"], [217, "ztest-1"], [245, "ztest-1"], [344, "ztest-1"], [448, "ztest-1"]], "zvol_wait.1": [[69, "zvol-wait-1"], [218, "zvol-wait-1"], [246, "zvol-wait-1"], [345, "zvol-wait-1"], [449, "zvol-wait-1"]], "Devices and Special Files (4)": [[70, "devices-and-special-files-4"], [346, "devices-and-special-files-4"], [450, "devices-and-special-files-4"]], "spl.4": [[71, "spl-4"], [347, "spl-4"], [451, "spl-4"]], "zfs.4": [[72, "zfs-4"], [348, "zfs-4"], [452, "zfs-4"]], "File Formats and Conventions (5)": [[73, "file-formats-and-conventions-5"], 
[174, "file-formats-and-conventions-5"], [196, "file-formats-and-conventions-5"], [219, "file-formats-and-conventions-5"], [247, "file-formats-and-conventions-5"], [349, "file-formats-and-conventions-5"], [453, "file-formats-and-conventions-5"]], "vdev_id.conf.5": [[74, "vdev-id-conf-5"], [175, "vdev-id-conf-5"], [197, "vdev-id-conf-5"], [221, "vdev-id-conf-5"], [249, "vdev-id-conf-5"], [350, "vdev-id-conf-5"], [454, "vdev-id-conf-5"]], "dracut.zfs.7": [[75, "dracut-zfs-7"], [351, "dracut-zfs-7"], [455, "dracut-zfs-7"]], "Miscellaneous (7)": [[76, "miscellaneous-7"], [352, "miscellaneous-7"], [456, "miscellaneous-7"]], "vdevprops.7": [[77, "vdevprops-7"], [457, "vdevprops-7"]], "zfsconcepts.7": [[78, "zfsconcepts-7"], [353, "zfsconcepts-7"], [458, "zfsconcepts-7"]], "zfsprops.7": [[79, "zfsprops-7"], [354, "zfsprops-7"], [459, "zfsprops-7"]], "zpool-features.7": [[80, "zpool-features-7"], [355, "zpool-features-7"], [460, "zpool-features-7"]], "zpoolconcepts.7": [[81, "zpoolconcepts-7"], [356, "zpoolconcepts-7"], [461, "zpoolconcepts-7"]], "zpoolprops.7": [[82, "zpoolprops-7"], [357, "zpoolprops-7"], [462, "zpoolprops-7"]], "fsck.zfs.8": [[83, "fsck-zfs-8"], [179, "fsck-zfs-8"], [201, "fsck-zfs-8"], [225, "fsck-zfs-8"], [253, "fsck-zfs-8"], [358, "fsck-zfs-8"], [463, "fsck-zfs-8"]], "System Administration Commands (8)": [[84, "system-administration-commands-8"], [180, "system-administration-commands-8"], [202, "system-administration-commands-8"], [226, "system-administration-commands-8"], [254, "system-administration-commands-8"], [359, "system-administration-commands-8"], [464, "system-administration-commands-8"]], "mount.zfs.8": [[85, "mount-zfs-8"], [181, "mount-zfs-8"], [203, "mount-zfs-8"], [227, "mount-zfs-8"], [255, "mount-zfs-8"], [360, "mount-zfs-8"], [465, "mount-zfs-8"]], "vdev_id.8": [[86, "vdev-id-8"], [182, "vdev-id-8"], [204, "vdev-id-8"], [228, "vdev-id-8"], [256, "vdev-id-8"], [361, "vdev-id-8"], [466, "vdev-id-8"]], "zdb.8": [[87, "zdb-8"], [183, "zdb-8"], [205, "zdb-8"], [229, "zdb-8"], [257, "zdb-8"], [362, "zdb-8"], [467, "zdb-8"]], "zed.8": [[88, "zed-8"], [184, "zed-8"], [206, "zed-8"], [230, "zed-8"], [258, "zed-8"], [363, "zed-8"], [468, "zed-8"]], "zfs-allow.8": [[89, "zfs-allow-8"], [259, "zfs-allow-8"], [364, "zfs-allow-8"], [469, "zfs-allow-8"]], "zfs-bookmark.8": [[90, "zfs-bookmark-8"], [260, "zfs-bookmark-8"], [365, "zfs-bookmark-8"], [470, "zfs-bookmark-8"]], "zfs-change-key.8": [[91, "zfs-change-key-8"], [261, "zfs-change-key-8"], [366, "zfs-change-key-8"], [471, "zfs-change-key-8"]], "zfs-clone.8": [[92, "zfs-clone-8"], [262, "zfs-clone-8"], [367, "zfs-clone-8"], [472, "zfs-clone-8"]], "zfs-create.8": [[93, "zfs-create-8"], [263, "zfs-create-8"], [368, "zfs-create-8"], [473, "zfs-create-8"]], "zfs-destroy.8": [[94, "zfs-destroy-8"], [264, "zfs-destroy-8"], [369, "zfs-destroy-8"], [474, "zfs-destroy-8"]], "zfs-diff.8": [[95, "zfs-diff-8"], [265, "zfs-diff-8"], [370, "zfs-diff-8"], [475, "zfs-diff-8"]], "zfs-get.8": [[96, "zfs-get-8"], [266, "zfs-get-8"], [371, "zfs-get-8"], [476, "zfs-get-8"]], "zfs-groupspace.8": [[97, "zfs-groupspace-8"], [267, "zfs-groupspace-8"], [372, "zfs-groupspace-8"], [477, "zfs-groupspace-8"]], "zfs-hold.8": [[98, "zfs-hold-8"], [268, "zfs-hold-8"], [373, "zfs-hold-8"], [478, "zfs-hold-8"]], "zfs-inherit.8": [[99, "zfs-inherit-8"], [269, "zfs-inherit-8"], [374, "zfs-inherit-8"], [479, "zfs-inherit-8"]], "zfs-jail.8": [[100, "zfs-jail-8"], [270, "zfs-jail-8"], [375, "zfs-jail-8"], [480, "zfs-jail-8"]], "zfs-list.8": [[101, 
"zfs-list-8"], [271, "zfs-list-8"], [376, "zfs-list-8"], [481, "zfs-list-8"]], "zfs-load-key.8": [[102, "zfs-load-key-8"], [272, "zfs-load-key-8"], [377, "zfs-load-key-8"], [482, "zfs-load-key-8"]], "zfs-mount-generator.8": [[103, "zfs-mount-generator-8"], [231, "zfs-mount-generator-8"], [273, "zfs-mount-generator-8"], [378, "zfs-mount-generator-8"], [483, "zfs-mount-generator-8"]], "zfs-mount.8": [[104, "zfs-mount-8"], [274, "zfs-mount-8"], [379, "zfs-mount-8"], [484, "zfs-mount-8"]], "zfs-program.8": [[105, "zfs-program-8"], [232, "zfs-program-8"], [275, "zfs-program-8"], [380, "zfs-program-8"], [485, "zfs-program-8"]], "zfs-project.8": [[106, "zfs-project-8"], [276, "zfs-project-8"], [381, "zfs-project-8"], [486, "zfs-project-8"]], "zfs-projectspace.8": [[107, "zfs-projectspace-8"], [277, "zfs-projectspace-8"], [382, "zfs-projectspace-8"], [487, "zfs-projectspace-8"]], "zfs-promote.8": [[108, "zfs-promote-8"], [278, "zfs-promote-8"], [383, "zfs-promote-8"], [488, "zfs-promote-8"]], "zfs-receive.8": [[109, "zfs-receive-8"], [279, "zfs-receive-8"], [384, "zfs-receive-8"], [489, "zfs-receive-8"]], "zfs-recv.8": [[110, "zfs-recv-8"], [280, "zfs-recv-8"], [385, "zfs-recv-8"], [490, "zfs-recv-8"]], "zfs-redact.8": [[111, "zfs-redact-8"], [281, "zfs-redact-8"], [386, "zfs-redact-8"], [491, "zfs-redact-8"]], "zfs-release.8": [[112, "zfs-release-8"], [282, "zfs-release-8"], [387, "zfs-release-8"], [492, "zfs-release-8"]], "zfs-rename.8": [[113, "zfs-rename-8"], [283, "zfs-rename-8"], [388, "zfs-rename-8"], [493, "zfs-rename-8"]], "zfs-rollback.8": [[114, "zfs-rollback-8"], [284, "zfs-rollback-8"], [389, "zfs-rollback-8"], [494, "zfs-rollback-8"]], "zfs-send.8": [[115, "zfs-send-8"], [285, "zfs-send-8"], [390, "zfs-send-8"], [495, "zfs-send-8"]], "zfs-set.8": [[116, "zfs-set-8"], [286, "zfs-set-8"], [391, "zfs-set-8"], [496, "zfs-set-8"]], "zfs-share.8": [[117, "zfs-share-8"], [287, "zfs-share-8"], [392, "zfs-share-8"], [497, "zfs-share-8"]], "zfs-snapshot.8": [[118, "zfs-snapshot-8"], [288, "zfs-snapshot-8"], [393, "zfs-snapshot-8"], [498, "zfs-snapshot-8"]], "zfs-unallow.8": [[119, "zfs-unallow-8"], [289, "zfs-unallow-8"], [394, "zfs-unallow-8"], [499, "zfs-unallow-8"]], "zfs-unjail.8": [[120, "zfs-unjail-8"], [290, "zfs-unjail-8"], [395, "zfs-unjail-8"], [500, "zfs-unjail-8"]], "zfs-unload-key.8": [[121, "zfs-unload-key-8"], [291, "zfs-unload-key-8"], [396, "zfs-unload-key-8"], [501, "zfs-unload-key-8"]], "zfs-unmount.8": [[122, "zfs-unmount-8"], [292, "zfs-unmount-8"], [397, "zfs-unmount-8"], [502, "zfs-unmount-8"]], "zfs-unzone.8": [[123, "zfs-unzone-8"], [503, "zfs-unzone-8"]], "zfs-upgrade.8": [[124, "zfs-upgrade-8"], [293, "zfs-upgrade-8"], [398, "zfs-upgrade-8"], [504, "zfs-upgrade-8"]], "zfs-userspace.8": [[125, "zfs-userspace-8"], [294, "zfs-userspace-8"], [399, "zfs-userspace-8"], [505, "zfs-userspace-8"]], "zfs-wait.8": [[126, "zfs-wait-8"], [295, "zfs-wait-8"], [400, "zfs-wait-8"], [506, "zfs-wait-8"]], "zfs-zone.8": [[127, "zfs-zone-8"], [507, "zfs-zone-8"]], "zfs.8": [[128, "zfs-8"], [185, "zfs-8"], [207, "zfs-8"], [233, "zfs-8"], [296, "zfs-8"], [401, "zfs-8"], [508, "zfs-8"]], "zfs_ids_to_path.8": [[129, "zfs-ids-to-path-8"], [297, "zfs-ids-to-path-8"], [402, "zfs-ids-to-path-8"], [509, "zfs-ids-to-path-8"]], "zfs_prepare_disk.8": [[130, "zfs-prepare-disk-8"], [403, "zfs-prepare-disk-8"], [510, "zfs-prepare-disk-8"]], "zgenhostid.8": [[131, "zgenhostid-8"], [208, "zgenhostid-8"], [235, "zgenhostid-8"], [300, "zgenhostid-8"], [404, "zgenhostid-8"], [511, "zgenhostid-8"]], 
"zinject.8": [[132, "zinject-8"], [186, "zinject-8"], [209, "zinject-8"], [236, "zinject-8"], [301, "zinject-8"], [405, "zinject-8"], [512, "zinject-8"]], "zpool-add.8": [[133, "zpool-add-8"], [302, "zpool-add-8"], [406, "zpool-add-8"], [513, "zpool-add-8"]], "zpool-attach.8": [[134, "zpool-attach-8"], [303, "zpool-attach-8"], [407, "zpool-attach-8"], [514, "zpool-attach-8"]], "zpool-checkpoint.8": [[135, "zpool-checkpoint-8"], [304, "zpool-checkpoint-8"], [408, "zpool-checkpoint-8"], [515, "zpool-checkpoint-8"]], "zpool-clear.8": [[136, "zpool-clear-8"], [305, "zpool-clear-8"], [409, "zpool-clear-8"], [516, "zpool-clear-8"]], "zpool-create.8": [[137, "zpool-create-8"], [306, "zpool-create-8"], [410, "zpool-create-8"], [517, "zpool-create-8"]], "zpool-destroy.8": [[138, "zpool-destroy-8"], [307, "zpool-destroy-8"], [411, "zpool-destroy-8"], [518, "zpool-destroy-8"]], "zpool-detach.8": [[139, "zpool-detach-8"], [308, "zpool-detach-8"], [412, "zpool-detach-8"], [519, "zpool-detach-8"]], "zpool-events.8": [[140, "zpool-events-8"], [309, "zpool-events-8"], [413, "zpool-events-8"], [520, "zpool-events-8"]], "zpool-export.8": [[141, "zpool-export-8"], [310, "zpool-export-8"], [414, "zpool-export-8"], [521, "zpool-export-8"]], "zpool-get.8": [[142, "zpool-get-8"], [311, "zpool-get-8"], [415, "zpool-get-8"], [522, "zpool-get-8"]], "zpool-history.8": [[143, "zpool-history-8"], [312, "zpool-history-8"], [416, "zpool-history-8"], [523, "zpool-history-8"]], "zpool-import.8": [[144, "zpool-import-8"], [313, "zpool-import-8"], [417, "zpool-import-8"], [524, "zpool-import-8"]], "zpool-initialize.8": [[145, "zpool-initialize-8"], [314, "zpool-initialize-8"], [418, "zpool-initialize-8"], [525, "zpool-initialize-8"]], "zpool-iostat.8": [[146, "zpool-iostat-8"], [315, "zpool-iostat-8"], [419, "zpool-iostat-8"], [526, "zpool-iostat-8"]], "zpool-labelclear.8": [[147, "zpool-labelclear-8"], [316, "zpool-labelclear-8"], [420, "zpool-labelclear-8"], [527, "zpool-labelclear-8"]], "zpool-list.8": [[148, "zpool-list-8"], [317, "zpool-list-8"], [421, "zpool-list-8"], [528, "zpool-list-8"]], "zpool-offline.8": [[149, "zpool-offline-8"], [318, "zpool-offline-8"], [422, "zpool-offline-8"], [529, "zpool-offline-8"]], "zpool-online.8": [[150, "zpool-online-8"], [319, "zpool-online-8"], [423, "zpool-online-8"], [530, "zpool-online-8"]], "zpool-reguid.8": [[151, "zpool-reguid-8"], [320, "zpool-reguid-8"], [424, "zpool-reguid-8"], [531, "zpool-reguid-8"]], "zpool-remove.8": [[152, "zpool-remove-8"], [321, "zpool-remove-8"], [425, "zpool-remove-8"], [532, "zpool-remove-8"]], "zpool-reopen.8": [[153, "zpool-reopen-8"], [322, "zpool-reopen-8"], [426, "zpool-reopen-8"], [533, "zpool-reopen-8"]], "zpool-replace.8": [[154, "zpool-replace-8"], [323, "zpool-replace-8"], [427, "zpool-replace-8"], [534, "zpool-replace-8"]], "zpool-resilver.8": [[155, "zpool-resilver-8"], [324, "zpool-resilver-8"], [428, "zpool-resilver-8"], [535, "zpool-resilver-8"]], "zpool-scrub.8": [[156, "zpool-scrub-8"], [325, "zpool-scrub-8"], [429, "zpool-scrub-8"], [536, "zpool-scrub-8"]], "zpool-set.8": [[157, "zpool-set-8"], [326, "zpool-set-8"], [430, "zpool-set-8"], [537, "zpool-set-8"]], "zpool-split.8": [[158, "zpool-split-8"], [327, "zpool-split-8"], [431, "zpool-split-8"], [538, "zpool-split-8"]], "zpool-status.8": [[159, "zpool-status-8"], [328, "zpool-status-8"], [432, "zpool-status-8"], [539, "zpool-status-8"]], "zpool-sync.8": [[160, "zpool-sync-8"], [329, "zpool-sync-8"], [433, "zpool-sync-8"], [540, "zpool-sync-8"]], "zpool-trim.8": [[161, 
"zpool-trim-8"], [330, "zpool-trim-8"], [434, "zpool-trim-8"], [541, "zpool-trim-8"]], "zpool-upgrade.8": [[162, "zpool-upgrade-8"], [331, "zpool-upgrade-8"], [435, "zpool-upgrade-8"], [542, "zpool-upgrade-8"]], "zpool-wait.8": [[163, "zpool-wait-8"], [332, "zpool-wait-8"], [436, "zpool-wait-8"], [543, "zpool-wait-8"]], "zpool.8": [[164, "zpool-8"], [187, "zpool-8"], [210, "zpool-8"], [237, "zpool-8"], [333, "zpool-8"], [437, "zpool-8"], [544, "zpool-8"]], "zpool_influxdb.8": [[165, "zpool-influxdb-8"], [438, "zpool-influxdb-8"], [545, "zpool-influxdb-8"]], "zstream.8": [[166, "zstream-8"], [336, "zstream-8"], [439, "zstream-8"], [546, "zstream-8"]], "zstreamdump.8": [[167, "zstreamdump-8"], [188, "zstreamdump-8"], [211, "zstreamdump-8"], [238, "zstreamdump-8"], [337, "zstreamdump-8"], [440, "zstreamdump-8"], [547, "zstreamdump-8"]], "master": [[168, "master"]], "zpios.1": [[172, "zpios-1"], [194, "zpios-1"]], "zfs-events.5": [[176, "zfs-events-5"], [198, "zfs-events-5"], [222, "zfs-events-5"], [250, "zfs-events-5"]], "zfs-module-parameters.5": [[177, "zfs-module-parameters-5"], [199, "zfs-module-parameters-5"], [223, "zfs-module-parameters-5"], [251, "zfs-module-parameters-5"]], "zpool-features.5": [[178, "zpool-features-5"], [200, "zpool-features-5"], [224, "zpool-features-5"], [252, "zpool-features-5"]], "v0.6": [[189, "v0-6"]], "v0.7": [[212, "v0-7"]], "spl-module-parameters.5": [[220, "spl-module-parameters-5"], [248, "spl-module-parameters-5"]], "zfsprops.8": [[234, "zfsprops-8"], [299, "zfsprops-8"]], "v0.8": [[239, "v0-8"]], "zfsconcepts.8": [[298, "zfsconcepts-8"]], "zpoolconcepts.8": [[334, "zpoolconcepts-8"]], "zpoolprops.8": [[335, "zpoolprops-8"]], "v2.0": [[338, "v2-0"]], "v2.1": [[441, "v2-1"]], "v2.2": [[548, "v2-2"]], "Message ID:\u00a0ZFS-8000-14": [[549, "message-id-zfs-8000-14"]], "Corrupt ZFS cache": [[549, "corrupt-zfs-cache"]], "Message ID:\u00a0ZFS-8000-2Q": [[550, "message-id-zfs-8000-2q"]], "Missing device in replicated configuration": [[550, "missing-device-in-replicated-configuration"]], "Message ID:\u00a0ZFS-8000-3C": [[551, "message-id-zfs-8000-3c"]], "Missing device in non-replicated configuration": [[551, "missing-device-in-non-replicated-configuration"]], "Message ID: ZFS-8000-4J": [[552, "message-id-zfs-8000-4j"]], "Corrupted device label in a replicated configuration": [[552, "corrupted-device-label-in-a-replicated-configuration"]], "Message ID: ZFS-8000-5E": [[553, "message-id-zfs-8000-5e"]], "Corrupted device label in non-replicated configuration": [[553, "corrupted-device-label-in-non-replicated-configuration"]], "Message ID: ZFS-8000-6X": [[554, "message-id-zfs-8000-6x"]], "Missing top level device": [[554, "missing-top-level-device"]], "Message ID:\u00a0ZFS-8000-72": [[555, "message-id-zfs-8000-72"]], "Corrupted pool metadata": [[555, "corrupted-pool-metadata"]], "Message ID:\u00a0ZFS-8000-8A": [[556, "message-id-zfs-8000-8a"]], "Corrupted data": [[556, "corrupted-data"]], "Message ID:\u00a0ZFS-8000-9P": [[557, "message-id-zfs-8000-9p"]], "Failing device in replicated configuration": [[557, "failing-device-in-replicated-configuration"]], "Message ID:\u00a0ZFS-8000-A5": [[558, "message-id-zfs-8000-a5"]], "Incompatible version": [[558, "incompatible-version"]], "Message ID:\u00a0ZFS-8000-ER": [[559, "message-id-zfs-8000-er"]], "ZFS Errata #1": [[559, "zfs-errata-1"]], "ZFS Errata #2": [[559, "zfs-errata-2"]], "ZFS Errata #3": [[559, "zfs-errata-3"]], "ZFS Errata #4": [[559, "zfs-errata-4"]], "Message ID:\u00a0ZFS-8000-EY": [[560, 
"message-id-zfs-8000-ey"]], "ZFS label hostid mismatch": [[560, "zfs-label-hostid-mismatch"]], "Message ID: ZFS-8000-HC": [[561, "message-id-zfs-8000-hc"]], "ZFS pool I/O failures": [[561, "zfs-pool-i-o-failures"], [562, "zfs-pool-i-o-failures"]], "Message ID:\u00a0ZFS-8000-JQ": [[562, "message-id-zfs-8000-jq"]], "Message ID:\u00a0ZFS-8000-K4": [[563, "message-id-zfs-8000-k4"]], "ZFS intent log read failure": [[563, "zfs-intent-log-read-failure"]], "ZFS Messages": [[564, "zfs-messages"]]}, "indexentries": {}}) \ No newline at end of file