docs: zfs docs improvements (#277)
alexgarel authored Jan 9, 2024
1 parent 19e7b75 commit 2459ba1
Showing 3 changed files with 81 additions and 8 deletions.
10 changes: 10 additions & 0 deletions docs/nginx-reverse-proxy.md
@@ -303,6 +303,16 @@ etckeeper commit -m "Configured my-service.openfoodfacts.net"

Now we are done 🎉

## Performance tips

### Use a buffer for access log

For high-traffic websites, use a buffer for the access log.
E.g. (for the off server nginx):
```conf
access_log /var/log/nginx/off-access.log proxied_requests buffer=256K flush=1s;
```
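
The `proxied_requests` name above refers to a `log_format` that must be defined elsewhere in the nginx configuration; a hypothetical sketch of such a definition (the fields are illustrative, the actual off format likely differs):

```conf
# hypothetical log_format, named to match the access_log directive above
log_format proxied_requests '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $body_bytes_sent '
                            '"$http_referer" "$http_user_agent"';
```

With `buffer=256K flush=1s`, nginx writes log entries to the file only when the buffer fills or after at most one second, instead of one write syscall per request.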

## Install

Install was quite simple: we simply installed the nginx package, as well as stunnel4.
4 changes: 2 additions & 2 deletions docs/reports/2023-07-off2-off-reinstall.md
@@ -1622,8 +1622,8 @@ We want a shared ZFS dataset for sftp data between the reverse proxy and off-pro
Create ZFS dataset, on off2:
```bash
sudo zfs create zfs-hdd/off-pro/sftp
# make it accessible to root inside a container (where id 0 is mapped to 100000)
chown 100000:100000 /zfs-hdd/off-pro/sftp/
# make top folders accessible to root inside a container (where id 0 is mapped to 100000)
chown 100000:100000 /zfs-hdd/off-pro/sftp/ /zfs-hdd/off-pro/sftp/*
```
We then change the reverse proxy configuration (`/etc/pve/lxc/101.conf`) and the off-pro configuration (`/etc/pve/lxc/114.conf`) to add a mount point. Something like `mp8: /zfs-hdd/off-pro/sftp,mp=/mnt/off-pro/sftp` (the number after `mp` depends on the already existing ones).
75 changes: 69 additions & 6 deletions docs/zfs-overview.md
@@ -25,6 +25,19 @@ Tutorial about ZFS snapshots and clone: https://ubuntu.com/tutorials/using-zfs-s
* `zfs list -r` to get all datasets and their mountpoints
* `zpool list -v` to list all devices

* `zpool iostat` to see stats about read / write. `zpool iostat -vl 5` is particularly useful.

* `zpool history` to list all operations done on a pool

* `zpool list -o name,size,allocated,free` to see allocated space (roughly equivalent to `df`, at the pool level)

**Note**: `df` on a dataset does not really work because free space is shared between the datasets.
You can still see dataset usage by using:
```bash
zfs list -o name,used,usedbydataset,usedbysnapshots,available -r <pool_name>
```


## Proxmox

Proxmox uses ZFS to replicate containers and VMs between servers. It also uses it to back up data.
@@ -35,11 +48,51 @@ We use sanoid / syncoid to sync ZFS datasets between servers (also to back them

See [sanoid](./sanoid.md)


## Replacing a disk

To replace a disk in a zpool:

* Get a list of devices using `zpool status <pool_name>`

* Put the disk offline: `zpool offline <pool_name> <device_name>` (e.g. `zpool offline rpool sdf`)

* Replace the disk physically

* Ask zpool to replace the drive: `zpool replace <pool_name> /dev/<device_name>` (e.g. `zpool replace rpool /dev/sdf`)

* Verify the disk is back and being resilvered:
```bash
zpool status <pool_name>
  state: DEGRADED
 status: One or more devices is currently being resilvered.  The pool will
         continue to function, possibly in a degraded state.
 action: Wait for the resilver to complete.
   scan: resilver in progress since …
        replacing-5  DEGRADED  0  0  0
          old        OFFLINE   0  0  0
          sdf        ONLINE    0  0  0  (resilvering)
```

* After the resilver finishes, you can optionally run a scrub: `zpool scrub <pool_name>`


## Sync


To sync ZFS you just take snapshots on the source at specific intervals (we use cron jobs).
You then use [zfs-send](https://openzfs.github.io/openzfs-docs/man/8/zfs-send.8.html) and [zfs-recv](https://openzfs.github.io/openzfs-docs/man/8/zfs-recv.8.html) through ssh to sync the distant server (by sending the snapshots).
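
A minimal sketch of what such a cron-driven snapshot step could look like (the `daily-<date>` naming scheme is an assumption for illustration, not what our actual scripts use):

```bash
# hypothetical cron job body: take a dated snapshot of a dataset
dataset="zfs-hdd/off-pro/sftp"
snap="${dataset}@daily-$(date -u +%F)"
echo "taking snapshot: ${snap}"
# a real job would then run: zfs snapshot "${snap}"
```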

### Doing it automatically

We normally do it using [sanoid and syncoid](./sanoid.md).

Proxmox might also do it as part of corosync to replicate containers across the cluster.

### Doing it manually

```bash
# incremental send of everything between <previous-snap> and <last-snap>
zfs send -i <previous-snap> <dataset_name>@<last-snap> \
| ssh <hostname> zfs recv <target_dataset_name> -F
```
@@ -50,16 +103,11 @@ ZFS sync of sto files from off1 to off2:
* see [sto-products-sync.sh](https://github.com/openfoodfacts/openfoodfacts-infrastructure/blob/develop/scripts/off1/sto-products-sync.sh)





You also have to clean snapshots from time to time to avoid retaining too much useless data.

On ovh3: [snapshot-purge.sh](https://github.com/openfoodfacts/openfoodfacts-infrastructure/blob/develop/scripts/ovh3/snapshot-purge.sh)
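
The purge logic boils down to destroying snapshots whose date falls before a cutoff; a hedged dry-run sketch of that idea (dataset and snapshot names are illustrative, see the real script above for what actually runs):

```bash
# dry-run sketch: list snapshots older than 30 days, based on the date in their name
cutoff=$(date -u -d '30 days ago' +%F)   # GNU date
printf '%s\n' \
  "zfs-hdd/off-pro/sftp@daily-2023-01-01" \
  "zfs-hdd/off-pro/sftp@daily-$(date -u +%F)" |
while read -r snap; do
  snap_date=${snap##*@daily-}
  # %F dates compare correctly as strings
  if [ "$snap_date" \< "$cutoff" ]; then
    echo "would destroy: $snap"   # a real script would run: zfs destroy "$snap"
  fi
done
```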


**FIXME** explain sanoid

## Docker mount

If the ZFS dataset is on the same machine, we can use bind mounts to mount a folder in a ZFS partition.
@@ -69,7 +117,22 @@ For distant machines, ZFS datasets can be exposed as NFS partition. Docker as an

## Mounting datasets in a proxmox container

To move dataset in a proxmox container you have to mount them as bind volumes.
To mount a dataset in a proxmox container you have:
* to use a shared disk on proxmox
* or to mount it as a bind volume

### Use a shared disk on proxmox

(not really experimented with yet, but it could have the advantage of enabling replication)

On your VM/CT, in Resources, add a disk.

Set the mount point for the disk and declare it shared.

In another VM/CT, add the same disk.


### Mount datasets as bind volumes

See: https://pve.proxmox.com/wiki/Linux_Container#_bind_mount_points
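
Per the wiki page above, a bind mount is declared directly in the container config with an `mp<N>` entry pointing at a host path; an illustrative entry (paths are hypothetical):

```conf
# /etc/pve/lxc/<ctid>.conf — bind-mount a host directory into the container
mp0: /zfs-hdd/some-dataset,mp=/mnt/some-dataset
```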

