03_Debian_and_OMV_all_inclusive.md
Table of Contents

  1. Install
  2. Settings
  3. ZFS Health Checks

Motivation: The OMV installer did not produce a bootable drive; on multiple PCs it ended with "Insert boot device and press any key." These steps instead install Debian first, then add OMV on top.

Install

  1. Install Debian using the net-install image

    1. Before starting, identify (1) the INSTALLER usb drive, and (2) the TARGET boot drive (USB drive, hard drive, etc.).
      • Reasoning: The installer cannot install over itself, so there must be 2 drives.
      • PRO TIP: Pick a different sized drive for the TARGET, so it will be easy to find which drive to pick in the installer.
    2. Download the Debian net-install .iso file from https://www.debian.org/CD/netinst/
    3. Download Win32 Disk Imager or a similar USB imaging tool (e.g., Rufus)
    4. Write the Debian .iso to the INSTALLER usb drive.
    5. Boot the installer, then use the following install options:
      1. NO graphical install
      2. Hostname: abrums
      3. User Setup:
        • Root password
        • Username: wayne (other users will be added later)
      4. Partition/Disk Selection:
        • Guided - use entire disk
        • Select the TARGET drive, in the free space
      5. Additional Software menu:
        • Uncheck Desktop and Gnome
        • Change SSH server to YES
        • If you accidentally install "desktop", you can remove it later by running apt-get remove task-desktop 'xserver*' && apt-get autoremove
    6. Reboot when the installer asks you to.
    7. Manually set the IP address to static 192.168.1.51 (edit /etc/network/interfaces)
      1. Login as root
      2. Find the name of your network interface by running: ip addr show
      3. Run this command: nano /etc/network/interfaces
      4. Use the text editor to add the text block below, replacing the existing dhcp section if there is one.
        • eth0 might be named something else (e.g., enp20s0); use the actual name in place of eth0 in the snippet below.
        • Do not change any of the top lines referencing lo (localhost); add the new block after the lo section.
        auto eth0
        iface eth0 inet static
        address 192.168.1.51
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1
        
      5. After editing, use Ctrl+O and Enter to write the file (output the file). Then Ctrl+X to exit.
      6. Reboot by running this command: reboot
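Before relying on the new config, it can help to sanity-check the stanza for the required keys. A minimal sketch, assuming the addresses from this guide; the helper name check_ifcfg and the sample file are invented here for illustration — on the server, point it at /etc/network/interfaces instead:

```shell
# check_ifcfg FILE: report which required static-network keys are present.
check_ifcfg() {
    missing=0
    for key in "inet static" "address 192.168.1.51" "netmask 255.255.255.0" \
               "gateway 192.168.1.1" "dns-nameservers 192.168.1.1"; do
        grep -q "$key" "$1" || { echo "MISSING: $key"; missing=1; }
    done
    if [ "$missing" -eq 0 ]; then echo "all keys present"; fi
}

# Example run against a sample copy (on the server, use /etc/network/interfaces):
cat > /tmp/interfaces.sample <<'EOF'
auto eth0
iface eth0 inet static
address 192.168.1.51
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 192.168.1.1
EOF
check_ifcfg /tmp/interfaces.sample   # prints "all keys present"
```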
  2. Install OMV by following the OMV-on-Debian instructions: https://openmediavault.readthedocs.io/en/5.x/installation/on_debian.html

  3. Set/Verify the network settings in OMV control panel

    1. Open: System > Network. Interfaces tab. Look at the list of devices.
    2. If your network device IS listed, click Edit and verify the settings (especially DNS Servers) match the values below.
    3. If your network device (e.g., enp1s0) is NOT listed, click the "Add" button and add "Ethernet":
      • General settings
        • Pick your network device from the list
      • IPv4
        • Method = Static
        • Address = 192.168.1.51
        • Netmask = 255.255.255.0
        • Gateway = 192.168.1.1
      • IPv6 - disabled
      • Advanced Settings
        • DNS Servers = 8.8.8.8
        • (leave all other defaults)
  4. Install the OMV-Extras plugin; instructions are at http://omv-extras.org/ (easiest to install through the console/SSH)

  5. Open the OMV control panel. The default login is username "admin", password "openmediavault".

    1. Change admin password to match Root password (for convenience).

    2. Install ZFS plugin through OMV > Plugins menu

      1. search for package name "openmediavault-zfs"
      2. If installation succeeds, skip this step. If installation fails or packages appear broken, run these commands (hopefully this doesn't happen every time...):
        # print a list of not-configured packages
        dpkg -C
        
        # configure package zfs-dkms
        dpkg --configure zfs-dkms
        modprobe zfs
        
        # configure other dependent packages
        dpkg --configure zfsutils-linux
        dpkg --configure zfs-zed
        dpkg --configure openmediavault-zfs
        
        # verify the list is empty
        dpkg -C
    3. Add additional users to match the existing ZFS pool. Order matters: file ownership on the pool is stored by numeric UID, and Debian assigns UIDs in creation order.

    • wayne (already created during debian OS installation)
    • julia (make this FIRST)
    • daniel (make this SECOND, to match existing ABRUMS ZFS pool)
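Since Debian hands out UIDs sequentially starting at 1000, creating julia before daniel reproduces the UIDs the pool expects. A quick sketch that prints the user-to-UID mapping so you can confirm the sequence; the passwd excerpt below is sample data (on the server, read the real /etc/passwd or use getent passwd):

```shell
# Sample /etc/passwd excerpt; the UIDs shown assume default Debian numbering.
passwd_sample='wayne:x:1000:1000::/home/wayne:/bin/bash
julia:x:1001:1001::/home/julia:/bin/bash
daniel:x:1002:1002::/home/daniel:/bin/bash'

# Print "UID user" in UID order to confirm the expected creation sequence.
echo "$passwd_sample" | awk -F: '{ print $3, $1 }' | sort -n
```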
  6. Install zfs-auto-snapshot in the console.

    1. Follow instructions here: https://github.com/zfsonlinux/zfs-auto-snapshot

      wget https://github.com/zfsonlinux/zfs-auto-snapshot/archive/upstream/1.2.4.tar.gz
      tar -xzf 1.2.4.tar.gz
      cd zfs-auto-snapshot-upstream-1.2.4
      make install

      Configuration is stored on the pool itself, so that should still be intact after installing this script.

    2. Verify the installation by running zfs list -t snapshot and checking the output for recent timestamps.
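One way to check for recent snapshots is to pull the timestamp out of the snapshot names (zfs-auto-snapshot embeds it as YYYY-MM-DD-HHMM) and look at the newest. A sketch on sample data; on the server, pipe the output of zfs list -t snapshot -o name -H instead of the here-string (the dataset and times below are made up):

```shell
# Sample snapshot names in zfs-auto-snapshot's default naming scheme.
snaps='abrums/STUFF@zfs-auto-snap_hourly-2022-11-20-1417
abrums/STUFF@zfs-auto-snap_daily-2022-11-19-1407
abrums/STUFF@zfs-auto-snap_hourly-2022-11-20-1517'

# Strip everything up to the timestamp, then take the newest.
# The zero-padded YYYY-MM-DD-HHMM format sorts correctly as plain text.
echo "$snaps" | sed 's/.*snap_[a-z]*-//' | sort | tail -n 1
```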

Settings

  1. Check the OMV settings against these screenshots:

    1. SMB CIFS Shares

      abrums_OVM_smb_cifs_shares

    2. SMB CIFS Settings

      abrums_OVM_smb_cifs_settings

    3. Shared Folders

      abrums_OVM_shared_folders

    4. Users

      abrums_OVM_users

    5. ZFS Overview

      abrums_OVM_zfs_overview

    6. OMV-Extras

      abrums_OVM_extras

    7. Plugins

      abrums_OVM_plugins_installed

    8. Notifications

      abrums_OVM_notifications

    9. Network Interfaces

      abrums_OVM_network_interfaces

ZFS Health Checks

Use PuTTY to log in to the server as user root, then run any of these commands to look at specific ZFS info:

  1. zpool status shows drive status, checksum errors (right-most column).
  2. zpool list lists the pool size usage, fragmentation, and capacity.
  3. zfs list lists the ZFS-datasets (top-level folders) usage.
  4. This command will list all the snapshots, sorted by the size used (biggest at the bottom)
    • zfs list -rt snapshot | awk '{ print $2 " " $1}' | sort -h
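The last pipeline relies on sort -h (human-numeric sort), which understands the K/M/G/T suffixes that ZFS prints. A quick demonstration on made-up sizes (the snapshot names here are placeholders):

```shell
# Sample "size name" lines, as produced by the awk column swap above.
sizes='18.2G abrums/LARGE_ONE@snap-c
0B abrums/LARGE_ONE@snap-a
74.7M abrums/LARGE_ONE@snap-b'

# sort -h orders human-readable sizes numerically: 0B < 74.7M < 18.2G.
echo "$sizes" | sort -h
```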

Examples:

  1. zpool status - Healthy (error count all 0) and scrub shows no errors (scrub repaired 0B, within past week).
    • Ignore the "upgrade pool" status/action; upgrading is irreversible and hurts compatibility with software that does not support the newer features.
    • NOTE: Serial numbers obscured.
    [root@abrums:~]# zpool status
      pool: abrums
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 04:47:36 with 0 errors on Sun Nov 20 06:47:41 2022  <-- GOOD, scrubbed recently
    config:
    
        NAME                                         STATE     READ WRITE CKSUM
        abrums                                       ONLINE       0     0     0      <-- GOOD, zero drive errors
          raidz1-0                                   ONLINE       0     0     0
            ata-ST2000DM008-xxxxxx_xxxxxxxx          ONLINE       0     0     0
            ata-ST2000DM008-xxxxxx_xxxxxxxx          ONLINE       0     0     0
            ata-ST2000DM008-xxxxxx_xxxxxxxx          ONLINE       0     0     0
            ata-ST2000VX008-xxxxxx_xxxxxxxx          ONLINE       0     0     0
            ata-HGST_HUcccccccxxxxxx_xxxxxxxx-part1  ONLINE       0     0     0
    
    errors: No known data errors                                                     <-- GOOD, no data errors
    
  2. zpool list - Fragmentation is OK, but nearing the maximum capacity.
    • ZFS recommends staying under 80% capacity for best performance.
    [root@abrums:~]# zpool list
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    abrums  9.06T  8.36T   723G        -         -    16%    92%  1.00x    ONLINE  -
    
  3. zfs list - Shows the majority usage (5.95T out of 6.68T) is in LARGE_ONE. The other datasets (top-level folders) only have a few hundred gigs of storage usage.
    • This includes usage from snapshots.
    • NOTE: Names obscured.
    [root@abrums:~]# zfs list
    NAME               USED  AVAIL     REFER  MOUNTPOINT
    abrums            6.68T   449G      166K  legacy
    abrums/LARGE_ONE  5.95T   449G     5.87T  legacy
    abrums/STUFF       350G   449G      350G  legacy
    abrums/SMALLER     195G   449G      195G  legacy
    abrums/SMALLEST    193G   449G      173G  legacy
                        ^^               ^^ Refer counts current (not-deleted) files
                        ^^ Used counts current + deleted files (in snapshots)
    
  4. zfs list -rt snapshot | awk '{ print $2 " " $1}' | sort -h
    • NOTE: Names obscured.
    [root@abrums:~]# zfs list -rt snapshot | awk '{ print $2 " " $1}' | sort -h
    0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-04-1419
    0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-11-1349
    0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-12-1407
    0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-18-1415
    0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-25-1409
    0B abrums/LARGE_ONE@zfs-auto-snap_daily-2022-01-01-1341
    ...
    ... ~350 lines omitted ...
    ...
    10.6M abrums/SMALLEST@zfs-auto-snap_monthly-2021-05-15-1317
    14.2M abrums/SMALLEST@zfs-auto-snap_monthly-2021-01-19-1349
    74.7M abrums/LARGE_ONE@zfs-auto-snap_monthly-2021-09-13-1337
    310M abrums/LARGE_ONE@zfs-auto-snap_monthly-2021-01-19-1349
    594M abrums/SMALLEST@zfs-auto-snap_monthly-2021-10-13-1324
    18.2G abrums/LARGE_ONE@zfs-auto-snap_monthly-2021-10-13-1324
    
    • In this case, if you want to destroy a particular snapshot, you can run one command:
      • zfs destroy abrums/LARGE_ONE@zfs-auto-snap_monthly-2021-01-19-1349 <-- Example, substitute in the real snapshot name.
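To prune many empty snapshots at once, you can generate the destroy commands from the same listing and review them before running any. A sketch on sample data; on the server, replace the here-string with the real pipe (zfs list -rt snapshot -o used,name -H), and note that zfs destroy also accepts -n for a dry run:

```shell
# Sample "used name" output; 0B means the snapshot holds no unique data.
snaps='0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-04-1419
310M abrums/LARGE_ONE@zfs-auto-snap_monthly-2021-01-19-1349
0B abrums/LARGE_ONE@zfs-auto-snap_daily-2021-12-11-1349'

# Print (do not execute) a destroy command for each 0B snapshot.
# Review the list, then paste only the lines you actually want to run.
echo "$snaps" | awk '$1 == "0B" { print "zfs destroy " $2 }'
```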