cluster info

Events

  • Enable firewall on headnode al01. Only port 22 traffic is allowed in.
  • Nov 11th, 2021 - node4 and node5 are operational
  • node3 is also updated.
  • node1 is upgraded to 5.12+ (deb packages in atr@node1:/home/atr/linux-v5.12, compiled as described at https://wiki.ubuntu.com/KernelTeam/GitKernelBuild)

Cluster users

https://docs.google.com/spreadsheets/d/1yUdQA4BveaQWB5d_t_VRTdcXiYVFT3FkxpIX6Ej_4og/edit#gid=0

Cluster Information

We have a 4-machine cluster that we will do experiments on. Here are the notes.

Please enjoy the cluster responsibly; whenever in doubt, please post issues on Slack as well. There are many clusters in the world, but I like mine :)

Software packages and maintenance

If you are installing a new package with sudo apt-get on one machine, please make sure to install it on all machines to keep the software in sync, for example with a loop over the nodes as sketched below.
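
A minimal way to do this, assuming passwordless sudo is set up and using the node names from /etc/hosts (replace <package> with the package you need):

for n in node1 node2 node3 node4 node5; do ssh $n "sudo apt-get install -y <package>"; done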

[atr: April 6th log]

  • common packages installed on all machines: sudo apt-get install build-essential cmake git libaio1 libaio-dev ifstat numactl flex libncurses-dev elfutils libelf-dev libssl-dev net-tools inetutils-tools inetutils-traceroute fio
  • change the default editor from nano (duh!) to vim: sudo update-alternatives --config editor (pick the vim.basic)
  • enable passwordless sudo (be careful)
    • sudo visudo
    • then add %sudo ALL=(ALL) NOPASSWD: ALL (if it is not there already)

packages installed

On a freshly installed machine

  1. Make your account using the atl account
  2. Change the default text editor
sudo update-alternatives --config editor
  3. Enable passwordless sudo
%sudo   ALL=(ALL) NOPASSWD: ALL
  4. Disable password login (if needed, we have this on al01)

in /etc/ssh/sshd_config

PasswordAuthentication no
PubkeyAuthentication yes

  5. Put the name and IP in every node's /etc/hosts file
atr@al01:~$ cat /etc/hosts
127.0.0.1 localhost
#127.0.1.1 al01
192.168.1.100 al01
192.168.1.100 node0
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
192.168.1.104 node4
192.168.1.105 node5

How to access

Ask us to set up an account for you. Send us a username (for you) and your ssh public key. No password access, please.

Step 1: ssh [email protected] (login here using your VUnetID and password)

Step 2: ssh [email protected]

al01 is a special head node, one of the 4 machines that we have. For all intents and purposes it is the same as the other machines, but be careful with the network settings: if this node goes down, everything goes down.

Sample of ssh config file that atr is using (~/.ssh/config) :

ServerAliveInterval 10

Host vu-ssh
	HostName ssh.data.vu.nl
	User ati850
	IdentityFile ~/.ssh/das.pub

Host das5
	HostName fs0.das5.cs.vu.nl
	User atrivedi
	ProxyJump vu-ssh

Host al01
	HostName al01.anac.cs.vu.nl
	User atr
	ProxyJump vu-ssh
	IdentityFile ~/.ssh/al01.pub

Or, for direct ssh via al01 without the proxy jump through DAS:

Host node4
    HostName node4
    User gst
    ProxyJump [email protected] 
    IdentityFile ~/.ssh/gst.pub
    IdentitiesOnly yes
    ForwardAgent yes
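
With such a config in place, a single command is enough; for example (the file name here is just an illustration):

ssh node4                      # hops through al01 automatically
scp results.tar.gz node4:~/    # copies over the same ProxyJump path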

What is the network configuration

The 1 Gbps network:

  • 192.168.1.9 (sn2100 ethernet switch)
  • 192.168.1.100 (al01, head node)
  • 192.168.1.101
  • 192.168.1.102
  • 192.168.1.103

IPMI IPs

  • 192.168.1.200 (al01, head node)
  • 192.168.1.201
  • 192.168.1.202
  • 192.168.1.203

100 Gbps network

  • 10.100.1.1[nodenum]

IB network

  • 10.10.1.100 (al01, head node)
  • 10.10.1.101
  • 10.10.1.102
  • 10.10.1.103

al01 has the following interfaces:

  IPv4 address for br-fb53e8de1dd2: 172.18.0.1
  IPv4 address for docker0:         172.17.0.1
  IPv4 address for docker_gwbridge: 172.19.0.1
  IPv4 address for eno1:            192.168.1.100
  IPv4 address for eno2:            130.37.193.10
  IPv6 address for eno2:            2001:610:110:6e1::a
  IPv4 address for ibs2f1:          10.10.1.100
  IPv4 address for tun0:            10.8.0.3

atr@al01:~$ ifconfig eno2 
eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 130.37.193.10  netmask 255.255.255.0  broadcast 130.37.193.255
        inet6 2001:610:110:6e1::a  prefixlen 128  scopeid 0x0<global>
        inet6 fe80::3eec:efff:fe04:c317  prefixlen 64  scopeid 0x20<link>
        ether 3c:ec:ef:04:c3:17  txqueuelen 1000  (Ethernet)
        RX packets 989652951  bytes 974806124551 (974.8 GB)
        RX errors 0  dropped 505195  overruns 0  frame 0
        TX packets 203380698  bytes 77332779571 (77.3 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xaae00000-aae1ffff  

Hostnames

This file has been copied to /etc/hosts on all hosts; please check the hostnames and naming convention here:

127.0.0.1 localhost
#127.0.1.1 al01
192.168.1.100 al01
192.168.1.100 node0
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
10.10.1.100 node0-ib
10.10.1.101 node1-ib
10.10.1.102 node2-ib
10.10.1.103 node3-ib

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

How to setup a new node

  1. Plug in a usb-stick with the ubuntu-server iso (this should also be possible remotely with netboot: see https://github.com/EdgeVU/group-notes/wiki/netboot)
  2. Plug in a monitor / keyboard / mouse
  3. Boot the server
  4. Enter the BIOS, disable hyperthreading, and move the usb-stick to the top of the boot order
  5. Restart and boot ubuntu server.
    1. Install on a non-nvme SSD if possible.
    2. Do not use the LVM option during partitioning.
    3. Use separate partitions for / and /home. This allows us to retain user data after we nuke the OS.
    4. Use 200GB for the root partition. The rest for /home.
  6. Set hostname to nodeX, username to atl and password to ...
  7. Run ip a to get your ethernet interface, then ethtool eno1 to check whether the ethernet link is detected
  8. Set the static IP. You can do this in many ways, one way is to use netplan. Edit /etc/netplan/00-installer-config.yaml to look as follows, with X being your node number
# This is the network config written by 'subiquity'
network:
  ethernets:
    eno1:
      addresses:
      - 192.168.1.10X/16
      # For Ubuntu 20.04
      gateway4: 192.168.1.100
      # For Ubuntu 22.04, use this instead of gateway4
      routes:
      - to: default
        via: 192.168.1.100
      # End of changes
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
        search: []
  version: 2

For node3, which runs the custom-built kernel 5.12.0+:

# This is the network config written by 'subiquity'
network:
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
    ibs2f1:
      dhcp4: no
      addresses:
        - 10.10.1.103/16
      routes:
      - to: default
        via: 192.168.1.100
        metric: 200
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
        search: []
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [192.168.1.103/16]
      routes:
      - to: default
        via: 192.168.1.100
        metric: 100
      mtu: 1500
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
        search: []
      parameters:
        stp: true
        forward-delay: 0
      dhcp4: false
      dhcp6: false
  version: 2
  9. Set the configuration: sudo netplan generate and sudo netplan apply
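
A quick sanity check after applying (interface name and node number as in the example above):

ip a show eno1                 # the 192.168.1.10X address should be listed
ip route                       # default route should point to 192.168.1.100
ping -c 3 192.168.1.100        # head node should be reachable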

To make NAT work

(needed on the head node after a reboot; this is not persistent)

echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -t nat -I POSTROUTING --out-interface eno2 -j MASQUERADE # check the right NIC 
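
To make this survive reboots, one option is roughly the following (a sketch; the iptables-persistent package is an assumption, not something verified on these nodes):

echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo apt-get install iptables-persistent       # offers to save the current rules
sudo netfilter-persistent save                 # re-save after changing rules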

Apply network update settings

in /etc/netplan

  1. sudo netplan apply
  2. sudo systemctl restart systemd-networkd

How to add a new user with sudo access

# Give access to the head node
# Do on the head node:
sudo useradd -s /bin/bash -d /home/atr/ -m -G sudo atr # Create user, -m is create home, and -G is to add to groups, here sudo 
sudo usermod -aG sudo atr                              # Add user to sudo group (not needed if already done in the previous step)
sudo su - atr                                          # Login as that user
mkdir .ssh && vim .ssh/authorized_keys                 # Create ssh files, paste public key here

# Give user access to other nodes
# On the head node:
sudo su - atr                                          # Login as that user
ssh-keygen -t rsa -b 4096                              # Create public key to forward to worker nodes

# On the worker node
sudo useradd -s /bin/bash -d /home/atr/ -m -G sudo atr # Create user
sudo usermod -aG sudo atr                              # Add user to sudo group
sudo su - atr                                          # Login as that user
mkdir .ssh && vim .ssh/authorized_keys                 # Create ssh files, paste public key and key from head node here
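
Instead of pasting the head-node key by hand, it can also be pushed with ssh-copy-id once the account exists on the worker (run as the new user on the head node; node1 is just an example):

ssh-copy-id atr@node1          # appends the public key to node1's authorized_keys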

Delete a user: sudo userdel -f -r <username>

On nodes 2, 3 and 4 there is a default username (atl) and password. Use that to create the user on those machines. After a user is created, change its password:

# as atl 
$ sudo su 
$ (as root) passwd new_user 

Note: we may want to share this with students and let them do their own user account management.

There is a gst (guest) account that we should give out to anyone who wants to do experiments. It does not have sudo access, but it can start qemu-system-x86_64.

How to add a new user WITHOUT full sudo access but with selective sudo access to specific commands

# as authorized user 
sudo useradd -s /bin/bash -d /home/gst/ -m gst 
sudo visudo 
# in the editor add the following line and save, gives access to qemu without password on sudo 
gst     ALL=(ALL) NOPASSWD: /usr/local/bin/qemu-system-x86_64

# as gst, add the public keys in the authorized_keys 
sudo su - gst 
$ mkdir .ssh && vim .ssh/authorized_keys
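
To verify what the new account may run with sudo (a quick check, not part of the original notes):

sudo -l -U gst                 # should list the NOPASSWD entry for qemu-system-x86_64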

Logical volume resizing

https://www.linuxtechi.com/extend-lvm-partitions/

on node5

gst@node5:~$ df -h / 
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv  196G   33G  154G  18% /

gst@node5:~$ lsblk 
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0  55.5M  1 loop /snap/core18/2253
loop1                       7:1    0  43.4M  1 loop /snap/snapd/14549
loop2                       7:2    0  43.3M  1 loop /snap/snapd/14295
loop3                       7:3    0  67.2M  1 loop /snap/lxd/21835
loop4                       7:4    0  55.5M  1 loop /snap/core18/2284
loop5                       7:5    0  42.2M  1 loop 
loop6                       7:6    0  70.3M  1 loop /snap/lxd/21029
loop7                       7:7    0  61.9M  1 loop 
loop8                       7:8    0  61.9M  1 loop /snap/core20/1270
loop9                       7:9    0  61.9M  1 loop /snap/core20/1328
sda                         8:0    0 894.3G  0 disk 
├─sda1                      8:1    0     1M  0 part 
├─sda2                      8:2    0     1G  0 part /boot
└─sda3                      8:3    0 893.3G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   200G  0 lvm  /
sdb                         8:16   0 894.3G  0 disk 
nvme0n1                   259:0    0   1.8T  0 disk 
nvme1n1                   259:1    0   1.8T  0 disk 

atr@node5:/home/atr/src/qemu-6.1.0/build$ sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <893.25 GiB
  PE Size               4.00 MiB
  Total PE              228671
  Alloc PE / Size       51200 / 200.00 GiB
  Free  PE / Size       177471 / <693.25 GiB
  VG UUID               5Cp5wd-YRGe-o2eX-CqU4-vEJ1-hrW9-Bpec9m
   
atr@node5:~$ sudo lvextend -L +690G /dev/mapper/ubuntu--vg-ubuntu--lv  
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 200.00 GiB (51200 extents) to 890.00 GiB (227840 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
atr@node5:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <893.25 GiB
  PE Size               4.00 MiB
  Total PE              228671
  Alloc PE / Size       227840 / 890.00 GiB
  Free  PE / Size       831 / <3.25 GiB
  VG UUID               5Cp5wd-YRGe-o2eX-CqU4-vEJ1-hrW9-Bpec9m
   
atr@node5:~$ sudo lvextend -L +3.2G /dev/mapper/ubuntu--vg-ubuntu--lv  
  Rounding size to boundary between physical extents: 3.20 GiB.
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 890.00 GiB (227840 extents) to 893.20 GiB (228660 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
atr@node5:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <893.25 GiB
  PE Size               4.00 MiB
  Total PE              228671
  Alloc PE / Size       228660 / 893.20 GiB
  Free  PE / Size       11 / 44.00 MiB
  VG UUID               5Cp5wd-YRGe-o2eX-CqU4-vEJ1-hrW9-Bpec9m
atr@node5:~$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 25, new_desc_blocks = 112
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 234147840 (4k) blocks long.

atr@node5:~$ df -h / 
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv  879G   33G  809G   4% /
atr@node5:~$ 
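
Condensed, the resize above boils down to a few commands (a sketch; -l +100%FREE allocates all remaining space in one step instead of the two lvextend calls used above):

sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
df -h /                        # confirm the new size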

How to power cycle and reboot machines and use IPMI tools

Get to the al01 node. From there (broken atm):

  • Accessing a machine's event logs: ipmitool -H 192.168.1.201 -U username -P password sel
  • Rebooting a machine : ipmitool -H 192.168.1.201 -U username -P password power reset (do not do power cycle)
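
Other handy read-only checks (same username/password placeholders as above):

ipmitool -H 192.168.1.201 -U username -P password power status
ipmitool -H 192.168.1.201 -U username -P password sensor list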

https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool

IP forwarding NAT

on al01

sudo iptables -t nat -I POSTROUTING --out-interface eno2 -j MASQUERADE

Setting up web interface access to IPMI using Firefox

  1. Create ssh tunnel: ssh -D 1080 -q -N [email protected]
  2. Firefox -> Preferences -> Connection settings -> Socks host: localhost, port 1080, SOCKS_v5
  3. Browse to 192.168.1.201
  4. Login
    • username: username
    • password: password

Booting into bios from IPMI

ipmitool -H 192.168.1.203 -U [username] -P [password] chassis bootdev bios

NFS server settings

Ubuntu NFS write up:

Install the package on the server side: sudo apt-get install nfs-kernel-server (in case it is missing)

Then on the server in the /etc/exports file

  1. Add this line: /srv/nfstest (rw,sync,all_squash,anonuid=1026,anongid=1026)
  2. Then set the correct permissions: sudo chown -R nobody:nogroup /srv/nfstest/

[April 6th] atr: I am setting 777 permissions on /srv/nfstest

What the different NFS export options do: https://linux.die.net/man/5/exports

On the client side:

  • packages may be missing: sudo apt-get install rpcbind nfs-common
  • create the mount point, if missing: sudo mkdir -p /mnt/nfs
  • then mount sudo mount -t nfs 192.168.1.100:/srv/nfstest/ /mnt/nfs/

This is not mounted by default on a freshly booted machine, so in case it is missing, please mount it at /mnt/nfs. We should put this in fstab, for example as sketched below.
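
A possible /etc/fstab entry for the clients (a sketch, not yet deployed; _netdev delays mounting until the network is up):

192.168.1.100:/srv/nfstest  /mnt/nfs  nfs  defaults,_netdev  0  0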

[ref] https://linuxize.com/post/how-to-mount-an-nfs-share-in-linux/

How to restart the NFS server: sudo service nfs-kernel-server restart (there are other options too: {start|stop|status|reload|force-reload|restart})

How to check if nfs is mounted:

atr@node1:/mnt/nfs$ mount | grep nfs 
192.168.1.100:/srv/nfstest on /mnt/nfs type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.101,local_lock=none,addr=192.168.1.100)
atr@node1:/mnt/nfs$ 

Boot sequence

  • Boot mode select: DUAL

  • Legacy to EFI support: Disabled

  • Boot option #1: UEFI USB Key

  • Boot option #2: USB Hard Disk: Samsung Flash Drive FIT 1100

  • Boot option #3: UEFI Network

  • Boot option #4: UEFI Hard Disk

  • Boot option #5: Hard Disk: Intel SSDSC2KB400GB

  • Boot option #6: USB Floppy

  • Boot option #7: USB Lan

  • Boot option #8: CD/DVD

  • Boot option #9: Network: IBA GE Slot 1800 v1584

  • Boot option #10: UEFI CD/DVD

  • Boot option #11: USB CD/DVD

  • Boot option #12: UEFI USB CD/DVD:UEFI:Samsung Flash Drive FIT 1100

InfiniBand details

infiniband-setup

Firewall Config

  1. First set the default policy (allow all; or deny all for a stricter setup)
sudo ufw default allow all
sudo ufw default deny all
  2. Allow incoming traffic on port 22 at top priority
sudo ufw insert 1 allow from any proto tcp to any port 22
  3. Deny incoming traffic on all other ports on interface eno2 (connection to internet)
sudo ufw deny in on eno2
  4. Commands to enable or disable firewall
sudo ufw enable
sudo ufw disable
  5. Commands to delete rules
sudo ufw status numbered
sudo ufw delete <number>
  6. Command to see added rules without enabling firewall
sudo ufw show added
  7. Command to see default rules
sudo ufw status verbose
  8. Allow traffic from other nodes to be forwarded to the internet
sudo ufw route allow in on eno1 out on eno2
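
Putting it together, the setup used on al01 (only port 22 allowed in from the internet, per the Events note) would look roughly like this; treat it as a sketch and double-check the interface names before enabling:

sudo ufw default allow all
sudo ufw deny in on eno2
sudo ufw insert 1 allow from any proto tcp to any port 22
sudo ufw route allow in on eno1 out on eno2
sudo ufw show added                  # review before enabling
sudo ufw enable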

Specs

System Configuration

  • CPU: 2 x Intel Xeon Silver 4210R (10 cores, 3.2GHz) = 20 cores
  • DRAM: 4 x 64GB DDR4-2400 = 256GB
  • Optane: 2 x 280GB Optane SSD 900p = 560GB
  • Boot Drive: 480GB Intel SATA SSD D3-S4510
  • NIC: Mellanox Bluefield (ConnectX-5 generation)
  • PCIe ports: 4 x x16; 1 x x8

Raspberry Pi

Deployment phases of the Raspberry Pi:

  1. By default, a Raspberry Pi 4 can only boot using a dedicated power cable, and a monitor cable (e.g. HDMI).
  2. We want to enable booting a Pi without a dedicated power cable, using only USB-C -> USB-A/C connected to a PC. With this, you can operate a Pi by SSH'ing from your PC. This step is needed to install everything necessary for network booting.
  3. Finally, we want to enable network booting so a Pi is connected with an ethernet cable to a network switch in the cluster (and so to the head node). This allows us to stop using the Pi's SD card by storing the OS on the head node. The Pi gets its power from a USB hub.

USB deployment

  1. Connect the power and a monitor to the Pi, and boot. Internet is not needed. Remember your username and password.
  2. Add dtoverlay=dwc2 to /boot/config.txt
  3. Add modules-load=dwc2,g_ether to /boot/cmdline.txt
  4. Reboot
  5. Edit /etc/dhcpcd.conf to set a static IP, similar to this:
# Example static IP configuration:
interface usb0
static ip_address=192.168.100.10/24
static routers=192.168.100.1
static domain_name_servers=8.8.8.8
  6. Shut the Pi down and connect it to a PC (using USB for example)
  7. Your PC will attempt to connect to the Raspberry Pi via the wired link. Open the settings of this network, and on the IPv4 tab (this was tested on Ubuntu) change the following:
  • IPv4 Method: Manual
  • Create a new address entry: Address = 192.168.100.11. Netmask = 24 (same as 255.255.255.0). Gateway = 192.168.100.1
  8. Now the connection should be established. Open a terminal, ping 192.168.100.10 to test the connection, if that works then ssh [email protected] to get access to the Pi over SSH.

Network boot deployment

  1. Update the Pi: sudo apt update && sudo apt full-upgrade && sudo apt install rpi-eeprom && sudo apt autoremove.
  2. Enable network booting on the Raspberry Pi. Connect it to a PC via USB and connect it to ethernet as we will download packages. Follow this tutorial to enable network booting on the Pi. Finally, reboot the Pi, SSH back into it and check if the default boot option is set to network booting: vcgencmd bootloader_config should contain the line BOOT_ORDER=0xf21.

Benchmark installation

The Raspberry Pis run Raspbian 10 (Buster), with the armv7l instruction set. This is a 32-bit ARM instruction set! We will not run QEMU/KVM VMs on the Pis as this would be too stressful. Instead, we install KubeEdge on them and use the Pis as is.

Physical machines management

We have most of our machines in a rack in the server room in the WN building, but the setup is a bit messy. Therefore, we propose the following plan:

U    Current setup    New setup    Comments
1    -                -            Needed to route the front cables from the switch and NUC to the back.
                                   Also, the power supply for the Pis is taller than 2U, so this extra space is required.
2    Node 5           NUC / Pi     Head node, with a 2U Pi x 16 mount in the back, and the NUC in the middle.
                                   We can add a mount in the front in the future, for Jetsons for example.
3    -                NUC / Pi
4    Switch           Switch
5    Head node        Node 1       Was node 1
6    Head node        Node 1
7    Node 1           Node 2       Was node 2, at VUSec at the moment
8    Node 1           Node 2
9    Node 4           Node 3       Was node 3
10   Node 4           Node 3
11   Node 3           Node 4       Was node 4. PCIe needs debugging, storage won't connect.
12   Node 3           Node 4
13   Node 6           Node 5       Was node 5
14   Node 6           Node 6       Was the previous head node
15   DAS-4 2          Node 6
16   DAS-4 2          Node 7       New machine we ordered
17   DAS-4 3          Node 7
18   DAS-4 3          -
19   DAS-4 4          -
20   DAS-4 4          -

Note:

  • U = 1 starts right below the DAS-5 machines
  • There is a rack right beside the rack we currently use, which is full of old DAS-4 machines. If needed, we can use this rack as well.

Mount a new disk

# Check all available drives
# For this example, the drive is called 'sdb'
# If the drive is already mounted, you don't have to do anything!
lsblk

# Auto-mount disk on next reboot
sudo vim /etc/fstab
	- Add line like '/dev/sdb        /mnt/sdb        ext4    defaults        0       0'
	- This will work from next reboot onward

# Format disk
sudo mkfs -t ext4 /dev/sdb

# Mount by hand for now
sudo mkdir /mnt/sdb
sudo mount /dev/sdb /mnt/sdb

# Should show you that the drive is mounted now
lsblk
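
A more robust /etc/fstab entry references the filesystem UUID instead of /dev/sdb, since device names can change between reboots (a sketch; take the UUID from blkid):

sudo blkid /dev/sdb                     # prints UUID="..."
# then in /etc/fstab:
# UUID=<uuid-from-blkid>  /mnt/sdb  ext4  defaults  0  0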

How to use the new mounted drive

# Make a folder for your account on this drive
# For this example, username = guest
cd /mnt/sdb
sudo mkdir guest
sudo chown -R guest guest/

Mellanox SN2100 ethernet switch configuration

The switch came with ports configured in a breakout configuration. I changed them to 1x100G ports in the /etc/cumulus/ports.conf file. Reference: https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-59/Layer-1-and-Switch-Ports/Interface-Configuration-and-Management/Switch-Port-Attributes/#breakout-ports

The switch network interfaces can be configured in /etc/network/interfaces. We just want a simple bridge for all ports. Reference: https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-42/Layer-2/Ethernet-Bridging-VLANs/VLAN-aware-Bridge-Mode/#
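
For reference, a minimal VLAN-aware bridge in /etc/network/interfaces on Cumulus Linux looks roughly like this (a sketch based on the linked docs; the swp port names and count are assumptions, adjust to the ports actually in use):

auto bridge
iface bridge
    bridge-ports swp1 swp2 swp3 swp4
    bridge-vlan-aware yes
    bridge-vids 1
    bridge-pvid 1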

Configuring IPMI on a new node

Reference: https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool#User_Configuration

Network setup with a static IP:

ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.2<xx>
ipmitool lan set 1 netmask 255.255.0.0
ipmitool lan set 1 defgw ipaddr 192.168.1.100
ipmitool lan set 1 access on

Adding a new user.

ipmitool user set name 10 <username>
ipmitool user set password 10 <password>
ipmitool user priv 10 0x4
ipmitool user enable 10
ipmitool channel setaccess 1 10 link=on ipmi=on callin=on privilege=4
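
To verify the settings and the new account (standard ipmitool queries, run locally on the node):

ipmitool lan print 1           # shows IP source, address, netmask, gateway
ipmitool user list 1           # shows user slots and privilege levels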