
Virtualisation


Documentation covering all virtualisation technologies; cloud providers are included here as well.

In this section

  • Azure
    • ๐ŸŸ Azure

      Create a small infra for Kubernetes

        # On your Azure CLI
        az --version                                     # version 2.1.0 or higher expected

        # Clean up any previous run
        az group delete --name kubernetes -y

        az group create -n kubernetes -l westeurope

        az network vnet create -g kubernetes \
          -n kubernetes-vnet \
          --address-prefix 10.240.0.0/24 \
          --subnet-name kubernetes-subnet

        az network nsg create -g kubernetes -n kubernetes-nsg

        az network vnet subnet update -g kubernetes \
          -n kubernetes-subnet \
          --vnet-name kubernetes-vnet \
          --network-security-group kubernetes-nsg

        az network nsg rule create -g kubernetes \
          -n kubernetes-allow-ssh \
          --access allow \
          --destination-address-prefix '*' \
          --destination-port-range 22 \
          --direction inbound \
          --nsg-name kubernetes-nsg \
          --protocol tcp \
          --source-address-prefix '*' \
          --source-port-range '*' \
          --priority 1000

        az network nsg rule create -g kubernetes \
          -n kubernetes-allow-api-server \
          --access allow \
          --destination-address-prefix '*' \
          --destination-port-range 6443 \
          --direction inbound \
          --nsg-name kubernetes-nsg \
          --protocol tcp \
          --source-address-prefix '*' \
          --source-port-range '*' \
          --priority 1001

        az network nsg rule list -g kubernetes --nsg-name kubernetes-nsg --query "[].{Name:name, Direction:direction, Priority:priority, Port:destinationPortRange}" -o table

        az network lb create -g kubernetes --sku Standard \
          -n kubernetes-lb \
          --backend-pool-name kubernetes-lb-pool \
          --public-ip-address kubernetes-pip \
          --public-ip-address-allocation static

        az network public-ip list --query="[?name=='kubernetes-pip'].{ResourceGroup:resourceGroup, Region:location,Allocation:publicIpAllocationMethod,IP:ipAddress}" -o table

        # For Ubuntu
        # az vm image list --location westeurope --publisher Canonical --offer UbuntuServer --sku 18.04-LTS --all -o table
        # For RedHat
        # az vm image list --location westeurope --publisher RedHat --offer RHEL --sku 8 --all -o table
        # => chosen one: 8-lvm-gen2
        WHICHOS="RedHat:RHEL:8-lvm-gen2:8.5.2022032206"

        # K8s controllers
        az vm availability-set create -g kubernetes -n controller-as

        for i in 0 1 2; do
          echo "[Controller ${i}] Creating public IP..."
          az network public-ip create -n controller-${i}-pip -g kubernetes --sku Standard > /dev/null

          echo "[Controller ${i}] Creating NIC..."
          az network nic create -g kubernetes \
            -n controller-${i}-nic \
            --private-ip-address 10.240.0.1${i} \
            --public-ip-address controller-${i}-pip \
            --vnet kubernetes-vnet \
            --subnet kubernetes-subnet \
            --ip-forwarding \
            --lb-name kubernetes-lb \
            --lb-address-pools kubernetes-lb-pool > /dev/null

          echo "[Controller ${i}] Creating VM..."
          az vm create -g kubernetes \
            -n controller-${i} \
            --image ${WHICHOS} \
            --nics controller-${i}-nic \
            --availability-set controller-as \
            --nsg '' \
            --admin-username 'kuberoot' \
            --admin-password 'Changeme!' \
            --size Standard_B2s \
            --storage-sku StandardSSD_LRS
            #--generate-ssh-keys > /dev/null
        done
       90
        # K8s workers
        az vm availability-set create -g kubernetes -n worker-as

        for i in 0 1; do
          echo "[Worker ${i}] Creating public IP..."
          az network public-ip create -n worker-${i}-pip -g kubernetes --sku Standard > /dev/null

          echo "[Worker ${i}] Creating NIC..."
          az network nic create -g kubernetes \
            -n worker-${i}-nic \
            --private-ip-address 10.240.0.2${i} \
            --public-ip-address worker-${i}-pip \
            --vnet kubernetes-vnet \
            --subnet kubernetes-subnet \
            --ip-forwarding > /dev/null

          echo "[Worker ${i}] Creating VM..."
          az vm create -g kubernetes \
            -n worker-${i} \
            --image ${WHICHOS} \
            --nics worker-${i}-nic \
            --tags pod-cidr=10.200.${i}.0/24 \
            --availability-set worker-as \
            --nsg '' \
            --generate-ssh-keys \
            --size Standard_B2s \
            --storage-sku StandardSSD_LRS \
            --admin-username 'kuberoot' \
            --admin-password 'Changeme!' > /dev/null
        done
      118
        # Summarize
        az vm list -d -g kubernetes -o table
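
      Once the VMs are up, a quick sanity check is to grab a controller's public IP and test SSH access. A minimal sketch reusing the names created above:

        # Fetch controller-0's public IP and connect
        CONTROLLER_IP=$(az network public-ip show -g kubernetes \
          -n controller-0-pip --query ipAddress -o tsv)
        ssh kuberoot@${CONTROLLER_IP}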
      
  • Digital Ocean
    • ๐Ÿ‹ Digital Ocean

      Install Client

       # simplest way
       arkade get doctl

       # normal way
       curl -OL https://github.com/digitalocean/doctl/releases/download/v1.104.0/doctl-1.104.0-linux-amd64.tar.gz
       tar xf doctl-1.104.0-linux-amd64.tar.gz
       sudo mv doctl /usr/local/bin

       # auto-completion for ZSH
       doctl completion zsh > $ZSH/completions/_doctl
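
      To confirm the binary is installed and on your PATH:

       doctl version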
      

      Basics

      • Find available regions, sizes, and images for a droplet:
       doctl compute region list
       doctl compute size list
       doctl compute image list-distribution
       doctl compute image list --public
      
      • Auth
       doctl auth init --context test
       doctl auth list
       doctl auth switch --context test2
      
      • Create Project
       doctl projects create --name rkub --environment staging --purpose "stage rkub with github workflows"
      
      • Create VM
       doctl compute ssh-key list
       doctl compute droplet create test --region fra1 --image rockylinux-9-x64 --size s-1vcpu-1gb --ssh-keys <fingerprint>
       doctl compute droplet delete test -f
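
      To see the droplet's public IP once it is provisioned, one option is doctl's --format flag (column names as accepted by the CLI):

       doctl compute droplet list --format Name,PublicIPv4,Status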
      

      with Terraform

       export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
       doctl auth init --context rkub

       # inside a dir with a tf file
       terraform init
       terraform validate
       terraform plan -var "do_token=${DO_PAT}"
       terraform apply -var "do_token=${DO_PAT}" -auto-approve

       # clean apply
       terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
       terraform apply infra.tfplan

       # Control
       terraform show terraform.tfstate

       # Destroy
       terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
       terraform apply terraform.tfplan
      
      • Connect to a droplet with the private SSH key:
       ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key

  • KVM
    • Deploy pfSense VM

      Install pfSense VM

      • Download from the Netgate website (account required)

      • Make network config

      Important note: there is no need to prepare a NetworkManager config; KVM handles creation of the bridge. Also note that dns enable='no' disables libvirt's DNS, and no DHCP range is defined, so pfSense takes over those roles.

       cat > pfsense.xml << EOF
       <network>
         <name>pfsense-router</name>
         <!-- no <uuid> element: libvirt generates one automatically -->
         <forward mode='nat'>
         </forward>
         <bridge name='virbr1' stp='on' delay='0'/>
         <dns enable='no'/>
         <ip address='192.168.123.1' netmask='255.255.255.0'>
         </ip>
       </network>
       EOF

       sudo virsh net-define pfsense.xml
       sudo virsh net-start pfsense-router
       sudo virsh net-autostart pfsense-router

       # Give qemu ACL
       echo "allow all" | sudo tee /etc/qemu-kvm/${USER}.conf
       echo "include /etc/qemu-kvm/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
       sudo chown root:${USER} /etc/qemu-kvm/${USER}.conf
       sudo chmod 640 /etc/qemu-kvm/${USER}.conf

       # Check network
       nmcli con show --active
       sudo virsh net-list --all
       sudo virsh net-edit pfsense-router
       sudo virsh net-info pfsense-router
       sudo virsh net-dhcp-leases pfsense-router
      
      • Create and run pfSense VM
       # Create pfsense vm
       virt-install \
         --name pfsense --ram 2048 --vcpus 2 \
         --disk $HOME/pfsense/disk0.qcow2,size=12,format=qcow2 \
         --cdrom $HOME/pfsense/netgate-installer-amd64.iso \
         --network bridge=virbr0,model=e1000 \
         --network bridge=virbr1,model=e1000 \
         --graphics vnc,listen=0.0.0.0 --noautoconsole \
         --osinfo freebsd14.0 \
         --autostart \
         --debug

       virsh start pfsense
      
      • Create OKD VM
       virt-install \
         --name okd --ram 2048 --vcpus 2 \
         --disk $HOME/okd-latest/disk0.qcow2,size=50,format=qcow2 \
         --autostart \
         --cdrom $HOME/okd-latest/rhcos-live.iso \
         --network bridge=virbr0,model=e1000 \
         --network bridge=virbr1,model=e1000 \
         --graphics vnc,listen=0.0.0.0 --noautoconsole \
         --osinfo detect=on,require=off \
         --debug
      
       sudo virt-install -n master01 \
         --description "Master01 OKD Cluster" \
         --ram=8192 \
         --cdrom "$HOME/okd-latest/rhcos-live.iso" \
         --vcpus=2 \
         --disk pool=default,bus=virtio,size=10 \
         --graphics none \
         --osinfo detect=on,require=off \
         --serial pty \
         --console pty \
         --network network=openshift4,mac=52:54:00:36:14:e5
      
       sudo cp {{OKUB_INSTALL_PATH}}/rhcos-live.iso /var/lib/libvirt/images/rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso
       export COREOS_INSTALLER="podman run --privileged --pull always --rm -v /dev:/dev -v /var/lib/libvirt/images:/data -w /data quay.io/coreos/coreos-installer:release"
       sudo ${COREOS_INSTALLER} iso kargs modify -a "ip={{IP_MASTERS}}::{{GATEWAY}}:{{NETMASK}}:okub-sno:{{INTERFACE}}:none:{{DNS_SERVER}}" "rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso"
       sudo virt-install --name="openshift-sno" \
         --vcpus=4 \
         --ram=8192 \
         --disk path=/var/lib/libvirt/images/sno-{{PRODUCT}}-{{RELEASE_VERSION}}.qcow2,bus=sata,size=120 \
         --network network=sno,model=virtio \
         --boot menu=on \
         --graphics vnc --console pty,target_type=serial --noautoconsole \
         --cpu host-passthrough \
         --osinfo detect=on,require=off \
         --cdrom /var/lib/libvirt/images/rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso
      

      Check pfSense VM

       # Checks
       virsh list
       virsh domifaddr pfsense
       virsh domiflist pfsense

       # Connect to console
       virt-viewer --domain-name pfsense
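
      If you would rather use a plain VNC client than virt-viewer, virsh can report which display the VM is listening on:

       virsh vncdisplay pfsense   # e.g. :0 maps to port 5900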
      

      Delete pfSense VM

       virsh destroy pfsense
       virsh undefine pfsense --remove-all-storage

       # disks outside a libvirt storage pool must be deleted manually
       rm -f ~/pfsense/disk0.qcow2

       # delete network
       sudo virsh net-destroy pfsense-router
       sudo virsh net-undefine pfsense-router
       sudo nmcli con del virbr1
       sudo nmcli con del eno1
      

      Create a worker

       # Generate a MAC address
       date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/';echo

       sudo virt-install -n worker03.ocp4.example.com \
         --description "Worker03 Machine for Openshift 4 Cluster" \
         --ram=8192 \
         --vcpus=4 \
         --os-type=Linux \
         --os-variant=rhel8.0 \
         --noreboot \
         --disk pool=default,bus=virtio,size=50 \
         --graphics none \
         --serial pty \
         --console pty \
         --pxe \
         --network bridge=openshift4,mac=52:54:00:95:d4:ed
      
    • ๐Ÿ˜ Install KVM

      Prerequisites

      Install KVM on RHEL

       # Pre-check hardware virtualisation support: Intel (vmx) or AMD (svm)
       egrep -c '(vmx|svm)' /proc/cpuinfo
       lscpu | grep Virtualization
       lsmod | grep kvm

       # on RHEL9 Workstation
       sudo dnf install virt-install virt-viewer -y
       sudo dnf install -y libvirt
       sudo dnf install virt-manager -y
       sudo dnf install -y virt-top libguestfs-tools guestfs-tools
       sudo gpasswd -a $USER libvirt

       # Helper
       sudo dnf -y install bridge-utils

       # Start libvirt
       sudo systemctl start libvirtd
       sudo systemctl enable libvirtd
       sudo systemctl status libvirtd
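
      libvirt also ships a validation tool that checks the whole host setup (CPU flags, cgroups, device nodes) in one go; worth running once the packages are in:

       virt-host-validate qemu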
      

      Install KVM on Ubuntu

       sudo apt update && sudo apt upgrade -y
       sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients libvirt-daemon virtinst -y
       sudo usermod -aG libvirt $(whoami)
       sudo usermod -aG kvm $(whoami)

       # Helper
       sudo apt install bridge-utils cpu-checker -y

       # Start libvirt
       sudo systemctl start libvirtd
       sudo systemctl enable libvirtd
       sudo systemctl status libvirtd
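
      cpu-checker, installed above, gives a one-shot confirmation that KVM acceleration can be used on this machine:

       kvm-ok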
      
      • Bonus point:
       sudo apt install cockpit cockpit-machines -y
       sudo systemctl enable --now cockpit.socket
       systemctl status cockpit.socket
      

      Then manage your VMs from Cockpit at https://localhost:9090, which can be a good alternative to virt-manager.

    • ๐Ÿ˜ The Basics of KVM

      Basic Checks

       virsh nodeinfo
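
      A few more quick checks worth knowing (all standard virsh subcommands):

       virsh list --all        # all defined VMs, running or not
       virsh pool-list --all   # storage pools
       virsh net-list --all    # networks
       sudo virsh capabilities | head -n 20   # hypervisor capabilities (XML)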
      

      Configure a bridge network

      Important note: networks are created as the root user, but VMs are created as the current user.

      • Non-permanent bridge:
       sudo ip link add virbr1 type bridge
       sudo ip link set eno1 up
       sudo ip link set eno1 master virbr1
       sudo ip address add dev virbr1 192.168.2.1/24
      
      • Permanent bridge
       sudo nmcli con add ifname virbr1 type bridge con-name virbr1
       sudo nmcli con add type bridge-slave ifname eno1 master virbr1
       sudo nmcli con modify virbr1 bridge.stp no
       sudo nmcli con down eno1
       sudo nmcli con up virbr1
       sudo ip address add dev virbr1 192.168.123.1/24
      
      • KVM - Bridge Network
       cat > hostbridge.xml << EOF
       <network>
         <name>hostbridge</name>
         <forward mode='bridge'/>
         <bridge name='virbr1'/>
       </network>
       EOF

       sudo virsh net-define hostbridge.xml
       sudo virsh net-start hostbridge
       sudo virsh net-autostart hostbridge
      
      • Give qemu ACL
       echo "allow all" | sudo tee /etc/qemu-kvm/${USER}.conf
       echo "include /etc/qemu-kvm/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
       sudo chown root:${USER} /etc/qemu-kvm/${USER}.conf
       sudo chmod 640 /etc/qemu-kvm/${USER}.conf
      
      • Check network
       sudo nmcli con show --active
       sudo virsh net-list --all
       sudo virsh net-edit hostbridge
       sudo virsh net-info hostbridge
       sudo virsh net-dhcp-leases hostbridge
      
      • Check with a small script
       echo -e "\n##### KVM networks #####\n"
       kvm_system_networks_all=$(sudo virsh net-list --all)
       echo -e "Available KVM networks in qemu:///system :\n$kvm_system_networks_all"
       for net in $(sudo virsh net-list --name); do
           bridge_name=$(sudo virsh net-info --network ${net} | grep Bridge | cut -d":" -f2 | sed 's/^[[:space:]]*//')
           for br in ${bridge_name}; do
               br_info=$(ip -br -c address show dev ${br} || echo "No IP address assigned to bridge ${br}")
           done
           echo -e "\n\033[1;34m${net}\033[0m has the bridge: $br_info"
       done
       echo -e "\n"
      
      • thanks to the bridge-utils package installed earlier:
       brctl show
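
      brctl is deprecated on newer distributions; the iproute2 equivalent is:

       bridge link show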
      
      • Create a VM with this bridge
       virt-install \
         --name pfsense --ram 2048 --vcpus 2 \
         --disk $HOME/pfsense/disk0.qcow2,size=12,format=qcow2 \
         --autostart \
         --cdrom $HOME/pfsense/netgate-installer-amd64.iso \
         --network bridge=virbr0,model=e1000 \
         --network network=hostbridge,model=e1000 \
         --graphics vnc,listen=0.0.0.0 --noautoconsole \
         --osinfo freebsd14.0 \
         --debug
      
      • Delete network
       sudo virsh net-destroy hostbridge
       sudo virsh net-undefine hostbridge
       sudo nmcli con del virbr1
       sudo nmcli con del eno1
      

      Sources

      Red Hat blog

  • OLVM
    • ๐Ÿ“ Storage

      General concerns

      • If you want to move VMs to another Storage Domain, you need to copy their template as well!

      • Remove a disk:

       # If RHV no longer uses a disk, it should appear empty in lsblk:
       lsblk -a
       sdf                                                                                     8:80   0     4T  0 disk
       └─36001405893b456536be4d67a7f6716e3                                                   253:38   0     4T  0 mpath
       sdg                                                                                     8:96   0     4T  0 disk
       └─36001405893b456536be4d67a7f6716e3                                                   253:38   0     4T  0 mpath
       sdh                                                                                     8:112  0     4T  0 disk
       └─36001405893b456536be4d67a7f6716e3                                                   253:38   0     4T  0 mpath
       sdi                                                                                     8:128  0         0 disk
       └─360014052ab23b1cee074fe38059d7c94                                                   253:39   0   100G  0 mpath
       sdj                                                                                     8:144  0         0 disk
       └─360014052ab23b1cee074fe38059d7c94                                                   253:39   0   100G  0 mpath
       sdk                                                                                     8:160  0         0 disk
       └─360014052ab23b1cee074fe38059d7c94                                                   253:39   0   100G  0 mpath

       # find all disks from the LUN ID
       LUN_ID="360014054ce7e566a01d44c1a4758b092"
       list_disk=$(dmsetup deps -o devname ${LUN_ID} | cut -f 2 | cut -c 3- | tr -d "()" | tr " " "\n")
       echo ${list_disk}

       # remove from multipath
       multipath -f "${LUN_ID}"

       # remove the disks
       for i in ${list_disk}; do echo ${i}; blockdev --flushbufs /dev/${i}; echo 1 > /sys/block/${i}/device/delete; done

       # you can check which disk maps to which LUN on the Ceph side
       ls -l /dev/disk/by-*
      

      NFS for OLVM/oVirt

      Since oVirt needs shared storage, you can create a local NFS export to work around this requirement when no storage array is available.
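
      A minimal sketch of such an export, assuming a RHEL-family host and /exports/data as the path; oVirt expects the export to be owned by vdsm:kvm (UID/GID 36:36):

       sudo dnf install -y nfs-utils
       sudo mkdir -p /exports/data
       sudo chown 36:36 /exports/data       # vdsm:kvm as seen from the hosts
       sudo chmod 0755 /exports/data
       echo "/exports/data *(rw,sync,no_subtree_check,anonuid=36,anongid=36)" | sudo tee -a /etc/exports
       sudo systemctl enable --now nfs-server
       sudo exportfs -rav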

    • Administration

      Hosted-engine Administration

      • Connect to the hosted-engine VM as root with the password set up during the install:
       # Generate a backup
       engine-backup --scope=all --mode=backup --file=/root/backup --log=/root/backuplog

       # Restore from a backup on a fresh install
       engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions
       engine-setup

       # Restore a backup on an existing install
       engine-cleanup
       engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
       engine-setup
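
      To run that backup on a schedule, a cron sketch (the /root/backups path and the timing are assumptions, adjust to taste):

       # /etc/cron.d/engine-backup -- hypothetical daily backup at 02:30
       30 2 * * * root engine-backup --scope=all --mode=backup --file=/root/backups/engine-$(date +\%F) --log=/root/backups/engine-$(date +\%F).log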
      

      Host Administration

      • Connect to the host over SSH:
       # Put a host in maintenance mode manually
       hosted-engine --vm-status
       hosted-engine --set-maintenance --mode=global
       hosted-engine --vm-status

       # Remove maintenance mode
       hosted-engine --set-maintenance --mode=none
       hosted-engine --vm-status

       # Upgrade the hosted engine (requires global maintenance mode)
       hosted-engine --set-maintenance --mode=global
       hosted-engine --vm-status
       engine-upgrade-check
       dnf update ovirt\*setup\* # update the setup packages
       engine-setup # run it to update the engine
      
      • /!\ Connecting to a host individually with virt-manager does not work: oVirt uses libvirt, but not in the same way plain KVM does…

    • Install

      Prerequisites

      • Check hardware compatibility: Oracle Linux Hardware Certification List (HCL)

      • A minimum of two (2) KVM hosts and no more than seven (7).

      • A fully qualified domain name for your engine and hosts, with forward and reverse lookup records set in DNS (see the check after this list).

      • At least 10 GB of free space in /var/tmp.

      • Prepare shared storage (NFS or iSCSI) of at least 74 GB to be used as a data storage domain dedicated to the engine virtual machine. iSCSI targets need to be discovered before the oVirt install.
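
      A quick way to verify the DNS records (engine.example.com and host01.example.com are hypothetical names; substitute your own):

       # forward lookups
       dig +short engine.example.com
       dig +short host01.example.com

       # reverse lookup (use the IP returned above)
       dig +short -x 192.168.1.10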
