
Virtualisation

Virtualisation sections in docs

Documentation covering all virtualisation technologies; cloud providers are included here as well.

In this section

  • Azure
    Azure sections in docs
    • ๐Ÿ‹ Azure

      Create a small infra for kubernetes

        # On your Azure CLI
        az --version                                     # version 2.1.0 or higher expected

        # Clean up any previous run, then create the resource group
        az group delete --name kubernetes -y
        az group create -n kubernetes -l westeurope

        az network vnet create -g kubernetes \
          -n kubernetes-vnet \
          --address-prefix 10.240.0.0/24 \
          --subnet-name kubernetes-subnet

        az network nsg create -g kubernetes -n kubernetes-nsg

        az network vnet subnet update -g kubernetes \
          -n kubernetes-subnet \
          --vnet-name kubernetes-vnet \
          --network-security-group kubernetes-nsg

        az network nsg rule create -g kubernetes \
          -n kubernetes-allow-ssh \
          --access allow \
          --destination-address-prefix '*' \
          --destination-port-range 22 \
          --direction inbound \
          --nsg-name kubernetes-nsg \
          --protocol tcp \
          --source-address-prefix '*' \
          --source-port-range '*' \
          --priority 1000

        az network nsg rule create -g kubernetes \
          -n kubernetes-allow-api-server \
          --access allow \
          --destination-address-prefix '*' \
          --destination-port-range 6443 \
          --direction inbound \
          --nsg-name kubernetes-nsg \
          --protocol tcp \
          --source-address-prefix '*' \
          --source-port-range '*' \
          --priority 1001

        az network nsg rule list -g kubernetes --nsg-name kubernetes-nsg \
          --query "[].{Name:name, Direction:direction, Priority:priority, Port:destinationPortRange}" -o table

        az network lb create -g kubernetes --sku Standard \
          -n kubernetes-lb \
          --backend-pool-name kubernetes-lb-pool \
          --public-ip-address kubernetes-pip \
          --public-ip-address-allocation static

        az network public-ip list \
          --query "[?name=='kubernetes-pip'].{ResourceGroup:resourceGroup,Region:location,Allocation:publicIpAllocationMethod,IP:ipAddress}" -o table

        # For Ubuntu:
        # az vm image list --location westeurope --publisher Canonical --offer UbuntuServer --sku 18.04-LTS --all -o table
        # For Red Hat:
        # az vm image list --location westeurope --publisher RedHat --offer RHEL --sku 8 --all -o table
        # => chosen one: 8-lvm-gen2
        WHICHOS="RedHat:RHEL:8-lvm-gen2:8.5.2022032206"

        # K8s controllers
        az vm availability-set create -g kubernetes -n controller-as

        for i in 0 1 2; do
          echo "[Controller ${i}] Creating public IP..."
          az network public-ip create -n controller-${i}-pip -g kubernetes --sku Standard > /dev/null

          echo "[Controller ${i}] Creating NIC..."
          az network nic create -g kubernetes \
            -n controller-${i}-nic \
            --private-ip-address 10.240.0.1${i} \
            --public-ip-address controller-${i}-pip \
            --vnet kubernetes-vnet \
            --subnet kubernetes-subnet \
            --ip-forwarding \
            --lb-name kubernetes-lb \
            --lb-address-pools kubernetes-lb-pool > /dev/null

          echo "[Controller ${i}] Creating VM..."
          az vm create -g kubernetes \
            -n controller-${i} \
            --image ${WHICHOS} \
            --nics controller-${i}-nic \
            --availability-set controller-as \
            --nsg '' \
            --admin-username 'kuberoot' \
            --admin-password 'Changeme!' \
            --size Standard_B2s \
            --storage-sku StandardSSD_LRS
            #--generate-ssh-keys > /dev/null
        done

        # K8s workers
        az vm availability-set create -g kubernetes -n worker-as

        for i in 0 1; do
          echo "[Worker ${i}] Creating public IP..."
          az network public-ip create -n worker-${i}-pip -g kubernetes --sku Standard > /dev/null

          echo "[Worker ${i}] Creating NIC..."
          az network nic create -g kubernetes \
            -n worker-${i}-nic \
            --private-ip-address 10.240.0.2${i} \
            --public-ip-address worker-${i}-pip \
            --vnet kubernetes-vnet \
            --subnet kubernetes-subnet \
            --ip-forwarding > /dev/null

          echo "[Worker ${i}] Creating VM..."
          az vm create -g kubernetes \
            -n worker-${i} \
            --image ${WHICHOS} \
            --nics worker-${i}-nic \
            --tags pod-cidr=10.200.${i}.0/24 \
            --availability-set worker-as \
            --nsg '' \
            --generate-ssh-keys \
            --size Standard_B2s \
            --storage-sku StandardSSD_LRS \
            --admin-username 'kuberoot' \
            --admin-password 'Changeme!' > /dev/null
        done

        # Summarize
        az vm list -d -g kubernetes -o table
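
      Once the VMs are up, a quick reachability check can be run against the first controller. This is a sketch, not part of the original script: it assumes the `controller-0-pip` public IP and `kuberoot` user created above, and is guarded so it is a no-op without the Azure CLI.

```shell
# Sketch: verify the first controller answers over SSH (assumes resources above).
if command -v az >/dev/null; then
  CONTROLLER_IP=$(az network public-ip show -g kubernetes -n controller-0-pip \
    --query ipAddress -o tsv)
  echo "controller-0 public IP: ${CONTROLLER_IP}"
  ssh -o StrictHostKeyChecking=accept-new kuberoot@"${CONTROLLER_IP}" 'hostname'
else
  echo "az CLI not installed; skipping check"
fi
```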
      
  • Digital Ocean
    DO sections in docs
    • ๐Ÿ‹ Digital Ocean

      Install Client

        # most simple
        arkade get doctl

        # normal way
        curl -OL https://github.com/digitalocean/doctl/releases/download/v1.104.0/doctl-1.104.0-linux-amd64.tar.gz
        tar xf doctl-1.104.0-linux-amd64.tar.gz
        sudo mv doctl /usr/local/bin

        # Auto-completion for zsh
        doctl completion zsh > $ZSH/completions/_doctl
      

      Basics

      • find possible droplet
        doctl compute region list
        doctl compute size list
        doctl compute image list-distribution
        doctl compute image list --public
      
      • Auth
        doctl auth init --context test
        doctl auth list
        doctl auth switch --context test2
      
      • Create Project
        doctl projects create --name rkub --environment staging --purpose "stage rkub with github workflows"
      
      • Create VM
        doctl compute ssh-key list
        doctl compute droplet create test --region fra1 --image rockylinux-9-x64 --size s-1vcpu-1gb --ssh-keys <fingerprint>
        doctl compute droplet delete test -f
      

      with Terraform

        export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
        doctl auth init --context rkub

        # inside a dir with a tf file
        terraform init
        terraform validate
        terraform plan -var "do_token=${DO_PAT}"
        terraform apply -var "do_token=${DO_PAT}" -auto-approve

        # clean apply
        terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
        terraform apply infra.tfplan

        # Control
        terraform show terraform.tfstate

        # Destroy
        terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
        terraform apply terraform.tfplan
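
      The plan/apply commands above assume a `.tf` file in the working directory. A minimal sketch of such a file follows; the resource names, region, image, and the `ip_address_workers` output are assumptions for illustration, not taken from an existing config.

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

# Token passed via -var "do_token=${DO_PAT}" as shown above
variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_droplet" "worker" {
  count  = 1
  name   = "worker-${count.index}"
  region = "fra1"
  image  = "rockylinux-9-x64"
  size   = "s-1vcpu-1gb"
}

output "ip_address_workers" {
  value = digitalocean_droplet.worker[*].ipv4_address
}
```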
      
      • Connect to a droplet with the private SSH key:
        ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key

  • KVM
    KVM sections in docs
    • ๐Ÿ‹ KVM

      install KVM on RHEL

        # Pre-checks: hardware virtualisation for Intel CPUs
        grep -e 'vmx' /proc/cpuinfo
        lscpu | grep Virtualization
        lsmod | grep kvm

        # On a RHEL9 workstation
        sudo dnf install -y virt-install virt-viewer
        sudo dnf install -y libvirt
        sudo dnf install -y virt-manager
        sudo dnf install -y virt-top libguestfs-tools guestfs-tools
        sudo gpasswd -a $USER libvirt

        # Helper
        sudo dnf install -y bridge-utils

        # Start libvirt
        sudo systemctl start libvirtd
        sudo systemctl enable libvirtd
        sudo systemctl status libvirtd
      

      Basic Checks

       virsh nodeinfo
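
      A few follow-up checks can round out `virsh nodeinfo`. This is an optional sketch that assumes libvirtd is running; it degrades to a message when virsh is absent.

```shell
# Optional follow-up checks; harmless no-op when virsh is not installed.
if command -v virsh >/dev/null; then
  virsh list --all       # defined and running VMs
  virsh net-list --all   # libvirt networks
  virsh pool-list --all  # storage pools
else
  echo "virsh not installed"
fi
```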
      

      Config a Bridge network

      Important: networks are created as the root user, but VMs are created as the current user.
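
      A minimal sketch of defining a bridged libvirt network as root. The bridge name `br0` and network name `host-bridge` are assumptions; the Linux bridge itself must already exist on the host (created with nmcli, for example).

```shell
# Write the network definition (names br0/host-bridge are assumed, not given).
cat > /tmp/host-bridge.xml <<'EOF'
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF

# As noted above, networks are defined as root:
if command -v virsh >/dev/null; then
  sudo virsh net-define /tmp/host-bridge.xml
  sudo virsh net-start host-bridge
  sudo virsh net-autostart host-bridge
  virsh net-list --all
else
  echo "virsh not installed; XML written to /tmp/host-bridge.xml"
fi
```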

  • OLVM
    OLVM sections in docs
    • ๐Ÿ“ Storage

      General concerns

      • If you want to move VMs to another Storage Domain, you need to copy their template to it as well!

      • Remove a disk:

        # If RHV no longer uses the disks, they should appear empty in lsblk:
        lsblk -a
        sdf                                     8:80   0     4T  0 disk
        └─36001405893b456536be4d67a7f6716e3   253:38   0     4T  0 mpath
        sdg                                     8:96   0     4T  0 disk
        └─36001405893b456536be4d67a7f6716e3   253:38   0     4T  0 mpath
        sdh                                     8:112  0     4T  0 disk
        └─36001405893b456536be4d67a7f6716e3   253:38   0     4T  0 mpath
        sdi                                     8:128  0            0 disk
        └─360014052ab23b1cee074fe38059d7c94   253:39   0   100G  0 mpath
        sdj                                     8:144  0            0 disk
        └─360014052ab23b1cee074fe38059d7c94   253:39   0   100G  0 mpath
        sdk                                     8:160  0            0 disk
        └─360014052ab23b1cee074fe38059d7c94   253:39   0   100G  0 mpath

        # Find all member disks from the LUN ID
        LUN_ID="360014054ce7e566a01d44c1a4758b092"
        list_disk=$(dmsetup deps -o devname ${LUN_ID} | cut -f 2 | cut -c 3- | tr -d "()" | tr " " "\n")
        echo ${list_disk}

        # Remove from multipath
        multipath -f "${LUN_ID}"

        # Remove the disks
        for i in ${list_disk}; do echo ${i}; blockdev --flushbufs /dev/${i}; echo 1 > /sys/block/${i}/device/delete; done

        # You can check which disk maps to which LUN on the Ceph side
        ls -l /dev/disk/by-*
      

      NFS for OLVM/oVirt

      Since oVirt needs shared storage, a local NFS export can be used as a workaround when no storage array is available.
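
      A minimal sketch of such a local export. The export path and mount options are assumptions; oVirt does expect the export to be owned by vdsm:kvm (UID/GID 36). The line is staged to a temp file first, and nothing is applied unless APPLY=1 is set on the NFS host.

```shell
# Stage the export line so it can be reviewed; set APPLY=1 on the host to apply.
EXPORT_DIR=/exports/ovirt-data                      # assumed path
echo "${EXPORT_DIR} *(rw,sync,no_subtree_check,anonuid=36,anongid=36)" > /tmp/ovirt-export.line
cat /tmp/ovirt-export.line

if [ "${APPLY:-0}" = "1" ]; then
  sudo dnf install -y nfs-utils
  sudo mkdir -p "${EXPORT_DIR}"
  sudo chown 36:36 "${EXPORT_DIR}"                  # vdsm:kvm (UID/GID 36)
  sudo sh -c 'cat /tmp/ovirt-export.line >> /etc/exports'
  sudo systemctl enable --now nfs-server
  sudo exportfs -rav
fi
```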

    • Administration

      Hosted-engine Administration

      • Connect to the hosted-engine VM as root, with the password set during the install:

        # Generate a backup
        engine-backup --scope=all --mode=backup --file=/root/backup --log=/root/backuplog

        # Restore from a backup on a fresh install
        engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions
        engine-setup

        # Restore a backup on an existing install
        engine-cleanup
        engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
        engine-setup
      

      Host Administration

      • Connect to the host over SSH:

        # Put a host in maintenance mode manually
        hosted-engine --vm-status
        hosted-engine --set-maintenance --mode=global
        hosted-engine --vm-status

        # Remove maintenance mode
        hosted-engine --set-maintenance --mode=none
        hosted-engine --vm-status

        # Upgrade the hosted-engine (requires global maintenance mode first)
        hosted-engine --set-maintenance --mode=global
        hosted-engine --vm-status
        engine-upgrade-check
        dnf update ovirt\*setup\*   # update the setup packages
        engine-setup                # run it to update the engine
      
      • /!\ Connecting to individual hosts with virt-manager does not work: oVirt uses libvirt, but not in the same way plain KVM does…

    • Install

      Prerequisites

      • Check hardware compatibility: Oracle Linux Hardware Certification List (HCL)

      • A minimum of two (2) KVM hosts and no more than seven (7).

      • A fully-qualified domain name for your engine and host with forward and reverse lookup records set in the DNS.

      • At least 10 GB of free space in /var/tmp

      • A prepared shared storage (NFS or iSCSI) of at least 74 GB, to be used as a data storage domain dedicated to the engine virtual machine. iSCSI targets need to be discovered before the oVirt install.
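
      The DNS and disk-space prerequisites above can be sanity-checked with a small pre-flight sketch. The FQDN below is a placeholder you must replace; the thresholds come from the list above.

```shell
# Pre-flight checks; ENGINE_FQDN is a placeholder, not a real host.
ENGINE_FQDN="engine.example.com"

# Forward lookup (and reverse, when the forward one resolves)
if ip=$(getent hosts "${ENGINE_FQDN}" | awk '{print $1; exit}') && [ -n "${ip}" ]; then
  getent hosts "${ip}"    # the reverse record should name the engine
else
  echo "no forward record for ${ENGINE_FQDN}"
fi

# At least 10 GB free in /var/tmp
avail=$(df -BG --output=avail /var/tmp | tail -1 | tr -dc '0-9')
[ "${avail:-0}" -ge 10 ] && echo "/var/tmp OK (${avail}G free)" || echo "/var/tmp below 10G"
```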
