Docs

πŸ‘Ύ Pypi Repository
PyPI repository for an air-gapped environment. Let's take the Python dependencies of Netbox as an example:

```shell
# Tools needed
dnf install -y python3.11
pip install --upgrade pip setuptools python-pypi-mirror twine

# Init mirror
python3.11 -m venv mirror
mkdir download

# Get the list of Python packages needed
curl raw.githubusercontent.com/netbox-community/netbox/v3.7.3/requirements.txt -o requirements.txt
echo pip >> requirements.txt
echo setuptools >> requirements.txt
echo uwsgi >> requirements.txt

# Make sure the repository CA is installed
curl http://pki.server/pki/cacerts/ISSUING_CA.pem -o /etc/pki/ca-trust/source/anchors/issuing.crt
curl http://pki.server/pki/cacerts/ROOT_CA.pem -o /etc/pki/ca-trust/source/anchors/root.crt
update-ca-trust

source mirror/bin/activate
pypi-mirror download -b -d download -r requirements.txt
twine upload --repository-url https://nexus3.server/repository/internal-pypi/ download/*.whl --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
twine upload --repository-url https://nexus3.server/repository/internal-pypi/ download/*.tar.gz --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```

Then on the target host, inside `/etc/pip.conf`:
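The original notes stop before the pip.conf content. A minimal sketch of what it could contain, assuming the Nexus hostname, the `internal-pypi` repository name, and the CA bundle path from the upload commands above (written to /tmp here for illustration; on the real host it would go to /etc/pip.conf):

```shell
# Hypothetical pip.conf pointing clients at the internal Nexus mirror.
# Hostname, repo name and CA path come from the commands above; the rest is assumption.
cat > /tmp/pip.conf << 'EOF'
[global]
index-url = https://nexus3.server/repository/internal-pypi/simple
cert = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
EOF
cat /tmp/pip.conf
```

With this file in place, a plain `pip install <package>` on the target host resolves against the internal mirror instead of pypi.org.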
πŸ“ Storage
πŸ“ Storage
General concern: if you want to move VMs to another Storage Domain, you need to copy their template to it as well!

Remove a disk:

```shell
# If RHV no longer uses a disk, it should appear empty in lsblk:
lsblk -a
sdf                                   8:80   0   4T 0 disk
└─36001405893b456536be4d67a7f6716e3 253:38   0   4T 0 mpath
sdg                                   8:96   0   4T 0 disk
└─36001405893b456536be4d67a7f6716e3 253:38   0   4T 0 mpath
sdh                                   8:112  0   4T 0 disk
└─36001405893b456536be4d67a7f6716e3 253:38   0   4T 0 mpath
sdi                                   8:128  0      0 disk
└─360014052ab23b1cee074fe38059d7c94 253:39   0 100G 0 mpath
sdj                                   8:144  0      0 disk
└─360014052ab23b1cee074fe38059d7c94 253:39   0 100G 0 mpath
sdk                                   8:160  0      0 disk
└─360014052ab23b1cee074fe38059d7c94 253:39   0 100G 0 mpath

# Find all disks belonging to a LUN ID
LUN_ID="360014054ce7e566a01d44c1a4758b092"
list_disk=$(dmsetup deps -o devname ${LUN_ID} | cut -f 2 | cut -c 3- | tr -d "()" | tr " " "\n")
echo ${list_disk}

# Remove from multipath
multipath -f "${LUN_ID}"

# Remove the disks
for i in ${list_disk}; do echo ${i}; blockdev --flushbufs /dev/${i}; echo 1 > /sys/block/${i}/device/delete; done

# You can check which disk maps to which LUN on the Ceph side
ls -l /dev/disk/by-*
```

NFS for OLVM/oVirt

Since oVirt needs shared storage, we can create a local NFS export to work around this point when no storage array is available.
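A minimal sketch of such a local NFS export usable as an oVirt storage domain. The export path and options here are assumptions, not from the original notes; the export line is written to a temp file for illustration instead of /etc/exports:

```shell
# oVirt expects the export to be owned by vdsm:kvm (UID/GID 36:36).
mkdir -p /tmp/exports/data
# On the real host: chown 36:36 /exports/data

# Squash anonymous access to vdsm:kvm so the hosts can write
export_line='/exports/data *(rw,sync,no_subtree_check,anonuid=36,anongid=36)'
echo "$export_line" > /tmp/exports.fragment
cat /tmp/exports.fragment

# On the real host: append the line to /etc/exports, then:
#   exportfs -ra && systemctl enable --now nfs-server
```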
πŸ“¦ Archive
Tar, the Β« tape archiver Β», preserves file permissions and ownership.

The basics:

```shell
# Archive
tar cvf my_archive.tar <file1> <file2> </some/dir/>

# Archive and compress with zstd everything in /target/dir
tar -I zstd -vcf archive.tar.zstd -C /target/dir .

# Extract
tar xvf my_archive.tar

# Extract into a target dir
tar -zxvf new.tar.gz -C /target/dir
```

Other useful options:

- t : list the archive's content.
- T : read the list of files to archive from a file.
- P : preserve absolute paths (useful to back up /etc).
- X : read exclude patterns from a file.
- z : gzip compression.
- j : bzip2 compression.
- J : xz (LZMA) compression.
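The options above combine like this; a small round trip on throwaway files under /tmp:

```shell
# Create something to archive
mkdir -p /tmp/tardemo/etc
echo hello > /tmp/tardemo/etc/motd

# c + z: create a gzip-compressed archive of /tmp/tardemo's contents
tar -zcf /tmp/demo.tar.gz -C /tmp/tardemo etc

# t + z: list the archive's content without extracting
tar -ztf /tmp/demo.tar.gz

# x + z: extract into another directory
mkdir -p /tmp/tarout
tar -zxf /tmp/demo.tar.gz -C /tmp/tarout
cat /tmp/tarout/etc/motd   # β†’ hello
```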
πŸ”’ Vault on k8s
Some time ago, I wrote a small shell script to handle Vault on a Kubernetes cluster. Kept here for documentation purposes.

Install Vault with helm:

```shell
#!/bin/bash

## Variables
DIRNAME=$(dirname $0)
DEFAULT_VALUE="vault/values-override.yaml"
NewAdminPasswd="PASSWORD"
PRIVATE_REGISTRY_USER="registry-admin"
PRIVATE_REGISTRY_PASSWORD="PASSWORD"
PRIVATE_REGISTRY_ADDRESS="registry.example.com"
DOMAIN="example.com"
INGRESS="vault.${DOMAIN}"

if [ -z ${CM_NS+x} ]; then
  CM_NS='your-namespace'
fi

if [ -z ${1+x} ]; then
  VALUES_FILE="${DIRNAME}/${DEFAULT_VALUE}"
  echo -e "\n[INFO] Using default values file '${DEFAULT_VALUE}'"
else
  if [ -f $1 ]; then
    echo -e "\n[INFO] Using values file $1"
    VALUES_FILE=$1
  else
    echo -e "\n[ERROR] No such file: $1"
    exit 1
  fi
fi

## Functions
function checkComponentsInstall() {
  componentsArray=("kubectl" "helm")
  for i in "${componentsArray[@]}"; do
    command -v "${i}" >/dev/null 2>&1 ||
      { echo "${i} is required, but it's not installed. Aborting." >&2; exit 1; }
  done
}

function createSecret() {
kubectl get secret -n ${CM_NS} registry-pull-secret --no-headers 2> /dev/null \
|| \
kubectl create secret docker-registry -n ${CM_NS} registry-pull-secret \
  --docker-server=${PRIVATE_REGISTRY_ADDRESS} \
  --docker-username=${PRIVATE_REGISTRY_USER} \
  --docker-password=${PRIVATE_REGISTRY_PASSWORD}
}

function installWithHelm() {
helm dep update ${DIRNAME}/helm

helm upgrade --install vault ${DIRNAME}/helm \
--namespace=${CM_NS} --create-namespace \
--set global.imagePullSecrets[0]=registry-pull-secret \
--set global.image.repository=${PRIVATE_REGISTRY_ADDRESS}/hashicorp/vault-k8s \
--set global.agentImage.repository=${PRIVATE_REGISTRY_ADDRESS}/hashicorp/vault \
--set ingress.hosts[0]=${INGRESS} \
--set ingress.enabled=true \
--set global.leaderElection.namespace=${CM_NS}

echo -e "\n[INFO] sleep 30s" && sleep 30
}

checkComponentsInstall
createSecret
installWithHelm
```

Init Vault on Kubernetes: allow the local Kubernetes cluster to create and reach secrets on the Vault.
πŸ”— Dependencies
Package with pip3:

```shell
pip3 freeze > requirements.txt   # or filter a single package: pip3 freeze | grep netaddr > requirements.txt
pip3 download -r requirements.txt -d wheel
mv requirements.txt wheel
tar -zcf wheelhouse.tar.gz wheel
tar -zxf wheelhouse.tar.gz
pip3 install -r wheel/requirements.txt --no-index --find-links wheel
```

Package with Poetry:

```shell
curl -sSL https://install.python-poetry.org | python3 -
poetry new rp-poetry
poetry add ansible
poetry add poetry
poetry add netaddr
poetry add kubernetes
poetry add jsonpatch
poetry add `cat ~/.ansible/collections/ansible_collections/kubernetes/core/requirements.txt`

poetry build

pip3 install dist/rp_poetry-0.1.0-py3-none-any.whl

poetry export --without-hashes -f requirements.txt -o requirements.txt
```

Push to Nexus:

```shell
poetry config repositories.test http://localhost
poetry publish -r test
```

Images Builder:

```shell
podman login registry.redhat.io
podman pull registry.redhat.io/ansible-automation-platform-22/ansible-python-base-rhel8:1.0.0-230

pyenv local 3.9.13
python -m pip install poetry
poetry init
poetry add ansible-builder
```
πŸ”± K3S
Specific to RHEL:

```shell
# Create a trusted zone for the two internal networks
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 # pods
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 # services
sudo firewall-cmd --reload
sudo firewall-cmd --list-all-zones

# On the master
sudo rm -f /var/lib/cni/networks/cbr0/lock
sudo /usr/local/bin/k3s-killall.sh
sudo systemctl restart k3s
sudo systemctl status k3s

# On a worker
sudo rm -f /var/lib/cni/networks/cbr0/lock
sudo /usr/local/bin/k3s-killall.sh
sudo systemctl restart k3s-agent
sudo systemctl status k3s-agent
```

Check certificates:

```shell
# Get the CA from the K3s master
openssl s_client -connect localhost:6443 -showcerts < /dev/null 2>&1 | openssl x509 -noout -enddate
openssl s_client -showcerts -connect 193.168.51.103:6443 < /dev/null 2>/dev/null | openssl x509 -outform PEM
openssl s_client -showcerts -connect 193.168.51.103:6443 < /dev/null 2>/dev/null | openssl x509 -outform PEM | base64 | tr -d '\n'

# Check the end date:
for i in `ls /var/lib/rancher/k3s/server/tls/*.crt`; do echo $i; openssl x509 -enddate -noout -in $i; done

# More efficient:
cd /var/lib/rancher/k3s/server/tls/
for crt in *.crt; do printf '%s: %s\n' "$(date --date="$(openssl x509 -enddate -noout -in "$crt"|cut -d= -f 2)" --iso-8601)" "$crt"; done | sort

# Check the CA issuer
for i in $(find . -maxdepth 1 -type f -name "*.crt"); do openssl x509 -in ${i} -noout -issuer; done
```

General checks for RKE2/K3S: nice gist to troubleshoot etcd (link)
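The `-enddate` pattern above can be tried without a K3s host by generating a throwaway self-signed cert (the /tmp paths and subject are just for the demo):

```shell
# Generate a short-lived self-signed cert
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo" 2>/dev/null

# Same check as in the K3s loop
openssl x509 -enddate -noout -in /tmp/demo.crt
# prints notAfter=<a date 30 days from now>

# ISO-formatted, as in the "more efficient" variant (GNU date)
date --date="$(openssl x509 -enddate -noout -in /tmp/demo.crt | cut -d= -f2)" --iso-8601
```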
πŸ˜‰ Deploy pfsense VM
Install the pfSense VM. Download the installer from the Netgate website (account required).

Make the network config. Important note: there is no need to prepare a NetworkManager config, KVM will handle the creation of the bridge. Also note that `<dns enable='no'/>` disables libvirt's built-in DNS/DHCP handling (pfSense is taking over).

```shell
cat > pfsense.xml << EOF
<network>
  <name>pfsense-router</name>
  <forward mode='nat'>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <dns enable='no'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF

sudo virsh net-define pfsense.xml
sudo virsh net-start pfsense-router
sudo virsh net-autostart pfsense-router

# Give qemu ACL
echo "allow all" | sudo tee /etc/qemu-kvm/${USER}.conf
echo "include /etc/qemu-kvm/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
sudo chown root:${USER} /etc/qemu-kvm/${USER}.conf
sudo chmod 640 /etc/qemu-kvm/${USER}.conf

# Check network
nmcli con show --active
sudo virsh net-list --all
sudo virsh net-edit pfsense-router
sudo virsh net-info pfsense-router
sudo virsh net-dhcp-leases pfsense-router
```

Create and run the pfSense VM:

```shell
# Create the pfsense vm
virt-install \
--name pfsense --ram 2048 --vcpus 2 \
--disk $HOME/pfsense/disk0.qcow2,size=12,format=qcow2 \
--cdrom $HOME/pfsense/netgate-installer-amd64.iso \
--network bridge=virbr0,model=e1000 \
--network bridge=virbr1,model=e1000 \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--osinfo freebsd14.0 \
--autostart \
--debug

virsh start pfsense
```

Create an OKD VM:

```shell
virt-install \
--name okd --ram 2048 --vcpus 2 \
--disk $HOME/okd-latest/disk0.qcow2,size=50,format=qcow2 \
--autostart \
--cdrom $HOME/okd-latest/rhcos-live.iso \
--network bridge=virbr0,model=e1000 \
--network bridge=virbr1,model=e1000 \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--osinfo detect=on,require=off \
--debug
```

```shell
sudo virt-install -n master01 \
  --description "Master01 OKD Cluster" \
  --ram=8192 \
  --cdrom "$HOME/okd-latest/rhcos-live.iso" \
  --vcpus=2 \
  --disk pool=default,bus=virtio,size=10 \
  --graphics none \
  --osinfo detect=on,require=off \
  --serial pty \
  --console pty \
  --network network=openshift4,mac=52:54:00:36:14:e5
```

```shell
sudo cp {{OKUB_INSTALL_PATH}}/rhcos-live.iso /var/lib/libvirt/images/rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso
export COREOS_INSTALLER="podman run --privileged --pull always --rm -v /dev:/dev -v /var/lib/libvirt/images:/data -w /data quay.io/coreos/coreos-installer:release"
sudo ${COREOS_INSTALLER} iso kargs modify -a "ip={{IP_MASTERS}}::{{GATEWAY}}:{{NETMASK}}:okub-sno:{{INTERFACE}}:none:{{DNS_SERVER}}" "rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso"
sudo virt-install --name="openshift-sno" \
  --vcpus=4 \
  --ram=8192 \
  --disk path=/var/lib/libvirt/images/sno-{{PRODUCT}}-{{RELEASE_VERSION}}.qcow2,bus=sata,size=120 \
  --network network=sno,model=virtio \
  --boot menu=on \
  --graphics vnc --console pty,target_type=serial --noautoconsole \
  --cpu host-passthrough \
  --osinfo detect=on,require=off \
  --cdrom /var/lib/libvirt/images/rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso
```

Check the pfSense VM:

```shell
# Checks
virsh list
virsh domifaddr pfsense
virsh domiflist pfsense

# Connect to the console
virt-viewer --domain-name pfsense
```

Delete the pfSense VM:

```shell
virsh destroy pfsense
virsh undefine pfsense --remove-all-storage

# The disk can only be deleted manually
rm -f ~/pfsense/disk0.qcow2

# Delete the network
sudo virsh net-destroy pfsense-router
sudo virsh net-undefine pfsense-router
sudo nmcli con del virbr1
sudo nmcli con del eno1
```

Create a worker:

```shell
# Generate a MAC address
date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/';echo

sudo virt-install -n worker03.ocp4.example.com \
  --description "Worker03 Machine for Openshift 4 Cluster" \
  --ram=8192 \
  --vcpus=4 \
  --os-type=Linux \
  --os-variant=rhel8.0 \
  --noreboot \
  --disk pool=default,bus=virtio,size=50 \
  --graphics none \
  --serial pty \
  --console pty \
  --pxe \
  --network bridge=openshift4,mac=52:54:00:95:d4:ed
```
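The MAC generator above always lands in QEMU/KVM's locally-administered 52:54:00 prefix; it can be sanity-checked with a regex:

```shell
# Capture the generated MAC (same pipeline as above) and validate its format
mac=$(date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/')
echo "$mac"
echo "$mac" | grep -Eq '^52:54:00(:[0-9a-f]{2}){3}$' && echo "valid MAC"
```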
😍 Install KVM
Prerequisites / install KVM on RHEL:

```shell
# Pre-check hardware virtualization support (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo
lscpu | grep Virtualization
lsmod | grep kvm

# On a RHEL 9 workstation
sudo dnf install virt-install virt-viewer -y
sudo dnf install -y libvirt
sudo dnf install virt-manager -y
sudo dnf install -y virt-top libguestfs-tools guestfs-tools
sudo gpasswd -a $USER libvirt

# Helper
sudo dnf -y install bridge-utils

# Start libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo systemctl status libvirtd
```

Install KVM on Ubuntu:

```shell
sudo apt update && sudo apt upgrade -y
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients libvirt-daemon virtinst -y
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)

# Helper
sudo apt install bridge-utils cpu-checker -y

# Start libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo systemctl status libvirtd
```

Bonus point:

```shell
sudo apt install cockpit cockpit-machines -y
sudo systemctl enable --now cockpit.socket
systemctl status cockpit.socket
```

Then manage your VMs from Cockpit at https://localhost:9090, which can be a good alternative to virt-manager.
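After either install, a quick sanity check that the client tools landed on the PATH (a small sketch; the package lists above should provide all three):

```shell
# Report which libvirt client tools are available
for cmd in virsh virt-install qemu-img; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd OK"
  else
    echo "$cmd missing"
  fi
done
```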
😏 The Basics of KVM
Basic checks:

```shell
virsh nodeinfo
```

Configure a bridge network. Important: networks are created as the root user, but VMs as the current user.

Non-permanent bridge:

```shell
sudo ip link add virbr1 type bridge
sudo ip link set eno1 up
sudo ip link set eno1 master virbr1
sudo ip address add dev virbr1 192.168.2.1/24
```

Permanent bridge:

```shell
sudo nmcli con add ifname virbr1 type bridge con-name virbr1
sudo nmcli con add type bridge-slave ifname eno1 master virbr1
sudo nmcli con modify virbr1 bridge.stp no
sudo nmcli con down eno1
sudo nmcli con up virbr1
sudo ip address add dev virbr1 192.168.123.1/24
```

KVM bridge network:

```shell
cat > hostbridge.xml << EOF
<network>
  <name>hostbridge</name>
  <forward mode='bridge'/>
  <bridge name='virbr1'/>
</network>
EOF

sudo virsh net-define hostbridge.xml
sudo virsh net-start hostbridge
sudo virsh net-autostart hostbridge
```

Give qemu ACL:

```shell
echo "allow all" | sudo tee /etc/qemu-kvm/${USER}.conf
echo "include /etc/qemu-kvm/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
sudo chown root:${USER} /etc/qemu-kvm/${USER}.conf
sudo chmod 640 /etc/qemu-kvm/${USER}.conf
```

Check network:

```shell
sudo nmcli con show --active
sudo virsh net-list --all
sudo virsh net-edit hostbridge
sudo virsh net-info hostbridge
sudo virsh net-dhcp-leases hostbridge
```

Check with a small script:

```shell
echo -e "\n##### KVM networks #####\n"
kvm_system_networks_all=$(sudo virsh net-list --all)
echo -e "Available KVM networks in qemu:///system :\n$kvm_system_networks_all"
for net in $(sudo virsh net-list --name); do
  bridge_name=$(sudo virsh net-info --network ${net} | grep Bridge | cut -d":" -f2 | sed 's/^[[:space:]]*//')
  for br in ${bridge_name}; do
    br_info=$(ip -br -c address show dev ${br} || echo "No IP address assigned to bridge ${br}")
  done
  echo -e "\n\033[1;34m${net}\033[0m has the bridge: $br_info"
done
echo -e "\n"
```

Thanks to the bridge-utils package installed earlier:

```shell
brctl show
```

Create a VM with this bridge:

```shell
virt-install \
--name pfsense --ram 2048 --vcpus 2 \
--disk $HOME/pfsense/disk0.qcow2,size=12,format=qcow2 \
--autostart \
--cdrom $HOME/pfsense/netgate-installer-amd64.iso \
--network bridge=virbr0,model=e1000 \
--network network=hostbridge,model=e1000 \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--osinfo freebsd14.0 \
--debug
```

Delete the network:

```shell
sudo virsh net-destroy hostbridge
sudo virsh net-undefine hostbridge
sudo nmcli con del virbr1
sudo nmcli con del eno1
```

Sources: Red Hat blog
πŸš€ Operator SDK
Operators come in three kinds: Go, Ansible, and Helm.

```shell
## Init an Ansible project
operator-sdk init --plugins=ansible --domain example.org --owner "Your name"

## The command above creates a structure like:
netbox-operator
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ Makefile
β”œβ”€β”€ PROJECT
β”œβ”€β”€ config
β”‚   β”œβ”€β”€ crd
β”‚   β”œβ”€β”€ default
β”‚   β”œβ”€β”€ manager
β”‚   β”œβ”€β”€ manifests
β”‚   β”œβ”€β”€ prometheus
β”‚   β”œβ”€β”€ rbac
β”‚   β”œβ”€β”€ samples
β”‚   β”œβ”€β”€ scorecard
β”‚   └── testing
β”œβ”€β”€ molecule
β”‚   β”œβ”€β”€ default
β”‚   └── kind
β”œβ”€β”€ playbooks
β”‚   └── install.yml
β”œβ”€β”€ requirements.yml
β”œβ”€β”€ roles
β”‚   └── deployment
└── watches.yaml
```

```shell
## Create the first role
operator-sdk create api --group app --version v1alpha1 --kind Deployment --generate-role
```
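For an Ansible operator, `create api` wires the new kind into `watches.yaml`, mapping the CRD to the generated role. A sketch of the entry it produces, assuming the `--domain`, `--group`, `--version` and `--kind` flags used above:

```yaml
# watches.yaml (sketch): each entry maps a CRD to an Ansible role or playbook
- version: v1alpha1
  group: app.example.org
  kind: Deployment
  role: deployment
```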
🚠 Quay.io
Deploy a Quay.io / mirror-registry in a container. Nothing original here, it is just Red Hat's documentation, but it can be useful to kickstart a registry.

Prerequisites:

- 10 GB in /home
- 15 GB in /var
- 300 GB in /srv or /opt (depending on quayRoot)
- at least 2 vCPUs
- at least 8 GB of RAM

```shell
# Packages
sudo yum install -y podman
sudo yum install -y rsync
sudo yum install -y jq

# Get the tarball
mirror="https://mirror.openshift.com/pub/openshift-v4/clients"
wget ${mirror}/mirror-registry/latest/mirror-registry.tar.gz
tar zxvf mirror-registry.tar.gz

# Get oc-mirror
curl https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest/oc-mirror.rhel9.tar.gz -O

# Basic install
sudo ./mirror-registry install \
  --quayHostname quay01.example.local \
  --quayRoot /opt

# More detailed install
sudo ./mirror-registry install \
  --quayHostname quay01.example.local \
  --quayRoot /srv \
  --quayStorage /srv/quay-storage \
  --pgStorage /srv/quay-pg \
  --sslCert tls.crt \
  --sslKey tls.key

podman login -u init \
  -p 7u2Dm68a1s3bQvz9twrh4Nel0i5EMXUB \
  quay01.example.local:8443 \
  --tls-verify=false

# By default, login credentials go in:
cat $XDG_RUNTIME_DIR/containers/auth.json

# Get the IP
sudo podman inspect --format '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' quay-app

# Uninstall
sudo ./mirror-registry uninstall -v \
  --quayRoot <example_directory_name>

# Info
curl -u init:password https://quay01.example.local:8443/v2/_catalog | jq
curl -u root:password https://<url>:<port>/v2/ocp4/openshift4/tags/list | jq

# Get an example of imageset
oc-mirror init --registry quay.example.com:8443/mirror/oc-mirror-metadata

# Get the list of Operators, channels, packages
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged --channel=stable
```

Unlock the init/admin user:

```shell
QUAY_POSTGRES=$(podman ps | grep quay-postgres | awk '{print $1}')

podman exec -it $QUAY_POSTGRES psql -d quay -c "UPDATE public.\"user\" SET invalid_login_attempts = 0 WHERE username = 'init';"
```

Source: Mirror-registry
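`oc-mirror init` writes an ImageSetConfiguration. A trimmed sketch of what one looks like, reusing the registry URL and the operator/channel queried above (the exact field values are assumptions to be adapted):

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    # Where oc-mirror keeps its metadata between runs
    imageURL: quay.example.com:8443/mirror/oc-mirror-metadata
mirror:
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
      packages:
        - name: kubevirt-hyperconverged
          channels:
            - name: stable
```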