🚩 Compare
Compare stuff

Compare two jar files:

```bash
diff -W200 -y <(unzip -vqq file1.jar | awk '{ if ($1 > 0) {printf("%s\t%s\n", $1, $8)}}' | sort -k2) \
     <(unzip -vqq file2.jar | awk '{ if ($1 > 0) {printf("%s\t%s\n", $1, $8)}}' | sort -k2)
```
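Since two classes can change while keeping the same byte size, a CRC-based variant can be safer. A sketch, assuming the default `unzip -v` column layout (column 7 = CRC-32, column 8 = entry name):

```bash
# Compare per-entry CRC-32 checksums instead of sizes
diff -W200 -y <(unzip -vqq file1.jar | awk '$1 > 0 {printf("%s\t%s\n", $7, $8)}' | sort -k2) \
     <(unzip -vqq file2.jar | awk '$1 > 0 {printf("%s\t%s\n", $7, $8)}' | sort -k2)
```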
🚩 Files
Find a process blocking a file with fuser:

```bash
fuser -m </dir or /files>   # Find process blocking/using this directory or files
fuser -cu </dir or /files>  # Same as above but add the user
fuser -kcu </dir or /files> # Kill process
fuser -v -k -HUP -i ./      # Send HUP signal to process

# Output shows <PID + letter>; here is the meaning:
# c  current directory
# e  executable being run
# f  open file (omitted in default display mode)
# F  open file for writing (omitted in default display mode)
# r  root directory
# m  mmap'ed file or shared library
```

With lsof (= list open files):

```bash
lsof +D /var/log          # Find all files blocked with the process and user
lsof -a +L1 <mountpoint>  # Process blocking a FS
lsof -c ssh -c init       # Find files open by those processes
lsof -p 1753              # Find files open by PID
lsof -u root              # Find files open by user
lsof -u ^user             # Find files open by every user except this one
kill -9 `lsof -t -u toto` # Kill a user's processes (option -t outputs only PIDs)
```

MacGyver method:

```bash
# When you have no fuser or lsof:
find /proc/*/fd -type f -links 0 -exec ls -lrt {} \;
```
🚩 Network Manager
Basic Troubleshooting

Check interfaces:

```bash
nmcli con show
NAME    UUID                                  TYPE      DEVICE
ens192  4d0087a0-740a-4356-8d9e-f58b63fd180c  ethernet  ens192
ens224  3dcb022b-62a2-4632-8b69-ab68e1901e3b  ethernet  ens224

nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
ens192  ethernet  connected  ens192
ens224  ethernet  connected  ens224
ens256  ethernet  connected  ens256
lo      loopback  unmanaged  --

# Get interface details:
nmcli connection show ens192
nmcli -p con show ens192

# Get the DNS settings of an interface
UUID=$(nmcli --get-values connection.uuid c show "cloud-init eth0")
nmcli --get-values ipv4.dns c show $UUID
```

Changing an interface name:

```bash
nmcli connection add type ethernet mac "00:50:56:80:11:ff" ifname "ens224"
nmcli connection add type ethernet mac "00:50:56:80:8a:0b" ifname "ens256"
```

Create a custom config:

```bash
nmcli con load /etc/sysconfig/network-scripts/ifcfg-ens224
nmcli con up ens192
```

Adding a virtual IP:

```bash
nmcli con mod enp1s0 +ipv4.addresses "192.168.122.11/24"
ip addr del 10.163.148.36/24 dev ens160

nmcli con reload            # before reapplying
nmcli device reapply ens224
systemctl status network.service
systemctl restart network.service
```

Add a DNS entry:

```bash
UUID=$(nmcli --get-values connection.uuid c show "cloud-init eth0")
DNS_LIST=$(nmcli --get-values ipv4.dns c show $UUID)
nmcli conn modify "$UUID" ipv4.dns "${DNS_LIST} ${DNS_IP}"

# /etc/resolv.conf is managed by systemd-resolved
sudo systemctl restart systemd-resolved
```
🎶 Samba / CIFS
Server Side

First install samba and samba-client (for debug + test).

/etc/samba/smb.conf:

```ini
[home]
workgroup = WORKGROUP   # the default group on Windows
hosts allow = ...

[shared]
browseable = yes
path = /shared
valid users = user01, @un_group_au_choix
writable = yes
passdb backend = tdbsam  # passwords are stored in the /var/lib/samba/private/passdb.tdb file
```

Test the samba config:

```bash
testparm
/usr/bin/testparm -s /etc/samba/smb.conf

smbclient -L //192.168.56.102 -U test         # list all samba shares available
smbclient //192.168.56.102/sharedrepo -U test # connect to the share
pdbedit -L                                    # list smb users (better than smbclient)
```
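The config above references `valid users` but the Samba user still has to be created, and the client side needs a mount. A hedged sketch using the host and share from the example (assumes cifs-utils on the client):

```bash
# Server: create the Samba user referenced in "valid users" (the Unix account must exist)
sudo smbpasswd -a user01

# Client: mount the share over CIFS
sudo mkdir -p /mnt/shared
sudo mount -t cifs //192.168.56.102/shared /mnt/shared -o username=user01
```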
🐻 SSHFS
SSHFS

SSHFS is used to mount a remote file system onto your own FS through an SSH connection, all with plain user rights. The advantage is that you can then manipulate the remote data with any file manager (Nautilus, Konqueror, ROX, or even the command line).

- Prerequisites: admin rights, an ethernet connection, FUSE and the sshfs package installed.
- sshfs users must belong to the fuse group.

Note: FUSE lets a regular user mount a file system himself. Normally, mounting a file system requires being administrator, or having the mount hard-coded in /etc/fstab.
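A minimal usage sketch (host and paths are placeholders):

```bash
# Mount a remote directory locally, as a regular user
mkdir -p ~/mnt/remote
sshfs user@remote-host:/var/log ~/mnt/remote

# Work on the files with any tool, then unmount
ls ~/mnt/remote
fusermount -u ~/mnt/remote
```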
🌅 UV
Install

```bash
# curl method
curl -LsSf https://astral.sh/uv/install.sh | sh

# pip method
pip install uv
```

Quick example

```bash
pyenv install 3.12
pyenv local 3.12
python -m venv .venv
source .venv/bin/activate
pip install pandas
python

# equivalent in uv
uv run --python 3.12 --with pandas python
```

Useful

```bash
uv python list --only-installed
uv python install 3.12
uv venv /path/to/environment --python 3.12
uv pip install django
uv pip compile requirements.in -o requirements.txt

uv init myproject
uv sync
uv run manage.py runserver
```

Run as script

Put this before the import statements:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "ffmpeg-normalize",
# ]
# ///
```

Then it can be run with `uv run sync-flickr-dates.py`. uv will create a Python 3.12 venv for us. For me this is in ~/.cache/uv (which you can find via `uv cache dir`).
🎡 Helm
Administration

See what is currently installed:

```bash
helm list -A
NAME    NAMESPACE  REVISION  UPDATED                                 STATUS    CHART         APP VERSION
nesux3  default    1         2022-08-12 20:01:16.0982324 +0200 CEST  deployed  nexus3-1.0.6  3.37.3
```

Install/Uninstall

```bash
helm status nesux3
helm uninstall nesux3
helm install nexus3
helm history nexus3

# works even if already installed
helm upgrade --install ingress-nginx ${DIR}/helm/ingress-nginx \
    --namespace=ingress-nginx \
    --create-namespace \
    -f ${DIR}/helm/ingress-values.yml

# Make helm unsee an app (it does not delete the app)
kubectl delete secret -l owner=helm,name=argo-cd
```

Handle Helm repos and charts

```bash
# Handle repos
helm repo list
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Pretty useful to configure
helm show values elastic/eck-operator
helm show values grafana/grafana --version 8.5.1

# See the different versions available
helm search repo hashicorp/vault
helm search repo hashicorp/vault -l

# Download a chart
helm fetch ingress/ingress-nginx --untar
```

Tips

List all images needed in a helm chart (but not the ones without a tag):

```bash
helm template -g longhorn-1.4.1.tgz | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo ""
```
🎲 Kubectl
Connection to a k8s cluster

Kubeconfig

Define KUBECONFIG in your profile:

```bash
# Default one
KUBECONFIG=~/.kube/config

# Several contexts - to keep them split
KUBECONFIG=~/.kube/k3sup-lab:~/.kube/k3s-dev

# Or it can be specified per command
kubectl get pods --kubeconfig=admin-kube-config
```

View and set:

```bash
kubectl config view
kubectl config current-context

kubectl config set-context \
  dev-context \
  --namespace=dev-namespace \
  --cluster=docker-desktop \
  --user=dev-user

kubectl config use-context lab
```

Switch context:

```bash
# Set namespace
kubectl config set-context --current --namespace=nexus3
kubectl config get-contexts
```

Kubecm

The problem with kubeconfigs is that everything ends up nested in a single kubeconfig, which becomes hard to manage in the long term. The best way to install kubecm is with Arkade: `arkade get kubecm` - see arkade.
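A hedged sketch of typical kubecm usage (check `kubecm --help` for the exact flags on your version):

```bash
# Merge an extra kubeconfig into ~/.kube/config
kubecm add -f ~/.kube/k3s-dev

# List the contexts it now manages
kubecm list

# Interactively switch context
kubecm switch
```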
🏭 Docker
See also the documentation about Podman and Docker.

How to use a docker registry:

```bash
# List index catalog
curl https://registry.k3s.example.com/v2/_catalog | jq

# List tags available for an image
curl https://registry.k3s.example.com/v2/myhaproxy/tags/list

# List index catalog - with user/password
curl https://registry-admin:<PWD>@registry.k3s.example.com/v2/_catalog | jq

# List index catalog - when you need to specify the CA
curl -u user:password https://<url>:<port>/v2/_catalog --cacert ca.crt | jq

# List tags - for OCP
curl -u user:password https://<url>:<port>/v2/ocp4/openshift4/tags/list | jq

# Login to the registry with podman
podman login -u registry-admin -p <PWD> registry.k3s.example.com

# Push images into the registry
skopeo copy "--dest-creds=registry-admin:<PWD>" docker://docker.io/goharbor/harbor-core:v2.6.1 docker://registry.k3s.example.com/goharbor/harbor-core:v2.6.1
```

Install a local private docker registry

Change the Docker daemon config to allow insecure connections to your IP:

```bash
ip a
sudo vi /etc/docker/daemon.json
```

```json
{
  "insecure-registries": ["192.168.1.11:5000"]
}
```

Check the docker config:

```bash
sudo systemctl restart docker
docker info
```
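The daemon config above trusts the registry, but the registry itself still has to run. A minimal sketch with the official registry image (port and names are the usual defaults; the IP matches the daemon.json example):

```bash
# Run a local registry container on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag and push an image to it to check it works
docker tag hello-world 192.168.1.11:5000/hello-world
docker push 192.168.1.11:5000/hello-world
curl http://192.168.1.11:5000/v2/_catalog | jq
```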
🐋 Azure
Create a small infra for kubernetes

```bash
# On your Azure CLI
az --version   # Version expected 2.1.0 or higher

az group delete --name kubernetes -y

az group create -n kubernetes -l westeurope

az network vnet create -g kubernetes \
  -n kubernetes-vnet \
  --address-prefix 10.240.0.0/24 \
  --subnet-name kubernetes-subnet

az network nsg create -g kubernetes -n kubernetes-nsg

az network vnet subnet update -g kubernetes \
  -n kubernetes-subnet \
  --vnet-name kubernetes-vnet \
  --network-security-group kubernetes-nsg

az network nsg rule create -g kubernetes \
  -n kubernetes-allow-ssh \
  --access allow \
  --destination-address-prefix '*' \
  --destination-port-range 22 \
  --direction inbound \
  --nsg-name kubernetes-nsg \
  --protocol tcp \
  --source-address-prefix '*' \
  --source-port-range '*' \
  --priority 1000

az network nsg rule create -g kubernetes \
  -n kubernetes-allow-api-server \
  --access allow \
  --destination-address-prefix '*' \
  --destination-port-range 6443 \
  --direction inbound \
  --nsg-name kubernetes-nsg \
  --protocol tcp \
  --source-address-prefix '*' \
  --source-port-range '*' \
  --priority 1001

az network nsg rule list -g kubernetes --nsg-name kubernetes-nsg --query "[].{Name:name, Direction:direction, Priority:priority, Port:destinationPortRange}" -o table

az network lb create -g kubernetes --sku Standard \
  -n kubernetes-lb \
  --backend-pool-name kubernetes-lb-pool \
  --public-ip-address kubernetes-pip \
  --public-ip-address-allocation static

az network public-ip list --query="[?name=='kubernetes-pip'].{ResourceGroup:resourceGroup, Region:location,Allocation:publicIpAllocationMethod,IP:ipAddress}" -o table

# For Ubuntu
# az vm image list --location westeurope --publisher Canonical --offer UbuntuServer --sku 18.04-LTS --all -o table
# For Redhat
# az vm image list --location westeurope --publisher RedHat --offer RHEL --sku 8 --all -o table
# => chosen one: 8-lvm-gen2
WHICHOS="RedHat:RHEL:8-lvm-gen2:8.5.2022032206"

# K8s Controller
az vm availability-set create -g kubernetes -n controller-as

for i in 0 1 2; do
    echo "[Controller ${i}] Creating public IP..."
    az network public-ip create -n controller-${i}-pip -g kubernetes --sku Standard > /dev/null
    echo "[Controller ${i}] Creating NIC..."
    az network nic create -g kubernetes \
      -n controller-${i}-nic \
      --private-ip-address 10.240.0.1${i} \
      --public-ip-address controller-${i}-pip \
      --vnet kubernetes-vnet \
      --subnet kubernetes-subnet \
      --ip-forwarding \
      --lb-name kubernetes-lb \
      --lb-address-pools kubernetes-lb-pool > /dev/null

    echo "[Controller ${i}] Creating VM..."
    az vm create -g kubernetes \
      -n controller-${i} \
      --image ${WHICHOS} \
      --nics controller-${i}-nic \
      --availability-set controller-as \
      --nsg '' \
      --admin-username 'kuberoot' \
      --admin-password 'Changeme!' \
      --size Standard_B2s \
      --storage-sku StandardSSD_LRS
      #--generate-ssh-keys > /dev/null
done

# K8s Worker
az vm availability-set create -g kubernetes -n worker-as
for i in 0 1; do
    echo "[Worker ${i}] Creating public IP..."
    az network public-ip create -n worker-${i}-pip -g kubernetes --sku Standard > /dev/null
    echo "[Worker ${i}] Creating NIC..."
    az network nic create -g kubernetes \
      -n worker-${i}-nic \
      --private-ip-address 10.240.0.2${i} \
      --public-ip-address worker-${i}-pip \
      --vnet kubernetes-vnet \
      --subnet kubernetes-subnet \
      --ip-forwarding > /dev/null
    echo "[Worker ${i}] Creating VM..."
    az vm create -g kubernetes \
      -n worker-${i} \
      --image ${WHICHOS} \
      --nics worker-${i}-nic \
      --tags pod-cidr=10.200.${i}.0/24 \
      --availability-set worker-as \
      --nsg '' \
      --generate-ssh-keys \
      --size Standard_B2s \
      --storage-sku StandardSSD_LRS \
      --admin-username 'kuberoot' \
      --admin-password 'Changeme!' > /dev/null
done

# Summarize
az vm list -d -g kubernetes -o table
```
🐋 Digital Ocean
Install Client

```bash
# most simple
arkade get doctl

# normal way
curl -OL https://github.com/digitalocean/doctl/releases/download/v1.104.0/doctl-1.104.0-linux-amd64.tar.gz
tar xf doctl-1.104.0-linux-amd64.tar.gz
mv doctl /usr/local/bin

# Auto-completion ZSH
doctl completion zsh > $ZSH/completions/_doctl
```

Basics

Find a possible droplet:

```bash
doctl compute region list
doctl compute size list
doctl compute image list-distribution
doctl compute image list --public
```

Auth

```bash
doctl auth init --context test
doctl auth list
doctl auth switch --context test2
```

Create Project

```bash
doctl projects create --name rkub --environment staging --purpose "stage rkub with github workflows"
```

Create VM

```bash
doctl compute ssh-key list
doctl compute droplet create test --region fra1 --image rockylinux-9-x64 --size s-1vcpu-1gb --ssh-keys <fingerprint>
doctl compute droplet delete test -f
```

With Terraform

```bash
export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
doctl auth init --context rkub

# inside a dir with a tf file
terraform init
terraform validate
terraform plan -var "do_token=${DO_PAT}"
terraform apply -var "do_token=${DO_PAT}" -auto-approve

# clean apply
terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
terraform apply infra.tfplan

# Control
terraform show terraform.tfstate

# Destroy
terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
terraform apply terraform.tfplan
```

Connect to a droplet with the private ssh key:

```bash
ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key
```
🐋 KVM
Install KVM on RHEL

```bash
# Pre-check hardware for an Intel CPU
grep -e 'vmx' /proc/cpuinfo
lscpu | grep Virtualization
lsmod | grep kvm

# on RHEL9 Workstation
sudo dnf install virt-install virt-viewer -y
sudo dnf install -y libvirt
sudo dnf install virt-manager -y
sudo dnf install -y virt-top libguestfs-tools guestfs-tools
sudo gpasswd -a $USER libvirt

# Helper
sudo dnf -y install bridge-utils

# Start libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo systemctl status libvirtd
```

Basic checks

```bash
virsh nodeinfo
```

Config a bridge network

Important: networks are created with the root user but VMs with the current user. A sketch follows below.
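A hedged sketch of a bridge setup with nmcli (interface names are examples; adapt eno1 to your NIC):

```bash
# Create the bridge and enslave the physical NIC (run as root)
sudo nmcli con add type bridge con-name br0 ifname br0
sudo nmcli con add type ethernet con-name br0-port1 ifname eno1 master br0
sudo nmcli con up br0

# Point new VMs at the bridge, e.g. with virt-install:
#   --network bridge=br0
```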
🐍 Cobra
A command builder for Go.

Useful installation

```bash
# Install GO
GO_VERSION="1.21.0"
wget https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go${GO_VERSION}.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# Install Cobra - CLI builder
go install github.com/spf13/cobra-cli@latest
sudo cp -pr ./go /usr/local/.
```

Init

```bash
mkdir -p ${project} && cd ${project}
go mod init ${project}
cobra-cli init
go build
go install
cobra-cli add timezone
```
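After `cobra-cli add timezone`, the generated subcommand is wired into the root command; a quick smoke test, assuming the module name doubles as the binary name:

```bash
go build
./${project} --help      # root command, with the new subcommand listed
./${project} timezone    # runs the stub generated in cmd/timezone.go
```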
🐙 Network troubleshooting
Troubleshoot DNS

vi dns.yml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
```

Deploy dnsutils:

```bash
k apply -f dns.yml
pod/dnsutils created

kubectl get pods dnsutils
NAME       READY   STATUS    RESTARTS   AGE
dnsutils   1/1     Running   0          36s
```

Troubleshoot with dnsutils:

```bash
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1

kubectl exec -ti dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local psflab.local
nameserver 10.43.0.10
options ndots:5

kubectl get endpoints kube-dns --namespace=kube-system
NAME       ENDPOINTS                                  AGE
kube-dns   10.42.0.6:53,10.42.0.6:53,10.42.0.6:9153   5d1h

kubectl get svc kube-dns --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   5d1h
```

CURL

```bash
cat << EOF > curl.yml
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

k apply -f curl.yml

# Test the DNS
kubectl exec -i -t curl -- curl -v telnet://10.43.0.10:53
kubectl exec -i -t curl -- curl -v telnet://kube-dns.kube-system.svc.cluster.local:53
kubectl exec -i -t curl -- nslookup kube-dns.kube-system.svc.cluster.local

curl -k -I --resolve subdomain.domain.com:443:52.165.230.62 https://subdomain.domain.com/
```
🏠 OKD
Install

```bash
# Get latest version
OKD_VERSION=$(curl -s https://api.github.com/repos/okd-project/okd/releases/latest | jq -r .tag_name)

# Download
curl -L https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-install-linux-${OKD_VERSION}.tar.gz -O
curl -L https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-client-linux-${OKD_VERSION}.tar.gz -O

# Download FCOS iso
./openshift-install coreos print-stream-json | grep '\.iso[^.]'
./openshift-install coreos print-stream-json | jq .architectures.x86_64.artifacts.metal.formats.iso.disk.location
./openshift-install coreos print-stream-json | jq .architectures.x86_64.artifacts.vmware.formats.ova.disk.location
./openshift-install coreos print-stream-json | jq '.architectures.x86_64.artifacts.digitalocean.formats["qcow2.gz"].disk.location'
./openshift-install coreos print-stream-json | jq '.architectures.x86_64.artifacts.qemu.formats["qcow2.gz"].disk.location'
./openshift-install coreos print-stream-json | jq '.architectures.x86_64.artifacts.metal.formats.pxe | .. | .location? // empty'
```

Install bare-metal

Official doc
🏠 OpenShift
OC Mirror

Need at least one Operator:

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4
storageConfig:
  registry:
    imageURL: quay.example.com:8443/mirror/oc-mirror-metadata
    skipTLS: false
mirror:
  platform:
    architectures:
    - "amd64"
    channels:
    - name: stable-4.14
      type: ocp
      shortestPath: true
    graph: true
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
    packages:
    - name: kubevirt-hyperconverged
      channels:
      - name: 'stable'
    - name: serverless-operator
      channels:
      - name: 'stable'
  additionalImages:
  - name: registry.redhat.io/ubi9/ubi:latest
  helm: {}
```

```bash
# install oc-mirror:
curl https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest/oc-mirror.rhel9.tar.gz -O

# Get an example of imageset
oc-mirror init --registry quay.example.com:8443/mirror/oc-mirror-metadata

# Find operators in the list of operators, channels, packages
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged --channel=stable

# mirror with a jumphost which has online access
oc-mirror --config=imageset-config.yaml docker://quay.example.com:8443

# mirror for airgap
oc-mirror --config=imageSetConfig.yaml file://tmp/download
oc-mirror --from=/tmp/upload/ docker://quay.example.com/ocp/operators

# Refresh OperatorHub
oc get pod -n openshift-marketplace

# Get the index pod and delete it to refresh
oc delete pod cs-redhat-operator-index-m2k2n -n openshift-marketplace
```

Install

```bash
## Get the coreOS which is going to be installed
openshift-install coreos print-stream-json | grep '\.iso[^.]'

openshift-install create install-config

openshift-install create manifests

openshift-install create ignition-configs

openshift-install create cluster --dir . --log-level=info
openshift-install destroy cluster --log-level=info
```

For baremetal, make an ISO boot USB:

```bash
dd if=$HOME/ocp-latest/rhcos-live.iso of=/dev/sdb bs=1024k status=progress
```

Add node

```bash
export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
  "api_vip_dnsname": "<api_vip>",
  "openshift_cluster_id": $openshift_cluster_id,
  "name": "<openshift_cluster_name>"
}')
```

Platform in install-config

Get all info on how to config:

```bash
openshift-install explain installconfig.platform.libvirt
```

```yaml
## none
platform:
  none: {}

## baremetal - use ipmi to provision baremetal
platform:
  baremetal:
    apiVIP: 192.168.111.5
    ingressVIP: 192.168.111.7
    provisioningNetwork: "Managed"
    provisioningNetworkCIDR: 172.22.0.0/24
    provisioningNetworkInterface: eno1
    clusterProvisioningIP: 172.22.0.2
    bootstrapProvisioningIP: 172.22.0.3
    hosts:
    - name: master-0
      role: master
      bmc:
        address: ipmi://192.168.111.1
        username: admin
        password: password
      bootMACAddress: 52:54:00:a1:9c:ae
      hardwareProfile: default
    - name: master-1
      role: master
      bmc:
        address: ipmi://192.168.111.2
        username: admin
        password: password
      bootMACAddress: 52:54:00:a1:9c:af
      hardwareProfile: default
    - name: master-2
      role: master
      bmc:
        address: ipmi://192.168.111.3
        username: admin
        password: password
      bootMACAddress: 52:54:00:a1:9c:b0
      hardwareProfile: default

## vsphere - old syntax, deprecated form (new one in 4.15 with "failure domains")
vsphere:
  vcenter:
  username:
  password:
  datacenter:
  defaultDatastore:
  apiVIPs:
  - x.x.x.x
  ingressVIPs:
  - x.x.x.x

## new syntax
platform:
  vsphere:
    apiVIPs:
    - x.x.x.x
    datacenter: xxxxxxxxxxxx_datacenter
    defaultDatastore: /xxxxxxxxxxxx_datacenter/datastore/Shared Storages/ssd-001602
    failureDomains:
    - name: CNV4
      region: fr
      server: xxxxxxxxxxxx.ovh.com
      topology:
        computeCluster: /xxxxxxxxxxxx_datacenter/host/Management Zone Cluster
        datacenter: xxxxxxxxxxxx_datacenter
        datastore: /xxxxxxxxxxxx_datacenter/datastore/Shared Storages/ssd-001602
        networks:
        - vds_mgmt
      zone: dc
    ingressVIPs:
    - x.x.x.x
    password: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    username: admin
    vCenter: xxxxxxxxxxx.ovh.com
    vcenters:
    - datacenters:
      - xxxxxxxxxx_datacenter
      password: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      port: 443
      server: xxxxxxx.ovh.com
      user: admin
```

Utils

```bash
# Get Cluster ID
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}'

# Get Nodes which are Ready
oc get nodes --output jsonpath='{range .items[?(@.status.conditions[-1].type=="Ready")]}{.metadata.name} {.status.conditions[-1].type}{"\n"}{end}'

# Get images from all pods in a namespace
oc get pods -n <namespace> --output jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}'
```

Set OperatorHub in airgap

```bash
oc get catalogsources -n openshift-marketplace
```
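In a disconnected cluster you typically also disable the default online catalog sources; a hedged sketch of the documented patch (verify against the docs for your OCP version):

```bash
# Disable the default (online) OperatorHub sources
oc patch OperatorHub cluster --type json \
  -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
```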
🐣 Bash Functions for k8s
A list of nice findings for Kubernetes.

List all images in a Helm chart:

```bash
images=$(helm template -g $helm | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo "")
```

Upload the images listed in a Helm chart:

```bash
load_helm_images(){
    # look in helm charts
    for helm in $(ls ../../roles/*/files/helm/*.tgz); do
        printf "\e[1;34m[INFO]\e[m Look for images in ${helm}...\n"

        images=$(helm template -g $helm | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo "")

        dir=$( dirname $helm | xargs dirname )

        echo "####"

        if [ "$images" != "" ]; then
            printf "\e[1;34m[INFO]\e[m Images found in the helm charts: ${images}\n"
            printf "\e[1;34m[INFO]\e[m Create ${dir}/images images...\n"

            mkdir -p ${dir}/images

            while IFS= read -r image_name; do
                archive_name=$(basename -a $(awk -F : '{print $1}' <<< ${image_name}));
                printf "\e[1;34m[INFO]\e[m Pull images...\n"
                podman pull ${image_name};
                printf "\e[1;34m[INFO]\e[m Push ${image_name} in ${dir}/images/${archive_name}\n"
                podman save ${image_name} --format oci-archive -o ${dir}/images/${archive_name};
            done <<< ${images}
        else
            printf "\e[1;34m[INFO]\e[m No images found in the helm chart: $helm\n"
        fi
    done
}
```

Check components version:

```bash
function checkComponentsInstall() {
    componentsArray=("kubectl" "helm")
    for i in "${componentsArray[@]}"; do
        command -v "${i}" >/dev/null 2>&1 ||
            { echo "[ERROR] ${i} is required, but it's not installed. Aborting." >&2; exit 1; }
    done
}
```

Version comparator:

```bash
function checkK8sVersion() {
    currentK8sVersion=$(kubectl version --short | grep "Server Version" | awk '{gsub(/v/,$5)}1 {print $3}')
    testVersionComparator 1.20 "$currentK8sVersion" '<'
    if [[ $k8sVersion == "ok" ]]; then
        echo "current kubernetes version is ok"
    else
        minikube start --kubernetes-version=v1.22.4;
    fi
}

# the comparator based on https://stackoverflow.com/a/4025065
versionComparator () {
    if [[ $1 == $2 ]]
    then
        return 0
    fi
    local IFS=.
    local i ver1=($1) ver2=($2)
    # fill empty fields in ver1 with zeros
    for ((i=${#ver1[@]}; i<${#ver2[@]}; i++))
    do
        ver1[i]=0
    done
    for ((i=0; i<${#ver1[@]}; i++))
    do
        if [[ -z ${ver2[i]} ]]
        then
            # fill empty fields in ver2 with zeros
            ver2[i]=0
        fi
        if ((10#${ver1[i]} > 10#${ver2[i]}))
        then
            return 1
        fi
        if ((10#${ver1[i]} < 10#${ver2[i]}))
        then
            return 2
        fi
    done
    return 0
}

testVersionComparator () {
    versionComparator $1 $2
    case $? in
        0) op='=';;
        1) op='>';;
        2) op='<';;
    esac
    if [[ $op != "$3" ]]
    then
        echo "Kubernetes test fail: Expected '$3', Actual '$op', Arg1 '$1', Arg2 '$2'"
        k8sVersion="not ok"
    else
        echo "Kubernetes test pass: '$1 $op $2'"
        k8sVersion="ok"
    fi
}
```
🐦 Awk
The Basics

awk treats each line as a table; by default, spaces separate the columns. The general syntax is `awk 'search {action}' file_to_parse`.

```bash
# Print lines where column $4 is higher than 75000 (print is the default action)
df | awk '$4 > 75000'

# Same, with the print action made explicit
df | awk '$4 > 75000 {print $0}'
```

But if you search for a string, the pattern needs to be enclosed in slashes: /search/. When you print, $0 represents the whole line, $1 the first column, $2 the second column, etc.
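A minimal sketch of a string search combined with an action (the patterns and files are arbitrary examples):

```bash
# Print filesystem and available space for lines containing "/dev/sd"
df | awk '/\/dev\/sd/ {print $1, $4}'

# Change the column separator with -F, e.g. list the users from /etc/passwd
awk -F: '{print $1}' /etc/passwd
```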
🐬 Podman
Description

- Buildah: builds Open Container Initiative (OCI) format or Docker format container images without the need for a daemon.
- Podman: directly runs container images without a daemon, and can pull container images from a container registry if they are not available locally.
- Skopeo: pulls and pushes containers to registries; moving containers between registries is supported. Container image inspection is also offered, and some introspection can be performed without first downloading the container itself.
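A hedged one-liner per tool to make the split concrete (image names and registry are examples):

```bash
# Buildah: build an image from a Containerfile/Dockerfile, no daemon involved
buildah bud -t myapp:latest .

# Podman: run the image directly
podman run --rm -it myapp:latest

# Skopeo: inspect a remote image, and copy it between registries without a local pull
skopeo inspect docker://docker.io/library/alpine:latest
skopeo copy docker://docker.io/library/alpine:latest docker://registry.example.com/alpine:latest
```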
🐳 Docker
```bash
# see images available on your hosts
docker image list

# equal to above
docker images
REPOSITORY    TAG     IMAGE ID      CREATED        SIZE
httpd         latest  6fa26f20557b  45 hours ago   164MB
hello-world   latest  75280d40a50b  4 months ago   1.69kB

# show the full image IDs (sha)
docker images --no-trunc=true

# delete unused images
docker rmi $(docker images -q)
# delete images without tags
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
```
🐴 Sed
The Basics

```bash
sed -e '…' -e '…'   # Several expressions in one run
sed -i              # Replace in place
sed -r              # Use extended REGEX

# The most useful
sed -e '/^[ ]*#/d' -e '/^$/d' <file>     # print a file without empty or commented lines
sed 's/ -/\n -/g'                        # put each " -" on a new line
sed 's/my_match.*/ /g'                   # remove from the match till the end of the line
sed -i '4048d;3375d' ~/.ssh/known_hosts  # delete lines by number

# Buffer
sed -r 's/.*@(.*)/\1/'                   # keep what is after @: capture it with ( ) and reuse it with \1
sed -e '/^;/! s/.*-reserv.*/; Reserved: &/' file.txt  # reuse the search with &

# Search a line
sed -e '/192.168.130/ s/^/#/g' -i /etc/hosts       # Comment a line
sed -re 's/^;(r|R)eserved:/; Reserved:/g' file.txt # Search several strings

# Insert - add two lines below a match pattern
sed -i '/.*\"description\".*/s/$/ \n \"after\" : \"network.target\"\,\n \"requires\" : \"network.target\"\,/g' my_File

# Append
sed '/WORD/ a Add this line after every line with WORD'

# if no occurrence, then add it after "use_authtok"
sed -e '/remember=10/!s/use_authtok/& remember=10/' -i /etc/pam.d/system-auth-permanent
```
👮 CUE-lang
CUE stands for Configure, Unify, Execute

Basics

Installation

```bash
# Install GO
GO_VERSION="1.21.0"
wget https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go${GO_VERSION}.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

go install cuelang.org/go/cmd/cue@latest
sudo cp -pr ./go /usr/local/.

# or use a container
printf "\e[1;34m[INFO]\e[m Install CUElang:\n";
podman pull docker.io/cuelang/cue:latest
```

Concepts

top -> schema -> constraint -> data -> bottom

Commands

```bash
# import a file
cue import imageset-config.yaml

# Validate
cue vet imageset-config.cue imageset-config.yaml
```

Some basic examples:

```go
// This is a comment
_greeting: "Welcome" // Hidden fields start with "_"
#project:  "CUE"     // Definitions start with "#"

message: "\(_greeting) to \(#project)!" // Regular fields are exported

#Person: {
    age: number           // Mandatory and must be a number
    hobbies?: [...string] // Not mandatory, but if present must be a list of strings
}

// Constraint which calls #Person and checks the age
#Adult: #Person & {
    age: >=18
}

// =~ matches a regular expression
#Phone: string & =~ "[0-9]+"

// Mapping
instanceType: {
    web: "small"
    app: "medium"
    db:  "large"
}

server1: {
    role:     "app"
    instance: instanceType[role]
}

// server1.instance: "medium"
```

Scripting

```bash
# executables have the extension "_tool.cue"

# usage
cue cmd prompter
```

```go
package foo

import (
    "tool/cli"
    "tool/exec"
    "tool/file"
)

// moved to the data.cue file to show how we can reference "pure" CUE files
city: "Amsterdam"

// A command named "prompter"
command: prompter: {

    // save transcript to this file
    var: {
        file: *"out.txt" | string @tag(file)
    } // you can use "-t flag=filename.txt" to change the output file, see "cue help injection" for more details

    // prompt the user for some input
    ask: cli.Ask & {
        prompt:   "What is your name?"
        response: string
    }

    // run an external command, starts after ask
    echo: exec.Run & {
        // note the reference to ask and city here
        cmd: ["echo", "Hello", ask.response + "!", "Have you been to", city + "?"]
        stdout: string // capture stdout, don't print to the terminal
    }

    // append to a file, starts after echo
    append: file.Append & {
        filename: var.file
        contents: echo.stdout // because we reference the echo task
    }

    // also starts after echo, and concurrently with append
    print: cli.Print & {
        text: echo.stdout // write the output to the terminal since we captured it previously
    }
}
```

Sources

Official Documentation
👮 Justfile
An interesting example from the justfile documentation: it creates a temp file with `mktemp`, stores the path in a variable, and by concatenation you get a full path to the tar.gz. The recipe "publish" then creates the artifact and pushes it to a server.

```make
tmpdir  := `mktemp` # Create a tmp file
version := "0.2.7"
tardir  := tmpdir / "awesomesauce-" + version
tarball := tardir + ".tar.gz" # use the tmpfile path to create a tarball

publish:
    rm -f {{tarball}}
    mkdir {{tardir}}
    cp README.md *.c {{tardir}}
    tar zcvf {{tarball}} {{tardir}}
    scp {{tarball}} me@server.com:release/
    rm -rf {{tarball}} {{tardir}}
```

This one can be really useful to define a default value which can be overridden with an env variable (sketch below):
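A minimal sketch of that pattern using just's `env_var_or_default` function (the variable name and recipe are examples):

```make
# port defaults to 8080 unless the PORT env variable is set
port := env_var_or_default("PORT", "8080")

serve:
    python3 -m http.server {{port}}
```

Run with `PORT=9000 just serve` to override the default.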
👷 Makefile
Shell variables: `$$var`, `$$( python -c 'import sys; print(sys.implementation.name)' )`

Make variables:

```make
T ?= foo             # give a default value
T := $(shell whoami) # execute the shell immediately to fill the var
```

PHONY to execute several makefiles

Example 1

```make
SUBDIRS = foo bar baz

## dir is a shell variable
## SUBDIRS and MAKE are internal make variables
subdirs:
	for dir in $(SUBDIRS); do \
	    $(MAKE) -C $$dir; \
	done
```

Example 2

```make
SUBDIRS = foo bar baz

.PHONY: subdirs $(SUBDIRS)
subdirs: $(SUBDIRS)
$(SUBDIRS):
	$(MAKE) -C $@
foo: baz
```

Idea for a testing tool

```bash
git clone xxx /tmp/xxx && make -C !$/Makefile
make download  # download the container
make build     # build the binary
make           # put it into /usr/local/bin
make clean
make help
```

Sources: Tutorials
👾 Nexus3
Deploy a Nexus3 container on a VM

Load the image:

```bash
podman pull sonatype/nexus3:3.59.0
podman save sonatype/nexus3:3.59.0 -o nexus3.tar
podman load < nexus3.tar
```

Create a service inside /etc/systemd/system/container-nexus3.service with the content below:

```ini
[Unit]
Description=Nexus Podman container
Wants=syslog.service

[Service]
User=nexus-system
Group=nexus-system
Restart=always
ExecStart=/usr/bin/podman run \
    --log-level=debug \
    --rm \
    -ti \
    --publish 8081:8081 \
    --name nexus \
    sonatype/nexus3:3.59.0

ExecStop=/usr/bin/podman stop -t 10 nexus

[Install]
WantedBy=multi-user.target
```
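The unit still has to be enabled, and the initial admin password retrieved; a hedged sketch (the password path is the usual Nexus3 default, verify for your version):

```bash
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable --now container-nexus3

# The first-run admin password lives inside the container's data dir
podman exec nexus cat /nexus-data/admin.password
```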
👾 Pypi Repository
Pypi repo for an airgap env

Let's take as an example the Python dependencies for Netbox:

```bash
# Tools needed
dnf install -y python3.11
pip install --upgrade pip setuptools python-pypi-mirror twine

# init mirror
python3.11 -m venv mirror
mkdir download

# Get the list of Py packages needed
curl raw.githubusercontent.com/netbox-community/netbox/v3.7.3/requirements.txt -o requirements.txt
echo pip >> requirements.txt
echo setuptools >> requirements.txt
echo uwsgi >> requirements.txt

# Make sure the repository CA is installed
curl http://pki.server/pki/cacerts/ISSUING_CA.pem -o /etc/pki/ca-trust/source/anchors/issuing.crt
curl http://pki.server/pki/cacerts/ROOT_CA.pem -o /etc/pki/ca-trust/source/anchors/root.crt
update-ca-trust

source mirror/bin/activate
pypi-mirror download -b -d download -r requirements.txt
twine upload --repository-url https://nexus3.server/repository/internal-pypi/ download/*.whl --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
twine upload --repository-url https://nexus3.server/repository/internal-pypi/ download/*.tar.gz --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```

Then on the target host, inside /etc/pip.conf:
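A hedged sketch of what that pip.conf can look like (the /simple suffix is how Nexus usually exposes a PyPI repository's index; adjust host and repo name to your setup):

```ini
[global]
index-url = https://nexus3.server/repository/internal-pypi/simple
cert = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```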
📁 Storage
General concern

If you want to move VMs to another Storage Domain, you need to copy the template from it as well!

Remove a disk:

```bash
# If RHV does not use a disk anymore, it should appear empty in lsblk:
lsblk -a
sdf                                   8:80   0    4T  0 disk
└─36001405893b456536be4d67a7f6716e3 253:38   0    4T  0 mpath
sdg                                   8:96   0    4T  0 disk
└─36001405893b456536be4d67a7f6716e3 253:38   0    4T  0 mpath
sdh                                   8:112  0    4T  0 disk
└─36001405893b456536be4d67a7f6716e3 253:38   0    4T  0 mpath
sdi                                   8:128  0        0 disk
└─360014052ab23b1cee074fe38059d7c94 253:39   0  100G  0 mpath
sdj                                   8:144  0        0 disk
└─360014052ab23b1cee074fe38059d7c94 253:39   0  100G  0 mpath
sdk                                   8:160  0        0 disk
└─360014052ab23b1cee074fe38059d7c94 253:39   0  100G  0 mpath

# find all disks from the LUN ID
LUN_ID="360014054ce7e566a01d44c1a4758b092"
list_disk=$(dmsetup deps -o devname ${LUN_ID} | cut -f 2 | cut -c 3- | tr -d "()" | tr " " "\n")
echo ${list_disk}

# Remove from multipath
multipath -f "${LUN_ID}"

# remove the disks
for i in ${list_disk}; do echo ${i}; blockdev --flushbufs /dev/${i}; echo 1 > /sys/block/${i}/device/delete; done

# You can see which disk links to which LUN on the CEPH side
ls -l /dev/disk/by-*
```

NFS for OLVM/oVirt

Since oVirt needs shared storage, we can create a local NFS to bypass this point if there is no storage bay.
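A hedged sketch of such a local NFS export for oVirt (36:36 is the vdsm:kvm uid/gid pair oVirt expects on storage domains; paths and export options are examples):

```bash
# On the host that will serve the storage
sudo dnf install -y nfs-utils
sudo mkdir -p /exports/data
sudo chown 36:36 /exports/data   # vdsm:kvm, required by oVirt

# Export it and start the NFS server
echo "/exports/data *(rw,sync,no_subtree_check,anonuid=36,anongid=36)" | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -rav
```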