
# 🚩 Network Manager
## Basic Troubleshooting

Check interfaces:

```bash
nmcli con show
NAME    UUID                                  TYPE      DEVICE
ens192  4d0087a0-740a-4356-8d9e-f58b63fd180c  ethernet  ens192
ens224  3dcb022b-62a2-4632-8b69-ab68e1901e3b  ethernet  ens224

nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
ens192  ethernet  connected  ens192
ens224  ethernet  connected  ens224
ens256  ethernet  connected  ens256
lo      loopback  unmanaged  --

# Get interface details
nmcli connection show ens192
nmcli -p con show ens192

# Get the DNS settings of an interface
UUID=$(nmcli --get-values connection.uuid c show "cloud-init eth0")
nmcli --get-values ipv4.dns c show $UUID
```

## Changing interface name

```bash
nmcli connection add type ethernet mac "00:50:56:80:11:ff" ifname "ens224"
nmcli connection add type ethernet mac "00:50:56:80:8a:0b" ifname "ens256"
```

## Create a custom config

```bash
nmcli con load /etc/sysconfig/network-scripts/ifcfg-ens224
nmcli con up ens224
```

## Adding a virtual IP

```bash
nmcli con mod enp1s0 +ipv4.addresses "192.168.122.11/24"
ip addr del 10.163.148.36/24 dev ens160

nmcli con reload          # reload config files before reapplying
nmcli device reapply ens224
systemctl status network.service
systemctl restart network.service
```

## Add a DNS entry

```bash
UUID=$(nmcli --get-values connection.uuid c show "cloud-init eth0")
DNS_LIST=$(nmcli --get-values ipv4.dns c show $UUID)
nmcli con modify "$UUID" ipv4.dns "${DNS_LIST} ${DNS_IP}"

# /etc/resolv.conf is managed by systemd-resolved
sudo systemctl restart systemd-resolved
```
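To confirm a DNS or IP change actually landed, querying the device state is quicker than restarting services; a minimal sketch (assuming the `ens192` device and the systemd-resolved setup above):

```bash
# DNS servers NetworkManager applied to the device
nmcli dev show ens192 | grep -i dns

# Per-link DNS state when systemd-resolved is in charge
resolvectl status ens192
```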
# 🎶 Samba / CIFS
## Server side

First install `samba` and `samba-client` (the client is handy for debugging and testing).

`/etc/samba/smb.conf`:

```ini
[global]
workgroup = WORKGROUP      # the default workgroup on Windows
hosts allow = ...
passdb backend = tdbsam    # passwords are stored in /var/lib/samba/private/passdb.tdb

[shared]
browseable = yes
path = /shared
valid users = user01, @some_group
writable = yes
```

## Test the Samba config

```bash
testparm
/usr/bin/testparm -s /etc/samba/smb.conf

# List all the Samba shares available
smbclient -L //192.168.56.102 -U test

# Connect to a share
smbclient //192.168.56.102/sharedrepo -U test

# List Samba users (better than smbclient)
pdbedit -L
```
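The `tdbsam` backend above stores Samba-specific passwords, so each share user must be added to it explicitly; a minimal sketch (the `user01` account name comes from the config above):

```bash
# The account must exist on the system first (no shell needed)
sudo useradd -M -s /sbin/nologin user01

# Add it to the Samba password database, then verify
sudo smbpasswd -a user01
sudo pdbedit -L

# Enable and start the services
sudo systemctl enable --now smb nmb
```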
# 🍻 SSHFS
SSHFS mounts a remote file system onto your local FS over an SSH connection, all with ordinary user rights. The advantage is that the remote data can be handled with any file manager (Nautilus, Konqueror, ROX, or even the command line).

- Prerequisites: admin rights, an Ethernet connection, FUSE and the sshfs package installed.
- sshfs users must belong to the `fuse` group.

Note: FUSE lets a user mount a file system themselves. Normally, mounting a file system requires being the administrator, or the administrator must have provided for it in `/etc/fstab` with hard-coded settings.
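A minimal usage sketch (host and paths are hypothetical):

```bash
# Mount the remote directory onto a local mount point
mkdir -p ~/mnt/server
sshfs user@server.example.com:/home/user ~/mnt/server

# Work on the files with any tool, then unmount
fusermount -u ~/mnt/server
```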
# 🌅 UV
## Install

```bash
# curl method
curl -LsSf https://astral.sh/uv/install.sh | sh

# pip method
pip install uv
```

## Quick example

```bash
# The classic pyenv + venv + pip workflow
pyenv install 3.12
pyenv local 3.12
python -m venv .venv
source .venv/bin/activate
pip install pandas
python

# The equivalent in uv
uv run --python 3.12 --with pandas python
```

## Useful commands

```bash
uv python list --only-installed
uv python install 3.12
uv venv /path/to/environment --python 3.12
uv pip install django
uv pip compile requirements.in -o requirements.txt

uv init myproject
uv sync
uv run manage.py runserver
```

## Run as a script

Put this before the import statements:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "ffmpeg-normalize",
# ]
# ///
```

The script can then be run with `uv run sync-flickr-dates.py`; uv creates a Python 3.12 venv for it. For me this lives under `~/.cache/uv` (which you can locate via `uv cache dir`).
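Since the shebang uses `env -S uv run --script`, the script can also be made executable and launched directly; a quick sketch with the script name from above:

```bash
chmod +x sync-flickr-dates.py
./sync-flickr-dates.py
```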
# 🎡 Helm
## Administration

See what is currently installed:

```bash
helm list -A
NAME    NAMESPACE  REVISION  UPDATED                                 STATUS    CHART         APP VERSION
nesux3  default    1         2022-08-12 20:01:16.0982324 +0200 CEST  deployed  nexus3-1.0.6  3.37.3
```

Install / uninstall:

```bash
helm status nesux3
helm uninstall nesux3
helm install nexus3
helm history nexus3

# Works even if the release is already installed
helm upgrade --install ingress-nginx ${DIR}/helm/ingress-nginx \
  --namespace=ingress-nginx \
  --create-namespace \
  -f ${DIR}/helm/ingress-values.yml

# Make helm "unsee" an app (it does not delete the app)
kubectl delete secret -l owner=helm,name=argo-cd
```

## Handle Helm repos and charts

```bash
# Handle repos
helm repo list
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Pretty useful for configuration
helm show values elastic/eck-operator
helm show values grafana/grafana --version 8.5.1

# See the different versions available
helm search repo hashicorp/vault
helm search repo hashicorp/vault -l

# Download a chart
helm fetch ingress/ingress-nginx --untar
```

## Tips

List all images needed by a helm chart (except those with no tag):

```bash
helm template -g longhorn-1.4.1.tgz | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo ""
```
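The image list from this tip can feed a pre-pull or mirroring loop; a minimal sketch reusing the same pipeline (assuming docker is available on the host):

```bash
# Pre-pull every image referenced by the chart
helm template -g longhorn-1.4.1.tgz \
  | yq -N '..|.image? | select(. == "*" and . != null)' \
  | sort -u | grep ":" \
  | while read -r image; do
      docker pull "$image"
    done
```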
# 🎲 Kubectl
## Connection to a k8s cluster

### Kubeconfig

Define KUBECONFIG in your profile:

```bash
# Default one
KUBECONFIG=~/.kube/config

# Several contexts - kept split
KUBECONFIG=~/.kube/k3sup-lab:~/.kube/k3s-dev

# Or it can be specified per command
kubectl get pods --kubeconfig=admin-kube-config
```

View and set:

```bash
kubectl config view
kubectl config current-context

kubectl config set-context \
  dev-context \
  --namespace=dev-namespace \
  --cluster=docker-desktop \
  --user=dev-user

kubectl config use-context lab
```

Switch context:

```bash
# Set the namespace
kubectl config set-context --current --namespace=nexus3
kubectl config get-contexts
```

### Kubecm

The problem with kubeconfig files is that everything ends up nested in a single kubeconfig, which is hard to manage in the long term. The best way to install kubecm is with Arkade: `arkade get kubecm` - see arkade.
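A minimal kubecm sketch for merging and switching contexts (the kubeconfig path reuses the example above):

```bash
# Merge a kubeconfig into ~/.kube/config
kubecm add -f ~/.kube/k3s-dev

# List contexts, then switch interactively
kubecm ls
kubecm switch
```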
# 🏭 Docker
See also the documentation about Podman and Docker.

## How to use a docker registry

```bash
# List the index catalog
curl https://registry.k3s.example.com/v2/_catalog | jq

# List the tags available for an image
curl https://registry.k3s.example.com/v2/myhaproxy/tags/list

# List the index catalog - with user/password
curl https://registry-admin:<PWD>@registry.k3s.example.com/v2/_catalog | jq

# List the index catalog - when you need to specify the CA
curl -u user:password https://<url>:<port>/v2/_catalog --cacert ca.crt | jq

# List the index catalog - for OCP
curl -u user:password https://<url>:<port>/v2/ocp4/openshift4/tags/list | jq

# Login to the registry with podman
podman login -u registry-admin -p <PWD> registry.k3s.example.com

# Push images to the registry
skopeo copy "--dest-creds=registry-admin:<PWD>" docker://docker.io/goharbor/harbor-core:v2.6.1 docker://registry.k3s.example.com/goharbor/harbor-core:v2.6.1
```

## Install a local private docker registry

Change the Docker daemon config to allow insecure connections to your IP:

```bash
ip a
sudo vi /etc/docker/daemon.json
```

```json
{
  "insecure-registries": ["192.168.1.11:5000"]
}
```

Restart Docker and check its config:

```bash
sudo systemctl restart docker
docker info
```
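The notes above configure the daemon but do not start the registry itself; a minimal sketch using the official `registry:2` image (port 5000 matches the daemon.json above):

```bash
# Run a local registry on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag and push an image to it
docker tag alpine 192.168.1.11:5000/alpine
docker push 192.168.1.11:5000/alpine
```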
# 🐋 Azure
## Create a small infra for kubernetes

```bash
# On your Azure CLI
az --version   # version expected: 2.1.0 or higher

az group delete --name kubernetes -y

az group create -n kubernetes -l westeurope

az network vnet create -g kubernetes \
  -n kubernetes-vnet \
  --address-prefix 10.240.0.0/24 \
  --subnet-name kubernetes-subnet

az network nsg create -g kubernetes -n kubernetes-nsg

az network vnet subnet update -g kubernetes \
  -n kubernetes-subnet \
  --vnet-name kubernetes-vnet \
  --network-security-group kubernetes-nsg

az network nsg rule create -g kubernetes \
  -n kubernetes-allow-ssh \
  --access allow \
  --destination-address-prefix '*' \
  --destination-port-range 22 \
  --direction inbound \
  --nsg-name kubernetes-nsg \
  --protocol tcp \
  --source-address-prefix '*' \
  --source-port-range '*' \
  --priority 1000

az network nsg rule create -g kubernetes \
  -n kubernetes-allow-api-server \
  --access allow \
  --destination-address-prefix '*' \
  --destination-port-range 6443 \
  --direction inbound \
  --nsg-name kubernetes-nsg \
  --protocol tcp \
  --source-address-prefix '*' \
  --source-port-range '*' \
  --priority 1001

az network nsg rule list -g kubernetes --nsg-name kubernetes-nsg --query "[].{Name:name, Direction:direction, Priority:priority, Port:destinationPortRange}" -o table

az network lb create -g kubernetes --sku Standard \
  -n kubernetes-lb \
  --backend-pool-name kubernetes-lb-pool \
  --public-ip-address kubernetes-pip \
  --public-ip-address-allocation static

az network public-ip list --query="[?name=='kubernetes-pip'].{ResourceGroup:resourceGroup, Region:location,Allocation:publicIpAllocationMethod,IP:ipAddress}" -o table

# For Ubuntu
# az vm image list --location westeurope --publisher Canonical --offer UbuntuServer --sku 18.04-LTS --all -o table
# For RedHat
# az vm image list --location westeurope --publisher RedHat --offer RHEL --sku 8 --all -o table
# => chosen one: 8-lvm-gen2
WHICHOS="RedHat:RHEL:8-lvm-gen2:8.5.2022032206"

# K8s controllers
az vm availability-set create -g kubernetes -n controller-as

for i in 0 1 2; do
  echo "[Controller ${i}] Creating public IP..."
  az network public-ip create -n controller-${i}-pip -g kubernetes --sku Standard > /dev/null

  echo "[Controller ${i}] Creating NIC..."
  az network nic create -g kubernetes \
    -n controller-${i}-nic \
    --private-ip-address 10.240.0.1${i} \
    --public-ip-address controller-${i}-pip \
    --vnet kubernetes-vnet \
    --subnet kubernetes-subnet \
    --ip-forwarding \
    --lb-name kubernetes-lb \
    --lb-address-pools kubernetes-lb-pool > /dev/null

  echo "[Controller ${i}] Creating VM..."
  az vm create -g kubernetes \
    -n controller-${i} \
    --image ${WHICHOS} \
    --nics controller-${i}-nic \
    --availability-set controller-as \
    --nsg '' \
    --admin-username 'kuberoot' \
    --admin-password 'Changeme!' \
    --size Standard_B2s \
    --storage-sku StandardSSD_LRS
  #--generate-ssh-keys > /dev/null
done

# K8s workers
az vm availability-set create -g kubernetes -n worker-as

for i in 0 1; do
  echo "[Worker ${i}] Creating public IP..."
  az network public-ip create -n worker-${i}-pip -g kubernetes --sku Standard > /dev/null

  echo "[Worker ${i}] Creating NIC..."
  az network nic create -g kubernetes \
    -n worker-${i}-nic \
    --private-ip-address 10.240.0.2${i} \
    --public-ip-address worker-${i}-pip \
    --vnet kubernetes-vnet \
    --subnet kubernetes-subnet \
    --ip-forwarding > /dev/null

  echo "[Worker ${i}] Creating VM..."
  az vm create -g kubernetes \
    -n worker-${i} \
    --image ${WHICHOS} \
    --nics worker-${i}-nic \
    --tags pod-cidr=10.200.${i}.0/24 \
    --availability-set worker-as \
    --nsg '' \
    --generate-ssh-keys \
    --size Standard_B2s \
    --storage-sku StandardSSD_LRS \
    --admin-username 'kuberoot' \
    --admin-password 'Changeme!' > /dev/null
done

# Summarize
az vm list -d -g kubernetes -o table
```
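Once the VMs are up they can be reached over their public IPs; a small sketch for grabbing an address and logging in (names and user come from the script above):

```bash
# Public IP of the first controller
CONTROLLER_IP=$(az network public-ip show -g kubernetes -n controller-0-pip --query ipAddress -o tsv)
ssh kuberoot@${CONTROLLER_IP}
```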
# 🐋 Digital Ocean
## Install Client

```bash
# Simplest way
arkade get doctl

# Normal way
curl -OL https://github.com/digitalocean/doctl/releases/download/v1.104.0/doctl-1.104.0-linux-amd64.tar.gz
tar xf doctl-1.104.0-linux-amd64.tar.gz
mv doctl /usr/local/bin

# Auto-completion for zsh
doctl completion zsh > $ZSH/completions/_doctl
```

## Basics

Find possible droplets:

```bash
doctl compute region list
doctl compute size list
doctl compute image list-distribution
doctl compute image list --public
```

Auth:

```bash
doctl auth init --context test
doctl auth list
doctl auth switch --context test2
```

Create a project:

```bash
doctl projects create --name rkub --environment staging --purpose "stage rkub with github workflows"
```

Create a VM:

```bash
doctl compute ssh-key list
doctl compute droplet create test --region fra1 --image rockylinux-9-x64 --size s-1vcpu-1gb --ssh-keys <fingerprint>
doctl compute droplet delete test -f
```

## With Terraform

```bash
export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
doctl auth init --context rkub

# Inside a dir with a tf file
terraform init
terraform validate
terraform plan -var "do_token=${DO_PAT}"
terraform apply -var "do_token=${DO_PAT}" -auto-approve

# Clean apply
terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
terraform apply infra.tfplan

# Control
terraform show terraform.tfstate

# Destroy
terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
terraform apply terraform.tfplan
```

Connect to a droplet with the private ssh key:

```bash
ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key
```
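To check what is running and grab an address without going through Terraform, doctl can report it directly; a small sketch:

```bash
# List droplets with their public IPv4 addresses
doctl compute droplet list --format Name,Region,PublicIPv4
```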
# 🐋 KVM
## Install KVM on RHEL

```bash
# Pre-checks: hardware support for Intel CPUs
grep -e 'vmx' /proc/cpuinfo
lscpu | grep Virtualization
lsmod | grep kvm

# On a RHEL9 workstation
sudo dnf install virt-install virt-viewer -y
sudo dnf install -y libvirt
sudo dnf install virt-manager -y
sudo dnf install -y virt-top libguestfs-tools guestfs-tools
sudo gpasswd -a $USER libvirt

# Helper
sudo dnf -y install bridge-utils

# Start libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo systemctl status libvirtd
```

## Basic checks

```bash
virsh nodeinfo
```

## Config a bridge network

Important: networks are created as the root user, but VMs run as the current user. See the sketch below.
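A minimal bridge sketch with NetworkManager (the physical interface name `enp1s0` is hypothetical; adjust it to your `ip a` output):

```bash
# Create the bridge and enslave the physical NIC to it (as root)
sudo nmcli con add type bridge ifname br0 con-name br0
sudo nmcli con add type ethernet ifname enp1s0 master br0
sudo nmcli con up br0

# A VM can then be attached to the bridge at install time, e.g.:
#   virt-install --network bridge=br0 ...
```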
# 🐍 Cobra
A command builder for Go.

## Useful installation

```bash
# Install Go
GO_VERSION="1.21.0"
wget https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go${GO_VERSION}.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# Install cobra-cli - the CLI builder
go install github.com/spf13/cobra-cli@latest
sudo cp -pr ./go /usr/local/.
```

## Init

```bash
mkdir -p ${project} && cd ${project}
go mod init ${project}
cobra-cli init
go build
go install
cobra-cli add timezone
```
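`cobra-cli add timezone` generates `cmd/timezone.go` with a stub subcommand; a quick sketch of exercising it (assuming the binary is named after `${project}`):

```bash
go build
./${project} timezone
# prints the generated placeholder ("timezone called") until the Run function is edited
```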