Devops

🐎 K3D
K3D is k3s in a container: a tool to create single- and multi-node k3s clusters. Our favorite use case is with Podman and rootless, so there is some customization to do upstream. One downside I've found with k3d is that the Kubernetes version it ships lags behind the current k3s release.

Note for ARM PCs:

```shell
sudo apt install qemu-user-static
podman run --rm --privileged multiarch/qemu-user-static --reset -p yes
```

Install

```shell
# Manual way
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# or with arkade:
arkade get k3d

# Auto-completion
k3d completion zsh > "$ZSH/completions/_k3d"
```

Tweaks for Podman and rootless

The issue:

```shell
k3d cluster create test

ERRO[0000] Failed to get nodes for cluster 'test': docker failed to get containers with labels 'map[k3d.cluster:test]': failed to list containers: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.46/containers/json?all=1&filters=%7B%22label%22%3A%7B%22app%3Dk3d%22%3Atrue%2C%22k3d.cluster%3Dtest%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
```

The solution:

```shell
# TODO
loginctl enable-linger $(whoami)

# Either reload the terminal or do the following:
export XDG_RUNTIME_DIR=/tmp/run-$(id -u)
mkdir -p $XDG_RUNTIME_DIR
chmod 700 $XDG_RUNTIME_DIR

sudo mkdir -p /etc/containers/containers.conf.d
sudo sh -c "echo 'service_timeout=0' > /etc/containers/containers.conf.d/timeout.conf"

sudo ln -s /run/podman/podman.sock /var/run/docker.sock

XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock

systemctl --user enable --now podman.socket
```

If /sys/fs/cgroup/cgroup.controllers is present on your system, you are using cgroup v2; otherwise you are using v1.
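The cgroup check above can be sketched as a one-shot script (minimal sketch; it only tests for the presence of /sys/fs/cgroup/cgroup.controllers, as described):

```shell
#!/bin/sh
# Detect whether the host uses cgroup v1 or v2: on cgroup v2 (unified
# hierarchy), the file /sys/fs/cgroup/cgroup.controllers exists.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
  CGROUP_VERSION=v2
else
  CGROUP_VERSION=v1
fi
echo "cgroup version: $CGROUP_VERSION"
```

Rootless Podman generally wants cgroup v2, so this is worth checking before the tweaks above.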
💫 Podman as a service
Projects
Just a short list of personal projects I am currently working on.
👺 The Bad, the Good and the Ugly Git
When it comes to IT, Git cannot be ignored… even for an infrastructure guy!
⚓ Harbor
🌅 UV
Install

```shell
# curl method
curl -LsSf https://astral.sh/uv/install.sh | sh

# pip method
pip install uv
```

Quick example

```shell
pyenv install 3.12
pyenv local 3.12
python -m venv .venv
source .venv/bin/activate
pip install pandas
python

# equivalent in uv
uv run --python 3.12 --with pandas python
```

Useful

```shell
uv python list --only-installed
uv python install 3.12
uv venv /path/to/environment --python 3.12
uv pip install django
uv pip compile requirements.in -o requirements.txt

uv init myproject
uv sync
uv run manage.py runserver
```

Run as script

Put this before the import statements:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "ffmpeg-normalize",
# ]
# ///
```

Then the script can be run with `uv run sync-flickr-dates.py`. uv will create a Python 3.12 venv for us. For me this is in ~/.cache/uv (which you can find via `uv cache dir`).
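The `uv run --with` pattern above can be wrapped in a tiny helper for repeated one-liners. This is a hedged sketch: the function name `uv_py` is made up for illustration, and it assumes uv is installed, so it is only defined here, not called:

```shell
# uv_py <package> <python code>: run a Python one-liner in a throwaway
# uv-managed environment with <package> available.
uv_py() {
  pkg=$1; shift
  uv run --python 3.12 --with "$pkg" python -c "$*"
}

# Example usage (needs uv + network):
#   uv_py pandas 'import pandas; print(pandas.__version__)'
```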
🎡 Helm
Administration

See what is currently installed:

```shell
helm list -A
NAME    NAMESPACE  REVISION  UPDATED                                 STATUS    CHART         APP VERSION
nesux3  default    1         2022-08-12 20:01:16.0982324 +0200 CEST  deployed  nexus3-1.0.6  3.37.3
```

Install/Uninstall

```shell
helm status nesux3
helm uninstall nesux3
helm install nexus3
helm history nexus3

# works even if already installed
helm upgrade --install ingress-nginx ${DIR}/helm/ingress-nginx \
    --namespace=ingress-nginx \
    --create-namespace \
    -f ${DIR}/helm/ingress-values.yml

# Make Helm forget an app (it does not delete the app)
kubectl delete secret -l owner=helm,name=argo-cd
```

Handle Helm repos and charts

```shell
# Handle repos
helm repo list
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Pretty useful for configuration
helm show values elastic/eck-operator
helm show values grafana/grafana --version 8.5.1

# See the different versions available
helm search repo hashicorp/vault
helm search repo hashicorp/vault -l

# Download a chart
helm fetch ingress/ingress-nginx --untar
```

Tips

List all the images needed by a Helm chart (except those without tags):

```shell
helm template -g longhorn-1.4.1.tgz | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo ""
```
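The idempotent `helm upgrade --install` step above can be factored into a reusable function. A sketch, with placeholder arguments; `helm_deploy` is a made-up name and helm is assumed to be on PATH, so the function is only defined here, not invoked:

```shell
# helm_deploy <release> <chart> <namespace> [values-file]
# Installs the chart if absent, upgrades it otherwise.
helm_deploy() {
  release=$1 chart=$2 ns=$3 values=$4
  helm upgrade --install "$release" "$chart" \
    --namespace "$ns" \
    --create-namespace \
    ${values:+-f "$values"}
}

# Example usage (needs helm + a cluster):
#   helm_deploy ingress-nginx ${DIR}/helm/ingress-nginx ingress-nginx ${DIR}/helm/ingress-values.yml
```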
🎲 Kubectl
Connection to a k8s cluster

Kubeconfig

Define KUBECONFIG in your profile:

```shell
# Default one
KUBECONFIG=~/.kube/config

# Several contexts - kept split
KUBECONFIG=~/.kube/k3sup-lab:~/.kube/k3s-dev

# Or it can be specified on the command line
kubectl get pods --kubeconfig=admin-kube-config
```

View and Set

```shell
kubectl config view
kubectl config current-context

kubectl config set-context \
dev-context \
--namespace=dev-namespace \
--cluster=docker-desktop \
--user=dev-user

kubectl config use-context lab
```

Switch context

```shell
# Set the namespace
kubectl config set-context --current --namespace=nexus3
kubectl config get-contexts
```

Kubecm

The problem with kubeconfigs is that everything gets nested into one kubeconfig, which is difficult to manage in the long term. The best way to install kubecm is with arkade: `arkade get kubecm` - see arkade.
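For the split-kubeconfig setup above, a plain-kubectl alternative to kubecm is to flatten the files into one merged config. A sketch reusing the example paths; `merge_kubeconfigs` is a made-up name and kubectl is assumed to be installed, so the function is only defined here, not called:

```shell
# Merge several split kubeconfigs into a single flattened file.
# "kubectl config view --flatten" inlines certificates so the result
# is self-contained.
merge_kubeconfigs() {
  KUBECONFIG=~/.kube/k3sup-lab:~/.kube/k3s-dev \
    kubectl config view --flatten > ~/.kube/config.merged
}

# Example usage (needs kubectl + the two kubeconfig files):
#   merge_kubeconfigs && export KUBECONFIG=~/.kube/config.merged
```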
🏭 Docker
See also the documentation about Podman and Docker.

How to use a Docker registry

```shell
# List the index catalog
curl https://registry.k3s.example.com/v2/_catalog | jq

# List the tags available for an image
curl https://registry.k3s.example.com/v2/myhaproxy/tags/list

# List the index catalog - with user/password
curl https://registry-admin:<PWD>@registry.k3s.example.com/v2/_catalog | jq

# List the index catalog - when you need to specify the CA
curl -u user:password https://<url>:<port>/v2/_catalog --cacert ca.crt | jq

# List the index catalog - for OCP
curl -u user:password https://<url>:<port>/v2/ocp4/openshift4/tags/list | jq

# Log in to the registry with podman
podman login -u registry-admin -p <PWD> registry.k3s.example.com

# Push images to the registry
skopeo copy "--dest-creds=registry-admin:<PWD>" docker://docker.io/goharbor/harbor-core:v2.6.1 docker://registry.k3s.example.com/goharbor/harbor-core:v2.6.1
```

Install a local private Docker registry

Change the Docker daemon config to allow insecure connections to your IP:

```shell
ip a
sudo vi /etc/docker/daemon.json
```

```json
{
  "insecure-registries": ["192.168.1.11:5000"]
}
```

```shell
sudo systemctl restart docker
docker info
```

Check the Docker config.
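A broken daemon.json stops the Docker daemon from restarting, so it is worth validating the snippet above before putting it in place. A sketch, staging the file in a temp path and checking it with Python's stdlib JSON tool (the IP is the example from above):

```shell
# Stage the insecure-registries snippet and verify it is valid JSON
# before copying it to /etc/docker/daemon.json.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
{
  "insecure-registries": ["192.168.1.11:5000"]
}
EOF

# json.tool exits non-zero on invalid JSON, so this gates the next step.
python3 -m json.tool "$tmpconf" > /dev/null && echo "daemon.json snippet: valid JSON"
# then: sudo cp "$tmpconf" /etc/docker/daemon.json && sudo systemctl restart docker
```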
🐋 Digital Ocean
Install Client

```shell
# simplest
arkade get doctl

# normal way
curl -OL https://github.com/digitalocean/doctl/releases/download/v1.104.0/doctl-1.104.0-linux-amd64.tar.gz
tar xf doctl-1.104.0-linux-amd64.tar.gz
mv doctl /usr/local/bin

# Auto-completion for ZSH
doctl completion zsh > $ZSH/completions/_doctl
```

Basics

Find the possible droplets:

```shell
doctl compute region list
doctl compute size list
doctl compute image list-distribution
doctl compute image list --public
```

Auth

```shell
doctl auth init --context test
doctl auth list
doctl auth switch --context test2
```

Create Project

```shell
doctl projects create --name rkub --environment staging --purpose "stage rkub with github workflows"
```

Create VM

```shell
doctl compute ssh-key list
doctl compute droplet create test --region fra1 --image rockylinux-9-x64 --size s-1vcpu-1gb --ssh-keys <fingerprint>
doctl compute droplet delete test -f
```

With Terraform

```shell
export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
doctl auth init --context rkub

# inside a dir with a tf file
terraform init
terraform validate
terraform plan -var "do_token=${DO_PAT}"
terraform apply -var "do_token=${DO_PAT}" -auto-approve

# clean apply
terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
terraform apply infra.tfplan

# Control
terraform show terraform.tfstate

# Destroy
terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
terraform apply terraform.tfplan
```

Connect to a droplet with the private ssh key:

```shell
ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key
```
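The "clean apply" flow above (plan to a file, then apply exactly that plan) can be wrapped in one helper. A hedged sketch: `tf_clean_apply` is a made-up name, and it assumes terraform is installed and DO_PAT is exported, so the function is only defined here, not executed:

```shell
# Plan to a file, then apply exactly that plan; applying a saved plan
# guarantees nothing changed between review and apply.
tf_clean_apply() {
  terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}" && \
    terraform apply infra.tfplan
}

# Example usage (inside a dir with a tf file, after terraform init):
#   tf_clean_apply
```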