DevOps

🐍 Cobra
A useful command builder for Go.

Installation

```bash
# Install Go
GO_VERSION="1.21.0"
wget https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go${GO_VERSION}.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# Install Cobra - CLI builder
go install github.com/spf13/cobra-cli@latest
sudo cp -pr ./go /usr/local/.
```

Init

```bash
mkdir -p ${project} && cd ${project}
go mod init ${project}
cobra-cli init
go build
go install
cobra-cli add timezone
```
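`cobra-cli add timezone` scaffolds a new subcommand file under cmd/. A minimal sketch of what the filled-in result could look like — the `--zone` flag and the time-printing logic are illustrative additions, not generated code:

```go
// cmd/timezone.go — sketch of a filled-in subcommand in a cobra-cli project
package cmd

import (
	"fmt"
	"time"

	"github.com/spf13/cobra"
)

var timezoneCmd = &cobra.Command{
	Use:   "timezone",
	Short: "Print the current time in a given timezone",
	RunE: func(cmd *cobra.Command, args []string) error {
		zone, _ := cmd.Flags().GetString("zone")
		loc, err := time.LoadLocation(zone) // e.g. "Europe/Paris"
		if err != nil {
			return err
		}
		fmt.Println(time.Now().In(loc).Format(time.RFC1123))
		return nil
	},
}

func init() {
	// register on the root command generated by `cobra-cli init`
	rootCmd.AddCommand(timezoneCmd)
	timezoneCmd.Flags().String("zone", "UTC", "IANA timezone name")
}
```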
🐙 Network troubleshooting
Troubleshoot DNS

vi dns.yml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command:
        - sleep
        - "infinity"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
```

Deploy dnsutils:

```bash
k apply -f dns.yml
pod/dnsutils created

kubectl get pods dnsutils
NAME       READY   STATUS    RESTARTS   AGE
dnsutils   1/1     Running   0          36s
```

Troubleshoot with dnsutils:

```bash
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1

kubectl exec -ti dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local psflab.local
nameserver 10.43.0.10
options ndots:5

kubectl get endpoints kube-dns --namespace=kube-system
NAME       ENDPOINTS                                  AGE
kube-dns   10.42.0.6:53,10.42.0.6:53,10.42.0.6:9153   5d1h

kubectl get svc kube-dns --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   5d1h
```

CURL

```bash
cat << EOF > curl.yml
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
    - name: curl
      image: curlimages/curl
      command:
        - sleep
        - "infinity"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

k apply -f curl.yml

# Test the DNS
kubectl exec -i -t curl -- curl -v telnet://10.43.0.10:53
kubectl exec -i -t curl -- curl -v telnet://kube-dns.kube-system.svc.cluster.local:53
kubectl exec -i -t curl -- nslookup kube-dns.kube-system.svc.cluster.local

# --resolve takes host:port:address
curl -k -I --resolve subdomain.domain.com:443:52.165.230.62 https://subdomain.domain.com/
```
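If lookups still fail after the checks above, inspecting the DNS pods themselves is a reasonable next step. A sketch, assuming the standard `k8s-app=kube-dns` label used by CoreDNS deployments:

```bash
# Are the DNS pods healthy?
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

# Any errors or refused queries in the CoreDNS logs?
kubectl logs --namespace=kube-system -l k8s-app=kube-dns --tail=50
```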
🐠 OKD
Install

```bash
# Get latest version
OKD_VERSION=$(curl -s https://api.github.com/repos/okd-project/okd/releases/latest | jq -r .tag_name)

# Download
curl -L https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-install-linux-${OKD_VERSION}.tar.gz -O
curl -L https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-client-linux-${OKD_VERSION}.tar.gz -O

# Download FCOS iso
./openshift-install coreos print-stream-json | grep '\.iso[^.]'
./openshift-install coreos print-stream-json | jq .architectures.x86_64.artifacts.metal.formats.iso.disk.location
./openshift-install coreos print-stream-json | jq .architectures.x86_64.artifacts.vmware.formats.ova.disk.location
./openshift-install coreos print-stream-json | jq '.architectures.x86_64.artifacts.digitalocean.formats["qcow2.gz"].disk.location'
./openshift-install coreos print-stream-json | jq '.architectures.x86_64.artifacts.qemu.formats["qcow2.gz"].disk.location'
./openshift-install coreos print-stream-json | jq '.architectures.x86_64.artifacts.metal.formats.pxe | .. | .location? // empty'
```

Install bare-metal: see the official doc.
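The archives still need unpacking before `./openshift-install` can run. A small sketch, assuming the downloads above landed in the current directory (the client tarball ships `oc` and `kubectl`):

```bash
tar -xzf openshift-install-linux-${OKD_VERSION}.tar.gz
tar -xzf openshift-client-linux-${OKD_VERSION}.tar.gz
sudo mv openshift-install oc kubectl /usr/local/bin/
```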
🐠 OpenShift
OC Mirror

The ImageSetConfiguration needs at least one operator:

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4
storageConfig:
  registry:
    imageURL: quay.example.com:8443/mirror/oc-mirror-metadata
    skipTLS: false
mirror:
  platform:
    architectures:
      - "amd64"
    channels:
      - name: stable-4.14
        type: ocp
        shortestPath: true
    graph: true
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
      packages:
        - name: kubevirt-hyperconverged
          channels:
            - name: 'stable'
        - name: serverless-operator
          channels:
            - name: 'stable'
  additionalImages:
    - name: registry.redhat.io/ubi9/ubi:latest
  helm: {}
```

```bash
# Install oc-mirror:
curl https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest/oc-mirror.rhel9.tar.gz -O

# Get an example imageset
oc-mirror init --registry quay.example.com:8443/mirror/oc-mirror-metadata

# Browse the list of operators, channels, and packages
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged --channel=stable

# Mirror with a jumphost that has online access
oc-mirror --config=imageset-config.yaml docker://quay.example.com:8443

# Mirror for airgap
oc-mirror --config=imageSetConfig.yaml file://tmp/download
oc-mirror --from=/tmp/upload/ docker://quay.example.com/ocp/operators

# Refresh OperatorHub
oc get pod -n openshift-marketplace

# Get the index pod and delete it to refresh
oc delete pod cs-redhat-operator-index-m2k2n -n openshift-marketplace
```

Install

```bash
## Get the CoreOS image that is going to be installed
openshift-install coreos print-stream-json | grep '\.iso[^.]'

openshift-install create install-config

openshift-install create manifests

openshift-install create ignition-configs

openshift-install create cluster --dir . --log-level=info

openshift-install destroy cluster --log-level=info
```

For bare metal, make a bootable ISO USB:

```bash
dd if=$HOME/ocp-latest/rhcos-live.iso of=/dev/sdb bs=1024k status=progress
```

Add node

```bash
export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
  "api_vip_dnsname": "<api_vip>",
  "openshift_cluster_id": $openshift_cluster_id,
  "name": "<openshift_cluster_name>"
}')
```

Platform in install-config

Get all the info on how to configure a platform:

```bash
openshift-install explain installconfig.platform.libvirt
```

```yaml
## none
platform:
  none: {}

## baremetal - use IPMI to provision bare metal
platform:
  baremetal:
    apiVIP: 192.168.111.5
    ingressVIP: 192.168.111.7
    provisioningNetwork: "Managed"
    provisioningNetworkCIDR: 172.22.0.0/24
    provisioningNetworkInterface: eno1
    clusterProvisioningIP: 172.22.0.2
    bootstrapProvisioningIP: 172.22.0.3
    hosts:
      - name: master-0
        role: master
        bmc:
          address: ipmi://192.168.111.1
          username: admin
          password: password
        bootMACAddress: 52:54:00:a1:9c:ae
        hardwareProfile: default
      - name: master-1
        role: master
        bmc:
          address: ipmi://192.168.111.2
          username: admin
          password: password
        bootMACAddress: 52:54:00:a1:9c:af
        hardwareProfile: default
      - name: master-2
        role: master
        bmc:
          address: ipmi://192.168.111.3
          username: admin
          password: password
        bootMACAddress: 52:54:00:a1:9c:b0
        hardwareProfile: default

## vsphere - old and deprecated syntax (replaced in 4.15 by "failure domains")
vsphere:
  vcenter:
  username:
  password:
  datacenter:
  defaultDatastore:
  apiVIPs:
    - x.x.x.x
  ingressVIPs:
    - x.x.x.x

## new syntax
platform:
  vsphere:
    apiVIPs:
      - x.x.x.x
    datacenter: xxxxxxxxxxxx_datacenter
    defaultDatastore: /xxxxxxxxxxxx_datacenter/datastore/Shared Storages/ssd-001602
    failureDomains:
      - name: CNV4
        region: fr
        server: xxxxxxxxxxxx.ovh.com
        topology:
          computeCluster: /xxxxxxxxxxxx_datacenter/host/Management Zone Cluster
          datacenter: xxxxxxxxxxxx_datacenter
          datastore: /xxxxxxxxxxxx_datacenter/datastore/Shared Storages/ssd-001602
          networks:
            - vds_mgmt
        zone: dc
    ingressVIPs:
      - x.x.x.x
    password: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    username: admin
    vCenter: xxxxxxxxxxx.ovh.com
    vcenters:
      - datacenters:
          - xxxxxxxxxx_datacenter
        password: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        port: 443
        server: xxxxxxx.ovh.com
        user: admin
```

Utils

```bash
# Get the cluster ID
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}'

# Get the nodes which are Ready
oc get nodes --output jsonpath='{range .items[?(@.status.conditions[-1].type=="Ready")]}{.metadata.name} {.status.conditions[-1].type}{"\n"}{end}'

# Get the images from all pods in a namespace (fill in the namespace)
oc get pods -n <namespace> --output jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}'
```

Set OperatorHub in airgap

```bash
oc get catalogsources -n openshift-marketplace
```
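In an air-gapped cluster, OperatorHub is typically pointed at the mirror through a CatalogSource (oc-mirror also generates ready-to-apply manifests under its oc-mirror-workspace/results-* directory). A minimal sketch — the image path assumes the mirror layout used above:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-redhat-operator-index
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  # assumed mirrored catalog location, matching the airgap mirror target above
  image: quay.example.com:8443/ocp/operators/redhat/redhat-operator-index:v4.14
  displayName: Mirrored Red Hat Operators
```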
🐣 Bash Functions for k8s
A list of nice findings for Kubernetes.

List all images in a Helm chart

```bash
images=$(helm template -g $helm | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo "")
```

Upload the images listed in a Helm chart

```bash
load_helm_images(){
  # look in helm charts
  for helm in $(ls ../../roles/*/files/helm/*.tgz); do
    printf "\e[1;34m[INFO]\e[m Look for images in ${helm}...\n"

    images=$(helm template -g $helm | yq -N '..|.image? | select(. == "*" and . != null)' | sort | uniq | grep ":" | egrep -v '*:[[:blank:]]' || echo "")

    dir=$( dirname $helm | xargs dirname )

    echo "####"

    if [ "$images" != "" ]; then
      printf "\e[1;34m[INFO]\e[m Images found in the helm charts: ${images}\n"
      printf "\e[1;34m[INFO]\e[m Create ${dir}/images images...\n"

      mkdir -p ${dir}/images

      while IFS= read -r image_name; do
        archive_name=$(basename -a $(awk -F : '{print $1}' <<< ${image_name}));
        printf "\e[1;34m[INFO]\e[m Pull images...\n"
        podman pull ${image_name};
        printf "\e[1;34m[INFO]\e[m Push ${image_name} in ${dir}/images/${archive_name}\n"
        podman save ${image_name} --format oci-archive -o ${dir}/images/${archive_name};
      done <<< ${images}
    else
      printf "\e[1;34m[INFO]\e[m No images found in the helm chart: $helm\n"
    fi
  done
}
```

Check components version

```bash
function checkComponentsInstall() {
  componentsArray=("kubectl" "helm")
  for i in "${componentsArray[@]}"; do
    command -v "${i}" >/dev/null 2>&1 ||
      { echo "[ERROR] ${i} is required, but it's not installed. Aborting." >&2; exit 1; }
  done
}
```

Version comparator

```bash
function checkK8sVersion() {
  currentK8sVersion=$(kubectl version --short | grep "Server Version" | awk '{gsub(/v/,$5)}1 {print $3}')
  testVersionComparator 1.20 "$currentK8sVersion" '<'
  if [[ $k8sVersion == "ok" ]]; then
    echo "current kubernetes version is ok"
  else
    minikube start --kubernetes-version=v1.22.4;
  fi
}


# the comparator is based on https://stackoverflow.com/a/4025065
versionComparator () {
  if [[ $1 == $2 ]]
  then
    return 0
  fi
  local IFS=.
  local i ver1=($1) ver2=($2)
  # fill empty fields in ver1 with zeros
  for ((i=${#ver1[@]}; i<${#ver2[@]}; i++))
  do
    ver1[i]=0
  done
  for ((i=0; i<${#ver1[@]}; i++))
  do
    if [[ -z ${ver2[i]} ]]
    then
      # fill empty fields in ver2 with zeros
      ver2[i]=0
    fi
    if ((10#${ver1[i]} > 10#${ver2[i]}))
    then
      return 1
    fi
    if ((10#${ver1[i]} < 10#${ver2[i]}))
    then
      return 2
    fi
  done
  return 0
}

testVersionComparator () {
  versionComparator $1 $2
  case $? in
    0) op='=';;
    1) op='>';;
    2) op='<';;
  esac
  if [[ $op != "$3" ]]
  then
    echo "Kubernetes test fail: Expected '$3', Actual '$op', Arg1 '$1', Arg2 '$2'"
    k8sVersion="not ok"
  else
    echo "Kubernetes test pass: '$1 $op $2'"
    k8sVersion="ok"
  fi
}
```
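A sketch of how these helpers might be wired together in a script (the `main` entry point is illustrative, not part of the original functions):

```bash
#!/usr/bin/env bash
set -euo pipefail

# hypothetical entry point tying the helpers above together
main() {
  checkComponentsInstall   # abort early if kubectl/helm are missing
  checkK8sVersion          # start minikube if the cluster is older than 1.20
  load_helm_images         # save every image referenced by the charts
}

main "$@"
```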
🐬 Podman
Description

Buildah: builds Open Container Initiative (OCI) format or Docker format container images without the need for a daemon.

Podman: directly runs container images without a daemon, and can pull them from a container registry if they are not available locally.

Skopeo: pulls and pushes containers to registries, and supports moving containers between registries. It also offers container image inspection and some introspection without first downloading the container itself.
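A few illustrative skopeo invocations for the features described above (registry names are placeholders):

```bash
# Inspect an image's manifest and labels without pulling it
skopeo inspect docker://docker.io/library/httpd:latest

# List the tags available for an image
skopeo list-tags docker://docker.io/library/httpd

# Move an image between registries, no local daemon involved
skopeo copy docker://docker.io/library/httpd:latest \
            docker://registry.example.com/mirror/httpd:latest
```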
🐳 Docker
```bash
# see images available on your host
docker image list

# equal to above
docker images
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
httpd         latest   6fa26f20557b   45 hours ago   164MB
hello-world   latest   75280d40a50b   4 months ago   1.69kB

# show the full sha256 image IDs
docker images --no-trunc=true

# delete unused images
docker rmi $(docker images -q)
# delete images without tags
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
```
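A built-in alternative to the `rmi` one-liners above is `docker image prune` (standard Docker CLI flags; it prompts before deleting):

```bash
# remove dangling (untagged) images
docker image prune

# remove every image not used by at least one container
docker image prune -a
```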
👮 CUE-lang
CUE stands for Configure, Unify, Execute.

Basics

Installation

```bash
# Install Go
GO_VERSION="1.21.0"
wget https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go${GO_VERSION}.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

go install cuelang.org/go/cmd/cue@latest
sudo cp -pr ./go /usr/local/.

# or use a container
printf "\e[1;34m[INFO]\e[m Install CUElang:\n";
podman pull docker.io/cuelang/cue:latest
```

Concepts

The value lattice: top -> schema -> constraint -> data -> bottom

Commands

```bash
# import a file
cue import imageset-config.yaml

# validate
cue vet imageset-config.cue imageset-config.yaml
```

Some basic examples:

```go
// This is a comment
_greeting: "Welcome" // Hidden fields start with "_"
#project:  "CUE"     // Definitions start with "#"

message: "\(_greeting) to \(#project)!" // Regular fields are exported

#Person: {
	age: number // Mandatory and must be a number
	hobbies?: [...string] // Optional, but if present must be a list of strings
}

// Constraint which references #Person and checks the age
#Adult: #Person & {
	age: >=18
}

// =~ matches a regular expression
#Phone: string & =~ "[0-9]+"

// Mapping
instanceType: {
	web: "small"
	app: "medium"
	db:  "large"
}

server1: {
	role:     "app"
	instance: instanceType[role]
}

// server1.instance: "medium"
```

Scripting

```bash
# executables have the extension "_tool.cue"

# usage
cue cmd prompter
```

```go
package foo

import (
	"tool/cli"
	"tool/exec"
	"tool/file"
)

// moved to the data.cue file to show how we can reference "pure" CUE files
city: "Amsterdam"

// A command named "prompter"
command: prompter: {

	// save transcript to this file
	var: {
		file: *"out.txt" | string @tag(file)
	} // you can use "-t file=filename.txt" to change the output file, see "cue help injection" for more details

	// prompt the user for some input
	ask: cli.Ask & {
		prompt:   "What is your name?"
		response: string
	}

	// run an external command, starts after ask
	echo: exec.Run & {
		// note the reference to ask and city here
		cmd: ["echo", "Hello", ask.response + "!", "Have you been to", city + "?"]
		stdout: string // capture stdout, don't print to the terminal
	}

	// append to a file, starts after echo
	append: file.Append & {
		filename: var.file
		contents: echo.stdout // because we reference the echo task
	}

	// also starts after echo, and concurrently with append
	print: cli.Print & {
		text: echo.stdout // write the output to the terminal since we captured it previously
	}
}
```

Sources: Official Documentation
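A tiny sketch of the lattice in action — unifying a constraint with conforming data succeeds, while a conflict evaluates to bottom (`_|_`):

```go
#Port: int & >0 & <65536 // a constraint sits between top and concrete data

goodPort: #Port & 8080 // unifies: evaluates to 8080

// uncommenting the next line makes `cue vet` fail: 70000 & <65536 is _|_
// badPort: #Port & 70000
```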
👾 Nexus3
Deploy Nexus3 as a container on a VM.

Load the image

```bash
podman pull sonatype/nexus3:3.59.0
podman save sonatype/nexus3:3.59.0 -o nexus3.tar
podman load < nexus3.tar
```

Create a service inside /etc/systemd/system/container-nexus3.service with the content below:

```ini
[Unit]
Description=Nexus Podman container
Wants=syslog.service

[Service]
User=nexus-system
Group=nexus-system
Restart=always
ExecStart=/usr/bin/podman run \
    --log-level=debug \
    --rm \
    -ti \
    --publish 8081:8081 \
    --name nexus \
    sonatype/nexus3:3.59.0

ExecStop=/usr/bin/podman stop -t 10 nexus

[Install]
WantedBy=multi-user.target
```
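Then reload systemd and enable the unit (standard systemctl commands; the unit name matches the file created above):

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now container-nexus3.service

# follow the container logs while Nexus starts up
journalctl -u container-nexus3.service -f
```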
👾 Pypi Repository
A PyPI repo for an air-gapped environment. Let's take the Python dependencies for Netbox as an example.

```bash
# Tools needed
dnf install -y python3.11
pip install --upgrade pip setuptools python-pypi-mirror twine

# init mirror
python3.11 -m venv mirror
mkdir download

# Get the list of Python packages needed
curl raw.githubusercontent.com/netbox-community/netbox/v3.7.3/requirements.txt -o requirements.txt
echo pip >> requirements.txt
echo setuptools >> requirements.txt
echo uwsgi >> requirements.txt

# Make sure the repository CA is installed
curl http://pki.server/pki/cacerts/ISSUING_CA.pem -o /etc/pki/ca-trust/source/anchors/issuing.crt
curl http://pki.server/pki/cacerts/ROOT_CA.pem -o /etc/pki/ca-trust/source/anchors/root.crt
update-ca-trust


source mirror/bin/activate
pypi-mirror download -b -d download -r requirements.txt
twine upload --repository-url https://nexus3.server/repository/internal-pypi/ download/*.whl --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
twine upload --repository-url https://nexus3.server/repository/internal-pypi/ download/*.tar.gz --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```

Then, on the target host, inside /etc/pip.conf:
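The note stops at the colon; a plausible /etc/pip.conf for this setup points pip at the repository's simple index (host, repository name, and CA bundle path follow the examples above — adjust to your environment):

```ini
[global]
; assumed Nexus PyPI simple-index endpoint for the repository created above
index-url = https://nexus3.server/repository/internal-pypi/simple
cert = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```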