# πŸ“¦ Archive
Tar, the "tape archiver", preserves file permissions and ownership.

## The basics

```bash
# Create an archive
tar cvf my_archive.tar <file1> <file2> /path/to/dir/

# Archive and compress /target/dir with zstd into the current directory
tar -I zstd -vcf archive.tar.zstd -C /target/dir .

# Extract
tar xvf my_archive.tar

# Extract into a target directory
tar -zxvf new.tar.gz -C /target/dir
```

## Other useful options

- `t`: list the archive's contents.
- `T`: read the list of files to archive from a file.
- `P`: preserve absolute paths (useful when backing up /etc).
- `X`: exclude the patterns listed in a file.
- `z`: gzip compression.
- `j`: bzip2 compression.
- `J`: xz (LZMA) compression.
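A quick sketch combining several of these options — the paths under a temporary directory are illustrative:

```shell
# Create a sample tree, archive it while excluding editor backups,
# then list the result without extracting it.
workdir=$(mktemp -d)
mkdir -p "$workdir/project"
echo "keep" > "$workdir/project/main.sh"
echo "skip" > "$workdir/project/main.sh~"

# c = create, z = gzip, f = archive file; --exclude skips matching names
tar -czf "$workdir/project.tar.gz" --exclude='*~' -C "$workdir" project

# t = list contents without extracting
tar -tzf "$workdir/project.tar.gz"

rm -rf "$workdir"
```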
# πŸ”’ Vault on k8s
Some time ago I wrote a small shell script to manage Vault on a Kubernetes cluster. Kept here for documentation purposes.

## Install Vault with Helm

```bash
#!/bin/bash

## Variables
DIRNAME=$(dirname "$0")
DEFAULT_VALUE="vault/values-override.yaml"
NewAdminPasswd="PASSWORD"
PRIVATE_REGISTRY_USER="registry-admin"
PRIVATE_REGISTRY_PASSWORD="PASSWORD"
PRIVATE_REGISTRY_ADDRESS="registry.example.com"
DOMAIN="example.com"
INGRESS="vault.${DOMAIN}"

if [ -z "${CM_NS+x}" ]; then
    CM_NS='your-namespace'
fi

if [ -z "${1+x}" ]; then
    VALUES_FILE="${DIRNAME}/${DEFAULT_VALUE}"
    echo -e "\n[INFO] Using default values file '${DEFAULT_VALUE}'"
else
    if [ -f "$1" ]; then
        echo -e "\n[INFO] Using values file $1"
        VALUES_FILE=$1
    else
        echo -e "\n[ERROR] File does not exist: $1"
        exit 1
    fi
fi

## Functions
function checkComponentsInstall() {
    componentsArray=("kubectl" "helm")
    for i in "${componentsArray[@]}"; do
        command -v "${i}" >/dev/null 2>&1 ||
            { echo "${i} is required, but it's not installed. Aborting." >&2; exit 1; }
    done
}

function createSecret() {
    kubectl get secret -n "${CM_NS}" registry-pull-secret --no-headers 2> /dev/null \
    || \
    kubectl create secret docker-registry -n "${CM_NS}" registry-pull-secret \
        --docker-server="${PRIVATE_REGISTRY_ADDRESS}" \
        --docker-username="${PRIVATE_REGISTRY_USER}" \
        --docker-password="${PRIVATE_REGISTRY_PASSWORD}"
}

function installWithHelm() {
    helm dep update "${DIRNAME}/helm"

    helm upgrade --install vault "${DIRNAME}/helm" \
        --namespace="${CM_NS}" --create-namespace \
        --set global.imagePullSecrets[0]=registry-pull-secret \
        --set global.image.repository="${PRIVATE_REGISTRY_ADDRESS}/hashicorp/vault-k8s" \
        --set global.agentImage.repository="${PRIVATE_REGISTRY_ADDRESS}/hashicorp/vault" \
        --set ingress.hosts[0]="${INGRESS}" \
        --set ingress.enabled=true \
        --set global.leaderElection.namespace="${CM_NS}"

    echo -e "\n[INFO] sleep 30s" && sleep 30
}

checkComponentsInstall
createSecret
installWithHelm
```

## Init Vault on Kubernetes

Allow the local Kubernetes cluster to create and reach secrets in Vault.
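The init step itself can be sketched as follows — this assumes the chart created a pod named `vault-0` in `${CM_NS}` and that `jq` is available; adapt to your setup:

```shell
# Initialize Vault (a single unseal key is fine for a lab, not for production)
kubectl exec -n "${CM_NS}" vault-0 -- vault operator init \
  -key-shares=1 -key-threshold=1 -format=json > init.json

# Unseal with the generated key, then log in with the root token
kubectl exec -n "${CM_NS}" vault-0 -- \
  vault operator unseal "$(jq -r '.unseal_keys_b64[0]' init.json)"
kubectl exec -n "${CM_NS}" vault-0 -- \
  vault login "$(jq -r '.root_token' init.json)"

# Enable the kubernetes auth method so in-cluster workloads can authenticate
kubectl exec -n "${CM_NS}" vault-0 -- vault auth enable kubernetes
```

Keep `init.json` somewhere safe: it holds the unseal key and the root token.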
# πŸ”— Dependencies
## Package with pip3

```bash
# Capture the installed version (pip3 freeze lists everything; grep keeps netaddr)
pip3 freeze | grep netaddr > requirements.txt
pip3 download -r requirements.txt -d wheel
mv requirements.txt wheel
tar -zcf wheelhouse.tar.gz wheel
tar -zxf wheelhouse.tar.gz
pip3 install -r wheel/requirements.txt --no-index --find-links wheel
```

## Package with Poetry

```bash
curl -sSL https://install.python-poetry.org | python3 -
poetry new rp-poetry
poetry add ansible
poetry add poetry
poetry add netaddr
poetry add kubernetes
poetry add jsonpatch
poetry add `cat ~/.ansible/collections/ansible_collections/kubernetes/core/requirements.txt`

poetry build

pip3 install dist/rp_poetry-0.1.0-py3-none-any.whl

poetry export --without-hashes -f requirements.txt -o requirements.txt
```

## Push to Nexus

```bash
poetry config repositories.test http://localhost
poetry publish -r test
```

## Images Builder

```bash
podman login registry.redhat.io
podman pull registry.redhat.io/ansible-automation-platform-22/ansible-python-base-rhel8:1.0.0-230

pyenv local 3.9.13
python -m pip install poetry
poetry init
poetry add ansible-builder
```
# πŸ”± K3S
## Specific to RHEL

```bash
# Create a trusted zone for the two internal networks
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 # pods
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 # services
sudo firewall-cmd --reload
sudo firewall-cmd --list-all-zones

# On the master
sudo rm -f /var/lib/cni/networks/cbr0/lock
sudo /usr/local/bin/k3s-killall.sh
sudo systemctl restart k3s
sudo systemctl status k3s

# On a worker
sudo rm -f /var/lib/cni/networks/cbr0/lock
sudo /usr/local/bin/k3s-killall.sh
sudo systemctl restart k3s-agent
sudo systemctl status k3s-agent
```

## Check certificates

```bash
# Get the CA from the K3s master
openssl s_client -connect localhost:6443 -showcerts < /dev/null 2>&1 | openssl x509 -noout -enddate
openssl s_client -showcerts -connect 193.168.51.103:6443 < /dev/null 2>/dev/null | openssl x509 -outform PEM
openssl s_client -showcerts -connect 193.168.51.103:6443 < /dev/null 2>/dev/null | openssl x509 -outform PEM | base64 | tr -d '\n'

# Check end dates:
for i in /var/lib/rancher/k3s/server/tls/*.crt; do echo "$i"; openssl x509 -enddate -noout -in "$i"; done

# More efficient:
cd /var/lib/rancher/k3s/server/tls/
for crt in *.crt; do printf '%s: %s\n' "$(date --date="$(openssl x509 -enddate -noout -in "$crt" | cut -d= -f 2)" --iso-8601)" "$crt"; done | sort

# Check the CA issuer
for i in $(find . -maxdepth 1 -type f -name "*.crt"); do openssl x509 -in "${i}" -noout -issuer; done
```

## General checks RKE2/K3S

Nice gist to troubleshoot etcd: link
# πŸ˜‰ Deploy pfSense VM
## Install the pfSense VM

Download the installer from the Netgate website (an account is required).

## Make the network config

Important: there is no need to prepare a NetworkManager config — KVM handles creation of the bridge. Also note that `<dns enable='no'/>` disables libvirt's built-in DNS, and omitting a `<dhcp>` block disables its DHCP server (pfSense takes over both).

```bash
cat > pfsense.xml << EOF
<network>
  <name>pfsense-router</name>
  <uuid></uuid>
  <forward mode='nat'>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <dns enable='no'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF

sudo virsh net-define pfsense.xml
sudo virsh net-start pfsense-router
sudo virsh net-autostart pfsense-router

# Give qemu ACL
echo "allow all" | sudo tee /etc/qemu-kvm/${USER}.conf
echo "include /etc/qemu-kvm/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
sudo chown root:${USER} /etc/qemu-kvm/${USER}.conf
sudo chmod 640 /etc/qemu-kvm/${USER}.conf

# Check network
nmcli con show --active
sudo virsh net-list --all
sudo virsh net-edit pfsense-router
sudo virsh net-info pfsense-router
sudo virsh net-dhcp-leases pfsense-router
```

## Create and run the pfSense VM

```bash
# Create the pfsense vm
virt-install \
--name pfsense --ram 2048 --vcpus 2 \
--disk $HOME/pfsense/disk0.qcow2,size=12,format=qcow2 \
--cdrom $HOME/pfsense/netgate-installer-amd64.iso \
--network bridge=virbr0,model=e1000 \
--network bridge=virbr1,model=e1000 \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--osinfo freebsd14.0 \
--autostart \
--debug

virsh start pfsense
```

## Create an OKD VM

```bash
virt-install \
--name okd --ram 2048 --vcpus 2 \
--disk $HOME/okd-latest/disk0.qcow2,size=50,format=qcow2 \
--autostart \
--cdrom $HOME/okd-latest/rhcos-live.iso \
--network bridge=virbr0,model=e1000 \
--network bridge=virbr1,model=e1000 \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--osinfo detect=on,require=off \
--debug
```

```bash
sudo virt-install -n master01 \
  --description "Master01 OKD Cluster" \
  --ram=8192 \
  --cdrom "$HOME/okd-latest/rhcos-live.iso" \
  --vcpus=2 \
  --disk pool=default,bus=virtio,size=10 \
  --graphics none \
  --osinfo detect=on,require=off \
  --serial pty \
  --console pty \
  --network network=openshift4,mac=52:54:00:36:14:e5
```

```bash
sudo cp {{OKUB_INSTALL_PATH}}/rhcos-live.iso /var/lib/libvirt/images/rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso
export COREOS_INSTALLER="podman run --privileged --pull always --rm -v /dev:/dev -v /var/lib/libvirt/images:/data -w /data quay.io/coreos/coreos-installer:release"
sudo ${COREOS_INSTALLER} iso kargs modify -a "ip={{IP_MASTERS}}::{{GATEWAY}}:{{NETMASK}}:okub-sno:{{INTERFACE}}:none:{{DNS_SERVER}}" "rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso"
sudo virt-install --name="openshift-sno" \
  --vcpus=4 \
  --ram=8192 \
  --disk path=/var/lib/libvirt/images/sno-{{PRODUCT}}-{{RELEASE_VERSION}}.qcow2,bus=sata,size=120 \
  --network network=sno,model=virtio \
  --boot menu=on \
  --graphics vnc --console pty,target_type=serial --noautoconsole \
  --cpu host-passthrough \
  --osinfo detect=on,require=off \
  --cdrom /var/lib/libvirt/images/rhcos-live-{{PRODUCT}}-{{RELEASE_VERSION}}.iso
```

## Check the pfSense VM

```bash
# Checks
virsh list
virsh domifaddr pfsense
virsh domiflist pfsense

# Connect to the console
virt-viewer --domain-name pfsense
```

## Delete the pfSense VM

```bash
virsh destroy pfsense
virsh undefine pfsense --remove-all-storage

# the disk can only be deleted manually
rm -f ~/pfsense/disk0.qcow2

# delete the network
sudo virsh net-destroy pfsense-router
sudo virsh net-undefine pfsense-router
sudo nmcli con del virbr1
sudo nmcli con del eno1
```

## Create a worker

```bash
# Generate a MAC address
date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/';echo

sudo virt-install -n worker03.ocp4.example.com \
  --description "Worker03 Machine for Openshift 4 Cluster" \
  --ram=8192 \
  --vcpus=4 \
  --os-type=Linux \
  --os-variant=rhel8.0 \
  --noreboot \
  --disk pool=default,bus=virtio,size=50 \
  --graphics none \
  --serial pty \
  --console pty \
  --pxe \
  --network bridge=openshift4,mac=52:54:00:95:d4:ed
```
# 😍 Install KVM
## Prerequisites

```bash
# pre-check the hardware for an Intel/AMD CPU
egrep -c '(vmx|svm)' /proc/cpuinfo
lscpu | grep Virtualization
lsmod | grep kvm
```

## Install KVM on RHEL

```bash
# on RHEL9 Workstation
sudo dnf install virt-install virt-viewer -y
sudo dnf install -y libvirt
sudo dnf install virt-manager -y
sudo dnf install -y virt-top libguestfs-tools guestfs-tools
sudo gpasswd -a $USER libvirt

# Helper
sudo dnf -y install bridge-utils

# Start libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo systemctl status libvirtd
```

## Install KVM on Ubuntu

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients libvirt-daemon virtinst -y
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)

# Helper
sudo apt install bridge-utils cpu-checker -y

# Start libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo systemctl status libvirtd
```

Bonus point:

```bash
sudo apt install cockpit cockpit-machines -y
sudo systemctl enable --now cockpit.socket
systemctl status cockpit.socket
```

Then manage your VMs from Cockpit at https://localhost:9090, which can be a good alternative to virt-manager.
# 😏 The Basics of KVM
## Basic checks

```bash
virsh nodeinfo
```

## Configure a bridge network

Important: networks are created as the root user, but VMs as the current user.

Non-permanent bridge:

```bash
sudo ip link add virbr1 type bridge
sudo ip link set eno1 up
sudo ip link set eno1 master virbr1
sudo ip address add dev virbr1 192.168.2.1/24
```

Permanent bridge:

```bash
sudo nmcli con add ifname virbr1 type bridge con-name virbr1
sudo nmcli con add type bridge-slave ifname eno1 master virbr1
sudo nmcli con modify virbr1 bridge.stp no
sudo nmcli con down eno1
sudo nmcli con up virbr1
sudo ip address add dev virbr1 192.168.123.1/24
```

## KVM bridge network

```bash
cat > hostbridge.xml << EOF
<network>
  <name>hostbridge</name>
  <forward mode='bridge'/>
  <bridge name='virbr1'/>
</network>
EOF

sudo virsh net-define hostbridge.xml
sudo virsh net-start hostbridge
sudo virsh net-autostart hostbridge
```

## Give qemu ACL

```bash
echo "allow all" | sudo tee /etc/qemu-kvm/${USER}.conf
echo "include /etc/qemu-kvm/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
sudo chown root:${USER} /etc/qemu-kvm/${USER}.conf
sudo chmod 640 /etc/qemu-kvm/${USER}.conf
```

## Check the network

```bash
sudo nmcli con show --active
sudo virsh net-list --all
sudo virsh net-edit hostbridge
sudo virsh net-info hostbridge
sudo virsh net-dhcp-leases hostbridge
```

## Check with a small script

```bash
echo -e "\n##### KVM networks #####\n"
kvm_system_networks_all=$(sudo virsh net-list --all)
echo -e "Available KVM networks in qemu:///system :\n$kvm_system_networks_all"
for net in $(sudo virsh net-list --name); do
    bridge_name=$(sudo virsh net-info --network ${net} | grep Bridge | cut -d":" -f2 | sed 's/^[[:space:]]*//')
    for br in ${bridge_name}; do
        br_info=$(ip -br -c address show dev ${br} || echo "No IP address assigned to bridge ${br}")
    done
    echo -e "\n\033[1;34m${net}\033[0m has the bridge: $br_info"
done
echo -e "\n"
```

Thanks to the bridge-utils package installed earlier:

```bash
brctl show
```

## Create a VM with this bridge

```bash
virt-install \
--name pfsense --ram 2048 --vcpus 2 \
--disk $HOME/pfsense/disk0.qcow2,size=12,format=qcow2 \
--autostart \
--cdrom $HOME/pfsense/netgate-installer-amd64.iso \
--network bridge=virbr0,model=e1000 \
--network network=hostbridge,model=e1000 \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--osinfo freebsd14.0 \
--debug
```

## Delete the network

```bash
sudo virsh net-destroy hostbridge
sudo virsh net-undefine hostbridge
sudo nmcli con del virbr1
sudo nmcli con del eno1
```

## Sources

Blog redhat
# πŸš€ Operator SDK
Operators come in three kinds: Go, Ansible, and Helm.

```bash
## Init an Ansible project
operator-sdk init --plugins=ansible --domain example.org --owner "Your name"
```

The command above creates a structure like:

```
netbox-operator
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ Makefile
β”œβ”€β”€ PROJECT
β”œβ”€β”€ config
β”‚   β”œβ”€β”€ crd
β”‚   β”œβ”€β”€ default
β”‚   β”œβ”€β”€ manager
β”‚   β”œβ”€β”€ manifests
β”‚   β”œβ”€β”€ prometheus
β”‚   β”œβ”€β”€ rbac
β”‚   β”œβ”€β”€ samples
β”‚   β”œβ”€β”€ scorecard
β”‚   └── testing
β”œβ”€β”€ molecule
β”‚   β”œβ”€β”€ default
β”‚   └── kind
β”œβ”€β”€ playbooks
β”‚   └── install.yml
β”œβ”€β”€ requirements.yml
β”œβ”€β”€ roles
β”‚   └── deployment
└── watches.yaml
```

```bash
## Create the first role
operator-sdk create api --group app --version v1alpha1 --kind Deployment --generate-role
```
# 🚠 Quay.io
## Deploy a Quay.io / mirror-registry in a container

Nothing original here — it is just Red Hat's documentation, but it can be useful to kickstart a registry.

Prerequisites:

- 10 GB in /home
- 15 GB in /var
- 300 GB in /srv or /opt (depending on quayRoot)
- minimum 2 vCPUs
- minimum 8 GB of RAM

```bash
# packages
sudo yum install -y podman
sudo yum install -y rsync
sudo yum install -y jq

# Get the tarball
mirror="https://mirror.openshift.com/pub/openshift-v4/clients"
wget ${mirror}/mirror-registry/latest/mirror-registry.tar.gz
tar zxvf mirror-registry.tar.gz

# Get oc-mirror
curl https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest/oc-mirror.rhel9.tar.gz -O

# Basic install
sudo ./mirror-registry install \
  --quayHostname quay01.example.local \
  --quayRoot /opt

# More detailed install
sudo ./mirror-registry install \
  --quayHostname quay01.example.local \
  --quayRoot /srv \
  --quayStorage /srv/quay-storage \
  --pgStorage /srv/quay-pg \
  --sslCert tls.crt \
  --sslKey tls.key

podman login -u init \
  -p 7u2Dm68a1s3bQvz9twrh4Nel0i5EMXUB \
  quay01.example.local:8443 \
  --tls-verify=false

# By default the login goes in:
cat $XDG_RUNTIME_DIR/containers/auth.json

# Get the IP
sudo podman inspect --format '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' quay-app

# Uninstall
sudo ./mirror-registry uninstall -v \
  --quayRoot <example_directory_name>

# Info
curl -u init:password https://quay01.example.local:8443/v2/_catalog | jq
curl -u root:password https://<url>:<port>/v2/ocp4/openshift4/tags/list | jq

# Get an example imageset
oc-mirror init --registry quay.example.com:8443/mirror/oc-mirror-metadata

# Get the list of operators, channels, packages
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged --channel=stable
```

## Unlock the init/admin user

```bash
QUAY_POSTGRES=$(podman ps | grep quay-postgres | awk '{print $1}')

# note the quoting: "user" is a reserved word in PostgreSQL
podman exec -it $QUAY_POSTGRES psql -d quay -c "UPDATE public.\"user\" SET invalid_login_attempts = 0 WHERE username = 'init'"
```

## Source

Mirror-registry
# 🚦 Gita
## Presentation

Gita is an open-source Python project to manage a large number of repositories. Available: Here

```bash
# Install
pip3 install -U gita

# add repos to gita
gita add dcc/ssg/toolset
gita add -r dcc/ssg  # recursively add
gita add -a dcc      # recursively add and auto-group based on folder structure

# create a group
gita group add docs -n ccn

# Checks
gita ls
gita ll -g
gita group ls
gita group ll
gita st dcc

# Use
gita pull ccn
gita push ccn

gita freeze
```
# Administration
## Hosted-engine administration

Connect to the hosted-engine VM as root with the password set up during the install:

```bash
# Generate a backup
engine-backup --scope=all --mode=backup --file=/root/backup --log=/root/backuplog

# Restore from a backup on a fresh install
engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions
engine-setup

# Restore a backup on an existing install
engine-cleanup
engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
engine-setup
```

## Host administration

Connect over SSH to the host:

```bash
# Put a host in maintenance mode manually
hosted-engine --vm-status
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status

# Remove maintenance mode
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status

# Upgrade the hosted-engine (global maintenance first)
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status
engine-upgrade-check
dnf update ovirt\*setup\*   # update the setup packages
engine-setup                # run it to update the engine /!\
```

## Connect individually to a KVM host

Virt-manager does not work here: oVirt uses libvirt, but not the way plain KVM does…
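On an oVirt host, libvirt is protected by SASL credentials, so a plain `virsh` session prompts for authentication. A read-only session is a sketch that usually works for inspection (assumption: run directly on the host as root):

```shell
# -r opens a read-only libvirt connection: no SASL credentials required
virsh -r list --all

# The engine VM shows up as a regular libvirt domain
virsh -r dominfo HostedEngine
```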
# Bash Shortcuts
## Most useful shortcuts

- `Ctrl + r`: search backward through history (press repeatedly to go further back).
- `Ctrl + l`: clear the screen (instead of using the `clear` command).
- `Ctrl + p`: recall the previous command.
- `Ctrl + x` then `Ctrl + e`: edit the current command in an external editor (needs `export EDITOR=vim`).
- `Ctrl + Shift + v`: paste in a Linux terminal.
- `Ctrl + a`: move to the beginning of the line.
- `Ctrl + e`: move to the end of the line.
- `Ctrl + xx`: toggle between the beginning of the line and the current cursor position.
- `Ctrl + left`: move left one word.
- `Ctrl + right`: move right one word.
# CEPH
# Certificate Authority
## Trust a CA on a Linux host

```bash
# [RHEL] The RootCA from the DC needs to be installed on the host:
cp my-domain-issuing.crt /etc/pki/ca-trust/source/anchors/my_domain_issuing.crt
cp my-domain-rootca.crt /etc/pki/ca-trust/source/anchors/my_domain_rootca.crt
update-ca-trust extract

# [Ubuntu]
sudo apt-get install -y ca-certificates
sudo cp local-ca.crt /usr/local/share/ca-certificates
sudo update-ca-certificates
```
# Cloud-Init
## Troubleshooting

`cloud-init status --wait` is useful in scripts: it waits for cloud-init to finish before moving to the next step.

`cloud-init status --long`:

```
status: done
extended_status: done
boot_status_code: enabled-by-generator
last_update: Thu, 01 Jan 1970 00:00:55 +0000
detail: DataSourceNoCloud [seed=/dev/sr0]
errors: []
recoverable_errors: {}
```

`sudo cloud-init analyze show`:

```
-- Boot Record 01 --
The total time elapsed since completing an event is printed after the "@" character.
The time the event takes is printed after the "+" character.

Starting stage: init-local
|`->no cache found @00.00600s +00.00000s
|`->found local data from DataSourceNoCloud @00.01500s +00.12600s
Finished stage: (init-local) 00.75400 seconds

Starting stage: init-network
|`->restored from cache with run check: DataSourceNoCloud [seed=/dev/sr0] @04.21100s +00.00200s
|`->setting up datasource @04.22800s +00.00000s
|`->reading and applying user-data @04.23400s +00.00500s
|`->reading and applying vendor-data @04.23900s +00.00000s
|`->reading and applying vendor-data2 @04.23900s +00.00000s
|`->activating datasource @04.27100s +00.00100s
|`->config-seed_random ran successfully and took 0.000 seconds @04.29500s +00.00100s
|`->config-write_files ran successfully and took 0.001 seconds @04.29600s +00.00100s
|`->config-growpart ran successfully and took 0.562 seconds @04.29700s +00.56200s
|`->config-resizefs ran successfully and took 0.193 seconds @04.86000s +00.19200s
|`->config-mounts ran successfully and took 0.001 seconds @05.05200s +00.00100s
|`->config-set_hostname ran successfully and took 0.004 seconds @05.05300s +00.00500s
|`->config-update_hostname ran successfully and took 0.001 seconds @05.05800s +00.00100s
|`->config-update_etc_hosts ran successfully and took 0.005 seconds @05.05900s +00.00500s
|`->config-users_groups ran successfully and took 0.216 seconds @05.06400s +00.21600s
|`->config-ssh ran successfully and took 0.404 seconds @05.28100s +00.40400s
|`->config-set_passwords ran successfully and took 0.001 seconds @05.68500s +00.00200s
Finished stage: (init-network) 01.50000 seconds

Starting stage: modules-config
|`->config-ssh_import_id ran successfully and took 0.001 seconds @07.43300s +00.00100s
|`->config-locale ran successfully and took 0.003 seconds @07.43400s +00.00300s
|`->config-grub_dpkg ran successfully and took 0.352 seconds @07.43700s +00.35200s
|`->config-apt_configure ran successfully and took 0.049 seconds @07.79000s +00.04800s
|`->config-timezone ran successfully and took 0.007 seconds @07.83900s +00.00700s
|`->config-runcmd ran successfully and took 0.001 seconds @07.84600s +00.00100s
|`->config-byobu ran successfully and took 0.000 seconds @07.84700s +00.00100s
Finished stage: (modules-config) 00.45400 seconds

Starting stage: modules-final
|`->config-package_update_upgrade_install ran successfully and took 26.632 seconds @20.56700s +26.63300s
|`->config-write_files_deferred ran successfully and took 0.001 seconds @47.20000s +00.00200s
|`->config-reset_rmc ran successfully and took 0.000 seconds @47.20200s +00.00100s
|`->config-scripts_vendor ran successfully and took 0.001 seconds @47.20300s +00.00000s
|`->config-scripts_per_once ran successfully and took 0.000 seconds @47.20300s +00.00100s
|`->config-scripts_per_boot ran successfully and took 0.000 seconds @47.20400s +00.00000s
|`->config-scripts_per_instance ran successfully and took 0.000 seconds @47.20400s +00.00100s
|`->config-scripts_user ran successfully and took 0.558 seconds @47.20500s +00.55800s
|`->config-ssh_authkey_fingerprints ran successfully and took 0.005 seconds @47.76400s +00.00500s
|`->config-keys_to_console ran successfully and took 0.054 seconds @47.76900s +00.05500s
|`->config-install_hotplug ran successfully and took 0.001 seconds @47.82400s +00.00100s
|`->config-final_message ran successfully and took 0.001 seconds @47.82500s +00.00100s
Finished stage: (modules-final) 27.29600 seconds
```

Check the logs:

```bash
sudo tail -n 50 /var/log/cloud-init-output.log
```
# Collection
## List

```bash
ansible-galaxy collection list
```

## Install an Ansible collection

```bash
# From the official Ansible Galaxy repo
ansible-galaxy collection install community.general

# From a local tarball
ansible-galaxy collection install ./community-general-6.0.0.tar.gz

# From a custom repo
ansible-galaxy collection install git+https://git.example.com/projects/namespace.collectionName.git
ansible-galaxy collection install git+https://git.example.com/projects/namespace.collectionName,v1.0.2
ansible-galaxy collection install git+https://git.example.com/namespace/collectionName.git

# From a requirements.yml file
ansible-galaxy collection install -r ./requirements.yml
```

## Requirements file to install Ansible collections

```yaml
collections:
  - name: kubernetes.core

  - name: https://gitlab.example.com/super-group/collector.git
    type: git
    version: "v1.0.6"

  - name: https://gitlab.ipolicedev.int/another-projects/plates.git
    type: git
```
# Git
Git is a distributed version control system created by Linus Torvalds, the mastermind of Linux itself. It was designed to be superior to the version control systems that were readily available at the time, the two most common being CVS and Subversion (SVN). Whereas CVS and SVN use the client/server model, Git operates a little differently: instead of downloading a project, making changes, and uploading it back to the server, Git makes the local machine act as a server. — Tecmint
# Gitea
## Prerequisites

- Firewalld activated — important, otherwise routing to the app does not work
- Podman and jq installed

## Import the image

```bash
podman pull docker.io/gitea/gitea:1-rootless
podman save docker.io/gitea/gitea:1-rootless -o gitea-rootless.tar
podman load < gitea-rootless.tar
```

## Install

`cat /etc/systemd/system/container-gitea-app.service`:

```ini
# container-gitea-app.service
[Unit]
Description=Podman container-gitea-app.service

Wants=network.target
After=network-online.target
RequiresMountsFor=/var/lib/containers/storage /var/run/containers/storage

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
PIDFile=%t/container-gitea-app.pid
Type=forking

ExecStartPre=/bin/rm -f %t/container-gitea-app.pid %t/container-gitea-app.ctr-id
ExecStart=/usr/bin/podman container run \
    --conmon-pidfile %t/container-gitea-app.pid \
    --cidfile %t/container-gitea-app.ctr-id \
    --cgroups=no-conmon \
    --replace \
    --detach \
    --tty \
    --env DB_TYPE=sqlite3 \
    --env DB_HOST=gitea-db:3306 \
    --env DB_NAME=gitea \
    --env DB_USER=gitea \
    --env DB_PASSWD=9Oq6P9Tsm6j8J7c18Jxc \
    --volume gitea-data-volume:/var/lib/gitea:Z \
    --volume gitea-config-volume:/etc/gitea:Z \
    --network gitea-net \
    --publish 2222:2222 \
    --publish 3000:3000 \
    --label "io.containers.autoupdate=registry" \
    --name gitea-app \
    docker.io/gitea/gitea:1-rootless

ExecStop=/usr/bin/podman container stop \
    --ignore \
    --cidfile %t/container-gitea-app.ctr-id \
    -t 10

ExecStopPost=/usr/bin/podman container rm \
    --ignore \
    -f \
    --cidfile %t/container-gitea-app.ctr-id

[Install]
WantedBy=multi-user.target default.target
```

## Configuration

The configuration lives inside `/var/lib/containers/storage/volumes/gitea-config-volume/_data/app.ini`.
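A minimal `app.ini` for this setup could look like the sketch below — the key names follow the Gitea configuration cheat sheet, but the values here are assumptions to adapt:

```ini
; Hypothetical minimal app.ini sketch for the rootless container above
[server]
DOMAIN = git.example.com
HTTP_PORT = 3000
SSH_PORT = 2222
ROOT_URL = http://git.example.com:3000/

[database]
DB_TYPE = sqlite3
PATH = /var/lib/gitea/data/gitea.db

[service]
; lock the instance down once the admin account exists
DISABLE_REGISTRATION = true
```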
# Github
## Get tag_name from latest

```bash
export RKE_VERSION=$(curl -s https://update.rke2.io/v1-release/channels | jq -r '.data[] | select(.id=="stable") | .latest' | awk -F"+" '{print $1}' | sed 's/v//')
export CERT_VERSION=$(curl -s https://api.github.com/repos/cert-manager/cert-manager/releases/latest | jq -r .tag_name)
export RANCHER_VERSION=$(curl -s https://api.github.com/repos/rancher/rancher/releases/latest | jq -r .tag_name)
export LONGHORN_VERSION=$(curl -s https://api.github.com/repos/longhorn/longhorn/releases/latest | jq -r .tag_name)
export NEU_VERSION=$(curl -s https://api.github.com/repos/neuvector/neuvector-helm/releases/latest | jq -r .tag_name)
```

## Install gh

```bash
# Ubuntu
type -p curl >/dev/null || (sudo apt update && sudo apt install curl -y)
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y

# Red Hat
sudo dnf install 'dnf-command(config-manager)'
sudo dnf config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
sudo dnf install gh
```

## Autocompletion

```bash
gh completion zsh > $ZSH/completions/_gh
```

## Create an SSH key (ed25519) and login

```bash
gh auth login -p ssh -h GitHub.com -s read:project,delete:repo,repo,workflow -w

gh auth status
github.com
  βœ“ Logged in to github.com as MorzeBaltyk ($HOME/.config/gh/hosts.yml)
  βœ“ Git operations for github.com configured to use ssh protocol.
  βœ“ Token: gho_************************************
  βœ“ Token scopes: delete_repo, gist, read:org, read:project, repo
```

## To use your key

One way:
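One way to wire the key up is an entry in `~/.ssh/config` — a sketch, where the key path is an assumption matching a default ed25519 key:

```
# Hypothetical ~/.ssh/config entry for GitHub
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
```

With this in place, `git clone git@github.com:owner/repo.git` picks the right key automatically.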
# Gitlab
## Glab CLI

https://glab.readthedocs.io/en/latest/intro.html

```bash
# add a token
glab auth login --hostname mygitlab.example.com
# view a fork of the dep installer
glab repo view mygitlab.example.com/copain/project
# clone a fork of the dep installer
glab repo clone mygitlab.example.com/copain/project
```

## Install

Optimization (in `gitlab.rb`):

```ruby
puma['worker_processes'] = 16
puma['worker_timeout'] = 60
puma['min_threads'] = 1
puma['max_threads'] = 4
puma['per_worker_max_memory_mb'] = 2048
```

## Certificates

Generate a CSR in `/data/gitlab/csr/server_cert.cnf`:

```ini
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[req_distinguished_name]
C = PL
ST = Poland
L = Warsaw
O = myOrg
OU = DEV
CN = gitlab.example.com

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS = gitlab.example.com
IP = 192.168.01.01
```

```bash
# Create the CSR
openssl req -new -newkey rsa:2048 -nodes -keyout gitlab.example.com.key -config /data/gitlab/csr/server_cert.cnf -out gitlab.example.com.csr

openssl req -noout -text -in gitlab.example.com.csr

# Sign your CSR with your PKI. If your PKI is a Windows one, you should get back a .CER file.

# Check info:
openssl x509 -text -in gitlab.example.com.cer -noout
```

```bash
### push crt/key into Gitlab
cp /tmp/gitlab.example.com.cer cert/gitlab.example.com.crt
cp /tmp/gitlab.example.com.key cert/gitlab.example.com.key
cp /tmp/gitlab.example.com.cer cert/192.168.01.01.crt
cp /tmp/gitlab.example.com.key cert/192.168.01.01.key

### push the rootCA into Gitlab
cp /etc/pki/ca-trust/source/anchors/domain-issuing.crt /data/gitlab/config/trusted-certs/domain-issuing.crt
cp /etc/pki/ca-trust/source/anchors/domain-rootca.crt /data/gitlab/config/trusted-certs/domain-rootca.crt

### Reconfigure
vi /data/gitlab/config/gitlab.rb
docker exec gitlab bash -c 'update-ca-certificates'
docker exec gitlab bash -c 'gitlab-ctl reconfigure'

### Stop / Start
docker stop gitlab
docker rm gitlab
docker run -d -p 5050:5050 -p 2289:22 -p 443:443 --restart=always \
-v /data/gitlab/config:/etc/gitlab \
-v /data/gitlab/logs:/var/log/gitlab \
-v /data/gitlab/data:/var/opt/gitlab \
-v /data/gitlab/cert:/etc/gitlab/ssl \
-v /data/gitlab/config/trusted-certs:/usr/local/share/ca-certificates \
--name gitlab gitlab/gitlab-ce:15.0.5-ce.0
```

## Health checks

```bash
docker exec gitlab bash -c 'gitlab-ctl status'
docker exec -it gitlab gitlab-rake gitlab:check SANITIZE=true
docker exec -it gitlab gitlab-rake gitlab:env:info
```

## Backup

```bash
docker exec -it gitlab gitlab-rake gitlab:backup:create --trace

# Alternate ways to do it
docker exec gitlab bash -c 'gitlab-backup create'
docker exec gitlab bash -c 'gitlab-backup create SKIP=repositories'
docker exec gitlab bash -c 'gitlab-backup create SKIP=registry'
```

## Restore from a backup

```bash
# Restore
gitlab-ctl reconfigure
gitlab-ctl start
gitlab-ctl stop unicorn
gitlab-ctl stop sidekiq
gitlab-ctl status
ls -lart /var/opt/gitlab/backups

docker exec -it gitlab gitlab-rake gitlab:backup:restore --trace
docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP=1537738690_2018_09_23_10.8.3 --trace

# Restart
docker exec gitlab bash -c 'gitlab-ctl restart'
```

## Update

Pre-checks before an update:

```bash
sudo docker exec -it gitlab gitlab-rake gitlab:check
sudo docker exec -it gitlab gitlab-rake gitlab:doctor:secrets
```
# GUI
# IdM
## Server

IdM — Identity Manager.

Prerequisites:

- repository configured
- NTP synchronized
- DHCP/DNS config checked
- `hostname -f` matches the hostname
- access to the IdM web UI: https://idm01.idm.ad-support.local/ipa/ui/

```bash
yum install -y ipa-server ipa-server-dns

ipa-server-install \
  --domain=example.com \
  --realm=EXAMPLE.COM \
  --ds-password=password \
  --admin-password=password \
  --hostname=classroom.example.com \
  --ip-address=172.25.0.254 \
  --reverse-zone=0.25.172.in-addr.arpa. \
  --forwarder=208.67.222.222 \
  --allow-zone-overlap \
  --setup-dns \
  --unattended
```

## Client linked to IdM

```bash
yum install -y ipa-client

ipa-client-install --mkhomedir --enable-dns-updates --force-ntpd -p admin@EXAMPLE.COM --password='password' --force-join -U

# Test login
echo -n 'password' | kinit admin
```

## Script to check the DNS config for an IdM server

```bash
# quoted 'EOF' keeps $1 and $HOSTNAME literal in the generated script
cat <<'EOF' > ~/IdmZoneCheck.sh
#!/bin/bash
### IdM zone check ###
# Check if the zone name is provided as a parameter #
if [ -z "$1" ]; then
    echo -e "Provide the zone name to be checked as a parameter!\n(ex: IdmZoneCheck.sh domain.local)"
    exit
fi
clear
echo -e "### IDM / TCP ###\n\n"
echo -e "TCP / kerberos-master (SRV)"
dig +short _kerberos-master._tcp.$1. SRV
echo -e "_TCP / kerberos (SRV)"
dig +short _kerberos._tcp.$1. SRV
echo -e "_TCP / kpasswd (SRV)"
dig +short _kpasswd._tcp.$1. SRV
echo -e "_TCP / ldap (SRV)"
dig +short _ldap._tcp.$1. SRV
echo -e "\n### IDM / UDP ###\n\n"
echo -e "_UDP / kerberos-master (SRV)"
dig +short _kerberos-master._udp.$1. SRV
echo -e "_UDP / kerberos (SRV)"
dig +short _kerberos._udp.$1. SRV
echo -e "_UDP / kpasswd (SRV)"
dig +short _kpasswd._udp.$1. SRV
echo -e "\n### IDM / MSDCS DC TCP ###\n\n"
echo -e "_MSDCS / TCP / kerberos (SRV)"
dig +short _kerberos._tcp.dc._msdcs.$1. SRV
echo -e "_MSDCS / TCP / ldap (SRV)"
dig +short _ldap._tcp.dc._msdcs.$1. SRV
echo -e "\n### IDM / MSDCS DC UDP ###\n\n"
echo -e "_MSDCS / UDP / kerberos (SRV)"
dig +short _kerberos._udp.dc._msdcs.$1. SRV
echo -e "\n### IDM / REALM ###\n\n"
echo -e "REALM (TXT)"
dig +short _kerberos.$1. TXT
echo -e "\n### IDM / CA ###\n\n"
echo -e "A / ipa-ca"
dig +short ipa-ca.$1. A
echo -e "\n### IDM / A ###\n\n"
echo -e "A / $HOSTNAME"
dig +short $HOSTNAME. A
EOF
chmod +x ~/IdmZoneCheck.sh
```

Script usage:

```bash
./IdmZoneCheck.sh idm.ad-support.local
```
# Install
## Prerequisites

- Check hardware compatibility: Oracle Linux Hardware Certification List (HCL).
- A minimum of two (2) KVM hosts and no more than seven (7).
- A fully qualified domain name for your engine and each host, with forward and reverse lookup records set in the DNS.
- At least 10 GB of free space in `/var/tmp`.
- A prepared shared storage (NFS or iSCSI) of at least 74 GB, to be used as a data storage domain dedicated to the engine virtual machine.
- iSCSI targets need to be discovered before the oVirt install.
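Two of the checks above (FQDN and `/var/tmp` space) can be scripted; a minimal pre-flight sketch, not part of the oVirt installer:

```shell
#!/bin/sh
# Hypothetical pre-flight sketch for two of the prerequisites above.

# 1) The host name should be fully qualified (contain at least one dot).
fqdn=$(hostname -f 2>/dev/null || uname -n)
case "$fqdn" in
  *.*) echo "FQDN ok: $fqdn" ;;
  *)   echo "WARN: hostname is not fully qualified: $fqdn" ;;
esac

# 2) /var/tmp needs at least 10 GB free (df -P reports 1K blocks in column 4).
avail_kb=$(df -P /var/tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge $((10 * 1024 * 1024)) ]; then
  echo "/var/tmp space ok"
else
  echo "WARN: less than 10 GB free in /var/tmp"
fi
```

Run it before `hosted-engine` deployment and fix any `WARN` line first.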
# Inventory
List all the groups a given host belongs to:

```bash
ansible-inventory --list | jq -r 'map_values(select(.hosts != null and (.hosts | contains(["myhost"])))) | keys[]'
```

Build a list of `host:port` entries from the members of an inventory group:

```yaml
kafka_host: "[{{ groups['KAFKA'] | map('extract', hostvars, 'inventory_hostname') | map('regex_replace', '^', '\"') | map('regex_replace', '\\\"', '\"') | map('regex_replace', '$', ':'+ kafka_port +'\"') | join(', ') }}]"

elasticsearch_host: "{{ groups['ELASTICSEARCH'] | map('extract', hostvars, 'inventory_hostname') | map('regex_replace', '^', '\"') | map('regex_replace', '\\\"', '\"') | map('regex_replace', '$', ':'+ elasticsearch_port +'\"') | join(', ') }}"
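The same `host:port` list can usually be built with fewer filters. A sketch (not from the original), assuming `kafka_port` is defined and the group member names are the connectable hostnames:

```yaml
# Hypothetical shorter form: appends ":<port>" to every group member and
# renders the result as a JSON list, e.g. ["kafka1:9092", "kafka2:9092"]
kafka_host: "{{ groups['KAFKA'] | map('regex_replace', '$', ':' ~ kafka_port) | list | to_json }}"
```

`to_json` takes care of the per-element quoting that the original handles with chained `regex_replace` filters.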
# Manual
Manuals for commands:

- `man <cmd>`: open the man page of a command.
- `Space`: go forward page by page.
- `b`: go back page by page.
- `q`: quit.
- `Enter`: go forward line by line.
- `/<word>`: search for a word in the man page.
- `n`: jump to the next match of the search.
- `N`: jump back to the previous match.
- `man -k <key word>`: search all man pages for a keyword.
- `man -k <word1>.*<word2>`: `.*` allows searching for several words.
- `whatis <cmd>`: give a short explanation of the command.
# Mysql
## Example

```powershell
# Import values with the connection details
. .\values.ps1

$scriptFilePath = "$MyPath\Install\MysqlBase\Script.sql"

# Load the required DLL file (depends on your connector)
[void][System.Reflection.Assembly]::LoadFrom("C:\Program Files (x86)\MySQL\MySQL Connector Net 8.0.23\Assemblies\v4.5.2\MySql.Data.dll")

# Load the SQL script file into a variable
$scriptContent = Get-Content -Path $scriptFilePath -Raw

# Execute the SQL script
$Connection = [MySql.Data.MySqlClient.MySqlConnection]@{
    ConnectionString = "server=$MysqlIP;uid=$MysqlUser;Port=3306;user id=$MysqlUser;pwd=$MysqlPassword;database=$MysqlDatabase;pooling=false;CharSet=utf8;SslMode=none"
}
$sql = New-Object MySql.Data.MySqlClient.MySqlCommand
$sql.Connection = $Connection
$sql.CommandText = $scriptContent
Write-Host $sql.CommandText
$Connection.Open()
$sql.ExecuteNonQuery()
$Connection.Close()
```
# Oracle Basics
## Oracle DB Diagram

```mermaid
---
config:
  theme: forest
  layout: elk
---
flowchart TD
    subgraph s1["Instance DB"]
        style s1 fill:#E8F5E9,stroke:#388E3C,stroke-width:2px
        subgraph s1a["Background Processes"]
            style s1a fill:#FFF9C4,stroke:#FBC02D,stroke-width:1px
            n5["PMON (Process Monitor)"]
            n6["SMON (System Monitor)"]
            n10["RECO (Recoverer Process)"]
        end
        subgraph s1b["PGA (Process Global Area)"]
            style s1b fill:#E3F2FD,stroke:#1976D2,stroke-width:1px
            n1["Processes"]
        end
        subgraph s1c["SGA (System Global Area)"]
            style s1c fill:#FFEBEE,stroke:#D32F2F,stroke-width:1px
            subgraph n7["Shared Pool (SP)"]
                style n7 fill:#F3E5F5,stroke:#7B1FA2,stroke-width:1px
                n7a["DC (Dictionary Cache)"]
                n7b["LC (Library Cache)"]
                n7c["RC (Result Cache)"]
            end
            n8["DB Cache (DBC)"]
            n9["Redo Buffer"]
            n3["DBWR (DB Writer)"]
            n4["LGWR (Log Writer)"]
            n5["PMON (Process Monitor)"]
            n6["SMON (System Monitor)"]
            n10["RECO (Recoverer Process)"]
        end
    end
    subgraph s2["Database: Physical Files"]
        style s2 fill:#FFF3E0,stroke:#F57C00,stroke-width:2px
        n11["TBS (Tablespaces, files in .DBF)"]
        n12["Redo Log Files"]
        n13["Control Files"]
        n14["SPFILE (Binary Authentication File)"]
        n15["ArchiveLog files"]
    end
    subgraph s3["Operating System"]
        style s3 fill:#E0F7FA,stroke:#00796B,stroke-width:2px
        n16["Listener (Port 1521)"]
    end
    n3 --> n11
    n3 --> n7c
    n4 --> n12
    n6 --> n7a
    s3 --> s1
    s1c <--> n12
    s1c <--> n13
    s1c <--> n14
    n7b <--> n7c
    classDef Aqua stroke-width:1px, stroke-dasharray:none, stroke:#0288D1, fill:#B3E5FC, color:#01579B
    classDef Yellow stroke-width:1px, stroke-dasharray:none, stroke:#FBC02D, fill:#FFF9C4, color:#F57F17
    classDef Green stroke-width:1px, stroke-dasharray:none, stroke:#388E3C, fill:#C8E6C9, color:#1B5E20
    classDef Red stroke-width:1px, stroke-dasharray:none, stroke:#D32F2F, fill:#FFCDD2, color:#B71C1C
    class n11,n12,n13,n14,n15 Aqua
    class n5,n6,n10 Yellow
    class n1 Green
    class n7,n8,n9,n3,n4 Red
```

## Explanation

An Oracle server includes an Oracle Instance and an Oracle Database.
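The pieces in the diagram can be inspected from SQL*Plus through the standard `v$` dynamic performance views. A sketch, assuming a running instance and sufficient privileges:

```sql
-- Sizes of the main SGA components (DB cache, shared pool, redo buffer)
SELECT * FROM v$sga;

-- Running background processes (PMON, SMON, DBWR, LGWR, RECO, ...)
SELECT name, description FROM v$bgprocess WHERE paddr <> '00';

-- Physical files known to the instance
SELECT name   FROM v$controlfile;
SELECT member FROM v$logfile;
SELECT name   FROM v$datafile;
```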
# Parsing
## OOP

```powershell
# Convert the JSON into an object and store it in a variable
$a = Get-Content 'D:\temp\mytest.json' -Raw | ConvertFrom-Json
$a.update | % { if ($_.name -eq 'test1') { $_.version = 3.0 } }

$a | ConvertTo-Json -Depth 32 | Set-Content 'D:\temp\mytestBis.json'
```

## Example: updating an XML file

```powershell
# The file we want to change
$xmlFilePath = "$MyPath\EXAMPLE\some.config"

# Read the XML file content
$xml = [xml](Get-Content $xmlFilePath)

$node = $xml.connectionStrings.add | Where-Object { $_.name -eq 'MetaData' -and $_.providerName -eq 'MySql.Data.MySqlClient' }
$node.connectionString = $AuditDB_Value

$node1 = $xml.connectionStrings.add | Where-Object { $_.name -eq 'Account' }
$node1.connectionString = $Account_Value

# Save the updated XML back to the file
$xml.Save($xmlFilePath)

Write-Host "$xmlFilePath Updated"
```

## Nested loop between a JSON and a CSV

```powershell
# Read the JSON file and convert it to a PowerShell object
$jsonContent = Get-Content -Raw -Path ".\example.json" | ConvertFrom-Json

# Read the CSV and set a header to name the columns
$csvState = Import-Csv -Path .\referentials\states.csv -Header "ID", "VALUE" -Delimiter "`t"
# Build a lookup table from it
$TableState = @{}
$csvState | ForEach-Object { $TableState[$_.ID] = $_.VALUE }

# Loop through the Entities array and look up each state
foreach ($item in $jsonContent.Entities) {
    $stateValue = $item.State

    # Compare the ID with stateValue, then get the value
    $status = ($csvState | Where-Object { $_.ID -eq $stateValue }).VALUE

    Write-Host "Status: $status"
}
```

Sources: https://devblogs.microsoft.com/powershell-community/update-xml-files-using-powershell/
# Pull
## Test a playbook locally

```bash
ansible-pull -U https://github.com/MozeBaltyk/Okub.git ./playbooks/tasks/provision.yml
```

## Inside a cloud-init

```yaml
#cloud-config
timezone: ${timezone}

packages:
  - qemu-guest-agent
  - git

package_update: true
package_upgrade: true

## Test 1
ansible:
  install_method: pip
  package_name: ansible-core
  run_user: ansible
  galaxy:
    actions:
      - ["ansible-galaxy", "collection", "install", "community.general"]
      - ["ansible-galaxy", "collection", "install", "ansible.posix"]
      - ["ansible-galaxy", "collection", "install", "ansible.utils"]
  pull:
    playbook_name: ./playbooks/tasks/provision.yml
    url: "https://github.com/MozeBaltyk/Okub.git"

## Test 2
ansible:
  install_method: pip
  package_name: ansible
  # run_user only with install_method: pip
  run_user: ansible
  setup_controller:
    repositories:
      - path: /home/ansible/Okub
        source: https://github.com/MozeBaltyk/Okub.git
    run_ansible:
      - playbook_dir: /home/ansible/Okub
        playbook_name: ./playbooks/tasks/provision.yml
```

## Troubleshooting

```bash
systemctl --failed
systemctl list-jobs --after
journalctl -e
```

Checks user-data and config: