πŸ“¦ Archive
Tar, the Β« tape archiver Β», preserves file permissions and ownership.

The basics

```bash
# Archive
tar cvf mon_archive.tar <file1> <file2> <dir/>

## Archive and compress with zstd everything under /target/dir (the archive lands in the current dir)
tar -I zstd -vcf archive.tar.zstd -C /target/dir .

# Extract
tar xvf mon_archive.tar

# Extract into a target dir
tar -zxvf new.tar.gz -C /target/dir
```

Other useful options

- `t` : list the archive's content.
- `T` : read the list of files to archive from a file.
- `P` : preserve absolute paths (useful to back up /etc).
- `X` : exclude patterns listed in a file.
- `z` : gzip compression.
- `j` : bzip2 compression.
- `J` : LZMA/xz compression.
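A short sketch combining some of these options (file names here are just examples):

```bash
# List contents without extracting
tar tvf mon_archive.tar

# Back up /etc keeping absolute paths, excluding patterns listed in excludes.txt
tar cvPf etc_backup.tar -X excludes.txt /etc
```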
πŸ”’ Vault on k8s
Some time ago, I wrote a small shell script to handle Vault on a Kubernetes cluster. Kept here for documentation purposes.

Install Vault with helm

```bash
#!/bin/bash

## Variables
DIRNAME=$(dirname $0)
DEFAULT_VALUE="vault/values-override.yaml"
NewAdminPasswd="PASSWORD"
PRIVATE_REGISTRY_USER="registry-admin"
PRIVATE_REGISTRY_PASSWORD="PASSWORD"
PRIVATE_REGISTRY_ADDRESS="registry.example.com"
DOMAIN="example.com"
INGRESS="vault.${DOMAIN}"

if [ -z ${CM_NS+x} ]; then
    CM_NS='your-namespace'
fi

if [ -z ${1+x} ]; then
    VALUES_FILE="${DIRNAME}/${DEFAULT_VALUE}"
    echo -e "\n[INFO] Using default values file '${DEFAULT_VALUE}'"
else
    if [ -f $1 ]; then
        echo -e "\n[INFO] Using values file $1"
        VALUES_FILE=$1
    else
        echo -e "\n[ERROR] No such file $1"
        exit 1
    fi
fi

## Functions
function checkComponentsInstall() {
    componentsArray=("kubectl" "helm")
    for i in "${componentsArray[@]}"; do
        command -v "${i}" >/dev/null 2>&1 ||
        { echo "${i} is required, but it's not installed. Aborting." >&2; exit 1; }
    done
}

function createSecret() {
kubectl get secret -n ${CM_NS} registry-pull-secret --no-headers 2> /dev/null \
|| \
kubectl create secret docker-registry -n ${CM_NS} registry-pull-secret \
    --docker-server=${PRIVATE_REGISTRY_ADDRESS} \
    --docker-username=${PRIVATE_REGISTRY_USER} \
    --docker-password=${PRIVATE_REGISTRY_PASSWORD}
}

function installWithHelm() {
helm dep update ${DIRNAME}/helm

helm upgrade --install vault ${DIRNAME}/helm \
--namespace=${CM_NS} --create-namespace \
--values ${VALUES_FILE} \
--set global.imagePullSecrets[0]=registry-pull-secret \
--set global.image.repository=${PRIVATE_REGISTRY_ADDRESS}/hashicorp/vault-k8s \
--set global.agentImage.repository=${PRIVATE_REGISTRY_ADDRESS}/hashicorp/vault \
--set ingress.hosts[0]=${INGRESS} \
--set ingress.enabled=true \
--set global.leaderElection.namespace=${CM_NS}

echo -e "\n[INFO] sleep 30s" && sleep 30
}

checkComponentsInstall
createSecret
installWithHelm
```

Init Vault on Kubernetes

Allow the local Kubernetes cluster to create and reach secrets on the Vault.
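The init part did not make it into these notes; a rough sketch (assuming the release is named `vault`, and a single unseal key, which is only acceptable for a lab):

```bash
# Initialize the server pod and capture the keys
kubectl exec -n ${CM_NS} vault-0 -- vault operator init \
    -key-shares=1 -key-threshold=1 -format=json > init.json

# Unseal and log in with the root token
VAULT_UNSEAL_KEY=$(jq -r '.unseal_keys_b64[0]' init.json)
kubectl exec -n ${CM_NS} vault-0 -- vault operator unseal ${VAULT_UNSEAL_KEY}
kubectl exec -n ${CM_NS} vault-0 -- vault login "$(jq -r '.root_token' init.json)"
```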
πŸ”— Dependencies
Package with pip3

```bash
pip3 freeze | grep netaddr > requirements.txt
pip3 download -r requirements.txt -d wheel
mv requirements.txt wheel
tar -zcf wheelhouse.tar.gz wheel
tar -zxf wheelhouse.tar.gz
pip3 install -r wheel/requirements.txt --no-index --find-links wheel
```

Package with Poetry

```bash
curl -sSL https://install.python-poetry.org | python3 -
poetry new rp-poetry
poetry add ansible
poetry add poetry
poetry add netaddr
poetry add kubernetes
poetry add jsonpatch
poetry add `cat ~/.ansible/collections/ansible_collections/kubernetes/core/requirements.txt`

poetry build

pip3 install dist/rp_poetry-0.1.0-py3-none-any.whl

poetry export --without-hashes -f requirements.txt -o requirements.txt
```

Push to Nexus

```bash
poetry config repositories.test http://localhost
poetry publish -r test
```

Images Builder

```bash
podman login registry.redhat.io
podman pull registry.redhat.io/ansible-automation-platform-22/ansible-python-base-rhel8:1.0.0-230

pyenv local 3.9.13
python -m pip install poetry
poetry init
poetry add ansible-builder
```
πŸ”± K3S
Specific to RHEL

```bash
# Create a trusted zone for the two internal networks
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 #pods
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 #services
sudo firewall-cmd --reload
sudo firewall-cmd --list-all-zones

# on Master
sudo rm -f /var/lib/cni/networks/cbr0/lock
sudo /usr/local/bin/k3s-killall.sh
sudo systemctl restart k3s
sudo systemctl status k3s

# on Worker
sudo rm -f /var/lib/cni/networks/cbr0/lock
sudo /usr/local/bin/k3s-killall.sh
sudo systemctl restart k3s-agent
sudo systemctl status k3s-agent
```

Check Certificates

```bash
# Get CA from K3s master
openssl s_client -connect localhost:6443 -showcerts < /dev/null 2>&1 | openssl x509 -noout -enddate
openssl s_client -showcerts -connect 193.168.51.103:6443 < /dev/null 2>/dev/null | openssl x509 -outform PEM
openssl s_client -showcerts -connect 193.168.51.103:6443 < /dev/null 2>/dev/null | openssl x509 -outform PEM | base64 | tr -d '\n'

# Check end date:
for i in /var/lib/rancher/k3s/server/tls/*.crt; do echo $i; openssl x509 -enddate -noout -in $i; done

# More efficient:
cd /var/lib/rancher/k3s/server/tls/
for crt in *.crt; do printf '%s: %s\n' "$(date --date="$(openssl x509 -enddate -noout -in "$crt" | cut -d= -f 2)" --iso-8601)" "$crt"; done | sort

# Check CA issuer
for i in $(find . -maxdepth 1 -type f -name "*.crt"); do openssl x509 -in ${i} -noout -issuer; done
```

General Checks RKE2/K3S

A nice gist to troubleshoot etcd: link
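Related to the certificate checks above: K3s rotates certificates that expire within 90 days when the service restarts, and recent releases also have an explicit subcommand. A sketch:

```bash
# Restart to rotate near-expiry certs
sudo systemctl restart k3s

# Or rotate explicitly (recent K3s releases): stop, rotate, start
sudo systemctl stop k3s
sudo k3s certificate rotate
sudo systemctl start k3s
```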
πŸš€ Operator SDK
Operators come in three kinds: Go, Ansible, and Helm.

```bash
## Init an Ansible project
operator-sdk init --plugins=ansible --domain example.org --owner "Your name"

## The command above creates a structure like:
netbox-operator
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ Makefile
β”œβ”€β”€ PROJECT
β”œβ”€β”€ config
β”‚   β”œβ”€β”€ crd
β”‚   β”œβ”€β”€ default
β”‚   β”œβ”€β”€ manager
β”‚   β”œβ”€β”€ manifests
β”‚   β”œβ”€β”€ prometheus
β”‚   β”œβ”€β”€ rbac
β”‚   β”œβ”€β”€ samples
β”‚   β”œβ”€β”€ scorecard
β”‚   └── testing
β”œβ”€β”€ molecule
β”‚   β”œβ”€β”€ default
β”‚   └── kind
β”œβ”€β”€ playbooks
β”‚   └── install.yml
β”œβ”€β”€ requirements.yml
β”œβ”€β”€ roles
β”‚   └── deployment
└── watches.yaml
```

```bash
## Create first role
operator-sdk create api --group app --version v1alpha1 --kind Deployment --generate-role
```
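From there the scaffolded Makefile drives the rest. A sketch of the usual next steps (the image path is an example, and the target names assume the stock operator-sdk scaffolding):

```bash
# Build and push the operator image
make docker-build docker-push IMG=registry.example.com/netbox-operator:v0.0.1

# Deploy it (applies CRDs and the manager), then check the controller
make deploy IMG=registry.example.com/netbox-operator:v0.0.1
kubectl get pods -n netbox-operator-system
```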
🚠 Quay.io
Deploy a Quay.io / Mirror-registry on container

Nothing original here, it is just the Red Hat documentation, but it can be useful to kickstart a registry.

Prerequisites:

- 10G /home
- 15G /var
- 300G /srv or /opt (depending on quayRoot)
- min 2 or more vCPUs
- min 8 GB of RAM

```bash
# packages
sudo yum install -y podman
sudo yum install -y rsync
sudo yum install -y jq

# Get tar
mirror="https://mirror.openshift.com/pub/openshift-v4/clients"
wget ${mirror}/mirror-registry/latest/mirror-registry.tar.gz
tar zxvf mirror-registry.tar.gz

# Get oc-mirror
curl https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest/oc-mirror.rhel9.tar.gz -O

# Basic install
sudo ./mirror-registry install \
  --quayHostname quay01.example.local \
  --quayRoot /opt

# More detailed install
sudo ./mirror-registry install \
  --quayHostname quay01.example.local \
  --quayRoot /srv \
  --quayStorage /srv/quay-storage \
  --pgStorage /srv/quay-pg \
  --sslCert tls.crt \
  --sslKey tls.key

podman login -u init \
  -p 7u2Dm68a1s3bQvz9twrh4Nel0i5EMXUB \
  quay01.example.local:8443 \
  --tls-verify=false

# By default the login goes into:
cat $XDG_RUNTIME_DIR/containers/auth.json

# Get IP
sudo podman inspect --format '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' quay-app

# Uninstall
sudo ./mirror-registry uninstall -v \
  --quayRoot <example_directory_name>

# Info
curl -u init:password https://quay01.example.local:8443/v2/_catalog | jq
curl -u root:password https://<url>:<port>/v2/ocp4/openshift4/tags/list | jq

# Get an example of imageset
oc-mirror init --registry quay.example.com:8443/mirror/oc-mirror-metadata

# Get list of Operators, channels, packages
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14 --package=kubevirt-hyperconverged --channel=stable
```

unlock user init/admin

```bash
QUAY_POSTGRES=`podman ps | grep quay-postgres | awk '{print $1}'`

podman exec -it $QUAY_POSTGRES psql -d quay -c "UPDATE public.\"user\" SET invalid_login_attempts = 0 WHERE username = 'init'"
```

Source Mirror-registry
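One step the notes do not show: running the actual mirroring with the imageset generated above. A sketch (assuming the config produced by `oc-mirror init` was saved as `imageset-config.yaml`):

```bash
# Mirror everything described in the imageset config to the local registry
oc-mirror --config=imageset-config.yaml docker://quay01.example.local:8443
```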
🚦 Gita
Presentation

Gita is an open-source Python project to handle a large number of git repositories, available here.

```bash
# Install
pip3 install -U gita

# add repo in gita
gita add dcc/ssg/toolset
gita add -r dcc/ssg # recursively add
gita add -a dcc # recursively add and auto-group based on folder structure

# create a group
gita group add docs -n ccn

# Checks
gita ls
gita ll -g
gita group ls
gita group ll
gita st dcc

# Use
gita pull ccn
gita push ccn

gita freeze
```
Administration
Hosted-engine Administration

Connect to the hosted-engine VM as root with the password set up during the install:

```bash
# Generate a backup
engine-backup --scope=all --mode=backup --file=/root/backup --log=/root/backuplog

# Restore from a backup on a fresh install
engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions
engine-setup

# Restore a backup on an existing install
engine-cleanup
engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
engine-setup
```

Host Administration

Connect over SSH to the host:

```bash
# Put a host in maintenance mode manually
hosted-engine --vm-status
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status

# Remove maintenance mode
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status

# Upgrade hosted-engine (global maintenance first)
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status
engine-upgrade-check
dnf update ovirt\*setup\* # update the setup packages
engine-setup # launch it to update the engine
```

/!\ Connect individually with KVM

Virt-manager does not work: oVirt uses libvirt, but not the way plain KVM does…
Bash Shortcuts
Most useful shortcuts

- Ctrl + r : search the history in reverse (press Ctrl + r again to go further back).
- Ctrl + l : clear the screen (instead of using the β€œclear” command).
- Ctrl + p : recall the previous command.
- Ctrl + x + Ctrl + e : edit the current command in an external editor (needs export EDITOR=vim).
- Ctrl + Shift + v : paste in Linux terminals (Ctrl + Shift + c to copy).
- Ctrl + a : move to the beginning of the line.
- Ctrl + e : move to the end of the line.
- Ctrl + xx : move to the opposite end of the line.
- Ctrl + left : move left one word.
- Ctrl + right : move right one word.
CEPH
Certificates Authority
Trust a CA on a Linux host

```bash
# [RHEL] The RootCA from the DC needs to be installed on the host:
cp my-domain-issuing.crt /etc/pki/ca-trust/source/anchors/my_domain_issuing.crt
cp my-domain-rootca.crt /etc/pki/ca-trust/source/anchors/my_domain_rootca.crt
update-ca-trust extract

# [Ubuntu]
sudo apt-get install -y ca-certificates
sudo cp local-ca.crt /usr/local/share/ca-certificates
sudo update-ca-certificates
```
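To confirm the CA actually landed in the trust store, a quick sketch (certificate names follow the examples above):

```bash
# List trusted anchors and look for the new CA (p11-kit)
trust list | grep -i -A2 "my_domain"

# Or validate a server certificate against the system bundle (RHEL path)
openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt server.crt
```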
Collection
List

```bash
ansible-galaxy collection list
```

Install an Ansible Collection

```bash
# From the official Ansible Galaxy repo
ansible-galaxy collection install community.general

# From a local tarball
ansible-galaxy collection install ./community-general-6.0.0.tar.gz

# From a custom repo
ansible-galaxy collection install git+https://git.example.com/projects/namespace.collectionName.git
ansible-galaxy collection install git+https://git.example.com/projects/namespace.collectionName,v1.0.2
ansible-galaxy collection install git+https://git.example.com/namespace/collectionName.git

# From a requirements.yml file
ansible-galaxy collection install -r ./requirements.yml
```

Requirements file to install Ansible Collections

```yaml
collections:
  - name: kubernetes.core

  - name: https://gitlab.example.com/super-group/collector.git
    type: git
    version: "v1.0.6"

  - name: https://gitlab.ipolicedev.int/another-projects/plates.git
    type: git
```
Git
GIT is a distributed version control system that was created by Linus Torvalds, the mastermind of Linux itself. It was designed to be a superior version control system to those that were readily available, the two most common of these being CVS and Subversion (SVN). Whereas CVS and SVN use the Client/Server model for their systems, GIT operates a little differently. Instead of downloading a project, making changes, and uploading it back to the server, GIT makes the local machine act as a server. Tecmint
Gitea
Prerequisites

- Firewalld activated (important: otherwise the routing to the app does not work)
- Podman and jq installed

Import image

```bash
podman pull docker.io/gitea/gitea:1-rootless
podman save docker.io/gitea/gitea:1-rootless -o gitea-rootless.tar
podman load < gitea-rootless.tar
```

Install

cat /etc/systemd/system/container-gitea-app.service

```ini
# container-gitea-app.service
[Unit]
Description=Podman container-gitea-app.service

Wants=network.target
After=network-online.target
RequiresMountsFor=/var/lib/containers/storage /var/run/containers/storage

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
PIDFile=%t/container-gitea-app.pid
Type=forking

ExecStartPre=/bin/rm -f %t/container-gitea-app.pid %t/container-gitea-app.ctr-id
ExecStart=/usr/bin/podman container run \
  --conmon-pidfile %t/container-gitea-app.pid \
  --cidfile %t/container-gitea-app.ctr-id \
  --cgroups=no-conmon \
  --replace \
  --detach \
  --tty \
  --env DB_TYPE=sqlite3 \
  --env DB_HOST=gitea-db:3306 \
  --env DB_NAME=gitea \
  --env DB_USER=gitea \
  --env DB_PASSWD=9Oq6P9Tsm6j8J7c18Jxc \
  --volume gitea-data-volume:/var/lib/gitea:Z \
  --volume gitea-config-volume:/etc/gitea:Z \
  --network gitea-net \
  --publish 2222:2222 \
  --publish 3000:3000 \
  --label "io.containers.autoupdate=registry" \
  --name gitea-app \
  docker.io/gitea/gitea:1-rootless

ExecStop=/usr/bin/podman container stop \
  --ignore \
  --cidfile %t/container-gitea-app.ctr-id \
  -t 10

ExecStopPost=/usr/bin/podman container rm \
  --ignore \
  -f \
  --cidfile %t/container-gitea-app.ctr-id

[Install]
WantedBy=multi-user.target default.target
```

Configuration inside /var/lib/containers/storage/volumes/gitea-config-volume/_data/app.ini
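The unit assumes the network and volumes already exist; a sketch of creating them and starting the service (names taken from the unit above):

```bash
# One-time resources referenced by the unit
sudo podman network create gitea-net
sudo podman volume create gitea-data-volume
sudo podman volume create gitea-config-volume

# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable --now container-gitea-app.service
sudo podman ps --filter name=gitea-app
```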
Github
Get tag_name from latest

```bash
export RKE_VERSION=$(curl -s https://update.rke2.io/v1-release/channels | jq -r '.data[] | select(.id=="stable") | .latest' | awk -F"+" '{print $1}' | sed 's/v//')
export CERT_VERSION=$(curl -s https://api.github.com/repos/cert-manager/cert-manager/releases/latest | jq -r .tag_name)
export RANCHER_VERSION=$(curl -s https://api.github.com/repos/rancher/rancher/releases/latest | jq -r .tag_name)
export LONGHORN_VERSION=$(curl -s https://api.github.com/repos/longhorn/longhorn/releases/latest | jq -r .tag_name)
export NEU_VERSION=$(curl -s https://api.github.com/repos/neuvector/neuvector-helm/releases/latest | jq -r .tag_name)
```

Install gh

```bash
# Ubuntu
type -p curl >/dev/null || (sudo apt update && sudo apt install curl -y)
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y

# RedHat
sudo dnf install 'dnf-command(config-manager)'
sudo dnf config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
sudo dnf install gh
```

Autocompletions

```bash
gh completion zsh > $ZSH/completions/_gh
```

Create an ssh key (ed25519)

Login

```bash
gh auth login -p ssh -h GitHub.com -s read:project,delete:repo,repo,workflow -w

gh auth status
github.com
  βœ“ Logged in to github.com as MorzeBaltyk ($HOME/.config/gh/hosts.yml)
  βœ“ Git operations for github.com configured to use ssh protocol.
  βœ“ Token: gho_************************************
  βœ“ Token scopes: delete_repo, gist, read:org, read:project, repo
```

To use your key

One way:
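The notes stop at β€œOne way:”; a sketch of the usual continuation (key path and email are examples, not from the original):

```bash
# Generate the ed25519 key, then load it into ssh-agent
ssh-keygen -t ed25519 -C "you@example.com"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Register the public key with GitHub via gh
gh ssh-key add ~/.ssh/id_ed25519.pub --title "laptop"
```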
Gitlab
Glab CLI

https://glab.readthedocs.io/en/latest/intro.html

```bash
# add token
glab auth login --hostname mygitlab.example.com
# view fork of dep installer
glab repo view mygitlab.example.com/copain/project
# clone fork of dep installer
glab repo clone mygitlab.example.com/copain/project
```

Install

Optimization (in gitlab.rb):

```ruby
puma['worker_processes'] = 16
puma['worker_timeout'] = 60
puma['min_threads'] = 1
puma['max_threads'] = 4
puma['per_worker_max_memory_mb'] = 2048
```

Certificates

Generate a CSR config in /data/gitlab/csr/server_cert.cnf:

```ini
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[req_distinguished_name]
C = PL
ST = Poland
L = Warsaw
O = myOrg
OU = DEV
CN = gitlab.example.com

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS = gitlab.example.com
IP = 192.168.01.01
```

```bash
# Create CSR
openssl req -new -newkey rsa:2048 -nodes -keyout gitlab.example.com.key -config /data/gitlab/csr/server_cert.cnf -out gitlab.example.com.csr

openssl req -noout -text -in gitlab.example.com.csr

# Sign your CSR with your PKI. If your PKI is a Windows one, you should get back a .CER file.

# check info:
openssl x509 -text -in gitlab.example.com.cer -noout
```

```bash
### push it as crt/key in Gitlab
cp /tmp/gitlab.example.com.cer cert/gitlab.example.com.crt
cp /tmp/gitlab.example.com.key cert/gitlab.example.com.key
cp /tmp/gitlab.example.com.cer cert/192.168.01.01.crt
cp /tmp/gitlab.example.com.key cert/192.168.01.01.key

### push rootCA in gitlab
cp /etc/pki/ca-trust/source/anchors/domain-issuing.crt /data/gitlab/config/trusted-certs/domain-issuing.crt
cp /etc/pki/ca-trust/source/anchors/domain-rootca.crt /data/gitlab/config/trusted-certs/domain-rootca.crt

### Reconfigure
vi /data/gitlab/config/gitlab.rb
docker exec gitlab bash -c 'update-ca-certificates'
docker exec gitlab bash -c 'gitlab-ctl reconfigure'

### Stop / Start
docker stop gitlab
docker rm gitlab
docker run -d -p 5050:5050 -p 2289:22 -p 443:443 --restart=always \
-v /data/gitlab/config:/etc/gitlab \
-v /data/gitlab/logs:/var/log/gitlab \
-v /data/gitlab/data:/var/opt/gitlab \
-v /data/gitlab/cert:/etc/gitlab/ssl \
-v /data/gitlab/config/trusted-certs:/usr/local/share/ca-certificates \
--name gitlab gitlab/gitlab-ce:15.0.5-ce.0
```

Health-Checks

```bash
docker exec gitlab bash -c 'gitlab-ctl status'
docker exec -it gitlab gitlab-rake gitlab:check SANITIZE=true
docker exec -it gitlab gitlab-rake gitlab:env:info
```

Backup

```bash
docker exec -it gitlab gitlab-rake gitlab:backup:create --trace

# Alternate way to do it
docker exec gitlab bash -c 'gitlab-backup create'
docker exec gitlab bash -c 'gitlab-backup create SKIP=repositories'
docker exec gitlab bash -c 'gitlab-backup create SKIP=registry'
```

Restore from a Backup

```bash
# Restore
gitlab-ctl reconfigure
gitlab-ctl start
gitlab-ctl stop puma      # unicorn on pre-14 releases
gitlab-ctl stop sidekiq
gitlab-ctl status
ls -lart /var/opt/gitlab/backups

docker exec -it gitlab gitlab-rake gitlab:backup:restore --trace
docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP=1537738690_2018_09_23_10.8.3 --trace

# Restart
docker exec gitlab bash -c 'gitlab-ctl restart'
```

Update

Pre-checks before update:

```bash
sudo docker exec -it gitlab gitlab-rake gitlab:check
sudo docker exec -it gitlab gitlab-rake gitlab:doctor:secrets
```
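For the update itself with this docker setup, a sketch (the target tag is an example; for bigger jumps, walk through the required upgrade stops one version at a time):

```bash
# Pull the next version, then recreate the container with the same volumes
docker pull gitlab/gitlab-ce:15.11.13-ce.0
docker stop gitlab && docker rm gitlab
# re-run the `docker run` command from the Certificates section with the new tag,
# then watch the migrations:
docker logs -f gitlab
```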
GUI
Idm
Server Idm - Identity Manager

Prerequisites:

- repository configured
- NTP synchronized
- DHCP/DNS config checked
- hostname -f == hostname
- access to the IDM webui: https://idm01.idm.ad-support.local/ipa/ui/

```bash
yum install -y ipa-server ipa-server-dns

ipa-server-install \
  --domain=example.com \
  --realm=EXAMPLE.COM \
  --ds-password=password \
  --admin-password=password \
  --hostname=classroom.example.com \
  --ip-address=172.25.0.254 \
  --reverse-zone=0.25.172.in-addr.arpa. \
  --forwarder=208.67.222.222 \
  --allow-zone-overlap \
  --setup-dns \
  --unattended
```

Link a client to IDM

```bash
yum install -y ipa-client

ipa-client-install --mkhomedir --enable-dns-updates --force-ntpd -p admin@EXAMPLE.COM --password='password' --force-join -U

# Test login
echo -n 'password' | kinit admin
```

Script to check if the DNS config is right for an IDM server

```bash
cat <<'EOF' > ~/IdmZoneCheck.sh
#!/bin/bash
### IdM zone check ###
# Check if the zone name is provided as a parameter #
if [ -z "$1" ]; then
    echo -e "Provide the zone name to be checked as a parameter!\n(ex: IdmZoneCheck.sh domain.local)"
    exit 1
fi
clear
echo -e "### IDM / TCP ###\n\n"
echo -e "TCP / kerberos-master (SRV)"
dig +short _kerberos-master._tcp.$1. SRV
echo -e "_TCP / kerberos (SRV)"
dig +short _kerberos._tcp.$1. SRV
echo -e "_TCP / kpasswd (SRV)"
dig +short _kpasswd._tcp.$1. SRV
echo -e "_TCP / ldap (SRV)"
dig +short _ldap._tcp.$1. SRV
echo -e "\n### IDM / UDP ###\n\n"
echo -e "_UDP / kerberos-master (SRV)"
dig +short _kerberos-master._udp.$1. SRV
echo -e "_UDP / kerberos (SRV)"
dig +short _kerberos._udp.$1. SRV
echo -e "_UDP / kpasswd (SRV)"
dig +short _kpasswd._udp.$1. SRV
echo -e "\n### IDM / MSDCS DC TCP ###\n\n"
echo -e "_MSDCS / TCP / kerberos (SRV)"
dig +short _kerberos._tcp.dc._msdcs.$1. SRV
echo -e "_MSDCS / TCP / ldap (SRV)"
dig +short _ldap._tcp.dc._msdcs.$1. SRV
echo -e "\n### IDM / MSDCS DC UDP ###\n\n"
echo -e "_MSDCS / UDP / kerberos (SRV)"
dig +short _kerberos._udp.dc._msdcs.$1. SRV
echo -e "\n### IDM / REALM ###\n\n"
echo -e "REALM (TXT)"
dig +short _kerberos.$1. TXT
echo -e "\n### IDM / CA ###\n\n"
echo -e "A / ipa-ca"
dig +short ipa-ca.$1. A
echo -e "\n### IDM / A ###\n\n"
echo -e "A / $HOSTNAME"
dig +short $HOSTNAME. A
EOF
chmod +x ~/IdmZoneCheck.sh
```

Script usage:

```bash
./IdmZoneCheck.sh idm.ad-support.local
```
Install
Prerequisites

- Check hardware compatibility: Oracle Linux Hardware Certification List (HCL).
- A minimum of two (2) KVM hosts and no more than seven (7).
- A fully-qualified domain name for your engine and host, with forward and reverse lookup records set in the DNS.
- /var/tmp with at least 10 GB of space.
- A prepared shared storage (NFS or iSCSI) of at least 74 GB, used as a data storage domain dedicated to the engine virtual machine.
- iSCSI must be discovered before the oVirt install.
Inventory
```bash
# List every group a given host belongs to
ansible-inventory --list | jq -r 'map_values(select(.hosts != null and (.hosts | contains(["myhost"])))) | keys[]'
```

```yaml
# Build "host:port" lists from inventory groups
kafka_host: "[{{ groups['KAFKA'] | map('extract', hostvars, 'inventory_hostname') | map('regex_replace', '^', '\"') | map('regex_replace', '\\\"', '\"') | map('regex_replace', '$', ':'+ kafka_port +'\"') | join(', ') }}]"

elasticsearch_host: "{{ groups['ELASTICSEARCH'] | map('extract', hostvars, 'inventory_hostname') | map('regex_replace', '^', '\"') | map('regex_replace', '\\\"', '\"') | map('regex_replace', '$', ':'+ elasticsearch_port +'\"') | join(', ') }}"
```
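Two related built-ins worth keeping next to the jq trick (a sketch; `myhost` is the example host above):

```bash
# Visualize the whole group tree
ansible-inventory --graph

# Dump the variables resolved for one host
ansible-inventory --host myhost
```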
Manual
Manuals for commands

- man <cmd> : open the man page of a command.
- space : go forward page by page.
- b : go back page by page.
- q : quit.
- Enter : go forward line by line.
- /<word> : search a word in the man page.
- n : go to the next match of your search.
- N : go back to the previous match.
- man -k <key word> : search all man pages for your keywords.
- man -k <word1>.*<word2> : β€œ.*” allows searching for several words.
- whatis <cmd> : give a short explanation of the command.
Mysql
Example

```powershell
# Import values with connection details
. .\values.ps1

$scriptFilePath = "$MyPath\Install\MysqlBase\Script.sql"

# Load the required DLL file (depends on your connector)
[void][System.Reflection.Assembly]::LoadFrom("C:\Program Files (x86)\MySQL\MySQL Connector Net 8.0.23\Assemblies\v4.5.2\MySql.Data.dll")

# Load the SQL script file into a variable
$scriptContent = Get-Content -Path $scriptFilePath -Raw

# Execute the SQL script
$Connection = [MySql.Data.MySqlClient.MySqlConnection]@{
    ConnectionString = "server=$MysqlIP;uid=$MysqlUser;Port=3306;user id=$MysqlUser;pwd=$MysqlPassword;database=$MysqlDatabase;pooling=false;CharSet=utf8;SslMode=none"
}
$sql = New-Object MySql.Data.MySqlClient.MySqlCommand
$sql.Connection = $Connection
$sql.CommandText = $scriptContent
Write-Host $sql.CommandText
$Connection.Open()
$sql.ExecuteNonQuery()
$Connection.Close()
```
Oracle Basics
Oracle DB Diagram

```mermaid
---
config:
  theme: forest
  layout: elk
---
flowchart TD
  subgraph s1["Instance DB"]
    style s1 fill:#E8F5E9,stroke:#388E3C,stroke-width:2px
    subgraph s1a["Background Processes"]
      style s1a fill:#FFF9C4,stroke:#FBC02D,stroke-width:1px
      n5["PMON (Process Monitor)"]
      n6["SMON (System Monitor)"]
      n10["RECO (Recoverer Process)"]
    end
    subgraph s1b["PGA (Process Global Area)"]
      style s1b fill:#E3F2FD,stroke:#1976D2,stroke-width:1px
      n1["Processes"]
    end
    subgraph s1c["SGA (System Global Area)"]
      style s1c fill:#FFEBEE,stroke:#D32F2F,stroke-width:1px
      subgraph n7["Shared Pool (SP)"]
        style n7 fill:#F3E5F5,stroke:#7B1FA2,stroke-width:1px
        n7a["DC (Dictionary Cache)"]
        n7b["LC (Library Cache)"]
        n7c["RC (Result Cache)"]
      end
      n8["DB Cache (DBC)"]
      n9["Redo Buffer"]
      n3["DBWR (DB Writer)"]
      n4["LGWR (Log Writer)"]
      n5["PMON (Process Monitor)"]
      n6["SMON (System Monitor)"]
      n10["RECO (Recoverer Process)"]
    end
  end
  subgraph s2["Database: Physical Files"]
    style s2 fill:#FFF3E0,stroke:#F57C00,stroke-width:2px
    n11["TBS (Tablespaces, files in .DBF)"]
    n12["Redo Log Files"]
    n13["Control Files"]
    n14["SPFILE (Binary Authentication File)"]
    n15["ArchiveLog files"]
  end
  subgraph s3["Operating System"]
    style s3 fill:#E0F7FA,stroke:#00796B,stroke-width:2px
    n16["Listener (Port 1521)"]
  end
  n3 --> n11
  n3 --> n7c
  n4 --> n12
  n6 --> n7a
  s3 --> s1
  s1c <--> n12
  s1c <--> n13
  s1c <--> n14
  n7b <--> n7c
  classDef Aqua stroke-width:1px, stroke-dasharray:none, stroke:#0288D1, fill:#B3E5FC, color:#01579B
  classDef Yellow stroke-width:1px, stroke-dasharray:none, stroke:#FBC02D, fill:#FFF9C4, color:#F57F17
  classDef Green stroke-width:1px, stroke-dasharray:none, stroke:#388E3C, fill:#C8E6C9, color:#1B5E20
  classDef Red stroke-width:1px, stroke-dasharray:none, stroke:#D32F2F, fill:#FFCDD2, color:#B71C1C
  class n11,n12,n13,n14,n15 Aqua
  class n5,n6,n10 Yellow
  class n1 Green
  class n7,n8,n9,n3,n4 Red
```

Explanation

An Oracle server includes an Oracle Instance and an Oracle Database.
Parsing
OOP

```powershell
# Convert your JSON into an object and store it in a variable
$a = Get-Content 'D:\temp\mytest.json' -Raw | ConvertFrom-Json
$a.update | % { if ($_.name -eq 'test1') { $_.version = 3.0 } }

$a | ConvertTo-Json -Depth 32 | Set-Content 'D:\temp\mytestBis.json'
```

Example updating an XML

```powershell
# The file we want to change
$xmlFilePath = "$MyPath\EXAMPLE\some.config"

# Read the XML file content
$xml = [xml](Get-Content $xmlFilePath)

$node = $xml.connectionStrings.add | Where-Object { $_.name -eq 'MetaData' -And $_.providerName -eq 'MySql.Data.MySqlClient' }
$node.connectionString = $AuditDB_Value

$node1 = $xml.connectionStrings.add | Where-Object { $_.name -eq 'Account' }
$node1.connectionString = $Account_Value

# Save the updated XML back to the file
$xml.Save($xmlFilePath)

Write-Host "$xmlFilePath Updated"
```

Nested loop between a JSON and a CSV

```powershell
# Read the JSON file and convert it to a PowerShell object
$jsonContent = Get-Content -Raw -Path ".\example.json" | ConvertFrom-Json

# Read the CSV and set a header to name the columns
$csvState = Import-CSV -Path .\referentials\states.csv -Header "ID", "VALUE" -Delimiter "`t"
# Convert it into a lookup hashtable
$TableState = @{}
$csvState | ForEach-Object { $TableState[$_.ID] = $_.VALUE }

# Loop through the Entities array and look up the state
foreach ($item in $jsonContent.Entities) {
    $stateValue = $item.State

    # Compare the ID against stateValue, then get the value
    $status = ($csvState | Where-Object { $_.'ID' -eq $stateValue }).VALUE

    Write-Host "Status: $status"
}
```

Sources

https://devblogs.microsoft.com/powershell-community/update-xml-files-using-powershell/
Pull
Test a playbook locally

```bash
ansible-pull -U https://github.com/MozeBaltyk/Okub.git ./playbooks/tasks/provision.yml
```

Inside a cloud-init

```yaml
#cloud-config
timezone: ${timezone}

packages:
  - qemu-guest-agent
  - git

package_update: true
package_upgrade: true


## Test 1
ansible:
  install_method: pip
  package_name: ansible-core
  run_user: ansible
  galaxy:
    actions:
      - ["ansible-galaxy", "collection", "install", "community.general"]
      - ["ansible-galaxy", "collection", "install", "ansible.posix"]
      - ["ansible-galaxy", "collection", "install", "ansible.utils"]
  pull:
    playbook_name: ./playbooks/tasks/provision.yml
    url: "https://github.com/MozeBaltyk/Okub.git"

## Test 2
ansible:
  install_method: pip
  package_name: ansible
  # run_user only with install_method: pip
  run_user: ansible
  setup_controller:
    repositories:
      - path: /home/ansible/Okub
        source: https://github.com/MozeBaltyk/Okub.git
    run_ansible:
      - playbook_dir: /home/ansible/Okub
        playbook_name: ./playbooks/tasks/provision.yml
```

Troubleshooting

```bash
systemctl --failed
systemctl list-jobs --after
journalctl -e
```

Check user-data and config:
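A sketch of the commands that presumably belong under that last heading (standard subcommands in recent cloud-init releases):

```bash
# Validate the rendered user-data against the schema
cloud-init schema --system

# Show the user-data the instance actually received, and the overall status
cloud-init query userdata
cloud-init status --long
```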
S3 blockstorage
S3cmd command

S3cmd is a tool to handle S3-type object storage.

Install the command

```bash
# Ubuntu install
sudo apt-get install s3cmd

# Redhat install
sudo dnf install s3cmd

# or from sources
wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.2.0/s3cmd-2.2.0.tar.gz
tar xzf s3cmd-2.2.0.tar.gz
cd s3cmd-2.2.0
sudo python3 setup.py install
```

Configure it

From cloud providers (for example DigitalOcean):

- Log in to the DigitalOcean Control Panel.
- Navigate to API > Spaces Access Keys and generate a new key pair.
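Once the key pair exists, configuration and basic usage look like this (a sketch; the bucket name is an example):

```bash
# Interactive setup: paste the access/secret keys and the provider endpoint
s3cmd --configure

# Basic usage
s3cmd ls
s3cmd mb s3://my-bucket
s3cmd put local-file.txt s3://my-bucket/
s3cmd get s3://my-bucket/local-file.txt
```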
Satellite
Sessions
Register your session

Useful to keep track of, document, and share what has been done.

- script : save all commands and output in a β€œtypescript” file.
- script -a : append to an existing β€œtypescript” file (otherwise the previous one is overwritten).
- exit : stop the session.
- asciinema : record the terminal session as a video.

For RHEL, something like Tlog exists and can be configured and centralized with Rsyslog.

Terminal

- /etc/DIR_COLORS.xterm : defines terminal colors.
- dircolors : changes colors in the ls output.
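A quick asciinema sketch to go with the list above (the cast file name is an example):

```bash
# Record, replay, and optionally publish a session
asciinema rec demo.cast
asciinema play demo.cast
asciinema upload demo.cast
```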
sssd
Troubleshooting

```bash
sudo realm list
authselect current
sssctl domain-list
sssctl config-check
getent -s files passwd
getent -s sss passwd user
getent passwd
dig -t SRV _ldap._tcp.example.com
sssctl user-checks toto -s sshd -a auth
```

SSSD process config to link to AD

Prerequisites: ports 389 and 3268 must be open.

For RHEL8:

```bash
dnf -y install realmd adcli sssd oddjob oddjob-mkhomedir samba-common-tools krb5-workstation authselect-compat

realm discover example.com
realm join example.com -U svc-sssd --client-software=sssd --os-name=RedHat --os-version=8

sudo authselect select sssd with-mkhomedir
sudo systemctl enable --now oddjobd.service
```

inside /etc/sssd/sssd.conf

```ini
[sssd]
services = nss, pam, ssh, sudo
domains = example.com
config_file_version = 2
default_domain_suffix = example.com

[domain/example.com]
default_shell = /bin/bash
override_shell = /bin/bash

ad_domain = example.com
krb5_realm = EXAMPLE.COM
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
ldap_id_mapping = True
ldap_user_objectsid = objectSid
ldap_group_objectsid = objectSid
ldap_user_primary_group = primaryGroupID

use_fully_qualified_names = True
fallback_homedir = /home/%u

access_provider = ad
ldap_access_order = filter,expire
ldap_account_expire_policy = ad
ad_access_filter = (memberOf=CN=INTERNAL Team,OU=team-platform,OU=test-groups,DC=example,DC=com)


[nss]
homedir_substring = /home

[pam]
pam_pwd_expiration_warning = 7
pam_account_expired_message = Account expired, please contact AD administrator.
pam_account_locked_message = Account locked, please contact AD administrator.
pam_verbosity = 3

[ssh]

[sudo]
```

Reload config:

```bash
sss_cache -E; systemctl restart sssd; sss_cache -E
systemctl status sssd
```

define sudoers rights

/etc/sudoers.d/admin:

```bash
%EXAMPLE.COM\\internal\ team ALL=(ALL) ALL
```

reload sudoers rights:

```bash
realm permit -g 'internal team@example.com'
```
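After the join, a couple of sanity checks (a sketch; `user` is a placeholder AD account):

```bash
# Resolve an AD user through sssd and simulate an ssh auth
id user@example.com
getent passwd user@example.com
sssctl user-checks user@example.com -s sshd -a auth
```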
Terraform
Validate Terraform code

```bash
dirs -c
for DIR in $(find ./examples -type d); do
  pushd $DIR
  terraform init
  terraform fmt -check
  terraform validate
  popd
done
```

Execute Terraform

```bash
export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
doctl auth init --context rkub

# inside a dir with a tf file
terraform init
terraform validate
terraform plan -var "do_token=${DO_PAT}"
terraform apply -var "do_token=${DO_PAT}" -auto-approve

# clean apply
terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
terraform apply infra.tfplan

# Control
terraform show terraform.tfstate

# Destroy
terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
terraform apply terraform.tfplan
```

Connect to the server, getting the IP with a terraform command:

```bash
ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key
```

Work with yaml in terraform

Two possibilities:
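The notes cut off here; `yamldecode()` is presumably one of the two possibilities, and it can be tried from the shell (a sketch, assuming a local `config.yaml`):

```bash
# Expose the parsed yaml as a local, then inspect it from the console
cat > main.tf <<'EOF'
locals {
  cfg = yamldecode(file("config.yaml"))
}
EOF
terraform init
echo "local.cfg" | terraform console
```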