DevOps

Inventory
List the inventory groups a given host belongs to:

```bash
ansible-inventory --list | jq -r 'map_values(select(.hosts != null and (.hosts | contains(["myhost"])))) | keys[]'
```

Build a quoted "host:port" list from an inventory group, for example to render Kafka and Elasticsearch endpoints:

```yaml
kafka_host: "[{{ groups['KAFKA'] | map('extract', hostvars, 'inventory_hostname') | map('regex_replace', '^', '\"') | map('regex_replace', '\\\"', '\"') | map('regex_replace', '$', ':'+ kafka_port +'\"') | join(', ') }}]"

elasticsearch_host: "{{ groups['ELASTICSEARCH'] | map('extract', hostvars, 'inventory_hostname') | map('regex_replace', '^', '\"') | map('regex_replace', '\\\"', '\"') | map('regex_replace', '$', ':'+ elasticsearch_port +'\"') | join(', ') }}"
```
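For reference, a rough sketch of what the kafka_host template is meant to render to, assuming the KAFKA group contains hosts kafka1 and kafka2 and kafka_port is "9092" (hostnames and port are illustrative):

```yaml
# Illustrative only: intended result of the template above
kafka_host: '["kafka1:9092", "kafka2:9092"]'
```

elasticsearch_host renders the same way, just without the surrounding brackets.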
Pull
Test a playbook locally:

```bash
ansible-pull -U https://github.com/MozeBaltyk/Okub.git ./playbooks/tasks/provision.yml
```

Inside a cloud-init:

```yaml
#cloud-config
timezone: ${timezone}

packages:
  - qemu-guest-agent
  - git

package_update: true
package_upgrade: true

## Test 1
ansible:
  install_method: pip
  package_name: ansible-core
  run_user: ansible
  galaxy:
    actions:
      - ["ansible-galaxy", "collection", "install", "community.general"]
      - ["ansible-galaxy", "collection", "install", "ansible.posix"]
      - ["ansible-galaxy", "collection", "install", "ansible.utils"]
  pull:
    playbook_name: ./playbooks/tasks/provision.yml
    url: "https://github.com/MozeBaltyk/Okub.git"

## Test 2
ansible:
  install_method: pip
  package_name: ansible
  # run_user only works with install_method: pip
  run_user: ansible
  setup_controller:
    repositories:
      - path: /home/ansible/Okub
        source: https://github.com/MozeBaltyk/Okub.git
    run_ansible:
      - playbook_dir: /home/ansible/Okub
        playbook_name: ./playbooks/tasks/provision.yml
```

Troubleshooting

```bash
systemctl --failed
systemctl list-jobs --after
journalctl -e
```

Check user-data and config:
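The snippet stops before the actual check commands; as a hedged sketch, a few standard cloud-init commands that inspect the received user-data and validate the merged config (the `schema` subcommand needs a reasonably recent cloud-init; older releases used `cloud-init devel schema`):

```bash
# Overall cloud-init outcome, including any module errors
cloud-init status --long

# Show the user-data this instance actually received
cloud-init query userdata

# Validate the merged cloud-config against the schema
cloud-init schema --system --annotate
```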
S3 blockstorage
S3cmd command

s3cmd is a command-line tool for working with S3-compatible storage.

Install the command:

```bash
# Ubuntu install
sudo apt-get install s3cmd

# Redhat install
sudo dnf install s3cmd

# or from sources
wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.2.0/s3cmd-2.2.0.tar.gz
tar xzf s3cmd-2.2.0.tar.gz
cd s3cmd-2.2.0
sudo python3 setup.py install
```

Configure it

From a cloud provider (for example DigitalOcean): log in to the DigitalOcean Control Panel, then navigate to API > Spaces Access Keys and generate a new key pair.
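The configuration steps are cut off after key generation; a minimal sketch of what usually follows, with bucket and file names being illustrative:

```bash
# Interactive setup: prompts for the access key, secret key and, for DO Spaces, the endpoint
s3cmd --configure

# Basic usage once configured
s3cmd mb s3://my-space                  # create a bucket
s3cmd put backup.tar.gz s3://my-space/  # upload a file
s3cmd ls s3://my-space                  # list bucket contents
```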
Terraform
Validate Terraform code

```bash
dirs -c
for DIR in $(find ./examples -type d); do
  pushd $DIR
  terraform init
  terraform fmt -check
  terraform validate
  popd
done
```

Execute Terraform

```bash
export DO_PAT="dop_v1_xxxxxxxxxxxxxxxx"
doctl auth init --context rkub

# inside a dir with a tf file
terraform init
terraform validate
terraform plan -var "do_token=${DO_PAT}"
terraform apply -var "do_token=${DO_PAT}" -auto-approve

# clean apply
terraform plan -out=infra.tfplan -var "do_token=${DO_PAT}"
terraform apply infra.tfplan

# Control
terraform show terraform.tfstate

# Destroy
terraform plan -destroy -out=terraform.tfplan -var "do_token=${DO_PAT}"
terraform apply terraform.tfplan
```

Connect to server

Get the IP from a terraform output:

```bash
ssh root@$(terraform output -json ip_address_workers | jq -r '.[0]') -i .key
```

Work with YAML in Terraform

Two possibilities:
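The list is truncated in the source; as a hedged sketch, the two usual approaches are the built-in `yamldecode()`/`yamlencode()` functions and rendering a YAML template with `templatefile()` (file names and the `workers` key below are illustrative):

```hcl
# Option 1: read a YAML file straight into a Terraform value
locals {
  settings = yamldecode(file("${path.module}/settings.yaml"))
}

output "first_worker" {
  # assumes settings.yaml defines a `workers` list
  value = local.settings.workers[0]
}

# Option 2: render a YAML template with variables, e.g. cloud-init user-data
locals {
  user_data = templatefile("${path.module}/cloud-init.yaml.tpl", {
    timezone = "Europe/Paris"
  })
}
```

With option 2, placeholders such as ${timezone} in the template are substituted at render time, which is how the cloud-init snippet shown earlier would receive its timezone value.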