Unix-Like

🚩 Firewalld
Basic Troubleshooting

```shell
# Get the state
firewall-cmd --state
systemctl status firewalld

# Get info
firewall-cmd --get-default-zone
firewall-cmd --get-active-zones
firewall-cmd --get-zones
firewall-cmd --set-default-zone=home

firewall-cmd --permanent --zone=FedoraWorkstation --add-source=00:FF:B0:CB:30:0A
firewall-cmd --permanent --zone=FedoraWorkstation --add-service=ssh

firewall-cmd --get-log-denied
firewall-cmd --set-log-denied=<all|unicast|broadcast|multicast|off>
```

Add/Remove/List Services

```shell
# Remove
firewall-cmd --zone=public --add-service=ftp --permanent
firewall-cmd --zone=public --remove-service=ftp --permanent
firewall-cmd --zone=public --remove-port=53/tcp --permanent
firewall-cmd --zone=public --list-services

# Add
firewall-cmd --zone=public --new-service=portal --permanent
firewall-cmd --zone=public --service=portal --add-port=8080/tcp --permanent
firewall-cmd --zone=public --service=portal --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-service=portal --permanent
firewall-cmd --reload

firewall-cmd --zone=public --new-service=k3s-server --permanent
firewall-cmd --zone=public --service=k3s-server --add-port=443/tcp --permanent
firewall-cmd --zone=public --service=k3s-server --add-port=6443/tcp --permanent
firewall-cmd --zone=public --service=k3s-server --add-port=8472/udp --permanent
firewall-cmd --zone=public --service=k3s-server --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-service=k3s-server --permanent
firewall-cmd --reload

firewall-cmd --zone=public --new-service=quay --permanent
firewall-cmd --zone=public --service=quay --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-service=quay --permanent
firewall-cmd --reload

firewall-cmd --get-services # It's also possible to add a service from this list
firewall-cmd --runtime-to-permanent
```

Checks and Get Info

List open ports by service:

```shell
for s in $(firewall-cmd --list-services); do echo "$s"; firewall-cmd --permanent --service "$s" --get-ports; done

sudo sh -c 'for s in $(firewall-cmd --list-services); do echo "$s"; firewall-cmd --permanent --service "$s" --get-ports; done'
ssh
22/tcp
dhcpv6-client
546/udp
```

Check one service:

```shell
firewall-cmd --info-service cfrm-IC
cfrm-IC
  ports: 7780/tcp 8440/tcp 8443/tcp
  protocols:
  source-ports:
  modules:
  destination:
```

List zones and their associated services:

```shell
firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: ssh dhcpv6-client https Oracle nimsoft
  ports: 10050/tcp 1521/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

firewall-cmd --zone=backup --list-all
```

Get active zones:

```shell
firewall-cmd --get-active-zones
backup
  interfaces: ens224
public
  interfaces: ens192
```

Tree folder:

```shell
ls /etc/firewalld/
firewalld.conf  helpers/  icmptypes/  ipsets/  lockdown-whitelist.xml  services/  zones/
```

IPSET:

```shell
firewall-cmd --get-ipset-types
firewall-cmd --permanent --get-ipsets
firewall-cmd --permanent --info-ipset=integration
firewall-cmd --ipset=integration --get-entries

firewall-cmd --permanent --new-ipset=test --type=hash:net
firewall-cmd --ipset=local-blocklist --add-entry=103.133.104.0/23
```
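The `--new-service` / `--add-port` commands above end up as XML files under `/etc/firewalld/services/`. A hand-written sketch of the equivalent definition for the `portal` example (the service name and description are this document's own example, not a stock firewalld service):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- /etc/firewalld/services/portal.xml - equivalent of the --new-service
     and --add-port commands above; reload firewalld after editing -->
<service>
  <short>portal</short>
  <description>Custom portal service (example)</description>
  <port protocol="tcp" port="8080"/>
  <port protocol="tcp" port="8443"/>
</service>
```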
👢 Boot
The Boot - starting process

- The BIOS starts automatically and detects the peripherals.
- It loads the startup routine from the MBR (Master Boot Record), which sits on the first sector of the boot disk.
- The MBR contains a loader that loads the "second stage loader": the boot loader, which is specific to the system being booted. Linux has LILO (Linux Loader) or GRUB (Grand Unified Bootloader).
- LILO loads the kernel into memory, decompresses it, and passes it its parameters.
- The kernel mounts the root filesystem / (from this point on, the commands in /sbin and /bin are available).
- The kernel runs the first process, "init".

LILO configuration

LILO can offer several kernels as boot choices; the default choice is "Linux".

- /etc/lilo.conf: configuration of the kernel parameters.
- /sbin/lilo: run it so the new parameters are recorded. It creates the file /boot/map, which records the physical blocks where the startup program lives.
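The MBR layout described above can be poked at without touching a real disk: a sketch that fakes a 512-byte boot sector in a temp file and checks the 0x55 0xAA boot signature the BIOS looks for in the last two bytes of the sector.

```shell
# Build a dummy 512-byte "boot sector" in a temp file (not a real disk)
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 status=none
# Write the boot signature at offset 510, the last two bytes of the sector
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc status=none
# Read it back, as a BIOS-style validity check would
sig=$(od -An -tx1 -j510 -N2 "$img" | tr -d ' ')
echo "$sig"   # 55aa
rm -f "$img"
```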
🗿 Partition
Check your disks

```shell
# Check partitions
parted -l /dev/sda
fdisk -l

# Check partitions - visible before the mkfs
ls /sys/block/sda/sda*
ls /dev/sd*

# Show partitions after the mkfs or pvcreate
blkid
blkid -o list

# Summary of the disks, partitions, FS and LVM
lsblk
lsblk -f
```

Create partition 1 on disk sdb in script mode

```shell
# With fdisk
printf "n\np\n1\n\n\nt\n8e\nw\n" | sudo fdisk /dev/sdb

# With parted
sudo parted /dev/sdb mklabel gpt mkpart primary 1 100% set 1 lvm on
```

Gparted: graphical interface (based on parted, a GNU utility - GPT table).
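How tools like blkid recognize the GPT disklabel that parted writes above: the GPT header lives at LBA 1 (byte offset 512 with 512-byte sectors) and starts with the "EFI PART" signature. A sketch on a throwaway image file, not a real disk:

```shell
# Fake a two-sector disk image and stamp the GPT header signature at LBA 1
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=2 status=none
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc status=none
# Read the signature back, as a probing tool would
sig=$(dd if="$img" bs=1 skip=512 count=8 status=none)
echo "$sig"   # EFI PART
rm -f "$img"
```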
🌱 MDadm
The Basics

mdadm (multiple devices admin) is a software solution to manage RAID. It allows you to create, manage and monitor your disks in a RAID array. You can use full disks (/dev/sdb, /dev/sdc) or partitions (/dev/sdb1, /dev/sdc1). It replaces and completes the older raidtools.

Checks

Basic checks

```shell
# View real-time information about your md devices
cat /proc/mdstat

# Monitor for failed disks (indicated by "(F)" next to the disk)
watch cat /proc/mdstat
```

Check RAID

```shell
# Display details about the RAID array (replace /dev/md0 with your array)
mdadm --detail /dev/md0

# Examine RAID disks for information (not the volume), similar to --detail
mdadm --examine /dev/sd*
```

Settings

The conf file /etc/mdadm.conf does not exist by default and needs to be created once you finish your install. This file is required for auto-assembly at boot.
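The "(F)" failure marker mentioned above is easy to pick out in a script. A sketch that parses a captured /proc/mdstat sample (the array layout shown is an assumption, not from a live system):

```shell
# Sample /proc/mdstat content for a degraded RAID1 with one failed member
sample='md0 : active raid1 sdb1[1](F) sda1[0]
      1046528 blocks super 1.2 [2/1] [_U]'
# Extract any member flagged as failed
failed=$(echo "$sample" | grep -o '[a-z0-9]*\[[0-9]*\](F)')
echo "$failed"   # sdb1[1](F)
```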
📂 Filesystem
FS Types

ext4: the most widespread under GNU/Linux (derived from ext2 and ext3). It is journaled, meaning it logs write operations to guarantee data integrity if the disk stops abruptly. It can handle volumes up to 1,024 PiB and supports preallocating a contiguous region for a file, to minimize fragmentation. Note that ext4 cannot be read natively from Mac OS X or Windows.
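The preallocation mentioned above is exposed to userspace through fallocate(1): on ext4 the space is reserved as a contiguous extent up front. A sketch on a temp file (assumes the underlying filesystem supports fallocate, which ext4 and tmpfs do):

```shell
# Preallocate 1 MiB for a fresh temp file, then check its size
f=$(mktemp)
fallocate -l 1M "$f"
size=$(stat -c %s "$f")
echo "$size"   # 1048576
rm -f "$f"
```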
🧪 SMART
S.M.A.R.T. is a technology that allows you to monitor and analyze the health and performance of your hard drives. It provides valuable information about the status of your storage devices. Here are some useful commands and tips for using S.M.A.R.T. with smartctl.

Display S.M.A.R.T. Information

To display S.M.A.R.T. information for a specific drive, you can use the following command:

```shell
smartctl -a /dev/sda
```

This command will show all available S.M.A.R.T. data for the /dev/sda drive.
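Since `smartctl -a` needs a real disk, this sketch parses a captured attribute table instead: in the output, the raw value is the last column of each attribute row (the sample lines below are illustrative, not from a live drive):

```shell
# Two sample rows from a `smartctl -a` ATA attribute table
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   064   058   000    Old_age   Always       -       36'
# Pull the raw value (last field) of the temperature attribute
temp=$(echo "$sample" | awk '/Temperature_Celsius/ {print $NF}')
echo "$temp"   # 36
```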
🧱 ISCSI
Install

```shell
yum install iscsi-initiator-utils

# Checks
iscsiadm -m session -P 0  # get the target name
iscsiadm -m session -P 3 | grep "Target: iqn\|Attached scsi disk\|Current Portal"

# Discover ISCSI targets
iscsiadm -m discovery -t st -p 192.168.40.112
iscsiadm --mode discovery --type sendtargets --portal 192.168.40.112

# Login
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.b0 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.b1 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.a1 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.a0 -l

# Enable/restart the services
systemctl enable iscsid iscsi && systemctl stop iscsid iscsi && systemctl start iscsid iscsi
```

Rescan BUS

```shell
for BUS in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${BUS}; done

sudo sh -c 'for BUS in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${BUS}; done'
```

Partition your FS
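Discovery prints one `portal,tpgt target-IQN` pair per line; the IQN is the second whitespace-separated field, which is handy for scripting the logins above. A sketch on captured sample output (the portal and IQNs mirror the examples above):

```shell
# Sample `iscsiadm -m discovery` output
sample='192.168.40.112:3260,1 iqn.1992-04.com.emc:cx.ckm00192201413.a0
192.168.40.112:3260,2 iqn.1992-04.com.emc:cx.ckm00192201413.b0'
# Extract just the target IQNs (second field)
targets=$(echo "$sample" | awk '{print $2}')
echo "$targets"
```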
🩺 multipath
Install and Set Multipath

```shell
yum install device-mapper-multipath
```

Check the settings in /etc/multipath.conf:

```
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
}
```

Add a disk to the blacklist, or give it an alias in a multipaths block:

```
multipaths {
    multipath {
        wwid "36000d310004142000000000000000f23"
        alias oralog1
    }
}
```

Special config for some providers. For example, recommended settings for all Clariion/VNX/Unity class arrays that support ALUA:

```
devices {
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        :
        path_checker emc_clariion  ### Rev 47 alua
        hardware_handler "1 alua"  ### modified for alua
        prio alua                  ### modified for alua
        :
    }
}
```

Check the config with:

```shell
multipathd show config | more
```
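The blacklist mentioned above is its own top-level section in /etc/multipath.conf. A sketch (the WWID value is hypothetical, used only for illustration):

```
blacklist {
    # Ignore local, non-SAN device nodes by name
    devnode "^(ram|loop|fd|md|sr)[0-9]*"
    # Ignore one specific LUN by WWID (hypothetical value)
    wwid "36000d310004142000000000000000f99"
}
```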
🧐 LVM
The Basics

List of components:

- PV (Physical Volume)
- VG (Volume Group)
- LV (Logical Volume)
- PE (Physical Extent)
- LE (Logical Extent)
- FS (File System)

LVM2 uses a new driver, the device-mapper, which allows disk sectors to be used in different targets:

- linear (the most used in LVM)
- striped (striped across several disks)
- error (all I/O is treated as an error)
- snapshot (allows asynchronous snapshots)
- mirror (integrates elements used by the pvmove command)

The example below shows a striped volume and a linear volume:

```shell
lvs --all --segments -o +devices
server_xplore_col1 vgdata -wi-ao---- 21 striped 1.07t /dev/md2(40229),/dev/md3(40229),/dev/md4(40229),/dev/md5(40229),…
server_xplore_col2 vgdata -wi-ao---- 1 linear 219.87g /dev/md48(0)
```

Basic checks

```shell
# Summary
pvs
vgs
lvs

# Scanners
pvscan
vgscan
lvscan

# Detailed info
pvdisplay [sda]
pvdisplay -m /dev/emcpowerd1
vgdisplay [vg_root]
lvdisplay [/dev/vg_root/lv_usr]

# Summary details
lvmdiskscan
  /dev/sda1 [ 600.00 MiB]
  /dev/sda2 [   1.00 GiB]
  /dev/sda3 [  38.30 GiB] LVM physical volume
  /dev/sdb1 [<100.00 GiB] LVM physical volume
  /dev/sdc1 [ <50.00 GiB] LVM physical volume
  /dev/sdj  [  20.00 GiB]
  1 disk
  2 partitions
  0 LVM physical volume whole disks
  3 LVM physical volumes
```

Usual Scenarios in LVM

Extend an existing LVM filesystem:

```shell
parted /dev/sda resizepart 3 100%
udevadm settle
pvresize /dev/sda3

# Extend an XFS to a fixed size
lvextend -L 30G /dev/vg00/var
xfs_growfs /dev/vg00/var

# Add some space to an ext4 FS
lvextend -L +10G /dev/vg00/var
resize2fs /dev/vg00/var

# Extend to a percentage and resize automatically whatever the FS type
lvextend -l +100%FREE /dev/vg00/var -r
```

Create a new LVM filesystem:

```shell
parted /dev/sdb mklabel gpt mkpart primary 1 100% set 1 lvm on
udevadm settle
pvcreate /dev/sdb1
vgcreate vg01 /dev/sdb1
lvcreate -n lv_data -l 100%FREE vg01

# Create an XFS
mkfs.xfs /dev/vg01/lv_data
mkdir /data
echo "/dev/mapper/vg01-lv_data /data xfs defaults 0 0" >> /etc/fstab
mount -a

# Create an ext4
mkfs.ext4 /dev/vg01/lv_data
mkdir /data
echo "/dev/mapper/vg01-lv_data /data ext4 defaults 0 0" >> /etc/fstab
mount -a
```

Remove SWAP:

```shell
swapoff -v /dev/dm-1
lvremove /dev/vg00/swap
vi /etc/fstab
vi /etc/default/grub
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
grubby --remove-args "rd.lvm.lv=vg00/swap" --update-kernel /boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
grubby --remove-args "rd.lvm.lv=vg00/swap" --update-kernel /boot/vmlinuz-3.10.0-1160.el7.x86_64
grubby --remove-args "rd.lvm.lv=vg00/swap" --update-kernel /boot/vmlinuz-0-rescue-cd2525c8417d4f798a7e6c371121ef34
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
```

Move data from one disk to another:

```shell
# In case of a crash, just relaunch pvmove without arguments
pvmove /dev/emcpowerd1 /dev/emcpowerc1

# Remove a PV from a VG (syntax: vgreduce VG PV)
vgreduce vg01 /dev/emcpowerd1

# Remove all unused PVs from vg01
vgreduce -a vg01

# Wipe the PV label
pvremove /dev/emcpowerd1
```

Activate /var even if it doesn't want to mount:

```shell
lvchange -ay --ignorelockingfailure --sysinit vgroot/var
```

Renaming:

```shell
# VG rename
vgrename

# LV rename
lvrename

# PVs do not need to be renamed
```

LVM on Partition vs on Raw Disk

Even if in the past I used an MS-DOS or GPT disklabel partition for PVs, I now prefer to put LVM directly on the main block device. There is no reason to use two disklabels, unless you have a very specific use case (like a disk carrying a boot sector and a boot partition).
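The PE/LE bookkeeping above in numbers: LVM carves every PV into fixed-size extents, so an LV's size is always a whole number of them. Assuming the common default PE size of 4 MiB (check yours with `vgdisplay`), the 30 GiB `lvextend -L 30G` example works out to:

```shell
# Extent count for a 30 GiB LV with 4 MiB physical extents (assumed default)
pe_size_mib=4
lv_size_gib=30
extents=$(( lv_size_gib * 1024 / pe_size_mib ))
echo "$extents"   # 7680
```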
🐛 NFS
The Basics

NFS vs iSCSI:

- NFS can handle simultaneous writes from several clients.
- NFS is a filesystem; iSCSI is block storage.
- iSCSI performance is comparable to NFS.
- iSCSI appears as a disk to the OS; this is not the case for NFS.

Concurrent access to a block device like iSCSI is not possible with standard file systems. You'll need a shared-disk filesystem (like GFS or OCFS2) to allow this, but in most cases the easiest solution is to just use a network share (via SMB/CIFS or NFS) if this is sufficient for your application.
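On the client side, mounting such a share persistently comes down to one fstab line. A minimal sketch (the server name and export path are hypothetical):

```
# /etc/fstab - hypothetical NFS server and export
nfsserver:/export/data  /mnt/data  nfs  defaults,_netdev  0 0
```

The `_netdev` option tells the boot sequence to wait for the network before attempting the mount.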
🔍️ Investigate
Resources

```shell
# In crontab or a tmux session - take a snapshot of memory usage every hour
for i in {1..24}; do echo -n "===================== "; date; free -m; top -b -n1 | head -n 15; sleep 3600; done >> /var/log/SYSADM/memory.log &
```

Hardware

Logs

Health Checks
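To mine the memory.log produced above, awk can pull the "available" column out of each `free -m` snapshot. A sketch on a captured sample (the numbers are illustrative; the column layout is the one printed by procps `free`):

```shell
# Sample `free -m` snapshot as it appears in the log
sample='              total        used        free      shared  buff/cache   available
Mem:           7821        2010         512         320        5298        5170
Swap:          2047           0        2047'
# "available" is the 7th field of the Mem: line
avail=$(echo "$sample" | awk '/^Mem:/ {print $7}')
echo "$avail"   # 5170
```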