
🌱 MDadm
The Basics

mdadm (multiple devices admin) is a software solution to manage RAID. It allows you to create, manage, and monitor your disks in a RAID array. You can use full disks (/dev/sdb, /dev/sdc) or partitions (/dev/sdb1, /dev/sdc1). It replaces or complements raidtools.

Checks

Basic checks

```bash
# View real-time information about your md devices
cat /proc/mdstat

# Monitor for failed disks (indicated by "(F)" next to the disk)
watch cat /proc/mdstat
```

RAID

```bash
# Display details about the RAID array (replace /dev/md0 with your array)
mdadm --detail /dev/md0

# Examine RAID disks for information (not the volume), similar to --detail
mdadm --examine /dev/sd*
```

Settings

The conf file /etc/mdadm.conf does not exist by default and needs to be created once you finish your install. This file is required for the autobuild at boot.
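Once your arrays are assembled, the file can be generated from the running configuration. A minimal sketch (paths may differ per distribution):

```bash
# Append the current array definitions to the config file
mdadm --detail --scan >> /etc/mdadm.conf

# On Debian-based systems the file lives in /etc/mdadm/mdadm.conf instead,
# and the initramfs must be refreshed afterwards:
# update-initramfs -u
```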
📂 Filesystem
FS Types

ext4: the most widespread filesystem on GNU/Linux (derived from ext2 and ext3). It is journaled, meaning it logs write operations to guarantee data integrity if the disk stops abruptly. It can handle volumes up to 1,024 pebibytes (1 EiB) and supports pre-allocating a contiguous zone for a file in order to minimize fragmentation. Note that ext4 is not natively readable from Mac OS X or Windows.
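To illustrate the pre-allocation feature mentioned above, here is a minimal sketch (the device and mountpoint are examples):

```bash
# Create an ext4 filesystem and mount it
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/data

# Pre-allocate 1 GiB for a file in one contiguous reservation,
# without writing the data itself
fallocate -l 1G /mnt/data/bigfile.img

# Check how fragmented the file is (1 extent = fully contiguous)
filefrag /mnt/data/bigfile.img
```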
🧪 SMART
S.M.A.R.T. is a technology that allows you to monitor and analyze the health and performance of your hard drives. It provides valuable information about the status of your storage devices. Here are some useful commands and tips for using S.M.A.R.T. with smartctl:

Display S.M.A.R.T. Information

To display S.M.A.R.T. information for a specific drive, use the following command:

```bash
smartctl -a /dev/sda
```

This command shows all available S.M.A.R.T. data for the /dev/sda drive.
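Beyond dumping attributes, smartctl can run the drive's built-in self-tests. A short sketch (replace /dev/sda with your drive):

```bash
# Quick overall health verdict (PASSED/FAILED)
smartctl -H /dev/sda

# Launch a short self-test (runs in the background, typically ~2 minutes)
smartctl -t short /dev/sda

# Review the self-test log once it has finished
smartctl -l selftest /dev/sda
```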
🧱 ISCSI
Install

```bash
yum install iscsi-initiator-utils
```

Checks

```bash
iscsiadm -m session -P 0   # get the target name
iscsiadm -m session -P 3 | grep "Target: iqn\|Attached scsi disk\|Current Portal"

# Discover ISCSI targets (short and long form)
iscsiadm -m discovery -t st -p 192.168.40.112
iscsiadm --mode discovery --type sendtargets --portal 192.168.40.112

# Login
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.b0 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.b1 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.a1 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.a0 -l

# Enable/Start service
systemctl enable iscsid iscsi && systemctl stop iscsid iscsi && systemctl start iscsid iscsi
```

Rescan BUS

```bash
for BUS in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${BUS} ; done

sudo sh -c 'for BUS in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${BUS} ; done'
```

Partition your FS
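A minimal sketch of what usually follows, assuming the new LUN appeared as /dev/sdb:

```bash
# Identify the newly attached LUN
lsblk

# Create a partition table and a single partition
parted /dev/sdb mklabel gpt mkpart primary 1 100%
udevadm settle

# Format and mount it; _netdev delays mounting until the network is up,
# which matters for iSCSI-backed devices
mkfs.xfs /dev/sdb1
mkdir -p /data
echo "/dev/sdb1 /data xfs defaults,_netdev 0 0" >> /etc/fstab
mount -a
```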
🩺 multipath
Install and Set Multipath

```bash
yum install device-mapper-multipath
```

Check settings in vim /etc/multipath.conf:

```
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
}
```

Add disks to the blacklist, and define a multipaths block to give your WWIDs an alias:

```
multipaths {
    multipath {
        wwid "36000d310004142000000000000000f23"
        alias oralog1
    }
}
```

Special config for some providers. For example, recommended settings for all Clariion/VNX/Unity class arrays that support ALUA:

```
devices {
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        :
        path_checker emc_clariion  ### Rev 47 alua
        hardware_handler "1 alua"  ### modified for alua
        prio alua                  ### modified for alua
        :
    }
}
```

Check the config with:

```bash
multipathd show config | more
```
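On RHEL-family systems, enabling the daemon and inspecting the resulting path topology typically looks like this (a sketch, assuming the device-mapper-multipath package above):

```bash
# Generate a default /etc/multipath.conf and start multipathd
mpathconf --enable --with_multipathd y

# List the multipath devices with their paths and states
multipath -ll

# Flush unused maps and re-detect after changing the config
multipath -F && multipath -v2
```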
🧐 LVM
The Basics

List of components:

- PV (Physical Volume)
- VG (Volume Group)
- LV (Logical Volume)
- PE (Physical Extent)
- LE (Logical Extent)
- FS (File System)

LVM2 uses a new driver, the device-mapper, which allows mapping disk sectors to different targets:

- linear (the most used in LVM)
- striped (striped across several disks)
- error (all I/O is considered an error)
- snapshot (allows asynchronous snapshots)
- mirror (integrates the elements used by the pvmove command)

The example below shows a striped volume and a linear volume:

```bash
lvs --all --segments -o +devices
server_xplore_col1 vgdata -wi-ao---- 21 striped 1.07t /dev/md2(40229),/dev/md3(40229),/dev/md4(40229),/dev/md5(40229),…
server_xplore_col2 vgdata -wi-ao---- 1 linear 219.87g /dev/md48(0)
```

Basic checks

```bash
# Summary
pvs
vgs
lvs

# Scanner
pvscan
vgscan
lvscan

# Detailed info
pvdisplay [sda]
pvdisplay -m /dev/emcpowerd1
vgdisplay [vg_root]
lvdisplay [/dev/vg_root/lv_usr]

# Summary details
lvmdiskscan
  /dev/sda1 [ 600.00 MiB]
  /dev/sda2 [ 1.00 GiB]
  /dev/sda3 [ 38.30 GiB] LVM physical volume
  /dev/sdb1 [ <100.00 GiB] LVM physical volume
  /dev/sdc1 [ <50.00 GiB] LVM physical volume
  /dev/sdj [ 20.00 GiB]
  1 disk
  2 partitions
  0 LVM physical volume whole disks
  3 LVM physical volumes
```

Usual Scenarios in LVM

Extend an existing LVM filesystem:

```bash
parted /dev/sda resizepart 3 100%
udevadm settle
pvresize /dev/sda3

# Extend an XFS to a fixed size
lvextend -L 30G /dev/vg00/var
xfs_growfs /dev/vg00/var

# Add some space to an ext4 FS
lvextend -L +10G /dev/vg00/var
resize2fs /dev/vg00/var

# Extend to a percentage and resize automatically, whatever the FS type
lvextend -l +100%FREE /dev/vg00/var -r
```

Create a new LVM filesystem:

```bash
parted /dev/sdb mklabel gpt mkpart primary 1 100% set 1 lvm on
udevadm settle
pvcreate /dev/sdb1
vgcreate vg01 /dev/sdb1
lvcreate -n lv_data -l 100%FREE vg01

# Create an XFS
mkfs.xfs /dev/vg01/lv_data
mkdir /data
echo "/dev/mapper/vg01-lv_data /data xfs defaults 0 0" >> /etc/fstab
mount -a

# Create an ext4
mkfs.ext4 /dev/vg01/lv_data
mkdir /data
echo "/dev/mapper/vg01-lv_data /data ext4 defaults 0 0" >> /etc/fstab
mount -a
```

Remove SWAP:

```bash
swapoff -v /dev/dm-1
lvremove /dev/vg00/swap
vi /etc/fstab
vi /etc/default/grub
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
grubby --remove-args "rd.lvm.lv=vg00/swap" --update-kernel /boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
grubby --remove-args "rd.lvm.lv=vg00/swap" --update-kernel /boot/vmlinuz-3.10.0-1160.el7.x86_64
grubby --remove-args "rd.lvm.lv=vg00/swap" --update-kernel /boot/vmlinuz-0-rescue-cd2525c8417d4f798a7e6c371121ef34
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
```

Move data from one disk to another:

```bash
# In case of crash, just relaunch pvmove without arguments
pvmove /dev/emcpowerd1 /dev/emcpowerc1

# Remove a PV from a VG
vgreduce vg01 /dev/emcpowerd1

# Remove all unused PVs from vg01
vgreduce -a vg01

# Remove the PV label itself
pvremove /dev/emcpowerd1
```

Mount /var even if it doesn't want to:

```bash
lvchange -ay --ignorelockingfailure --sysinit vgroot/var
```

Renaming:

```bash
# VG rename
vgrename

# LV rename
lvrename

# A PV does not need to be renamed
```

LVM on Partition VS on Raw Disk

Even if in the past I was using an MS-DOS or GPT disklabel partition for PVs, I now prefer to put LVM directly on the main block device. There is no reason to use two disklabels, unless you have a very specific use case (like a disk with a boot sector and boot partition).
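The snapshot target mentioned above is handy before risky changes. A minimal sketch with hypothetical names (vg00/lv_data):

```bash
# Create a 5 GiB copy-on-write snapshot of an existing LV
lvcreate -s -n lv_data_snap -L 5G /dev/vg00/lv_data

# If the change went wrong, merge the snapshot back into the origin
# (the merge completes on the next activation of the origin LV)
lvconvert --merge /dev/vg00/lv_data_snap

# If the change was fine, simply drop the snapshot
lvremove /dev/vg00/lv_data_snap
```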
🐛 NFS
The Basics

NFS vs iSCSI:

- NFS can handle simultaneous writes from several clients.
- NFS is a filesystem; iSCSI is block storage.
- iSCSI performance is comparable to NFS.
- An iSCSI LUN appears as a disk to the OS, which is not the case for NFS.

Concurrent access to a block device like iSCSI is not possible with standard file systems. You'll need a shared-disk filesystem (like GFS2 or OCFS2) to allow this, but in most cases the easiest solution is to just use a network share (via SMB/CIFS or NFS) if that is sufficient for your application.
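For the network-share option, a minimal NFS export/mount sketch (the path, subnet, and hostname are examples):

```bash
# Server side: export a directory to the local subnet
yum install nfs-utils
echo "/srv/share 192.168.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
systemctl enable --now nfs-server

# Client side: mount the share
mount -t nfs nfs-server:/srv/share /mnt
```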
🔍️ Investigate
Resources

```bash
# In a crontab or tmux session - take a snapshot of memory usage every hour
for i in {1..24} ; do echo -n "===================== " ; date ; free -m ; top -b -n1 | head -n 15 ; sleep 3600; done >> /var/log/SYSADM/memory.log &
```

Hardware Logs

Health Checks
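The two sections above would typically rely on commands like these (a sketch; hardware tooling varies by vendor):

```bash
# Kernel ring buffer: errors and warnings with human-readable timestamps
dmesg -T --level=err,warn | tail -n 50

# Persistent journal: errors since last boot
journalctl -p err -b

# On hardware with IPMI, dump the system event log (requires ipmitool)
ipmitool sel list
```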
🚩 Compare
Compare stuff

Compare two jar files:

```bash
diff -W200 -y <(unzip -vqq file1.jar | awk '{ if ($1 > 0) {printf("%s\t%s\n", $1, $8)}}' | sort -k2) <(unzip -vqq file2.jar | awk '{ if ($1 > 0) {printf("%s\t%s\n", $1, $8)}}' | sort -k2)
```
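In the same spirit, two directory trees can be compared quickly (dir1/dir2 are placeholders):

```bash
# List files that differ or exist on only one side
diff -qr dir1/ dir2/

# Compare by checksum when timestamps are unreliable
( cd dir1 && find . -type f -exec md5sum {} + | sort -k2 ) > /tmp/a
( cd dir2 && find . -type f -exec md5sum {} + | sort -k2 ) > /tmp/b
diff /tmp/a /tmp/b
```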
🚩 Files
Find a process blocking a file

With fuser:

```bash
fuser -m </dir or /files>    # Find processes blocking/using this directory or files
fuser -cu </dir or /files>   # Same as above but adds the user
fuser -kcu </dir or /files>  # Kill the processes
fuser -v -k -HUP -i ./       # Send HUP signal to processes

# Output will give you <PID + letter>, here is the meaning:
# c  current directory
# e  executable being run
# f  open file (omitted in default display mode)
# F  open file for writing (omitted in default display mode)
# r  root directory
# m  mmap'ed file or shared library
```

With lsof (= list open files):

```bash
lsof +D /var/log             # Find all open files, with the process and user
lsof -a +L1 <mountpoint>     # Processes blocking a FS (open but deleted files)
lsof -c ssh -c init          # Find files opened by those processes
lsof -p 1753                 # Find files opened by this PID
lsof -u root                 # Find files opened by a user
lsof -u ^user                # Find files opened by everyone except this user
kill -9 `lsof -t -u toto`    # Kill a user's processes (-t outputs only PIDs)
```

MacGyver method:

```bash
# When you have no fuser or lsof:
find /proc/*/fd -type f -links 0 -exec ls -lrt {} \;
```
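A related trick: when a deleted file is still held open and eating disk space, it can be truncated through /proc without killing the process (the PID and fd number below are placeholders):

```bash
# Spot deleted-but-open files still consuming space
lsof +L1 | grep -i deleted

# Truncate the file through the process's fd table (here PID 1234, fd 5)
: > /proc/1234/fd/5
```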
🚩 Network Manager
Basic Troubleshooting

Check interfaces:

```bash
nmcli con show
NAME    UUID                                  TYPE      DEVICE
ens192  4d0087a0-740a-4356-8d9e-f58b63fd180c  ethernet  ens192
ens224  3dcb022b-62a2-4632-8b69-ab68e1901e3b  ethernet  ens224

nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
ens192  ethernet  connected  ens192
ens224  ethernet  connected  ens224
ens256  ethernet  connected  ens256
lo      loopback  unmanaged  --

# Get interface details:
nmcli connection show ens192
nmcli -p con show ens192

# Get the DNS settings of an interface
UUID=$(nmcli --get-values connection.uuid c show "cloud-init eth0")
nmcli --get-values ipv4.dns c show $UUID
```

Changing interface name:

```bash
nmcli connection add type ethernet mac "00:50:56:80:11:ff" ifname "ens224"
nmcli connection add type ethernet mac "00:50:56:80:8a:0b" ifname "ens256"
```

Create a custom config:

```bash
nmcli con load /etc/sysconfig/network-scripts/ifcfg-ens224
nmcli con up ens192
```

Adding a Virtual IP:

```bash
nmcli con mod enp1s0 +ipv4.addresses "192.168.122.11/24"
ip addr del 10.163.148.36/24 dev ens160

nmcli con reload            # reload before reapplying
nmcli device reapply ens224
systemctl status network.service
systemctl restart network.service
```

Add a DNS entry:

```bash
UUID=$(nmcli --get-values connection.uuid c show "cloud-init eth0")
DNS_LIST=$(nmcli --get-values ipv4.dns c show $UUID)
nmcli conn modify "$UUID" ipv4.dns "${DNS_LIST} ${DNS_IP}"

# /etc/resolv.conf is managed by systemd-resolved
sudo systemctl restart systemd-resolved
```
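A common companion task is assigning a static IPv4 address. A sketch with example values:

```bash
# Switch a connection to a manual address, gateway and DNS
nmcli con mod ens192 ipv4.method manual \
    ipv4.addresses 192.168.1.10/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns "192.168.1.1"

# Activate the new settings
nmcli con up ens192
```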
🎶 Samba / CIFS
Server Side

First install samba and samba-client (for debug + test).

/etc/samba/smb.conf (note that workgroup and passdb backend are global parameters):

```
[global]
workgroup = WORKGROUP     # the default workgroup on Windows
hosts allow = ...
passdb backend = tdbsam   # passwords are stored in /var/lib/samba/private/passdb.tdb

[shared]
browseable = yes
path = /shared
valid users = user01, @un_group_au_choix
writable = yes
```

Test the samba config:

```bash
testparm
/usr/bin/testparm -s /etc/samba/smb.conf

smbclient -L //192.168.56.102 -U test         # list all samba shares available
smbclient //192.168.56.102/sharedrepo -U test # connect to the share
pdbedit -L                                    # list smb users (better than smbclient)
```
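To actually connect as user01, the account has to exist both as a system user and in the samba password database. A sketch on a RHEL-family system (user01 and /shared are the example names from the config above):

```bash
# Create a system user without a home or login shell, then set its samba password
useradd -M -s /sbin/nologin user01
smbpasswd -a user01

# Create the share directory and start the services
mkdir -p /shared && chown user01 /shared
systemctl enable --now smb nmb
```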