Storage

Disks ASM
Basics

Start ASM - the old way:

```bash
. oraenv   # ORACLE_SID = +ASM1 (on the second node: +ASM2)
sqlplus / as sysasm
startup
```

Start ASM - the new way:

```bash
srvctl start asm -n ora-node1-hostname
```

Check ASM volumes

```bash
srvctl status asm
asmcmd lsdsk
asmcmd lsdsk -G DATA
srvctl status diskgroup -g DATA
```

Check clients connected to ASM volumes

```bash
# List clients
asmcmd lsct

DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
+ASM     CONNECTED  19.0.0.0.0        19.0.0.0.0          +ASM           DATA
+ASM     CONNECTED  19.0.0.0.0        19.0.0.0.0          +ASM           FRA
MANA     CONNECTED  12.2.0.1.0        12.2.0.0.0          MANA           DATA
MANA     CONNECTED  12.2.0.1.0        12.2.0.0.0          MANA           FRA
MREPORT  CONNECTED  12.2.0.1.0        12.2.0.0.0          MREPORT        DATA
MREPORT  CONNECTED  12.2.0.1.0        12.2.0.0.0          MREPORT        FRA

# Open files
asmcmd lsof

DB_Name  Instance_Name  Path
MANA     MANA           +DATA/MANA/DATAFILE/blob.268.1045299983
MANA     MANA           +DATA/MANA/DATAFILE/data.270.1045299981
MANA     MANA           +DATA/MANA/DATAFILE/indx.269.1045299983
MANA     MANA           +DATA/MANA/control01.ctl
MANA     MANA           +DATA/MANA/redo01a.log
MANA     MANA           +DATA/MANA/redo02a.log
MANA     MANA           +DATA/MANA/redo03a.log
MANA     MANA           +DATA/MANA/redo04a.log
MANA     MANA           +DATA/MANA/sysaux01.dbf
[...]
```

Connect to the ASM prompt

```bash
. oraenv   # ORACLE_SID = +ASM
asmcmd
```

ASMlib

ASMlib provides the oracleasm command:

```bash
# List disks
oracleasm listdisks
DATA2
FRA1

# Check the driver status
oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

# Check one ASM volume
oracleasm querydisk -d DATA2
Disk "DATA2" is a valid ASM disk on device [8,49]

# Scan for new disks
oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA3"

# Create, delete, rename
oracleasm createdisk DATA3 /dev/sdf1
oracleasm deletedisk
oracleasm renamedisk
```

Custom script to list the disk handles used by ASM (not relevant anymore):

```bash
cat asmliblist.sh
#!/bin/bash
for asmlibdisk in `ls /dev/oracleasm/disks/*`
  do
    echo "ASMLIB disk name: $asmlibdisk"
    asmdisk=`kfed read $asmlibdisk | grep dskname | tr -s ' ' | cut -f2 -d' '`
    echo "ASM disk name: $asmdisk"
    majorminor=`ls -l $asmlibdisk | tr -s ' ' | cut -f5,6 -d' '`
    device=`ls -l /dev | tr -s ' ' | grep -w "$majorminor" | cut -f10 -d' '`
    echo "Device path: /dev/$device"
  done
```

Disk Groups

All disks in the same disk group (DG) should have the same size. There are different redundancy types of DG; with external redundancy, LUN protection/replication is handled on the storage side. When a disk is added to a DG, wait for the rebalance to finish before continuing operations (see the sketch below).
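As a minimal sketch of that workflow, assuming a DATA disk group and an ASMlib label DATA3 (both placeholders here), the disk could be added and the rebalance monitored like this; `asmcmd lsop` and `V$ASM_OPERATION` both report the running operation:

```bash
# As the grid/ASM owner, with the ASM environment loaded
. oraenv   # ORACLE_SID = +ASM1

sqlplus / as sysasm <<'SQL'
-- Add the ASMlib-labelled disk and rebalance with a chosen power
ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA3' REBALANCE POWER 4;
-- One row per running operation; empty result once the rebalance is done
SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;
SQL

# Same information from the asmcmd prompt
asmcmd lsop
```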
🌱 MDadm
The Basics

mdadm (multiple device admin) is a software solution to manage RAID: it lets you create, manage, and monitor your disks in a RAID array. You can use whole disks (/dev/sdb, /dev/sdc) or partitions (/dev/sdb1, /dev/sdc1). It replaces or complements the older raidtools.

Checks

Basic checks

```bash
# View real-time information about your md devices
cat /proc/mdstat

# Monitor for failed disks (indicated by "(F)" next to the disk)
watch cat /proc/mdstat
```

Check RAID

```bash
# Display details about the RAID array (replace /dev/md0 with your array)
mdadm --detail /dev/md0

# Examine RAID member disks (not the volume); similar output to --detail
mdadm --examine /dev/sd*
```

Settings

The configuration file /etc/mdadm.conf does not exist by default and needs to be created once you finish your install. This file is required for the array to be auto-assembled at boot (see the sketch below).
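A minimal sketch, assuming two spare disks /dev/sdb and /dev/sdc (hypothetical device names), of creating an array and persisting it into /etc/mdadm.conf so it assembles at boot:

```bash
# Create a RAID 1 array from two whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on it and mount it
mkfs.xfs /dev/md0
mount /dev/md0 /mnt

# Persist the array definition for auto-assembly at boot
# (on Debian/Ubuntu the file is /etc/mdadm/mdadm.conf instead)
mdadm --detail --scan >> /etc/mdadm.conf
```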
🧪 SMART
S.M.A.R.T. is a technology that lets you monitor and analyze the health and performance of your hard drives, providing valuable information about the status of your storage devices. Here are some useful commands and tips for using S.M.A.R.T. with smartctl.

Display S.M.A.R.T. Information

To display S.M.A.R.T. information for a specific drive, use the following command:

```bash
smartctl -a /dev/sda
```

This command shows all available S.M.A.R.T. data for the /dev/sda drive.
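Beyond the full report, two common follow-ups (assuming the same /dev/sda drive) are a quick overall health verdict and a short self-test:

```bash
# Quick pass/fail health assessment
smartctl -H /dev/sda

# Launch a short self-test (runs in the background on the drive)
smartctl -t short /dev/sda

# Check the self-test log once it has finished
smartctl -l selftest /dev/sda
```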
🧱 ISCSI
Install

```bash
yum install iscsi-initiator-utils

# Checks
iscsiadm -m session -P 0   # get the target name
iscsiadm -m session -P 3 | grep "Target: iqn\|Attached scsi disk\|Current Portal"

# Discover ISCSI targets (both forms are equivalent)
iscsiadm -m discovery -t st -p 192.168.40.112
iscsiadm --mode discovery --type sendtargets --portal 192.168.40.112

# Login
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.b0 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.b1 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.a1 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00192201413.a0 -l

# Enable/Start service
systemctl enable iscsid iscsi && systemctl stop iscsid iscsi && systemctl start iscsid iscsi
```

Rescan BUS

```bash
for BUS in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${BUS} ; done

sudo sh -c 'for BUS in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${BUS} ; done'
```

Partition your FS
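The heading above has no content in these notes; as a minimal sketch, assuming the logged-in LUN shows up as /dev/sdb (a placeholder, check lsblk), it could be partitioned, formatted, and mounted like this:

```bash
# Identify the new iSCSI disk
lsblk

# Create a GPT label and a single partition spanning the disk
parted /dev/sdb --script mklabel gpt mkpart primary 0% 100%

# Format and mount the partition
mkfs.xfs /dev/sdb1
mkdir -p /mnt/iscsi
mount /dev/sdb1 /mnt/iscsi

# For a persistent mount, add it to /etc/fstab with the _netdev option
# so it is only mounted once the network (and iSCSI session) is up
echo "/dev/sdb1 /mnt/iscsi xfs _netdev 0 0" >> /etc/fstab
```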
📐 Storage
General concern

If you want to move VMs to another Storage Domain, you need to copy their template to it as well!

Remove a disk:

```bash
# If RHV no longer uses a disk, it should appear empty in lsblk:
lsblk -a
sdf                                       8:80    0    4T  0 disk
└─36001405893b456536be4d67a7f6716e3     253:38    0    4T  0 mpath
sdg                                       8:96    0    4T  0 disk
└─36001405893b456536be4d67a7f6716e3     253:38    0    4T  0 mpath
sdh                                       8:112   0    4T  0 disk
└─36001405893b456536be4d67a7f6716e3     253:38    0    4T  0 mpath
sdi                                       8:128   0    0       disk
└─360014052ab23b1cee074fe38059d7c94     253:39    0  100G  0 mpath
sdj                                       8:144   0    0       disk
└─360014052ab23b1cee074fe38059d7c94     253:39    0  100G  0 mpath
sdk                                       8:160   0    0       disk
└─360014052ab23b1cee074fe38059d7c94     253:39    0  100G  0 mpath

# Find all disks behind a LUN ID
LUN_ID="360014054ce7e566a01d44c1a4758b092"
list_disk=$(dmsetup deps -o devname ${LUN_ID} | cut -f 2 | cut -c 3- | tr -d "()" | tr " " "\n")
echo ${list_disk}

# Remove from multipath
multipath -f "${LUN_ID}"

# Remove the disks
for i in ${list_disk}; do echo ${i}; blockdev --flushbufs /dev/${i}; echo 1 > /sys/block/${i}/device/delete; done

# You can check which disk maps to which LUN on the CEPH side
ls -l /dev/disk/by-*
```

NFS for OLVM/oVirt

Since oVirt needs shared storage, we can create a local NFS export to work around this point when no storage array is available (see the sketch below).
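A minimal sketch of such a local NFS export, assuming the path /exports/ovirt-data and the allowed network 192.168.40.0/24 (both placeholders); the 36:36 ownership (vdsm:kvm) is what oVirt/VDSM expects on its storage domains:

```bash
# Create the export directory with the ownership oVirt/VDSM expects (36:36 = vdsm:kvm)
mkdir -p /exports/ovirt-data
chown 36:36 /exports/ovirt-data
chmod 0755 /exports/ovirt-data

# Export it (adjust the allowed network to your environment)
echo "/exports/ovirt-data 192.168.40.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36)" >> /etc/exports
exportfs -ra

# Start the NFS server
systemctl enable --now nfs-server

# Then add <host>:/exports/ovirt-data as a new NFS Storage Domain in the oVirt UI
```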
CEPH
S3 blockstorage
S3cmd command

S3cmd is a command-line tool for managing S3-compatible object storage.

Install the command

```bash
# Ubuntu install
sudo apt-get install s3cmd

# RedHat install
sudo dnf install s3cmd

# or from sources
wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.2.0/s3cmd-2.2.0.tar.gz
tar xzf s3cmd-2.2.0.tar.gz
cd s3cmd-2.2.0
sudo python3 setup.py install
```

Configure it

From cloud providers (for example DigitalOcean):

Log in to the DigitalOcean Control Panel.
Navigate to API > Spaces Access Keys and generate a new key pair.
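Once a key pair exists, a minimal sketch of wiring it into s3cmd and of basic usage; the fra1 region endpoint and the bucket name are placeholders:

```bash
# Interactive configuration: prompts for the access key, secret key and endpoint,
# then writes ~/.s3cfg
s3cmd --configure

# For DigitalOcean Spaces, the S3 endpoint has the form <region>.digitaloceanspaces.com,
# e.g. fra1.digitaloceanspaces.com

# Basic usage
s3cmd ls                                # list buckets
s3cmd mb s3://my-example-bucket         # create a bucket (placeholder name)
s3cmd put ./backup.tar.gz s3://my-example-bucket/
s3cmd get s3://my-example-bucket/backup.tar.gz ./restore.tar.gz
```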