# HA

## Clusterware
### Grid

The Grid Infrastructure is the component responsible for clustering in Oracle:

Grid (clusterware layer) -> ASM -> Disk Group

- Oracle Restart = single instance = 1 Grid (with or without ASM)
- Oracle RAC One Node = 2 Oracle instances, Active/Passive, with shared storage
- Oracle RAC = Active/Active

### SCAN

```bash
# As oracle user:
srvctl config scan

SCAN name: host-env-datad1-scan.domain, Network: 1
Subnet IPv4: 172.16.228.0/255.255.255.0/ens192, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 172.16.228.33
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 172.16.228.35
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: 172.16.228.34
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
```

### Oracle instance resources

```bash
# As oracle user
srvctl config database
srvctl config database -d <SID>
srvctl status database -d <SID>
srvctl status nodeapps -n host-env-datad1n1
srvctl config nodeapps -n host-env-datad1n1
# ============
srvctl stop database -d DB_NAME
srvctl stop database -d DB_NAME -o normal
srvctl stop database -d DB_NAME -o immediate
srvctl stop database -d DB_NAME -o transactional
srvctl stop database -d DB_NAME -o abort
srvctl stop instance -d DB_NAME -i INSTANCE_NAME
# =============
srvctl start database -d DB_NAME -n host-env-datad1n1
srvctl start database -d DB_NAME -o nomount
srvctl start database -d DB_NAME -o mount
srvctl start database -d DB_NAME -o open
# ============
srvctl relocate database -db DB_NAME -node host-env-datad1n1
srvctl modify database -d DB_NAME -instance DB_NAME
srvctl restart database -d DB_NAME
# === Do not do it
srvctl modify instance -db DB_NAME -instance DB_NAME_2 -node host-env-datad1n2
srvctl modify database -d DB_NAME -instance DB_NAME
srvctl modify database -d oraclath -instance oraclath
```

### Cluster resources

```bash
crs_stat
crsctl status res
crsctl status res -t
crsctl check cluster -all

# Example output (here node 1 is unhealthy, node 2 is fine):
/opt/oracle/grid/12.2.0.1/bin/crsctl check cluster -all
**************************************************************
host-env-datad1n1:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
**************************************************************
host-env-datad1n2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
```

```sql
show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cdb_cluster                          boolean     FALSE
cdb_cluster_name                     string      DB_NAME
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
cluster_interconnects                string
```

### Stop/start the secondary node

```sql
-- Prevent the database from switching over to the other node
ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE;
```

```bash
# as root
/u01/oracle/base/product/19.0.0/grid/bin/crsctl stop crs -f
/u01/oracle/base/product/19.0.0/grid/bin/crsctl disable crs

# Shutdown/startup VM or other actions

# as root
/u01/oracle/base/product/19.0.0/grid/bin/crsctl enable crs
/u01/oracle/base/product/19.0.0/grid/bin/crsctl start crs
```
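The same stop/disable and enable/start pairs come back for every maintenance window, so they are easy to wrap in a small helper. A minimal sketch, assuming the 19c Grid home path used above; the script itself is hypothetical, only the `crsctl` calls are real:

```bash
#!/bin/bash
# crs-maint.sh -- hypothetical wrapper around the crsctl sequence above.
# Run as root on the node under maintenance; GRID_HOME is an assumption.
set -euo pipefail
GRID_HOME=/u01/oracle/base/product/19.0.0/grid

case "${1:-}" in
  stop)
    # Stop the whole local stack and keep it down across reboots
    "$GRID_HOME/bin/crsctl" stop crs -f
    "$GRID_HOME/bin/crsctl" disable crs
    ;;
  start)
    # Re-enable autostart and bring the stack back up
    "$GRID_HOME/bin/crsctl" enable crs
    "$GRID_HOME/bin/crsctl" start crs
    ;;
  *)
    echo "usage: $0 {stop|start}" >&2
    exit 1
    ;;
esac
```

Run `crs-maint.sh stop` before shutting the VM down and `crs-maint.sh start` once it is back.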
### Properly stop/start the DB on both nodes

```bash
# as oracle user
srvctl stop database -d oraclath

# As root user, on both nodes:
/opt/oracle/grid/12.2.0.1/bin/crsctl stop crs -f
/opt/oracle/grid/12.2.0.1/bin/crsctl disable crs

# As root user, on both nodes:
/opt/oracle/grid/12.2.0.1/bin/crsctl enable crs
/opt/oracle/grid/12.2.0.1/bin/crsctl start crs

# checks after restart
ps -ef | grep asm_pmon | grep -v "grep"

# if ASM is up and running
srvctl start database -d oraclath -node host1-env-data1n1.domain
```

### Listener issue

```bash
# As oracle user
srvctl status scan_listener

PRCR-1068 : Failed to query resources
CRS-0184 : Cannot communicate with the CRS daemon.
```

The solution:
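CRS-0184 means srvctl cannot reach the CRS daemon on the local node, so the starting point is the stack itself rather than the listener. A minimal check-and-restart sketch, reusing the 12.2 Grid home from above and not necessarily the full fix:

```bash
# Confirm the state of the local stack (as root)
/opt/oracle/grid/12.2.0.1/bin/crsctl check crs

# If CRS is reported down, restart the local stack (same sequence as above)
/opt/oracle/grid/12.2.0.1/bin/crsctl stop crs -f
/opt/oracle/grid/12.2.0.1/bin/crsctl start crs

# Then re-check the SCAN listener as the oracle user
srvctl status scan_listener
```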
## 🩺 multipath
### Install and set up multipath

```bash
yum install device-mapper-multipath
```

Check the settings in `/etc/multipath.conf`:

```
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
}
```

Declare each disk by its WWID and give it an alias in a `multipaths` block (unwanted devices go in a `blacklist` block):

```
multipaths {
    multipath {
        wwid "36000d310004142000000000000000f23"
        alias oralog1
    }
}
```

Some storage arrays need vendor-specific settings. For example, the recommended settings for all Clariion/VNX/Unity class arrays that support ALUA:

```
devices {
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        :
        path_checker emc_clariion    ### Rev 47 alua
        hardware_handler "1 alua"    ### modified for alua
        prio alua                    ### modified for alua
        :
    }
}
```

Check the resulting config with:

```bash
multipathd show config | more
```
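To wire a new LUN into the `multipaths` block above you need its WWID, which can be read from the device itself. A minimal sketch for RHEL/CentOS; `/dev/sdb` and the `oralog1` alias are illustrative:

```bash
# Enable multipathd with a default /etc/multipath.conf (RHEL/CentOS)
mpathconf --enable --with_multipathd y

# Read the WWID of a LUN (here /dev/sdb is an assumed example device)
/lib/udev/scsi_id -g -u /dev/sdb

# After editing multipath.conf, reload and verify the alias and its paths
systemctl reload multipathd
multipath -ll oralog1
```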