Wednesday 1 August 2018

Delete, Remove a Node from Oracle RAC 11gR2 in Red Hat 5.8

Deleting a Cluster Node in Oracle 11gR2 on Linux and UNIX Systems

This post explains, step by step, how to remove (delete) nodes from a running Oracle RAC 11gR2 Clusterware environment.

The steps below walk you through removing nodes from an existing RAC environment. In this example I have a four-node 11gR2 RAC cluster with the following node names:
  • node1
  • node2
  • node3
  • node4
I will remove node3 and node4 from the cluster, leaving node1 and node2 to keep running it. Follow the steps below:

1) Before starting, make sure the environment variables such as ORACLE_HOME, GRID_HOME and DB_NAME are properly defined for both the Oracle database software owner and the Grid Infrastructure owner. If they are not, substitute the correct full paths in the commands below.
You can always refer to the /etc/oratab file to look up the ORACLE_HOME or GRID_HOME locations.
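
A minimal sketch of that check (the Grid home path matches the one used later in this article; the database home path is only an example, so adjust both to your own installation):

[oracle@node1 ~]$ cat /etc/oratab                                              # lists the Oracle homes registered on this node
[oracle@node1 ~]$ export GRID_HOME=/u01/crs/product/11.2.0/crs                 # Grid Infrastructure home
[oracle@node1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1   # example database home
[oracle@node1 ~]$ echo $GRID_HOME $ORACLE_HOME                                 # confirm both are set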

2) Check the pinned/unpinned status (active or not) of all the nodes. (Run this command as root or as the Clusterware owner from the $GRID_HOME/bin directory.)

[oracle@node1 bin]$ ./olsnodes -s -t
node1   Active  Unpinned
node2   Active  Unpinned
node3   Active  Unpinned
node4   Active  Unpinned
[oracle@node1 bin]$

3) If the status of a node you are about to remove shows as Pinned, unpin it by running the command below from any node that you are not deleting. (Run this command as root or as the Clusterware owner from the $GRID_HOME/bin directory.)

[oracle@node1 bin]$ ./crsctl unpin css -n node_to_be_deleted
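
For example, in this setup the commands would look like the sketch below. This is only needed if olsnodes had reported node3 or node4 as Pinned; in the output above they are already Unpinned, so the step can be skipped here.

[root@node1 bin]# ./crsctl unpin css -n node3
[root@node1 bin]# ./crsctl unpin css -n node4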

4) Disable the Oracle Clusterware applications and daemons running on each node you are deleting. (Run this command as root from the $GRID_HOME/crs/install directory on that node, as shown in the output below.)

[root@node3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/10.54.4.0/255.255.255.0/eth0, type static
VIP exists: /node1-vip/10.54.4.223/10.54.4.0/255.255.255.0/eth0, hosting node node1
VIP exists: /node2-vip/10.54.4.224/10.54.4.0/255.255.255.0/eth0, hosting node node2
VIP exists: /node3-vip/10.54.4.123/10.54.4.0/255.255.255.0/eth0, hosting node node3
VIP exists: /node4-vip/10.54.4.124/10.54.4.0/255.255.255.0/eth0, hosting node node4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node3'
CRS-2677: Stop of 'ora.registry.acfs' on 'node3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node3'
CRS-2673: Attempting to stop 'ora.crsd' on 'node3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node3'
CRS-2673: Attempting to stop 'ora.CRSGRP.dg' on 'node3'
CRS-2673: Attempting to stop 'ora.DATAGRP.dg' on 'node3'
CRS-2673: Attempting to stop 'ora.FLBKGRP.dg' on 'node3'
CRS-2677: Stop of 'ora.DATAGRP.dg' on 'node3' succeeded
CRS-2677: Stop of 'ora.FLBKGRP.dg' on 'node3' succeeded
CRS-2677: Stop of 'ora.CRSGRP.dg' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node3'
CRS-2677: Stop of 'ora.asm' on 'node3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node3' has completed
CRS-2677: Stop of 'ora.crsd' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node3'
CRS-2673: Attempting to stop 'ora.evmd' on 'node3'
CRS-2673: Attempting to stop 'ora.asm' on 'node3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node3'
CRS-2677: Stop of 'ora.evmd' on 'node3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node3' succeeded
CRS-2677: Stop of 'ora.asm' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node3' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node3'
CRS-2677: Stop of 'ora.cssd' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node3'
CRS-2677: Stop of 'ora.gipcd' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node3'
CRS-2677: Stop of 'ora.gpnpd' on 'node3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

[root@node4 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/10.54.4.0/255.255.255.0/eth0, type static
VIP exists: /node1-vip/10.54.4.223/10.54.4.0/255.255.255.0/eth0, hosting node node1
VIP exists: /node2-vip/10.54.4.224/10.54.4.0/255.255.255.0/eth0, hosting node node2
VIP exists: /node4-vip/10.54.4.124/10.54.4.0/255.255.255.0/eth0, hosting node node4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node4'
CRS-2677: Stop of 'ora.registry.acfs' on 'node4' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node4'
CRS-2673: Attempting to stop 'ora.crsd' on 'node4'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node4'
CRS-2673: Attempting to stop 'ora.CRSGRP.dg' on 'node4'
CRS-2673: Attempting to stop 'ora.DATAGRP.dg' on 'node4'
CRS-2673: Attempting to stop 'ora.FLBKGRP.dg' on 'node4'
CRS-2677: Stop of 'ora.DATAGRP.dg' on 'node4' succeeded
CRS-2677: Stop of 'ora.FLBKGRP.dg' on 'node4' succeeded
CRS-2677: Stop of 'ora.CRSGRP.dg' on 'node4' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node4'
CRS-2677: Stop of 'ora.asm' on 'node4' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node4' has completed
CRS-2677: Stop of 'ora.crsd' on 'node4' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'node4'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node4'
CRS-2673: Attempting to stop 'ora.evmd' on 'node4'
CRS-2673: Attempting to stop 'ora.asm' on 'node4'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node4'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node4'
CRS-2677: Stop of 'ora.crf' on 'node4' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node4' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node4' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node4' succeeded
CRS-2677: Stop of 'ora.asm' on 'node4' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node4'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node4' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node4'
CRS-2677: Stop of 'ora.cssd' on 'node4' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node4'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node4' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'node4' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node4'
CRS-2677: Stop of 'ora.gpnpd' on 'node4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

5) Remove the nodes from the cluster. Run the command below as root from the $GRID_HOME/bin directory on any node that you are not removing.

[root@node1 ~]# cd /u01/crs/product/11.2.0/crs/bin
[root@node1 bin]# ./crsctl delete node -n node3
CRS-4661: Node node3 successfully deleted.
[root@node1 bin]# ./crsctl delete node -n node4
CRS-4661: Node node4 successfully deleted.

6) On each node that you are removing, update the node list for the Grid home as shown below. (Run this command as the Grid Infrastructure owner from the $GRID_HOME/oui/bin directory.)

$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -local

[oracle@node3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/crs/product/11.2.0/crs "CLUSTER_NODES=node3" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5503 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/crs/oraInventory
'UpdateNodeList' was successful.
[oracle@node3 bin]$

[oracle@node4 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/crs/product/11.2.0/crs "CLUSTER_NODES=node4" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5503 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/crs/oraInventory
'UpdateNodeList' was successful.

7) If you have a shared home, then run the following command from the Grid_home/oui/bin directory on the node you want to delete:

$ ./runInstaller -detachHome  ORACLE_HOME=Grid_home
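
For illustration only, with the Grid home path used in this article the detach command would look like this (applicable only when the Grid home is shared, which is not the case in this example):

$ ./runInstaller -detachHome ORACLE_HOME=/u01/crs/product/11.2.0/crs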

If the Grid home is not shared, run the deinstall tool with the -local option from the Grid_home/deinstall directory on the node you want to delete, as shown below:

[grid@node3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/crs/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/crs/product/11.2.0/crs
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/crs/grid
Checking for existence of central inventory location /u01/crs/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: node3
Checking for sufficient temp space availability on node(s) : 'node3'

## [END] Install check configuration ##

Traces log file: /u01/crs/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "node3"[node3-vip]
 >

The following information can be collected by running "/sbin/ifconfig -a" on node "node3"
Enter the IP netmask of Virtual IP "10.54.4.123" on node "node3"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "10.54.4.123" is active
 >

Enter an address or the name of the virtual IP[]
 >

Network Configuration check config START

Network de-configuration trace file location: /u01/crs/oraInventory/logs/netdc_check2018-07-31_07-20-28-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN2,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /u01/crs/oraInventory/logs/asmcadc_check2018-07-31_07-20-41-PM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:node3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/crs/product/11.2.0/crs
Inventory Location where the Oracle home registered is: /u01/crs/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN2,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/crs/oraInventory/logs/deinstall_deconfig2018-07-31_07-19-14-PM.out'
Any error messages from this session will be written to: '/u01/crs/oraInventory/logs/deinstall_deconfig2018-07-31_07-19-14-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/crs/oraInventory/logs/asmcadc_clean2018-07-31_07-20-45-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /u01/crs/oraInventory/logs/netdc_clean2018-07-31_07-20-45-PM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN2,LISTENER_SCAN1

De-configuring listener: LISTENER
    Stopping listener on node "node3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN2
    Stopping listener on node "node3": LISTENER_SCAN2
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
    Stopping listener on node "node3": LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

--------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "node3".

/tmp/deinstall2018-07-31_07-19-04PM/perl/bin/perl -I/tmp/deinstall2018-07-31_07-19-04PM/perl/lib -I/tmp/deinstall2018-07-31_07-19-04PM/crs/install /tmp/deinstall2018-07-31_07-19-04PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2018-07-31_07-19-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Remove the directory: /tmp/deinstall2018-07-31_07-19-04PM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/crs/product/11.2.0/crs' from the central inventory on the local node : Done

Delete directory '/u01/crs/product/11.2.0/crs' on the local node : Done

Delete directory '/u01/crs/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-07-31_07-19-04PM' on node 'node3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "node3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/crs/product/11.2.0/crs' from the central inventory on the local node.
Successfully deleted directory '/u01/crs/product/11.2.0/crs' on the local node.
Successfully deleted directory '/u01/crs/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'node3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[grid@node3 deinstall]$

8) From any node that you are not deleting, update the node list so that it contains only the remaining nodes. (Run this command as the Grid Infrastructure owner from the $GRID_HOME/oui/bin directory.)

[grid@node1 ~]$ cd /u01/crs/product/11.2.0/crs/oui/bin
[grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/crs/product/11.2.0/crs "CLUSTER_NODES=node1,node2" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5503 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/crs/oraInventory
'UpdateNodeList' was successful.
[grid@node1 bin]$

9) Verify the cluster integrity and confirm that the nodes were removed. (Run this command as the Grid Infrastructure owner from the $GRID_HOME/bin directory.)

[grid@node1 bin]$ ./cluvfy stage -post nodedel -n node3,node4

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.
[grid@node1 bin]$ 
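
As an additional sanity check, you can re-run olsnodes from a surviving node; with node3 and node4 removed, only node1 and node2 should be listed. The expected output is sketched below, assuming the removal completed cleanly:

[grid@node1 bin]$ ./olsnodes -s -t
node1   Active  Unpinned
node2   Active  Unpinned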

I hope this helps !!
Stay Tuned :)
