Monday 23 December 2019

RAC : Re-add a failed 2nd node to the cluster - 12c R1 (12.1.0.2.0)

Scenario Preview :

We have a 2-node RAC configuration. The operating system on the 2nd node has failed (crashed), so we rebuild the 2nd node from scratch and add it back to the cluster.

DB Name :

RAC

Instance :

RAC1, RAC2

Server Name:

srv1.example.com , srv2.example.com

STEPS:
  • Remove the 2nd node from the RDBMS home inventory on node 1 (oracle user).
  • Remove the 2nd node from the grid home inventory on node 1 (grid user).
  • Delete the node from the clusterware registry (root user).
  • Check and remove the VIP of node 2.
  • Remove the rac2 instance (the 2nd node's instance) from the database configuration.
  • Remove the 2nd node's Oracle directories and files at the operating-system level.
  • Add node srv2 from srv1 using the grid user (addnode.sh).
  • Deconfigure any leftover Oracle Grid Infrastructure on srv2.
  • Run root.sh on srv2 as the root user.
  • Check the VIP status of node srv2.
  • Add the RDBMS home to srv2 from srv1 (addnode.sh).
  • Add and start the rac2 instance.
  • Modify, start, and check services on the new node.
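
Before touching anything, it helps to confirm what the surviving node still knows about the cluster. A minimal pre-check sketch, assuming the grid environment is set for the grid user on srv1 (commands only, output omitted):

olsnodes -n -s -t          # list cluster nodes with node number, status and pin state
crsctl check cluster -all  # verify the clusterware stack on each reachable node
crsctl stat res -t         # current resource state, including VIPs and instances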
 
Summary :

STEP : Update the RDBMS home inventory on srv1 so it lists only srv1 (oracle user).
Directory : /u01/app/oracle/product/12.1.0/dbhome_1/oui/bin

[oracle@srv1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={srv1}" LOCAL_NODE=srv1
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5981 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@srv1 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin
[oracle@srv1 bin]$
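
To confirm the node list was really trimmed, the central inventory can be inspected. A quick check, assuming the inventory is at /u01/app/oraInventory (which matches the log paths shown later in this post):

cat /etc/oraInst.loc
grep -i "NODE NAME" /u01/app/oraInventory/ContentsXML/inventory.xml

After the update, only srv1 should appear in the node list for this Oracle home.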

STEP : Update the grid home inventory on srv1 so it lists only srv1 (grid user).
Directory : /u01/app/12.1.0/grid/oui/bin

[grid@srv1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={srv1}" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5981 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

STEP : Delete node srv2 from the clusterware registry (root user, on srv1).

[root@srv1 bin]# crsctl delete node -n srv2
CRS-4661: Node srv2 successfully deleted.
[root@srv1 bin]#
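
To verify the deletion, olsnodes on the surviving node should now report only srv1. A minimal check (grid user, or root with the grid environment set):

olsnodes -n -s -t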

STEP : Stop and remove the VIP of node srv2 (root user, on srv1).

[root@srv1 bin]# srvctl stop vip -vip srv2-vip.example.com -force
[root@srv1 bin]#


[root@srv1 bin]# srvctl remove vip -vip srv2-vip.example.com
Please confirm that you intend to remove the VIPs srv2-vip.example.com (y/[n]) y
[root@srv1 bin]#
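
A quick way to confirm the VIP resource is really gone is to look for it in the node applications and the resource list. A sketch (grid environment set):

srvctl config nodeapps
crsctl stat res -t | grep -i vip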

STEP : Remove the rac2 instance from the database configuration (on srv1).

[root@srv1 bin]# srvctl remove instance -db rac -instance rac2
Remove instance from the database rac? (y/[n]) y
[root@srv1 bin]#
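
The database configuration should now show only rac1. A quick check from srv1 (oracle user):

srvctl config database -d rac
srvctl status database -d rac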

#################################################################################

STEP : Clean up the old Oracle directories and files on srv2 (root user).

[root@srv2 u01]# rm -rf app
[root@srv2 u01]# cd /tmp
[root@srv2 tmp]# ll
total 276
drwxrwx---  3 grid   oinstall   4096 Dec 20 16:26 CVU_12.1.0.2.0_grid
drwxrwx---  3 oracle oinstall   4096 Dec 23 12:40 CVU_12.1.0.2.0_oracle
drwxr-xr-x  3 grid   oinstall   4096 Dec 21 05:47 CVU_12.1.0.2.0_resource
drwxr-x---  2 grid   oinstall   4096 Dec 23 11:57 hsperfdata_grid
drwxr-xr-x  2 root   root       4096 Dec 23 12:58 hsperfdata_root
drwx------  2 root   root       4096 Oct 30 13:58 keyring-822RUe
drwx------  2 root   root       4096 Sep 27 14:46 keyring-8GhiEO
drwx------  2 root   root       4096 Oct 14 12:25 keyring-odq9s3
drwx------  2 root   root       4096 Sep 20 15:44 keyring-VqEQir
-rw-r--r--  1 root   root     222736 Dec 20 18:31 modules.dep
drwxr-xr-x  4 grid   oinstall   4096 Dec 20 17:40 OraInstall2019-12-20_04-26-29PM
drwxr-xr-x  4 oracle oinstall   4096 Dec 20 19:07 OraInstall2019-12-20_06-51-52PM
drwx------  2 gdm    gdm        4096 Dec 20 15:47 orbit-gdm
drwx------. 2 root   root       4096 Dec 19 13:23 pulse-Fhy2hgN3LpSQ
drwx------  2 gdm    gdm        4096 Dec 20 15:47 pulse-ry0r6BdeXuu1

[root@srv2 tmp]# rm -rf OraInstall2019-12-20_04-26-29PM OraInstall2019-12-20_06-51-52PM CVU_12.1.0.2.0_resource CVU_12.1.0.2.0_oracle CVU_12.1.0.2.0_grid

[root@srv2 tmp]# cd /usr/local/bin/
[root@srv2 bin]# ll
-rwxr-xr-x 1 grid root 6583 Dec 20 18:27 coraenv
-rwxr-xr-x 1 grid root 2445 Dec 20 18:27 dbhome
-rwxr-xr-x 1 grid root 7012 Dec 20 18:27 oraenv
[root@srv2 bin]# rm -f *

[root@srv2 oracle]# cd /etc/oracle

[root@srv2 oracle]# ll
total 3000
drwxrwx--- 2 root oinstall    4096 Sep 23 17:27 lastgasp
drwxrwxrwt 2 root oinstall    4096 Dec 23 16:07 maps
-rw-r--r-- 1 root oinstall      72 Dec 23 14:51 ocr.loc
-rw-r--r-- 1 root root           0 Dec 23 14:51 ocr.loc.orig
-rw-r--r-- 1 root oinstall      80 Dec 23 14:51 olr.loc
-rw-r--r-- 1 root root           0 Dec 23 14:51 olr.loc.orig
drwxrwxr-x 5 root oinstall    4096 Sep 23 17:20 oprocd
drwxr-xr-x 3 root oinstall    4096 Sep 23 17:20 scls_scr
-rws--x--- 1 root oinstall 3044561 Dec 23 14:49 setasmgid

[root@srv2 oracle]# rm -f ocr.loc ocr.loc.orig olr.loc olr.loc.orig
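
Depending on how the original install was laid out, a couple of other common leftovers are worth checking on srv2 before re-adding the node. A sketch with the usual default locations (adjust to your environment; only remove entries you are sure belong to the crashed install):

cat /etc/oratab                            # old ASM / rac2 entries, if any
cat /etc/oraInst.loc                       # central inventory pointer
ls -l /etc/init.d/init.ohasd 2>/dev/null   # leftover ohasd startup script, if present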

#################################################################################

STEP : Add node srv2 back to the cluster from srv1 (grid user, addnode.sh).

[grid@srv1 ~]$ cd /u01/app/12.1.0/grid/addnode/
[grid@srv1 addnode]$ ll
total 12
-rw-r--r-- 1 grid oinstall 1963 Sep 23 16:54 addnode_oraparam.ini
-rw-r--r-- 1 grid oinstall 1971 Jul  7  2014 addnode_oraparam.ini.sbs
-rwxr-xr-x 1 grid oinstall 3575 Sep 23 16:54 addnode.sh
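
Since addnode.sh is run below with -ignoreSysPrereqs and -ignorePrereq, the usual prerequisite checks are skipped. If you prefer to validate the new node first, cluvfy from the existing grid home can do it. A sketch (grid user on srv1, using the paths from this post):

/u01/app/12.1.0/grid/bin/cluvfy stage -pre nodeadd -n srv2 -verbose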

[grid@srv1 addnode]$ ./addnode.sh -ignoreSysPrereqs -ignorePrereq -silent "CLUSTER_NEW_NODES={srv2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={srv2-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 20656 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 5981 MB    Passed

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2019-12-23_01-14-13PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   14% Done.

Copying files to node in progress.

Copying files to node successful.
..................................................   73% Done.

Saving cluster inventory in progress.


..................................................   80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   88% Done.

As a root user, execute the following script(s):
        1. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[srv2]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
[grid@srv1 addnode]$


##############################################################################

STEP : Move to the Grid Infrastructure crs/install directory on srv2 (root user).

[root@srv2 install]# pwd
/u01/app/12.1.0/grid/crs/install
[root@srv2 install]#

STEP : Deconfigure any leftover Grid Infrastructure on srv2 before running root.sh.

[root@srv2 install]# ./rootcrs.sh -verbose -deconfig -force
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2019-12-23 14:48:33:   mkpath (/u01/app/12.1.0/grid/cfgtoollogs/crsconfig)
2019/12/23 14:48:36 CLSRSC-561: The Oracle Grid Infrastructure has already been deconfigured on this node: srv2.


###############
STEP : Run root.sh on srv2 (root user) and check the VIP.

[root@srv2]# cd /u01/app/12.1.0/grid
[root@srv2]# ./root.sh

[root@srv2 install]# srvctl status vip -vip srv2-vip.example.com
VIP srv2-vip is enabled
VIP srv2-vip is running on node: srv2
[root@srv2 install]#
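
With root.sh complete and the VIP up, the node addition can be validated as a whole. A sketch of the usual post-checks (grid user, from either node):

/u01/app/12.1.0/grid/bin/cluvfy stage -post nodeadd -n srv2 -verbose
crsctl check cluster -all
olsnodes -n -s -t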


################


Directory : /u01/app/oracle/product/12.1.0/dbhome_2/addnode
[oracle@srv1 addnode]$ ll
total 12
-rw-r--r-- 1 oracle oinstall 1963 Sep 27 15:35 addnode_oraparam.ini
-rw-r--r-- 1 oracle oinstall 1971 Jul  7  2014 addnode_oraparam.ini.sbs
-rwxr-xr-x 1 oracle oinstall 3593 Sep 27 15:35 addnode.sh
[oracle@srv1 addnode]$ 

###############

STEP : Add the RDBMS home to srv2 from srv1 (oracle user, addnode.sh).
[oracle@srv1 addnode]$ ./addnode.sh  -silent "CLUSTER_NEW_NODES={srv2}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 20439 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 5956 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Logfile Location : /u01/app/oraInventory/logs/sshsetup1_2019-12-23_03-32-37PM.log
ClusterLogger - log file location: /u01/app/oracle/product/12.1.0/dbhome_2/oui/bin/Logs/remoteInterfaces2019-12-23_03-32-37PM.log
Validating remote binaries..
Remote binaries check succeeded
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2019-12-23_03-32-37PM.log
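
The session above is truncated; once the copy and inventory update finish, the installer normally asks for the new home's root.sh to be run on the added node. A sketch of that last step, assuming the same dbhome_2 path (root user on srv2):

/u01/app/oracle/product/12.1.0/dbhome_2/root.sh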

####################

STEP : Add and start the rac2 instance (oracle user, on srv1).
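
Because the datafiles, undo tablespace, and thread 2 redo logs of the old rac2 instance sit on shared storage (ASM here), they normally survive the OS crash of node 2, so the instance only needs to be re-registered. A quick sanity-check sketch from rac1 before adding it back, using standard dictionary views:

-- run in SQL*Plus as sysdba on rac1
select thread#, status, enabled from v$thread;
select tablespace_name from dba_tablespaces where contents = 'UNDO';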

[oracle@srv1 addnode]$ srvctl add instance -db rac -instance rac2 -node srv2
[oracle@srv1 addnode]$
[oracle@srv1 addnode]$
[oracle@srv1 addnode]$ srvctl start instance -db rac -instance rac2
[oracle@srv1 addnode]$ srvctl status database -d rac
Instance rac1 is running on node srv1
Instance rac2 is running on node srv2


[oracle@srv1 addnode]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Dec 23 16:05:06 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select inst_id,status from gv$instance;

   INST_ID STATUS
---------- ------------
         1 OPEN
         2 OPEN

SQL> exit


##########################################################################

STEP : Modify, start, and check the services on the new node.

[oracle@srv1 addnode]$ srvctl modify service -db rac -service pretaf -modifyconfig -preferred "rac1,rac2"
[oracle@srv1 addnode]$ srvctl start service -db rac -service pretaf -node srv2

[oracle@srv1 addnode]$ srvctl status service -db rac
Service acsrac is running on instance(s) rac1
Service pretaf is running on instance(s) rac1,rac2
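
To confirm the preferred-instance change stuck (and to review any TAF settings on the service), the service configuration can be printed. A quick check (oracle user):

srvctl config service -db rac -service pretaf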


