Monday, January 11, 2010

How to Change Public IP, Private IP and VIP in Oracle 10g RAC on Linux

Changing the IP addresses at the OS level without first updating the Oracle Clusterware configuration will crash the cluster.

Hence the steps below describe the process of changing the IPs on the cluster nodes.

Here I am using a 2-node cluster on RHEL AS 4.

1) Shut down all the services (database, ASM, nodeapps) on both the nodes.

Here in my case only the cluster services are running.
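If the database, ASM and nodeapps are still up, they can be stopped with srvctl. A minimal sketch, assuming a database named orcl and nodes named node1 and node2 (all three names are illustrative):

% srvctl stop database -d orcl
% srvctl stop asm -n node1
% srvctl stop asm -n node2
% srvctl stop nodeapps -n node1
% srvctl stop nodeapps -n node2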

2) Change the Public IP and Private IP in the clusterware. Run the commands from the node on which you installed the clusterware.

Here my previous IPs were in the 172.26.0.0 series and I am changing them to the 172.25.0.0 series.
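The change is made with oifcfg. A sketch of the commands, assuming the public interface is eth0 on subnet 172.25.16.0 and the private interconnect is eth1 on subnet 172.25.17.0 (the interface names and exact subnets are assumptions):

% oifcfg getif
% oifcfg delif -global eth0
% oifcfg setif -global eth0/172.25.16.0:public
% oifcfg delif -global eth1
% oifcfg setif -global eth1/172.25.17.0:cluster_interconnect

Run oifcfg getif again afterwards to confirm the new interface definitions.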

3) Change the Virtual IP

The VIP needs to be changed as well, since it must be in the same subnet as the Public IP.

My VIP was 172.26.16.61 on node2 and I am changing it to 172.25.16.61. Similarly for node1.
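The VIP is modified with srvctl as root while the nodeapps are down. A sketch, assuming a 255.255.255.0 netmask and eth0 as the public interface (both are assumptions):

# srvctl modify nodeapps -n node2 -A 172.25.16.61/255.255.255.0/eth0
# srvctl modify nodeapps -n node1 -A <node1_vip>/255.255.255.0/eth0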

4) Shut down the cluster on all nodes

node1 # crsctl stop crs
node2 # crsctl stop crs

5) Now change the IPs at the OS level and update the /etc/hosts file on both nodes.

Changing from the 172.26.0.0 series to the 172.25.0.0 series.
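On RHEL this means editing the interface configuration files and the hosts file. A sketch of the relevant entries, where the hostnames and all addresses except the node2 VIP are illustrative:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (public interface, shown for node2)
DEVICE=eth0
BOOTPROTO=static
IPADDR=172.25.16.52
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/hosts (same entries on both nodes)
172.25.16.51    node1
172.25.16.52    node2
172.25.16.60    node1-vip
172.25.16.61    node2-vip
172.25.17.51    node1-priv
172.25.17.52    node2-priv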

6) After performing all the above steps, reboot both the nodes.
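After the reboot, it is worth verifying that the clusterware and the VIPs came up on the new addresses, for example:

node1 # crsctl check crs
node1 # crs_stat -t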

Tuesday, January 5, 2010

Find the patches applied to your database

SQL> select ACTION_TIME,ACTION,VERSION,ID,COMMENTS from registry$history;

ACTION_TIME                   ACTION         VERSION    ID         COMMENTS
----------------------------- -------------- ---------- ---------- ------------------------
13-NOV-09 05.54.51.901116 PM  UPGRADE        10.2.0.4.0            Upgraded from 10.2.0.1.0

The above query displays the series of upgrades/downgrades performed on your database.


(OR)

$ORACLE_HOME/OPatch/opatch lsinventory

opatch lsinventory lists the interim (one-off) patches applied to the Oracle home.

Modifying the default gateway address used by the Oracle 10g VIP

By default, the server's default gateway is used as a ping target during the Oracle RAC 10g VIP status check action. Upon a ping failure, Oracle will decide that the current interface where the VIP is running has failed, and will initiate an interface / internode VIP failover.

Though the VIP check action works as designed in most situations, it will not function
correctly when the server's default gateway resides on a different network from the client
LAN network (the network on which the VIP is configured). In order for the VIP check action
to function as designed, the ping target address needs to be modified after installing RAC 10g.


o Example of a network configuration where use of the default gateway is sufficient

Since the VIP, clients and default gateway are all on the same network segment, the VIP check action will function correctly in this case.

o Example of a network configuration where the ping target needs to be modified

Since the default gateway is configured on a different network segment from the VIP/clients, the VIP check action will not work correctly as designed in this configuration.

The VIP check action will use the default gateway (192.168.1.1) as the ping target, so Oracle will not be able to detect failures in the client network (146.56.20.X) as expected.

In this case, the ping target used by the VIP needs to be modified to an IP address in the 146.56.20.X network segment in order for the check action to function correctly as designed.

The target address needs to remain static and highly available, as VIP availability directly affects service / instance availability. In order to achieve high availability for the ping target, make sure to use redundant links / hardware for the host / router used as the target.
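Conceptually, the check that racgvip performs boils down to pinging the target; a simplified sketch of the idea (the actual script logic is more involved):

% ping -c 1 -w 3 146.56.20.1 > /dev/null 2>&1 || echo "ping target unreachable - VIP check fails"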


Modifying the ping target for the Oracle 10g VIP
----------------------------------------------------------
The following steps need to be performed on every node within the cluster.

1. Stop all node applications.

% srvctl stop instance -d <db_name> -i <instance_name>
% srvctl stop asm -n <node_name>
% srvctl stop nodeapps -n <node_name>

2. As root, modify the following script and change the value of the DEFAULTGW variable.
# vi $ORA_CRS_HOME/bin/racgvip

* Examples of modifying the ping target

BEFORE)
DEFAULTGW=

AFTER)
DEFAULTGW=146.56.20.1

3. Start the node applications and other necessary resources.
% srvctl start nodeapps -n <node_name>
% srvctl start asm -n <node_name>
% srvctl start instance -d <db_name> -i <instance_name>

Make sure to repeat these steps for all CRS homes within the cluster.
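Once the node applications are back up on every node, the state of the VIP resources can be confirmed with, for example:

% crs_stat -t | grep vip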


References: Metalink