eth0  192.168.1.0  global  public
eth1  10.10.10.0   global  cluster_interconnect
The goal is to get the output of "oifcfg getif" to include both a public and a cluster_interconnect interface; substitute the IP addresses and interface names from your own environment. To list the interfaces and subnets available in your environment, run "oifcfg iflist":
eth0  192.168.1.0
eth1  10.10.10.0
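If getif does not yet show both interfaces, they can be registered with "oifcfg setif". A minimal sketch, assuming the example interface names and subnets above (substitute your own values):

```shell
# Register the public and private interfaces globally
# (run as the Clusterware owner); syntax is interface/subnet:type.
oifcfg setif -global eth0/192.168.1.0:public
oifcfg setif -global eth1/10.10.10.0:cluster_interconnect

# Confirm both interfaces are now registered.
oifcfg getif
```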
If you have not yet run root.sh on the last node, implement the workaround for issue #2 above and then run root.sh (you may skip the vipca portion below). If you have a non-routable IP range for your VIPs, you will also need the workaround for issue #3 above, and must then run vipca manually.
Running VIPCA:
After implementing the above workaround(s), you should be able to invoke vipca manually (as root, from the last node) and configure the VIP addresses via its GUI.
Make sure the DISPLAY environment variable is set correctly and that you can open xclock or another X application from that shell.
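A quick pre-flight check from the shell that will run vipca; this is a sketch assuming the X server is reachable at localhost:0.0 (adjust the value for your setup):

```shell
# Point the session at your X server (assumed value; adjust as needed).
export DISPLAY=localhost:0.0

# If the X connection works, a clock window should appear.
xclock &
```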
Once vipca completes, all the Clusterware resources (VIP, GSD, ONS) will be started. There is no need to re-run root.sh, since vipca is the last step performed by root.sh.
To verify that the Clusterware resources are running correctly, run "crs_stat -t":
Name            Type           Target    State     Host
------------------------------------------------------------
ora....ux1.gsd  application    ONLINE    ONLINE    raclinux1
ora....ux1.ons  application    ONLINE    ONLINE    raclinux1
ora....ux1.vip  application    ONLINE    ONLINE    raclinux1
ora....ux2.gsd  application    ONLINE    ONLINE    raclinux2
ora....ux2.ons  application    ONLINE    ONLINE    raclinux2
ora....ux2.vip  application    ONLINE    ONLINE    raclinux2
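The status check can also be scripted. A minimal sketch that parses "crs_stat -t"-style output, assuming the column layout shown above; check_online is a hypothetical helper name, not an Oracle utility:

```shell
# check_online: read `crs_stat -t`-style output on stdin and print each
# resource whose Target or State column is not ONLINE. The first two
# lines (header and separator) are skipped.
check_online() {
  awk 'NR > 2 && ($3 != "ONLINE" || $4 != "ONLINE") { print "NOT ONLINE:", $1 }'
}

# Usage on a live cluster (assumed environment):
#   crs_stat -t | check_online
```

An empty result means every resource has reached its ONLINE target.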
You may now proceed with the rest of the RAC installation.