Oracle RAC: Rebuilding One Node's Operating System (Part 1)


In a two-node Oracle RAC 10.2.0.4 environment running on Linux, all of the local disks in one node's server suddenly failed and that node went down. The surviving node kept running normally and continued to provide database service.


The problem is straightforward: after the operating system on the failed server has been rebuilt, how do we add that server back into the RAC cluster?


A search on Google and METALINK turned up the exact same question, but not the answer I was looking for.


...............


I appreciate any help, and I'm grateful for your time.


(One nice thing about posters abroad: they always close with words of thanks.)


Someone offered this solution:


[root@webrac1 crs_1]# more root.sh
#!/bin/sh
/u01/app/oracle/product/10.2.0/crs_1/install/rootinstall
/u01/app/oracle/product/10.2.0/crs_1/install/rootconfig



[root@webrac1 crs_1]# ./root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: webrac1 webrac1-priv webrac1
node 2: webrac2 webrac2-priv webrac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        webrac1
        webrac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps


Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...

Done.
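
At this point it is worth a quick check that the rebuilt node has really rejoined the cluster. The two commands below are only a verification sketch (they are not part of the original article); they use the CRS home shown above.

# Verification sketch: check the CRS stack and the registered resources
/u01/app/oracle/product/10.2.0/crs_1/bin/crsctl check crs
/u01/app/oracle/product/10.2.0/crs_1/bin/crs_stat -t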



Step 6: modify the configuration file /etc/oratab


Copy this file over from the surviving node, then adjust its ownership and its contents.


[root@webrac2 archivelog]# scp /etc/oratab webrac1:/etc/
root@webrac1's password:
oratab                                        100%  766     0.8KB/s   00:00
[root@webrac2 archivelog]#

[root@webrac1 etc]# chown -R oracle:root oratab
[root@webrac1 etc]# ls -ltr oratab
-rw-r--r-- 1 oracle root 766 05-08 17:12 oratab
[root@webrac1 etc]# vi oratab
#
+ASM1:/u01/app/oracle/product/10.2.0/db_1:N
webdb:/u01/app/oracle/product/10.2.0/db_1:N
~
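
For reference, each oratab entry has the form SID:ORACLE_HOME:startup_flag. The trailing N tells the dbstart/dbshut scripts not to start or stop the instance at boot, which is what you want here, since in RAC the CRS stack manages ASM and the database instances. A commented copy of the two entries above:

# <SID>:<ORACLE_HOME>:<Y|N>  -- N leaves startup to CRS instead of dbstart
+ASM1:/u01/app/oracle/product/10.2.0/db_1:N
webdb:/u01/app/oracle/product/10.2.0/db_1:N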



Step 7: run root.sh under the RDBMS home


[root@webrac1 db_1]# ./root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
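
A quick way to confirm that the generic part of root.sh did what it reports (again, only a verification sketch, not part of the article):

# Verification sketch: the helper scripts should now exist in /usr/local/bin
ls -l /usr/local/bin/dbhome /usr/local/bin/oraenv /usr/local/bin/coraenv
cat /etc/oratab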



Step 8: modify the configuration files under $ORACLE_HOME/network/admin
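
The article breaks off here. In practice, Step 8 usually means copying listener.ora, tnsnames.ora and sqlnet.ora from the surviving node and adjusting the host-specific entries for webrac1. The fragment below is only an illustrative sketch: the listener name and the VIP host name webrac1-vip are assumptions, not values taken from this environment.

# Illustrative listener.ora entry for the rebuilt node (names are assumptions)
LISTENER_WEBRAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = webrac1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = webrac1)(PORT = 1521))
    )
  )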