6. Installing Clusterware
--First run the pre-installation check
cd ./clusterware
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
This prints a long report checking whether the nodes meet the Clusterware prerequisites. The compat-db package has to be installed separately, because the oracle-validated package does not pull it in. Failures reported for a few other compat packages can be ignored; those are only version mismatches. Any other item that does not show PASSED must be fixed, and the check rerun, until everything except the failures above passes.
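To confirm by hand that compat-db is in place before rerunning cluvfy, a minimal sketch (the helper name is made up, and it assumes an RPM-based distribution):

```shell
# check_pkg NAME -- hypothetical helper: report whether an RPM package
# is installed (assumes an RPM-based distribution)
check_pkg() {
  if rpm -q "$1" >/dev/null 2>&1; then
    echo "$1 installed"
  else
    echo "$1 MISSING - install it from the OS media before rerunning cluvfy"
  fi
}

check_pkg compat-db
```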
--Start the installation
cd ./clusterware
./runInstaller -ignoreSysPrereqs (the parameter is case-insensitive; the command itself is not)
Run OUI through to the end. It asks you to execute two scripts on each of the two nodes, in this order: script 1 on RAC1, then RAC2; script 2 on RAC1, then RAC2. The first three executions succeed, but the fourth one, running root.sh on node 2, fails:
[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
NO KEYS WERE WRITTEN. Supply -force parameter to override.
 -force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 rac1
 rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Be patient here: it can take several minutes (90 s + 600 s) before the error at the end appears. It is caused by an Oracle bug in 10.2.0.1. The fix is to edit the vipca and srvctl files under $ORA_CRS_HOME/bin, adding unset LD_ASSUME_KERNEL after the line that exports LD_ASSUME_KERNEL, then save, exit, and run root.sh on node 2 again.
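A minimal sketch of that edit as a script (the helper name is made up; it assumes each script contains a literal `export LD_ASSUME_KERNEL` line and that GNU sed is available, so verify against your copies before running):

```shell
# patch_crs_script FILE -- hypothetical helper: back up FILE, then insert
# "unset LD_ASSUME_KERNEL" right after the line that exports it
patch_crs_script() {
  cp "$1" "$1.bak"   # keep the original
  sed -i '/^ *export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$1"
}

# e.g., as root on each node:
# patch_crs_script $ORA_CRS_HOME/bin/vipca
# patch_crs_script $ORA_CRS_HOME/bin/srvctl
```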
[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

[root@rac2 bin]# ./crs_stat -t
CRS-0202: No resources are registered.
At this point no resources are registered because the VIPs have not been configured yet. Run vipca on any node (provided that node's vipca has already been patched as above). If it reports the following error:
[oracle@rac1 bin]$ vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
then the network interfaces need to be configured first:

[oracle@rac1 bin]$ ./oifcfg iflist
eth0 192.168.1.0
eth1 10.0.0.0
[oracle@rac1 bin]$ ./oifcfg getif
[oracle@rac1 bin]$ ./oifcfg setif -global eth0/192.168.1.0:public
[oracle@rac1 bin]$ ./oifcfg setif -global eth1/10.10.10.1:cluster_interconnect
[oracle@rac1 bin]$ ./oifcfg getif
eth0 192.168.1.0 global public
eth1 10.10.10.1 global cluster_interconnect
Note that you need permission to open the graphical display, and vipca must be run as root rather than as oracle; otherwise it fails with a privilege error:

[oracle@rac1 bin]$ vipca
Insufficient privileges.
Insufficient privileges.
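The two failure modes above (wrong user, no X display) can be checked up front. A sketch with a made-up helper name, taking the uid and display explicitly as arguments:

```shell
# vipca_preflight UID DISPLAY -- hypothetical helper: vipca must run as
# root (uid 0) and needs an X display to draw its OUI window
vipca_preflight() {
  if [ "$1" -ne 0 ]; then
    echo "not root: vipca fails with 'Insufficient privileges.'"
    return 1
  fi
  if [ -z "$2" ]; then
    echo "no DISPLAY: vipca cannot open its GUI"
    return 1
  fi
  echo "ok to run vipca"
}

# e.g.: vipca_preflight "$(id -u)" "$DISPLAY"
```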
The VIP Configuration Assistant OUI window then appears and you can configure the VIPs; after you enter each node's VIP alias, the VIP addresses are filled in automatically (details omitted). When vipca finishes, exit and run crs_stat again: the resources are now registered with CRS.

[root@rac1 bin]# ./crs_stat -t
Name           Type           Target    State     Host
------------------------------