Building Oracle 10g RAC on Solaris 10 (x86) -- Configuring the System Environment (2)

System environment:
OS: Solaris 10 (x86-64)
Cluster: Oracle CRS 10.2.0.1.0
Oracle: Oracle 10.2.0.1.0

RAC system architecture, as shown in the figure:
[Figure: rac.jpg -- RAC system architecture]
I. Establishing trust between the hosts (on all nodes)
1. Configure the /etc/hosts.equiv file
[root@node1:/]# cat /etc/hosts.equiv
node1 root
node1 oracle
node1-vip root
node1-vip oracle
node1-priv root
node1-priv oracle
node2 root
node2 oracle
node2-vip root
node2-vip oracle
node2-priv root
node2-priv oracle
2. Configure the oracle user's .rhosts file
[oracle@node1:/export/home/oracle]$ cat .rhosts
node1 root
node1 oracle
node1-vip root
node1-vip oracle
node1-priv root
node1-priv oracle
node2 root
node2 oracle
node2-vip root
node2-vip oracle
node2-priv root
node2-priv oracle
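The twelve entries in hosts.equiv and .rhosts follow a simple pattern: each node name and its -vip/-priv aliases, once for root and once for oracle. Rather than typing them twice, they can be generated; a minimal sketch, assuming the two-node naming used above:

```shell
# Generate the host/user pairs used in /etc/hosts.equiv and ~oracle/.rhosts:
# every node name and its -vip/-priv aliases, for both root and oracle.
for node in node1 node2; do
  for suffix in "" "-vip" "-priv"; do
    for user in root oracle; do
      printf '%s%s %s\n' "$node" "$suffix" "$user"
    done
  done
done
```

Redirect the output to /etc/hosts.equiv as root and to ~oracle/.rhosts as oracle; note that rlogind ignores a .rhosts file that is not owned by the user or is group/world writable.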
3. Enable the related services and verify
[root@node1:/]# svcs -a | grep rlogin
disabled       10:05:17 svc:/network/login:rlogin
[root@node1:/]# svcadm enable svc:/network/login:rlogin
[root@node1:/]# svcadm enable svc:/network/rexec:default
[root@node1:/]# svcadm enable svc:/network/shell:default
[root@node1:/]# svcs -a | grep rlogin
online         11:37:34 svc:/network/login:rlogin
[root@node1:/]# su - oracle
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
[oracle@node1:/export/home/oracle]$ rlogin node1
Last login: Wed Jan 21 11:29:36 from node2-priv
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
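It is worth confirming by hand, from each node, that every host name resolves and that rsh works without a password prompt, since that is exactly what cluvfy's "user equivalence" check exercises in the next step. A sketch of such a loop (my own helper, not from the original; with DRYRUN=1 it only prints the commands so you can review them before running as oracle):

```shell
# Probe passwordless rsh to each host, mirroring cluvfy's equivalence check.
# DRYRUN=1 prints the commands instead of executing them.
DRYRUN=1
for host in node1 node1-priv node2 node2-priv; do
  cmd="rsh $host date"
  if [ "$DRYRUN" = 1 ]; then
    echo "would run: $cmd"
  else
    $cmd >/dev/null 2>&1 || echo "equivalence FAILED for $host"
  fi
done
```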
II. Checking the system environment before installing CRS (on node1)
[oracle@node1:/export/home/oracle]$ unzip 10201_clusterware_solx86_64.zip
[oracle@node1:/export/home/oracle/clusterware/cluvfy]$ ./runcluvfy.sh
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]

[oracle@node1:/export/home/oracle/clusterware/cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "node1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node1                                 yes
  node2                                 yes
Result: Node reachability check passed from node "node1".

Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  node2                                 passed
  node1                                 passed
Result: User equivalence check passed for user "oracle".

Checking administrative privileges...
Check: Existence of user "oracle"
  Node Name     User Exists               Comment
  ------------  ------------------------  ------------------------
  node2         yes                       passed
  node1         yes                       passed
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
  Node Name     Status                    Group ID
  ------------  ------------------------  ------------------------
  node2         exists                    200
  node1         exists                    200
Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name     User Exists   Group Exists  User in Group  Primary   Comment
  ------------  ------------  ------------  -------------  --------  --------
  node2         yes           yes           yes            yes       passed
  node1         yes           yes           yes            yes       passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Interface information for node "node2"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  e1000g0                         192.168.8.12                    192.168.8.0
  e1000g1                         10.10.10.12                     10.10.10.0

Interface information for node "node1"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  e1000g0                         192.168.8.11                    192.168.8.0
  e1000g1                         10.10.10.11                     10.10.10.0

Check: Node connectivity of subnet "192.168.8.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:e1000g0                   node1:e1000g0                   yes
Result: Node connectivity check passed for subnet "192.168.8.0" with node(s) node2,node1.

Check: Node connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:e1000g1                   node1:e1000g1                   yes
Result: Node connectivity check passed for subnet "10.10.10.0" with node(s) node2,node1.

Suitable interfaces for the private interconnect on subnet "192.168.8.0":
node2 e1000g0:192.168.8.12
node1 e1000g0:192.168.8.11

Suitable interfaces for the private interconnect on subnet "10.10.10.0":
node2 e1000g1:10.10.10.12
node1 e1000g1:10.10.10.11

ERROR: Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
---- The VIP network check failed here.

Checking system requirements for 'crs'...

Check: Total memory
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         1.76GB (1843200KB)        512MB (524288KB)          passed
  node1         1.76GB (1843200KB)        512MB (524288KB)          passed
Result: Total memory check passed.

Check: Free disk space in "/tmp" dir
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         3GB (3150148KB)           400MB (409600KB)          passed
  node1         2.74GB (2875128KB)        400MB (409600KB)          passed
Result: Free disk space check passed.

Check: Swap space
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         2GB (2096476KB)           512MB (524288KB)          passed
  node1         2GB (2096476KB)           512MB (524288KB)          passed
Result: Swap space check passed.

Check: System architecture
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         64-bit                    64-bit                    passed
  node1         64-bit                    64-bit                    passed
Result: System architecture check passed.

Check: Operating system version
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         SunOS 5.10                SunOS 5.10                passed
  node1         SunOS 5.10                SunOS 5.10                passed
Result: Operating system version check passed.

Check: Operating system patch for "118345-03"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         unknown                   118345-03                 failed
  node1         unknown                   118345-03                 failed
Result: Operating system patch check failed for "118345-03".

Check: Operating system patch for "119961-01"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         119961-06                 119961-01                 passed
  node1         119961-06                 119961-01                 passed
Result: Operating system patch check passed for "119961-01".

Check: Operating system patch for "117837-05"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         unknown                   117837-05                 failed
  node1         unknown                   117837-05                 failed
Result: Operating system patch check failed for "117837-05".

Check: Operating system patch for "117846-08"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         unknown                   117846-08                 failed
  node1         unknown                   117846-08                 failed
Result: Operating system patch check failed for "117846-08".

Check: Operating system patch for "118682-01"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node2         unknown                   118682-01                 failed
  node1         unknown                   118682-01                 failed
Result: Operating system patch check failed for "118682-01".
---- The operating system patch checks failed here.

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed
Result: Group existence check passed for "dba".

Check: Group existence for "oinstall"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed
Result: Group existence check passed for "oinstall".

Check: User existence for "oracle"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed
Result: User existence check passed for "oracle".

Check: User existence for "nobody"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         exists                    passed
  node1         exists                    passed
Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.
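The failed patch checks report "unknown" as the applied revision; on Solaris the installed patch list comes from showrev -p, and a newer revision satisfies an older requirement (as the passing 119961-06 vs. required 119961-01 shows). A sketch of checking the required list against that output; the sample_output line is a stand-in of my own, since on a real node you would pipe showrev -p in directly:

```shell
# Compare the patches cluvfy requires against what showrev -p reports.
# sample_output stands in for real `showrev -p` output; only 119961 is applied.
required="118345-03 119961-01 117837-05 117846-08 118682-01"
sample_output="Patch: 119961-06 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu"
for p in $required; do
  base=${p%-*}                 # patch number without its revision suffix
  if printf '%s\n' "$sample_output" | grep -q "Patch: $base-"; then
    echo "$p: a revision is applied"
  else
    echo "$p: not found"
  fi
done
```

This only tells you which patch numbers are present at all; whether the installed revision is high enough still has to be compared by hand against the required list.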
---- In the environment check above, the VIP network check failed.
If the VIP network has not been configured before running the check, it can be configured as follows; if it has already been configured, this check will not fail.
Configure the VIP network (on node1):
[root@node1:/]# ifconfig e1000g0:1 plumb up
[root@node1:/]# ifconfig e1000g0:1 192.168.8.13 netmask 255.255.255.0
[root@node1:/]# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
inet 192.168.8.11 netmask ffffff00 broadcast 192.168.8.255
ether 8:0:27:28:b1:8c
e1000g0:1: flags=4001000842 mtu 1500 index 2
inet 192.168.8.13 netmask ffffff00 broadcast 192.168.8.255
e1000g1: flags=1000843 mtu 1500 index 3
inet 10.10.10.11 netmask ffffff00 broadcast 10.10.10.255
ether 8:0:27
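One caveat worth adding (my own note, not in the original): an address plumbed with ifconfig like this is temporary and disappears at reboot. That is usually acceptable here, since the manual plumb only exists to satisfy this cluvfy check, and VIPCA configures and manages the VIP itself during the CRS installation. If you do need the address to survive a reboot for pre-install testing, Solaris brings interfaces up at boot from /etc/hostname.<interface> and /etc/netmasks; a sketch, assuming the interface name and addresses used above:

```shell
# Persistence sketch (assumption: interface e1000g0:1 and the addresses above).
# Run as root; these files are read at boot by the Solaris network startup scripts.
echo "192.168.8.13 netmask 255.255.255.0 up" > /etc/hostname.e1000g0:1
echo "192.168.8.0 255.255.255.0" >> /etc/netmasks
```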