This note verifies that in 11gR2 RAC the ASM instance obtains its spfile location from the GPnP profile at startup, and also examines how the GPnP profile gets modified. Conclusions and experiments follow:
Conclusions:
1. The profile.xml under /u01/app/11.2.0/grid/gpnp/profiles/peer is stale: it is not updated by spset/spmove. Some documents describe this profile.xml as the cluster-wide copy.
gpnpd actually uses the profile.xml under /u01/app/11.2.0/grid/gpnp/rac1/profiles/peer, i.e. $ORACLE_HOME/gpnp/[hostname]/profiles/peer/.
2. Changes are made through the asmcmd spset/spmove commands, which update the GPnP profile.
A command such as gpnptool edit -p=profile.xml -asm_spf=+data1/rac-cluster/asmparameterfile/registry.253.857644239 reports success, but no change shows up in profile.xml, and neither an OS reboot nor restarting HAS makes the edit take effect; if you know the reason, please share it.
3. The GPnP profile must not be edited by hand; manual editing corrupts it. gpnpd.log then records signature-validation errors, but gpnpd can still fetch the profile from a cache provider and start:
[ clwal][3031893712]clsw_initialize: olr initlevel [70000]
[ clsdmt][3022580624]listening to (address=(protocol=ipc)(key=rac1dbg_gpnpd))
2016-05-07 23:39:58.106: [ clsdmt][3022580624]pid for the process [3252], connkey 10
2016-05-07 23:39:58.106: [ clsdmt][3022580624]creating pid [3252] file for home /u01/app/11.2.0/grid host rac1 bin gpnp to /u01/app/11.2.0/grid/gpnp/init/
2016-05-07 23:39:58.106: [ clsdmt][3022580624]writing pid [3252] to the file [/u01/app/11.2.0/grid/gpnp/init/rac1.pid]
2016-05-07 23:39:58.153: [ gpnp][3031893712]clsgpnpd_validateprofile: [at clsgpnpd.c:2888] result: (86) clsgpnp_sig_invalid. profile failed to verify. prf=0x99fec78
2016-05-07 23:39:58.153: [ gpnp][3031893712]clsgpnpd_openlocalprofile: [at clsgpnpd.c:3461] result: (86) clsgpnp_sig_invalid. local best profile from file cache provider (lcp-fs) is invalid - destroyed.
2016-05-07 23:39:58.155: [ gpnp][3031893712]clsgpnpd_validateprofile: [at clsgpnpd.c:2919] gpnpd taken cluster name 'rac-cluster'
2016-05-07 23:39:58.155: [ gpnp][3031893712]clsgpnpd_openlocalprofile: [at clsgpnpd.c:3532] got local profile from olr cache provider (lcp-olr).
2016-05-07 23:39:58.168: [ gpnp][3031893712]clsgpnpd_lopen: [at clsgpnpd.c:1734] listening on ipc://gpnpd_rac1
2016-05-07 23:39:58.169: [ default][3031893712]gpnpd started on node rac1.
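To see which copy a node is actually serving, the SPFile attribute can be pulled out of each profile.xml with ordinary text tools and compared against what asmcmd spget reports. This is only a sketch: the demo XML fragment below is a stand-in for the real (signed) profile, and the attribute layout assumed here follows the usual 11.2 GPnP profile.

```shell
#!/bin/sh
# Sketch: pull the SPFile attribute out of a GPnP profile.xml copy so the
# host-specific and cluster-wide copies can be diffed. The attribute name
# (SPFile on the ASM-Profile element) is assumed from the 11.2 profile
# layout; the fragment below is a stand-in, not a real signed profile.
extract_spfile() {
    grep -o 'SPFile="[^"]*"' "$1" | sed 's/^SPFile="\(.*\)"$/\1/'
}

cat > /tmp/demo_profile.xml <<'EOF'
<gpnp:GPnP-Profile ProfileSequence="7">
  <orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*"
      SPFile="+data1/rac-cluster/asmparameterfile/registry.253.857644239"/>
</gpnp:GPnP-Profile>
EOF

extract_spfile /tmp/demo_profile.xml
```

Run it against both $ORACLE_HOME/gpnp/profiles/peer/profile.xml and $ORACLE_HOME/gpnp/[hostname]/profiles/peer/profile.xml; per the conclusion above, only the host-specific copy should track spset.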
Experiment 1: verify that ASM instance startup relies on the spfile entry in the GPnP profile
1. Change the spfile entry in the GPnP profile and confirm the change
asmcmd> spset +data1/rac-cluster/asmparameterfile/spfile.ora
asmcmd> spget
+data1/rac-cluster/asmparameterfile/spfile.ora
Checking profile.xml under /u01/app/11.2.0/grid/gpnp/profiles/peer shows no change; it is still ProfileSequence=4.
Checking profile.xml under /u01/app/11.2.0/grid/gpnp/rac1/profiles/peer shows the change: ProfileSequence=8, SPFile=+data1/rac-cluster/asmparameterfile/spfile.ora.
A pending.xml file also appears under /u01/app/11.2.0/grid/gpnp/rac1/profiles/peer, containing the newest profile content.
At this point kfed read /dev/asm-diskb|grep spfile shows the spfile pointer in the disk header is unchanged.
[grid@rac1 peer]$ kfed read /dev/asm-diskb|grep spfile
kfdhdb.spfile: 58 ; 0x0f4: 0x0000003a
Starting HAS and checking the ASM alert log for the spfile in use shows that the new GPnP profile setting is picked up, with this error:
error: spfile in diskgroup data1 does not match the specified spfile +data1/rac-cluster/asmparameterfile/spfile.ora
ASM then starts with default parameters, and the disk group resource ora.data1.dg is still mounted successfully by the mount command issued by the agent.
Based on these results, here is one guess at how the ASM spfile is located (steps 1 and 2 could probably run in either order; since DiscoveryString comes before SPFile in the GPnP profile, I assume step 1 runs first):
1. Use the DiscoveryString field of the GPnP profile to find the candidate disks, then read the disk headers for the spfile pointer, e.g. what kfed reads:
[grid@rac1 peer]$ kfed read /dev/asm-diskb|grep spfile
kfdhdb.spfile: 58 ; 0x0f4: 0x0000003a
2. Read the spfile found via the disk header, then compare it against the file named by SPFile= in the GPnP profile. If they match, it is used. If not, the search falls back to $ORACLE_HOME/dbs (a name like spfile+ASM1.ora); if nothing is found there either, the instance starts with default parameters.
3. The key is the interplay of DiscoveryString=/dev/asm* and SPFile=: in the experiment, after SPFile= was changed to a nonexistent file, kfed could still read the correct spfile pointer from the disk header, yet that spfile was not used; hence the inference above.
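The guessed lookup order can be written out as a small decision sketch. This is speculation reconstructed from the experiment, not documented Oracle behavior; the three arguments stand in for the SPFile= value in the GPnP profile, the spfile located through kfdhdb.spfile in a disk header, and a possible $ORACLE_HOME/dbs fallback.

```shell
#!/bin/sh
# Speculative sketch of the spfile lookup order inferred above -- not
# documented Oracle behavior. Each argument is a stand-in for something
# the instance would discover at startup.
pick_spfile() {
    gpnp_spfile=$1     # SPFile= value in the GPnP profile
    header_spfile=$2   # spfile found via kfdhdb.spfile in a disk header
    dbs_spfile=$3      # e.g. $ORACLE_HOME/dbs/spfile+ASM1.ora, "" if absent

    if [ -n "$header_spfile" ] && [ "$header_spfile" = "$gpnp_spfile" ]; then
        echo "$gpnp_spfile"        # header and profile agree: use it
    elif [ -n "$dbs_spfile" ]; then
        echo "$dbs_spfile"         # fall back to $ORACLE_HOME/dbs
    else
        echo "default parameters"  # last resort, as seen in the alert log
    fi
}

# Experiment 1's case: the profile names a nonexistent spfile.ora, the disk
# header still points at the real registry file, and there is no dbs spfile,
# so the instance comes up on default parameters.
pick_spfile "+data1/rac-cluster/asmparameterfile/spfile.ora" \
            "+data1/rac-cluster/asmparameterfile/registry.253.857644239" ""
```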
Experiment 2: asmcmd> spmove registry.253.857644239 +data2/spfileasm.ora
This command moves the ASM spfile to +data2/spfileasm.ora (an alias). The spfile was originally in +data1, and it can only be moved to a different disk group.
After spmove, the GPnP profile is updated automatically, and kfed shows the spfile pointer in the disk header is updated in step.
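A quick way to confirm the header update is to pull the allocation-unit number out of the kfed output for each disk: a nonzero kfdhdb.spfile marks the disk that now carries the spfile. The helper below is a sketch; the sample line is taken from the session output later in this note.

```shell
#!/bin/sh
# Sketch: extract the allocation-unit number from a kfed
# "kfdhdb.spfile: N ; ..." line. Normally you would pipe in
# `kfed read /dev/asm-diskX`; here a captured line stands in.
spfile_au() {
    awk '/kfdhdb\.spfile/ { print $2 }'
}

# after spmove, the header of /dev/asm-diskc pointed at AU 1977
echo "kfdhdb.spfile:                     1977 ; 0x0f4: 0x000007b9" | spfile_au
```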
############################################
The brief steps of the experiments, followed by the detailed session output:
1. Inspect the GPnP profile
2. Check the ASM alert log for the spfile in use
Experiment 1:
1. Change the spfile entry in the GPnP profile and confirm the change
2. Start HAS and check the ASM alert log for the spfile in use
Experiment 2:
1. After spmove, verify that the kfed-visible disk-header pointer changes and that the GPnP profile is updated as well
Experiment 3: after setting the ASM spfile back to the correct value, pending.xml still exists in the GPnP profile directory even after crsctl stop/start has and an OS reboot;
in Experiment 2, however, pending.xml was gone after spmove.
############################################
############################################
############################################
1. Inspect the GPnP profile
Query with gpnptool get
Check profile.xml under /u01/app/11.2.0/grid/gpnp/profiles/peer
Check profile.xml under /u01/app/11.2.0/grid/gpnp/rac1/profiles/peer
---
[grid@rac1 peer]$ gpnptool get    --->>> here ProfileSequence=7
warning: some command line parameters were defaulted. resulting command line: /u01/app/11.2.0/grid/bin/gpnptool.bin get -o-
el42pypgxchfoff3yjz7lv/c/+q=nbq10c8ajmsz2qdznovppywmm0wrp0pwlub1mgfadelvtry4j+dfopjwp/hyrvrr6xgcq4h4qkrb2njp0nb863e36jbwema9vmygajujsahonx/ln4/vjwpsl8l3xwxlnwylgnddtdtmevdznpyw7vvdnx92xzpg+mmbw049cui=
success.
ASM parameter info in the profile:
[grid@rac1 peer]$ pwd
/u01/app/11.2.0/grid/gpnp/profiles/peer
[grid@rac1 peer]$ ls -lrt
total 12
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1891 may 7 22:40 profile.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 20:50 profile.xml1
[grid@rac1 peer]$ cat profile.xml    --->>> here ProfileSequence=4, an older version
9j7pntauc/tyr/90c5onuylcufa=nn1cikzx5/72lpetbyzt/t60s2ehhpuw2vn97qnurnwjos6rzgrc0uzmorjqbh+giyorhhsup8irlcwzz4ysz+1l/hmr5f/7duvgnb9oys05mf49svuikwnrlaol2hsi1z+skcfvdfnfpf0yur8mnnkpklvilzt9snqgsvg4aee=
[grid@rac1 peer]$
[grid@rac1 peer]$
[grid@rac1 peer]$ cd -
/u01/app/11.2.0/grid/gpnp/rac1/profiles/peer
[grid@rac1 peer]$ ls -lrt
total 16
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1871 may 7 23:59 profile.old
-rw-r--r-- 1 grid oinstall 1891 may 8 10:27 profile.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 20:52 profile.xml1
[grid@rac1 peer]$ cat profile.xml    --->>> here ProfileSequence=7, the version currently in use
el42pypgxchfoff3yjz7lv/c/+q=nbq10c8ajmsz2qdznovppywmm0wrp0pwlub1mgfadelvtry4j+dfopjwp/hyrvrr6xgcq4h4qkrb2njp0nb863e36jbwema9vmygajujsahonx/ln4/vjwpsl8l3xwxlnwylgnddtdtmevdznpyw7vvdnx92xzpg+mmbw049cui=
ASM parameter info in the profile:
[grid@rac1 peer]$ date
sun may 8 21:03:13 cst 2016
[grid@rac1 peer]$ asmcmd    --->>> query the ASM spfile location recorded in the GPnP profile
asmcmd>
asmcmd>
asmcmd> spget
+data1/rac-cluster/asmparameterfile/registry.253.857644239
asmcmd>
2. Check the ASM alert log for the spfile in use
sun may 08 20:53:35 2016
instance shutdown complete
sun may 08 20:59:26 2016
note: no asm libraries found in the system
memory_target defaulting to 1128267776.
* instance_number obtained from css = 1, checking for the existence of node 0...
* node 0 does not exist. instance_number = 1
starting oracle instance (normal)
warning: you are trying to use the memory_target feature. this feature requires the /dev/shm file system to be mounted for at least 1140850688 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. please fix this so that memory_target can work as expected. current available is 525660160 and used is 0 bytes. ensure that the mount point is /dev/shm for this directory.
license_max_session = 0
license_sessions_warning = 0
initial number of cpu is 1
private interface 'eth1:1' configured from gpnp for use as a private interconnect.
[name='eth1:1', type=1, ip=169.254.162.219, mac=08-00-27-54-4c-ad, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
public interface 'eth0' configured from gpnp for use as a public interface.
[name='eth0', type=1, ip=192.168.57.225, mac=08-00-27-35-fe-56, net=192.168.57.0/24, mask=255.255.255.0, use=public/1]
cell communication is configured to use 0 interface(s):
cell ip affinity details:
numa status: non-numa system
cellaffinity.ora status: n/a
cell communication will use 1 ip group(s):
grp 0:
picked latch-free scn scheme 2
using log_archive_dest_1 parameter default value as /u01/app/11.2.0/grid/dbs/arch
autotune of undo retention is turned on.
license_max_users = 0
sys auditing is disabled
starting up:
oracle database 11g enterprise edition release 11.2.0.4.0 - production
with the real application clusters and automatic storage management options.
oracle_home = /u01/app/11.2.0/grid
system name: linux
node name: rac1.bys.com
release: 2.6.32-200.13.1.el5uek
version: #1 smp wed jul 27 20:21:26 edt 2011
machine: i686
using parameter settings in server-side spfile +data1/rac-cluster/asmparameterfile/registry.253.857644239    --- the spfile in use is visible here
system parameters with non-default values:
large_pool_size = 12m
instance_type = asm
remote_login_passwordfile= exclusive
asm_diskstring = /dev/asm*
asm_diskgroups = data2
asm_power_limit = 1
diagnostic_dest = /u01/app/grid
cluster communication is configured to use the following interface(s) for this instance
169.254.162.219
cluster interconnect ipc version:oracle udp/ip (generic)
#############################
#############################
#############################
Experiment 1: verify that ASM instance startup relies on the spfile entry in the GPnP profile
1. Change the spfile entry in the GPnP profile and confirm the change
Modify with asmcmd> spset:
asmcmd> spset +data1/rac-cluster/asmparameterfile/spfile.ora
Confirm the result -- the spfile.ora specified here does not actually exist:
asmcmd> spget
+data1/rac-cluster/asmparameterfile/spfile.ora
asmcmd> exit
Checking profile.xml under /u01/app/11.2.0/grid/gpnp/profiles/peer shows no change; it is still ProfileSequence=4.
Checking profile.xml under /u01/app/11.2.0/grid/gpnp/rac1/profiles/peer shows the change: ProfileSequence=8, SPFile=+data1/rac-cluster/asmparameterfile/spfile.ora.
A pending.xml file also appears under /u01/app/11.2.0/grid/gpnp/rac1/profiles/peer, containing the newest profile content.
At this point kfed read /dev/asm-diskb|grep spfile shows the spfile pointer in the disk header is unchanged.
[grid@rac1 peer]$ kfed read /dev/asm-diskb|grep spfile
kfdhdb.spfile: 58 ; 0x0f4: 0x0000003a
-------------
[grid@rac1 peer]$ ls -lrt
total 20
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 10:27 profile.old
-rw-r--r-- 1 grid oinstall 1891 may 8 20:52 profile.xml1
-rw-r--r-- 1 grid oinstall 1879 may 8 21:04 profile.xml
-rw-r--r-- 1 grid oinstall 1879 may 8 21:04 pending.xml
[grid@rac1 peer]$ date
sun may 8 21:04:48 cst 2016
[grid@rac1 peer]$ cat pending.xml
/>+nyunjl9fhhz5pp/z3tq7vpuqhe=sojcstwngkks7jvepmgy6c1sr35qax7qwqusgnuatirwp/0a0rxzt99f2nk+rcsf5opecdd4kjl8raunufyzm8uwsicsoenzekrugtdajinf1vn+e0qseiuzqfmc9e1srlnmzphdwy2y3twcpem6qit5ilvulxwv+aqjmoumsc0=[grid@rac1 peer]$
[grid@rac1 peer]$
[grid@rac1 peer]$ cat profile.xml
>/>+nyunjl9fhhz5pp/z3tq7vpuqhe=sojcstwngkks7jvepmgy6c1sr35qax7qwqusgnuatirwp/0a0rxzt99f2nk+rcsf5opecdd4kjl8raunufyzm8uwsicsoenzekrugtdajinf1vn+e0qseiuzqfmc9e1srlnmzphdwy2y3twcpem6qit5ilvulxwv+aqjmoumsc0=[grid@rac1 peer]$
[grid@rac1 peer]$ cd -
/u01/app/11.2.0/grid/gpnp/profiles/peer
[grid@rac1 peer]$ ls -lrt
total 12
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1891 may 7 22:40 profile.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 20:50 profile.xml1
[grid@rac1 peer]$ ls
profile_orig.xml profile.xml profile.xml1
[grid@rac1 peer]$ cat profile.xml
9j7pntauc/tyr/90c5onuylcufa=nn1cikzx5/72lpetbyzt/t60s2ehhpuw2vn97qnurnwjos6rzgrc0uzmorjqbh+giyorhhsup8irlcwzz4ysz+1l/hmr5f/7duvgnb9oys05mf49svuikwnrlaol2hsi1z+skcfvdfnfpf0yur8mnnkpklvilzt9snqgsvg4aee=[grid@rac1 peer]$
2. Start HAS and check the ASM alert log for the spfile in use
The new GPnP profile setting is now in use, and this error appears:
error: spfile in diskgroup data1 does not match the specified spfile +data1/rac-cluster/asmparameterfile/spfile.ora
ASM then starts with default parameters, and the disk group resource ora.data1.dg is still mounted successfully by the mount command issued by the agent.
sun may 08 21:07:48 2016
sql> alter diskgroup all mount /* asm agent call crs *//* {0:0:2} */
note: diskgroup used for voting files is:
data1
diskgroup used for ocr is:data1
…………
success: diskgroup data1 was mounted
success: alter diskgroup all mount /* asm agent call crs *//* {0:0:2} */
sun may 08 21:08:08 2016
sql> alter diskgroup data2 mount /* asm agent *//* {1:23346:2} */
Check the ASM instance:
[grid@rac1 ~]$ sqlplus / as sysasm
sql*plus: release 11.2.0.4.0 production on sun may 8 22:19:54 2016
copyright (c) 1982, 2013, oracle. all rights reserved.
connected to:
oracle database 11g enterprise edition release 11.2.0.4.0 - production
with the real application clusters and automatic storage management options
sql> show parameter large_pool_size
name type value
------------------------------------ ----------- ------------------------------
large_pool_size big integer 0
sql> show parameter spfile
name type value
------------------------------------ ----------- ------------------------------
spfile string
------------------------------------
sun may 08 10:29:39 2016
instance shutdown complete
sun may 08 21:07:40 2016
note: no asm libraries found in the system
error: spfile in diskgroup data1 does not match the specified spfile +data1/rac-cluster/asmparameterfile/spfile.ora
memory_target defaulting to 1128267776.
* instance_number obtained from css = 1, checking for the existence of node 0...
* node 0 does not exist. instance_number = 1
starting oracle instance (normal)
warning: you are trying to use the memory_target feature. this feature requires the /dev/shm file system to be mounted for at least 1140850688 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. please fix this so that memory_target can work as expected. current available is 525660160 and used is 0 bytes. ensure that the mount point is /dev/shm for this directory.
license_max_session = 0
license_sessions_warning = 0
initial number of cpu is 1
private interface 'eth1:1' configured from gpnp for use as a private interconnect.
[name='eth1:1', type=1, ip=169.254.162.219, mac=08-00-27-54-4c-ad, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
public interface 'eth0' configured from gpnp for use as a public interface.
[name='eth0', type=1, ip=192.168.57.225, mac=08-00-27-35-fe-56, net=192.168.57.0/24, mask=255.255.255.0, use=public/1]
cell communication is configured to use 0 interface(s):
cell ip affinity details:
numa status: non-numa system
cellaffinity.ora status: n/a
cell communication will use 1 ip group(s):
grp 0:
picked latch-free scn scheme 2
using log_archive_dest_1 parameter default value as /u01/app/11.2.0/grid/dbs/arch
autotune of undo retention is turned on.
license_max_users = 0
sys auditing is disabled
starting up:
oracle database 11g enterprise edition release 11.2.0.4.0 - production
with the real application clusters and automatic storage management options.
oracle_home = /u01/app/11.2.0/grid
system name: linux
node name: rac1.bys.com
release: 2.6.32-200.13.1.el5uek
version: #1 smp wed jul 27 20:21:26 edt 2011
machine: i686
warning: using default parameter settings without any parameter file
cluster communication is configured to use the following interface(s) for this instance
169.254.162.219
cluster interconnect ipc version:oracle udp/ip (generic)
ipc vendor 1 proto 2
sun may 08 21:07:43 2016
pmon started with pid=2, os id=6577
sun may 08 21:07:43 2016
psp0 started with pid=3, os id=6581
sun may 08 21:07:44 2016
vktm started with pid=4, os id=6585 at elevated priority
vktm running at (1)millisec precision with dbrm quantum (100)ms
sun may 08 21:07:44 2016
gen0 started with pid=5, os id=6591
sun may 08 21:07:44 2016
diag started with pid=6, os id=6595
sun may 08 21:07:44 2016
ping started with pid=7, os id=6599
sun may 08 21:07:44 2016
dia0 started with pid=8, os id=6603
sun may 08 21:07:44 2016
lmon started with pid=9, os id=6607
sun may 08 21:07:45 2016
lmd0 started with pid=10, os id=6611
* load monitor used for high load check
* new low - high load threshold range = [960 - 1280]
sun may 08 21:07:45 2016
lms0 started with pid=11, os id=6615 at elevated priority
sun may 08 21:07:45 2016
lmhb started with pid=12, os id=6621
sun may 08 21:07:45 2016
mman started with pid=13, os id=6625
sun may 08 21:07:45 2016
dbw0 started with pid=14, os id=6629
sun may 08 21:07:45 2016
lgwr started with pid=15, os id=6633
sun may 08 21:07:45 2016
ckpt started with pid=16, os id=6637
sun may 08 21:07:45 2016
smon started with pid=17, os id=6641
sun may 08 21:07:45 2016
rbal started with pid=18, os id=6645
sun may 08 21:07:45 2016
gmon started with pid=19, os id=6649
sun may 08 21:07:45 2016
mmon started with pid=20, os id=6653
sun may 08 21:07:45 2016
mmnl started with pid=21, os id=6657
lmon registered with nm - instance number 1 (internal mem no 0)
reconfiguration started (old inc 0, new inc 2)
asm instance
list of instances:
1 (myinst: 1)
global resource directory frozen
* allocate domain 0, invalid = true
communication channels reestablished
master broadcasted resource hash value bitmaps
non-local process blocks cleaned out
lms 0: 0 gcs shadows cancelled, 0 closed, 0 xw survived
set master node info
submitted all remote-enqueue requests
dwn-cvts replayed, valblks dubious
all grantable enqueues granted
post smon to start 1st pass ir
submitted all gcs remote-cache requests
post smon to start 1st pass ir
fix write in gcs resources
reconfiguration complete
sun may 08 21:07:46 2016
lck0 started with pid=22, os id=6661
oracle_base not set in environment. it is recommended
that oracle_base be set in the environment
sun may 08 21:07:48 2016
sql> alter diskgroup all mount /* asm agent call crs *//* {0:0:2} */
note: diskgroup used for voting files is:
data1
diskgroup used for ocr is:data1
note: cache registered group data1 number=1 incarn=0xae0a68c1
note: cache began mount (first) of group data1 number=1 incarn=0xae0a68c1
note: assigning number (1,0) to disk (/dev/asm-diskb)
note: gmon heartbeating for grp 1
gmon querying group 1 at 3 for pid 24, osid 6665
note: cache opening disk 0 of grp 1: data1_0000 path:/dev/asm-diskb
note: f1x0 found on disk 0 au 2 fcn 0.0
note: cache mounting (first) external redundancy group 1/0xae0a68c1 (data1)
* allocate domain 1, invalid = true
note: attached to recovery domain 1
note: cache recovered group 1 to fcn 0.1846
note: redo buffer size is 256 blocks (1053184 bytes)
sun may 08 21:07:55 2016
note: lgwr attempting to mount thread 1 for diskgroup 1 (data1)
process lgwr (pid 6633) is running at high priority qos for exadata i/o
note: lgwr found thread 1 closed at aba 71.485
note: lgwr mounted thread 1 for diskgroup 1 (data1)
note: lgwr opening thread 1 at fcn 0.1846 aba 72.486
note: cache mounting group 1/0xae0a68c1 (data1) succeeded
note: cache ending mount (success) of group data1 number=1 incarn=0xae0a68c1
sun may 08 21:07:55 2016
note: instance updated compatible.asm to 11.2.0.0.0 for grp 1
success: diskgroup data1 was mounted
success: alter diskgroup all mount /* asm agent call crs *//* {0:0:2} */
sql> alter diskgroup all enable volume all /* asm agent *//* {0:0:2} */
success: alter diskgroup all enable volume all /* asm agent *//* {0:0:2} */
sun may 08 21:07:57 2016
warning: failed to online diskgroup resource ora.data1.dg (unable to communicate with crsd/ohasd)
note: attempting voting file refresh on diskgroup data1
note: refresh completed on diskgroup data1
. found 1 voting file(s).
note: voting file relocation is required in diskgroup data1
note: attempting voting file relocation on diskgroup data1
note: successful voting file relocation on diskgroup data1
sun may 08 21:07:57 2016
note: [crsd.bin@rac1.bys.com (tns v1-v3) 6684] opening ocr file
starting background process asmb
sun may 08 21:07:57 2016
asmb started with pid=26, os id=6705
sun may 08 21:07:57 2016
note: client +asm1:+asm registered, osid 6709, mbr 0x0
sun may 08 21:08:08 2016
sql> alter diskgroup data2 mount /* asm agent *//* {1:23346:2} */
note: cache registered group data2 number=2 incarn=0x29ba68c3
note: cache began mount (first) of group data2 number=2 incarn=0x29ba68c3
note: assigning number (2,1) to disk (/dev/asm-diskd)
note: assigning number (2,0) to disk (/dev/asm-diskc)
sun may 08 21:08:14 2016
note: gmon heartbeating for grp 2
gmon querying group 2 at 7 for pid 30, osid 6876
note: cache opening disk 0 of grp 2: data2_0000 path:/dev/asm-diskc
note: f1x0 found on disk 0 au 2 fcn 0.0
note: cache opening disk 1 of grp 2: data2_0001 path:/dev/asm-diskd
note: cache mounting (first) external redundancy group 2/0x29ba68c3 (data2)
sun may 08 21:08:15 2016
* allocate domain 2, invalid = true
sun may 08 21:08:15 2016
note: attached to recovery domain 2
note: cache recovered group 2 to fcn 0.5980
note: redo buffer size is 256 blocks (1053184 bytes)
sun may 08 21:08:15 2016
note: lgwr attempting to mount thread 1 for diskgroup 2 (data2)
note: lgwr found thread 1 closed at aba 70.929
note: lgwr mounted thread 1 for diskgroup 2 (data2)
note: lgwr opening thread 1 at fcn 0.5980 aba 71.930
note: cache mounting group 2/0x29ba68c3 (data2) succeeded
note: cache ending mount (success) of group data2 number=2 incarn=0x29ba68c3
sun may 08 21:08:15 2016
note: instance updated compatible.asm to 11.2.0.0.0 for grp 2
success: diskgroup data2 was mounted
success: alter diskgroup data2 mount /* asm agent *//* {1:23346:2} */
sun may 08 21:08:15 2016
note: diskgroup resource ora.data2.dg is updated
sun may 08 21:09:04 2016
alter system set local_listener=' (description=(address_list=(address=(protocol=tcp)(host=192.168.57.227)(port=1521))))' scope=memory sid='+asm1';
[grid@rac1 trace]$
[grid@rac1 trace]$ crsctl stat res -t
--------------------------------------------------------------------------------
name target state server state_details
--------------------------------------------------------------------------------
local resources
--------------------------------------------------------------------------------
ora.data1.dg
online online rac1
ora.data2.dg
online online rac1
ora.listener.lsnr
online online rac1
ora.asm
online online rac1 started
ora.gsd
offline offline rac1
ora.net1.network
online online rac1
ora.ons
online online rac1
--------------------------------------------------------------------------------
cluster resources
--------------------------------------------------------------------------------
ora.listener_scan1.lsnr
1 online online rac1
ora.cvu
1 online online rac1
ora.oc4j
1 online online rac1
ora.rac.db
1 offline offline instance shutdown
2 offline offline
ora.rac.sales.svc
1 offline offline
2 offline offline
ora.rac1.vip
1 online online rac1
ora.rac2.vip
1 online intermediate rac1 failed over
ora.scan1.vip
1 online online rac1
##############################################################################
##############################################################################
##############################################################################
Experiment 2:
1. After spmove, verify that the kfed-visible disk-header pointer changes and that the GPnP profile is updated as well
[grid@rac1 peer]$ kfed read /dev/asm-diskb|grep spfile
kfdhdb.spfile: 58 ; 0x0f4: 0x0000003a
Now move it -- an spfile can only be moved to a different disk group:
asmcmd> pwd
+data1/rac-cluster/asmparameterfile
asmcmd> spmove registry.253.857644239 +data1/rac-cluster/spfileasm.ora
ora-15056: additional error message
ora-17502: ksfdcre:4 failed to create file +data1/rac-cluster/spfileasm.ora
ora-15268: internal oracle file +data1.253.1 already exists.
ora-06512: at line 7 (dbd error: ocistmtexecute)
asmcmd> spmove registry.253.857644239 +data2/spfileasm.ora
Verify:
asmcmd> cd +data2
asmcmd> ls
rac/
rac-cluster/
spfileasm.ora
asmcmd> spget
+data2/spfileasm.ora
asmcmd>
asmcmd> cd +data1/rac-cluster/asmparameterfile/
asmcmd-8002: entry 'asmparameterfile' does not exist in directory '+data1/rac-cluster/'
set linesize 140 pagesize 1400
col "file name" format a40
set head on
select name "file name",
       au_kffxp "au number",
       number_kffxp "file number",
       disk_kffxp "disk number",
       group_kffxp "group number"
  from x$kffxp, v$asm_alias
 where group_kffxp = group_number
   and number_kffxp = file_number
   and name in ('registry.253.857644239')
 order by disk_kffxp, au_kffxp;
file name au number file number disk number group number
---------------------------------------- ---------- ----------- ----------- ------------
spfileasm.ora 1977 253 0 2
sql> col path for a40
sql> select disk_number,path,group_number,name from v$asm_disk;
disk_number path group_number name
----------- ---------------------------------------- ------------ ------------------------------
1 /dev/asm-diskd 2 data2_0001
0 /dev/asm-diskc 2 data2_0000
0 /dev/asm-diskb 1 data1_0000
[grid@rac1 peer]$ kfed read /dev/asm-diskb|grep spfile
kfdhdb.spfile: 0 ; 0x0f4: 0x00000000
[grid@rac1 peer]$ kfed read /dev/asm-diskc|grep spfile
kfdhdb.spfile: 1977 ; 0x0f4: 0x000007b9
[grid@rac1 peer]$ kfed read /dev/asm-diskd|grep spfile
kfdhdb.spfile: 0 ; 0x0f4: 0x00000000
[grid@rac1 peer]$
[grid@rac1 peer]$ pwd
/u01/app/11.2.0/grid/gpnp/rac1/profiles/peer
[grid@rac1 peer]$ ls -lrt
total 16
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 20:52 profile.xml1
-rw-r--r-- 1 grid oinstall 1876 may 9 11:38 profile.old
-rw-r--r-- 1 grid oinstall 1854 may 9 11:58 profile.xml
[grid@rac1 peer]$ cat profile.xml
2mofriwog5xptit2qe/pe9e0zcc=w8h6ou1pqg9xjxmq3lvkh6cgszasftwiqkonb5okljtnr/gj2puzs5wtnx7xpru5v0uhb9a/lb2bzj265vv8lrzq2mt0aao7m5jflnfkosg2tdscdbp8clh1tt81snopie65rlrmrvagvgius+2mhk7cj1mnckifmuyvu+us38s=[grid@rac1 peer]$
[grid@rac1 peer]$
[grid@rac1 peer]$
[grid@rac1 peer]$ cat profile.old
>/>xvmehrofn1wxceetl9qyhoewsxe=c99z3hlrubrxjpnvyoybe2kyr1oyn4wjbtdmyjbei2urhcvdyjv7lucvefl0zvihgtop5gjnh3r42itnn6jivee3l9zzxedzbvyoeaet0dg3rhleuj1k8+pwqmor+sxaigogjgmomoorajca5ip6tyy5tgbzx6zdcb5ub+khdzw=[grid@rac1 peer]$
[grid@rac1 peer]$
[grid@rac1 peer]$ cd -
/home/grid
[grid@rac1 ~]$ cd /u01/app/11.2.0/grid/gpnp/profiles/peer/
[grid@rac1 peer]$ ls
profile_orig.xml profile.xml profile.xml1
[grid@rac1 peer]$ ls -lrt
total 12
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1891 may 7 22:40 profile.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 20:50 profile.xml1
####################################################
####################################################
Experiment 3: after setting the ASM spfile back to the correct value, pending.xml still exists in the GPnP profile directory even after crsctl stop/start has and an OS reboot;
in Experiment 2, however, pending.xml was gone after spmove.
[grid@rac1 peer]$ date
mon may 9 10:18:58 cst 2016
[grid@rac1 peer]$ asmcmd
asmcmd> spget
+data1/rac-cluster/asmparameterfile/spfile.ora
asmcmd> cd +data1/rac-cluster/asmparameterfile/
asmcmd> ls
registry.253.857644239
asmcmd> spset +data1/rac-cluster/asmparameterfile/registry.253.857644239
asmcmd>
asmcmd> spget
+data1/rac-cluster/asmparameterfile/registry.253.857644239
asmcmd>
[grid@rac1 peer]$ ls -lrt
total 20
-rw-r--r-- 1 grid oinstall 1828 sep 7 2014 profile_orig.xml
-rw-r--r-- 1 grid oinstall 1891 may 8 20:52 profile.xml1
-rw-r--r-- 1 grid oinstall 1879 may 8 21:04 profile.old
-rw-r--r-- 1 grid oinstall 1891 may 9 10:19 profile.xml
-rw-r--r-- 1 grid oinstall 1891 may 9 10:20 pending.xml
[grid@rac1 peer]$
[grid@rac1 peer]$
[grid@rac1 peer]$ cat pending.xml
ounycqofao3v51dlgmaxy2goaga=tdyvuciccaebsgt3/faw8wxrcfo3tf0rh2pdnmy2bbo3ypijx3mrzwfwz5aoaeyqd69gfiepabh9udoxbff4mxfjjz8deozrb7nr2on/su2qnadj8vnep9htph7oyujmmsgb8ncyww2+yrm1umy0otblapjr/uiunwctt/4na84=[grid@rac1 peer]$