Collect logs: clsnap -d '/tmp' -p2 -n 'node1,node2'
or
snap -e
If it is a C-SPOC problem: /tmp/cspoc.log <= more details there
If the VG configuration is inconsistent between nodes:
1) Validate that all disks in the VG are known on both nodes
node1 # lspv | grep vg1
hdisk3 005a2b2a4dc045f3 vg1 active
hdisk4 005a2b2ab58a59b2 vg1 active
node2 # lspv | grep vg1
hdisk3 005a2b2a4dc045f3 vg1
hdisk4 is missing ....
If HACMP is > 5.4 and the VG is not enhanced concurrent (which is the case here, since lspv reports the VG as "active" and not "concurrent"), then look the disk up by its PVID:
node2 # lspv | grep 005a2b2ab58a59b2
hdisk4 005a2b2ab58a59b2 None
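To check all of a VG's PVIDs in one go, a minimal sketch (it assumes passwordless root ssh from node1 to node2 and vg1 as the VG name; adapt both to your cluster):
node1 # for pvid in $(lspv | awk '$3 == "vg1" {print $2}')
> do
>     ssh node2 lspv | grep -q $pvid && echo "$pvid : known on node2" || echo "$pvid : MISSING on node2"
> done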
2) Integrate it into the VG correctly
node1 # lqueryvg -p hdisk3 -T > /usr/es/sbin/cluster/etc/vg/vg1 <= this is to save the good timestamp for the cluster
node1 # varyonvg -ub vg1 <= From now on, NO MORE manipulation on vg1, on node 1, must occur...
node2 # importvg -L vg1 hdisk3
vg1
node2 # lqueryvg -p hdisk3 -T > /usr/es/sbin/cluster/etc/vg/vg1 <= This way, the timestamp is correct on both nodes.
node1 # varyonvg vg1 <= Things are back to normal now.
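To double check that both nodes now agree, compare the on-disk timestamp with the saved file on each node (the same commands as above, only read this time):
node1 # lqueryvg -p hdisk3 -T ; cat /usr/es/sbin/cluster/etc/vg/vg1
node2 # lqueryvg -p hdisk3 -T ; cat /usr/es/sbin/cluster/etc/vg/vg1 <= the four values should be identical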
This is the simplest way to redefine the VG correctly on the backup node... but that is when things go smoothly, and it is not always that way.
First, if the PVID is not known on node2 at all: is the zoning correctly defined? If it is, you MUST have a disk showing as "none None" (no PVID, no VG) on your backup node. To get it correctly defined on the second node, run rmdev then cfgmgr on node2 while vg1 is unlocked on node1 with the varyonvg -ub command, as in the sketch below.
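A sketch of that sequence; the "none None" disk name varies, so hdiskX below is a placeholder:
node1 # varyonvg -ub vg1 <= unlock vg1 so node2 can read the PVID from the disk
node2 # lspv | grep None <= spot the "none None" disk; hdiskX stands for it below
node2 # rmdev -dl hdiskX
node2 # cfgmgr
node2 # lspv | grep 005a2b2ab58a59b2 <= the PVID should now be visible on node2
node1 # varyonvg vg1 <= re-lock vg1 on node1 once node2 is correct
Once the PVID is visible, go back to step 2) above to integrate the disk into vg1.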
If the PVID used to be known and no longer is, you have 'phantom' disks: some disks are stuck in the "Defined" state on node2, while others have been configured in their place with no definition (none = no PVID, None = no VG). The right way to define them correctly is to remove the "none None" duplicate and to mkdev the Defined one again, for example:
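In this example, hdisk4 is assumed to be the one left "Defined" and hdisk5 the duplicate that appeared as "none None" (check lsdev -Cc disk to see which is which on your system):
node2 # lsdev -Cc disk <= hdisk4 shows as Defined, hdisk5 as the Available duplicate
node2 # rmdev -dl hdisk5 <= remove the "none None" duplicate
node2 # mkdev -l hdisk4 <= hdisk4 comes back Available with its original definition
node2 # lspv | grep 005a2b2ab58a59b2 <= the PVID shows on hdisk4 again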
For the timestamp, since HACMP 5.4 it is synchronised by the clverify command.