/sbin/gfs_mkfs -p lock_dlm -t vmcluster1:vmfs1 -j 8 /dev/sdb

The resulting mount table entry looks like:

    /dev/sdb /gfs2 gfs rw,hostdata=jid=0:id=65537:first=1 0 0

Start /sbin/fenced, then start the services in this order (cman must be up
before clvmd, the GFS mounts need clvmd, and rgmanager goes last):

    /sbin/fenced
    /etc/init.d/cman start
    /etc/init.d/clvmd start
    /etc/init.d/gfs start
    /etc/init.d/gfs2 start
    /etc/init.d/rgmanager start

cman tends to hang in fenced until the other node starts.

LVM how-to: http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html

    lvcreate -L4G -n lv1 vg1    (lvremove to undo)
    mount -t gfs /dev/vg1/lv1 /local/lv1

DRBD cookbook: http://sourceware.org/cluster/wiki/DRBD_Cookbook

A.2. Sharing LVM volumes (from LVM-Howto)

[root@CentOS-5 rc.d]# vgchange -cy /dev/vg1
  Volume group "vg1" successfully changed

From a mailing-list reply on why vgchange on a clustered VG can fail:

Hi Nirmal,

I think what happened here is that you have locking_type = 2 but clvmd is
not running. LVM therefore cannot do the clustered locking needed to "see"
the volume group, and it has to see the volume group before it can change
its attributes. Your choices are to (1) activate the clustered locking
mechanism and leave the volume group clustered, or (2) set locking_type = 1
in /etc/lvm/lvm.conf, at least temporarily, so you can turn the clustered
bit off.

To get the clustered volume active, make sure locking_type = 2 (for RHEL4
and similar) or locking_type = 3 (for RHEL5 and similar) in
/etc/lvm/lvm.conf, then run the clustered LVM manager: service clvmd start.
The clvmd service can only run properly, however, if the rest of the
cluster infrastructure is running. If you are sharing the volume group
between systems, keep the clustered bit on; if not, you can turn it off.
After clvmd is started, or locking_type is changed back to 1, you should be
able to do what you want. For example:

    vgscan
    vgchange -aln
    vgchange -cn rac

Regards,

[root@CentOS-5 rc.d]# man gfs_tool
[root@CentOS-5 rc.d]# gfs_tool list
4175118336 253:2 virtual_cluster:vmfs1.0
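The temporary locking_type switch described in the reply above can be sketched as a small helper. This is only a sketch: the function name set_locking_type is mine, the conf path and VG name vg1 are placeholders, and it assumes the stock "locking_type = N" syntax in lvm.conf; the commented vgchange sequence is the one from the reply.

```shell
# set_locking_type CONF N -- rewrite the locking_type line in an
# lvm.conf-style file. Sketch only; try it on a copy of the file first.
set_locking_type() {
    conf=$1
    n=$2
    # Replace the digit after "locking_type =", keeping indentation/spacing.
    sed -i "s/^\([[:space:]]*locking_type[[:space:]]*=[[:space:]]*\)[0-9]/\1$n/" "$conf"
}

# Intended use on a real node (not run here):
#   set_locking_type /etc/lvm/lvm.conf 1   # temporary local locking
#   vgscan
#   vgchange -aln vg1                      # deactivate the VG on this node
#   vgchange -cn vg1                       # clear the clustered bit
#   set_locking_type /etc/lvm/lvm.conf 3   # back to clustered (RHEL5-style)
```

Editing lvm.conf by hand works just as well; the helper only makes the flip-and-restore step repeatable.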
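The gfs_tool list line above can be picked apart with plain shell parameter expansion. The field meanings assumed here (an internal id, the device major:minor, and the lock table name from mkfs -t with the journal id appended after a dot) are my reading of the capture, not documented guarantees:

```shell
# Split one captured line of `gfs_tool list` output into its fields.
line='4175118336 253:2 virtual_cluster:vmfs1.0'

id=${line%% *}              # first field: internal id (assumed)
rest=${line#* }
dev=${rest%% *}             # second field: device major:minor
table=${rest#* }            # third field: cluster:fsname.journal

cluster=${table%%:*}        # virtual_cluster
fs=${table#*:}
fs=${fs%.*}                 # vmfs1
journal=${table##*.}        # 0 (assumed journal id)

echo "cluster=$cluster fs=$fs journal=$journal dev=$dev"
```

Note the cluster name here (virtual_cluster) differs from the one used in the gfs_mkfs command at the top (vmcluster1); these were evidently separate test runs.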