KVM with GFS2 shared storage on clustered LVM

We will create a clustered GFS2 storage volume and connect our 3 KVM hosts to it, so that we will be able to live migrate guests from one host to another. Ok, let’s begin 🙂

First, install the KVM prerequisites on every host (I chose Virtual Machine Host during installation, so my systems are already able to create guests):

yum install kvm kmod-kvm qemu libvirt
modprobe kvm-intel ( or modprobe kvm-amd )
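If you want a quick sanity check that virtualization is ready (just a suggestion, not part of the original walkthrough), confirm that the module is loaded and that libvirtd is running:

lsmod | grep kvm
service libvirtd start
chkconfig libvirtd on
virsh nodeinfo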

It will be easier if we disable SELinux and iptables.
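One way to do that on a RHEL/CentOS-style system (the sed line is just one way of making the SELinux change permanent; it takes effect after a reboot):

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
service iptables stop
chkconfig iptables off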

Ok, now the good stuff. In the post https://henroo.wordpress.com/2011/08/03/booting-from-san-on-multipath-device-hp-p2000-storage/ we created our /dev/mapper/mpathX devices. Now we need a similar thing, but this time on the HP P2000 we have to map our disk to all three hosts.

Ok, so I assume that on all three servers we can see the same LUN from the storage:
[root@robot1 ~]# multipath -ll
mpathe (3600c0ff00011dbc282b7374e01000000) dm-5 HP,P2000 G3 FC
size=2.5T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 1:0:0:4 sde 8:64 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 1:0:1:4 sdj 8:144 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:0:4 sdo 8:224 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 2:0:1:4 sdt 65:48 active ready running
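A simple way to confirm it really is the same LUN everywhere is to compare the WWID (the 3600c0ff0… string above) on each host, for example:

multipath -ll | grep 3600c0ff00011dbc282b7374e01000000

The friendly name (mpathe here) may differ between hosts, but the WWID must be identical.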

Ok, now we can install our cluster software. We will need this to create the clustered storage.
On every host install:
yum groupinstall -y Clustering
yum groupinstall -y cluster-storage

Configure /etc/hosts so that every node has the same records:
[root@robot3 ~]# cat /etc/hosts
10.1.1.100 robot1
10.1.1.101 robot2
10.1.1.102 robot3
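A quick check that name resolution and connectivity work from every node (purely a sanity check):

for n in robot1 robot2 robot3; do ping -c 1 $n; done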

For our demonstration purposes, /etc/cluster/cluster.conf should look like this on every node:

<?xml version="1.0"?>
<cluster config_version="4" name="TEST_CLUSTER">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="21"/>
  <clusternodes>
    <clusternode name="robot1" nodeid="1" votes="1">
      <fence/>
    </clusternode>
    <clusternode name="robot2" nodeid="2" votes="1">
      <fence/>
    </clusternode>
    <clusternode name="robot3" nodeid="3" votes="1">
      <fence/>
    </clusternode>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
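I edit the file on one node and copy it to the others, for example from robot1 (this assumes root SSH access between the nodes):

scp /etc/cluster/cluster.conf robot2:/etc/cluster/
scp /etc/cluster/cluster.conf robot3:/etc/cluster/

Note that no fence devices are defined here; for anything beyond a test setup you should configure real fencing.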

Great. Now we need to enable the cman, clvmd and gfs2 services on every node so that the cluster starts at boot:
chkconfig cman on
chkconfig clvmd on
chkconfig gfs2 on

You also need to inform LVM that you will be using clustered locking. To do this, edit the /etc/lvm/lvm.conf file and change the line:
locking_type = 3
or use the command:
lvmconf --enable-cluster
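You can verify that the change took effect with something like:

grep locking_type /etc/lvm/lvm.conf

It should show locking_type = 3 on the uncommented line.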

If you can, reboot all nodes after this step. When they come back up you should have a nice 3-node cluster running 🙂
Sometimes I had problems with the cluster services starting automatically, so if they don’t start you can start them manually, as shown below.
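The usual order is cman first, then clvmd, then gfs2, on every node; cman_tool lets you check that all three nodes have joined:

service cman start
service clvmd start
service gfs2 start
cman_tool nodes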
Now we will create the clustered Volume Group and Logical Volume:
pvcreate /dev/mapper/mpathe
vgcreate Cluster_VG /dev/mapper/mpathe
lvcreate -L 1400G -n Cluster_LV Cluster_VG
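With clvmd running and locking_type = 3, the volume group should come up as clustered automatically; if you want to be explicit you can pass -c y to vgcreate. You can check the clustered flag (the c in the attribute column) with:

vgs -o vg_name,vg_attr Cluster_VG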

Now, using the lvs command, check that you can see the logical volume Cluster_LV on every node:
lvs
  LV         VG         Attr   LSize Origin Snap%  Move Log Copy%  Convert
  Cluster_LV Cluster_VG -wi-ao 1.37t

Now we can create a GFS2 filesystem on our clustered logical volume:
mkfs.gfs2 -p lock_dlm -t TEST_CLUSTER:cluster_fs_test -j 3 /dev/mapper/Cluster_VG-Cluster_LV
and mount it on every node:
mkdir /gfs2
mount /dev/mapper/Cluster_VG-Cluster_LV /gfs2
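A quick way to confirm that the nodes really share the filesystem is to create a file on one of them and check that it shows up on the others (the file name is just an example):

[root@robot1 ~]# touch /gfs2/hello_from_robot1
[root@robot2 ~]# ls /gfs2
hello_from_robot1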

It’s recommended to add every GFS2 filesystem to /etc/fstab, because your system will not shut down cleanly if the filesystem is not unmounted. It is also recommended to use the noatime and nodiratime options. Fstab record:
/dev/mapper/Cluster_VG-Cluster_LV /gfs2 gfs2 defaults,noatime,nodiratime 0 0
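After adding the entry you can re-mount the filesystem through fstab and check that the options were picked up, for example:

umount /gfs2
mount /gfs2
mount | grep /gfs2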

Uff, almost done 🙂 Now we need to open virt-manager, connect to our nodes, and add on every one a storage pool that points to our clustered GFS2 storage: /gfs2.
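If you prefer the command line to virt-manager, a directory-backed pool pointing at the GFS2 mount can be defined with virsh on each node (the pool name gfs2_pool is just an example):

virsh pool-define-as gfs2_pool dir --target /gfs2
virsh pool-start gfs2_pool
virsh pool-autostart gfs2_pool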
Now live migration should work fine.
You should also remember to configure the network interfaces exactly the same on all nodes if your guests have network controllers attached.
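A live migration can also be triggered from the command line, for example (the guest name guest1 is just an illustration, and this assumes SSH access between the hosts):

virsh migrate --live guest1 qemu+ssh://robot2/system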

Unfortunately, using GFS2 has drastically decreased my storage write speed, from around 900 MB/s to 200-300 MB/s, and I haven’t found any solution to this yet.
