Configure libvirt Fencing (KVM Fencing)

 

Let's get started. I will show how to configure fencing using the Luci web tool.

Login to the Luci tool, then follow the links as below:

Click Fence Devices ---> Add ---> select Fence virt (Multicast Mode) ---> enter kvmfence (or any other name) as the name.

Now you have to assign this fence name to all your cluster nodes.

Click Nodes ---> click each node in turn and repeat the following process:

Click Add Fence Method ---> enter kvmfence as the Method Name ---> click Add Fence Instance ---> select kvmfence as the Fence Device and submit, then enter the name of your virtual machine (VM) as the Domain. This must match the exact name of the VM you created earlier in libvirt/KVM.

Repeat the above process for all the nodes in your cluster.
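For reference, Luci records these choices in /etc/cluster/cluster.conf on the nodes. A rough, illustrative fragment for this setup (node and device names as chosen above; your file will have more attributes) looks like this:

```xml
<clusternodes>
    <clusternode name="node1" nodeid="1">
        <fence>
            <method name="kvmfence">
                <!-- domain must match the libvirt VM name exactly -->
                <device name="kvmfence" domain="node1"/>
            </method>
        </fence>
    </clusternode>
    <!-- repeat the same fence block for node2, node3, ... -->
</clusternodes>
<fencedevices>
    <fencedevice agent="fence_xvm" name="kvmfence"/>
</fencedevices>
```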

Host Configuration

The host is the KVM machine on which the virtual cluster nodes are running.

Install the fence-virtd rpm on your host machine:

# yum install fence-virtd

Verify:

# rpm -q fence-virtd
fence-virtd-0.2.3-18.el6.x86_64

I got the above version on my host; your version may differ, and that is fine.

Next, you need to create a key. This key is used for authentication during the fencing process; you can think of it as the fencing key.

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
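On a freshly installed host the /etc/cluster directory may not exist yet, and the key should not be world-readable. A slightly fuller sketch of the same step (run as root):

```shell
# Create the key directory if it is missing, generate a 4 KiB random key,
# and make it readable by root only
mkdir -p /etc/cluster
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
chmod 600 /etc/cluster/fence_xvm.key
```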

Push the fencing key to all your cluster nodes.

# scp /etc/cluster/fence_xvm.key root@node1:/etc/cluster
# scp /etc/cluster/fence_xvm.key root@node2:/etc/cluster
# scp /etc/cluster/fence_xvm.key root@node3:/etc/cluster 

I have three nodes, so I copied the key to all three with scp.

Also, make sure each node can reach the other nodes by name. For a small cluster or lab experiments, you can maintain a /etc/hosts file with hostname-to-IP mappings.
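For example, a minimal /etc/hosts entry set might look like this (the addresses below are placeholders; substitute the ones from your own lab network):

```
192.168.122.11   node1
192.168.122.12   node2
192.168.122.13   node3
```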

Configure fence_virtd daemon

On your host machine you should configure the fence_virtd daemon.

This configuration prepares the daemon to listen on the multicast network your nodes are using.

You can find the complete configuration file at the end of this article.

Start the fence_virtd configuration:

# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on the default network
interface.  In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: (enter the bridge to which your cluster nodes are attached)

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: (the same file you generated with the dd command above)

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

The libvirt backend module is designed for single desktops or
servers.  Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
fence_virtd {
    listener = "multicast";
    backend = "libvirt";
    module_path = "/usr/lib64/fence-virt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        family = "ipv4";
        port = "1229";
        interface = "virbr0";
    }

}

backends {
    libvirt {
        uri = "qemu:///system";
    }

}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

This saves the above setup to /etc/fence_virt.conf.

If a firewall is running on your host, open the required ports:


# iptables -I INPUT -p udp -m state --state NEW -m udp --dport 1229 -j ACCEPT 
# iptables -I INPUT -d 225.0.0.12/32 -p igmp -j ACCEPT
 
(do not restart iptables at this point, or these runtime rules will be lost)

To make the rules permanent, add them to /etc/sysconfig/iptables:
 
-A INPUT -p udp -m state --state NEW -m udp --dport 1229 -j ACCEPT 
-A INPUT -d 225.0.0.12/32 -p igmp -j ACCEPT
 
Then restart iptables:
 
# service iptables restart  

Start fence_virtd daemon:

# /etc/init.d/fence_virtd restart

Enable it to start automatically after reboots:
# chkconfig fence_virtd on

Verify if fencing is working on KVM/libvirt

root node1 # fence_xvm -dddd -o list
node1                f67f0067-3587-b0b5-5577-cde570e01fd7 on
node2                6e8387da-4c4a-40de-cd36-53b2e8657b24 on
node3                caddeddc-a41d-37bc-b3d3-b10aabf744ce on
node4                e51b3780-d812-b116-8c21-d579ecc4d140 on

If your fencing works, you should get the above list on every cluster node; otherwise you will see the status below and need to re-check your fence settings:

Sending to 225.0.0.12 via 172.19.1.1
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 192.168.122.175
Waiting for connection from XVM host daemon.
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 127.0.0.1
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 172.17.1.1
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 172.16.1.1
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 172.18.1.1
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 172.19.1.1
Setting up ipv4 multicast send (225.0.0.12:1229)
Joining IP Multicast group (pass 1)
Joining IP Multicast group (pass 2)
Setting TTL to 2 for fd4
ipv4_send_sk: success, fd = 4
Opening /dev/urandom
Sending to 225.0.0.12 via 192.168.122.175
Waiting for connection from XVM host daemon.
Timed out waiting for response
Operation failed

If you get "Waiting for connection" followed by "Timed out waiting for response" and "Operation failed", reset your fence settings as below:

backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        interface = "virbr0";
        port = "1229";
        family = "ipv4";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

Note that interface must be set to the bridge your cluster nodes are attached to (virbr0 in my case, or whichever bridge your KVM host uses), and each VM must have an interface on that bridge, with the KVM host as its gateway.

Manually Configure the multicast IP in Luci 

 

Login to the Luci interface, then click on Configure and set the multicast IP address to the one used by fence_virtd (225.0.0.12 in this example).

 

Test your fence 

 

On node2, display cluster status:

# clustat -i 5

The -i option with a value of 5 will refresh cluster status every 5 seconds for you automatically.

On node1, stop the network service:

# service network stop

Watching the cluster status on node2, you will see node1 go offline after a few seconds.

Monitor node1's console window: you will see it reboot automatically, fenced by the setup you have done, and after some time node1 re-joins your cluster.
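Besides stopping the network, you can also trigger a fence manually from any node that has the key, to confirm the whole path end to end (node1 here is the libvirt domain name of the target VM):

```
# fence_xvm -o reboot -H node1
```

If everything is configured correctly, node1 reboots immediately.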

Cheers!

Please do leave your comments.
