Tuesday, August 3, 2010

How do I set up Red Hat Cluster Suite 4?

First install all of the required packages for Red Hat Cluster Suite 4. Once all of the Cluster Suite packages are installed, execute the following command to launch the cluster configuration interface:

system-config-cluster

This tool provides the capability to configure the cluster members, fencing, resources and services for this cluster.
First, configure the cluster nodes. Click on the Cluster Nodes label. A button labeled Add a Cluster Node will appear in the bottom right-hand corner (Figure 1). Click this button and add the hostname of each cluster node, using the output of uname -n on each node for the member name. Give each node 1 quorum vote, unless there is a large system that needs to be weighted more heavily (Figure 2).
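Under the hood, the tool records each member in /etc/cluster/cluster.conf. A minimal sketch of what the nodes section ends up looking like, assuming two hypothetical hosts named node1.example.com and node2.example.com:

```xml
<!-- Fragment of /etc/cluster/cluster.conf; the node names are
     illustrative -- use the output of `uname -n` on your own nodes. -->
<clusternodes>
  <clusternode name="node1.example.com" votes="1"/>
  <clusternode name="node2.example.com" votes="1"/>
</clusternodes>
```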

Next, set up fencing. Fencing reboots a node when it fails; it is absolutely required in Red Hat Cluster Suite 4, since fencing prevents data corruption. First, the nodes must be plugged into a power switch. Click on the Fence Devices label and then click the Add a Fence Device button in the bottom right-hand corner of the interface. Select the make of the power switch and enter the switch-specific information (Figure 3). Once this is complete, the host-specific fencing information can be configured.

Select each individual node under the Cluster Nodes label. Click the Manage Fencing For This Node button to bring up the fencing configuration dialog (Figure 4). Click on the Add a New Fence Level button; this will create Fence-Level-1. Click on the Fence-Level-1 label and then click the Add a New Fence to this Level button. Select the fence device that was configured in the previous step, enter any node-specific information, such as which power socket this node is plugged into, and then click OK (Figure 5).
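The fence device and per-node fence level configured above also land in cluster.conf. A sketch, assuming a hypothetical APC power switch at 10.0.0.5 with this node plugged into socket 1 (every value here is made up for illustration):

```xml
<!-- Fragment of /etc/cluster/cluster.conf; device name, agent,
     address, credentials, and port are all illustrative. -->
<fencedevices>
  <fencedevice name="apc-switch" agent="fence_apc"
               ipaddr="10.0.0.5" login="apc" passwd="secret"/>
</fencedevices>
<!-- And inside the matching <clusternode> entry: -->
<fence>
  <method name="1">
    <device name="apc-switch" port="1"/>
  </method>
</fence>
```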

Services set-up. First, add resources. The available resources are:
  • GFS – This is a Global File System resource, create this if you are mounting a GFS file system
  • File System – The shared partition the service’s data will be on
  • IP Address – The IP address that clients will connect to the service through
  • NFS Mount – Use this option if there is no shared storage and instead the system is using an NFS mount for the service’s shared data
  • Script – This is the init script that will control the service
Note: There are other NFS options, but they will be changing as of Red Hat Enterprise Linux 4 Update 3, so they will not be described here.
A service will generally use a few of the above resources, not all of them. For example, to set up Apache as a service, I would first create an IP Address resource for the clients to connect to, then a Script resource pointing to /etc/init.d/httpd, and then a File System resource pointing to the shared storage where the web pages are held. The File System resource could in this case be replaced with an NFS Mount of the web pages. Once the resources are created, create a service and add the resources to it by clicking on the orange Services label and then clicking the Create a Service button.
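For the Apache example above, the saved configuration would contain something along these lines (a hedged sketch; the service name, IP address, device, and mount point are made up):

```xml
<!-- Fragment of /etc/cluster/cluster.conf; all names and values
     are illustrative placeholders. -->
<rm>
  <resources>
    <ip address="10.0.0.100" monitor_link="1"/>
    <fs name="webdata" device="/dev/sdb1"
        mountpoint="/var/www/html" fstype="ext3"/>
    <script name="httpd" file="/etc/init.d/httpd"/>
  </resources>
  <service name="webserver" autostart="1">
    <ip ref="10.0.0.100"/>
    <fs ref="webdata"/>
    <script ref="httpd"/>
  </service>
</rm>
```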
Once you are satisfied with your configuration, choose File->Save from the menu. This saves the configuration to /etc/cluster/cluster.conf. When the cluster is initially set up, it is best to use scp to copy the configuration over to the other nodes in the cluster in case anything was missed. Once this is complete, start the cluster services with the following commands:

service ccsd start

service cman start

service fence start

service rgmanager start

If there are problems with this step, ensure any firewalls are off and all of the nodes can ping each other.
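The copy-and-start steps above can be sketched as a short script. The node names are hypothetical, and RUN=echo keeps it a dry run so the commands are printed rather than executed; set RUN= (empty) to run them for real:

```shell
#!/bin/sh
# Hypothetical peer node names -- substitute the output of `uname -n`
# on your own nodes.  RUN=echo makes this a dry run.
RUN=echo
PEERS="node2.example.com node3.example.com"

# Distribute the configuration saved by the GUI to every other node.
for n in $PEERS; do
    $RUN scp /etc/cluster/cluster.conf root@"$n":/etc/cluster/cluster.conf
done

# Start the cluster daemons in order; stop at the first failure.
for svc in ccsd cman fence rgmanager; do
    $RUN service "$svc" start || exit 1
done
```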

Ubuntu 10.10 Maverick Meerkat schedule changed!

The release schedule of Ubuntu 10.10 Maverick Meerkat has changed again.
As per the official Ubuntu website, the following dates have been finalized:
Alpha 1 -> June 3rd 2010
Alpha 2 -> July 1st 2010
Alpha 3 -> August 5th 2010
Beta -> September 2nd 2010
Release candidate -> October 1st 2010
Final Release (GA) -> October 10th 2010