Official companion web site for WebSphere Application Server Administration Using Jython

Create a Cluster

Most recently modified on 2010-07-20

From the point of view of somebody who administers WebSphere Application Server, a cluster is nothing more than a container that holds as many as three lists:

  • The first is a list of zero or more servers that are the members of this cluster. Those servers can be:
    • Application servers. This is the most common possibility
    • Proxy servers
    • OnDemand routers
  • If we have a cluster of application servers, there are two additional lists:
    • A list of zero or more applications to be executed on all of those servers at all times
    • A list of zero or more Service Integration Buses to be shared in some way by those application servers
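
You can look at those lists directly from wsadmin. Here is a minimal sketch; it assumes a cluster named MyCluster already exists in your cell:

  # Look up the configuration ID of the cluster by name.
  # The name MyCluster is an assumption; substitute one of your own clusters.
  cluster = AdminConfig.getid('/ServerCluster:MyCluster/')

  # Print the list of member servers. An empty string means an empty list.
  print AdminConfig.showAttribute(cluster, 'members')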

WebSphere Application Server makes a few promises to you when you create a cluster. There are things you can do administratively to change some of the promises. As a rule, you don't want to do any of those things.

  1. WAS promises to keep all servers that are members of this cluster up and running at all times
  2. WAS promises to deploy all cluster applications to all member servers
  3. WAS promises to start the messaging engine on one member of any cluster that hosts a queue and to hold the messaging engines of any other cluster members on hot standby.

There are two reasons to create a cluster. You might want to distribute work across two or more servers. You might want to protect yourself from server failure by having one or more servers standing by, ready to pick up the load if a server dies. Or you might want to do both.

It is easy to create a cluster. The command is AdminTask.createCluster(). The only information you have to provide is

  • A name for the cluster. This parameter is mandatory
  • Whether or not to create a ReplicationDomain for the cluster. This parameter is optional. The default is false

Here is a sketch of the simplest possible call. The cluster name MyCluster is only an illustration, and we ask for a replication domain just to show the optional parameter:
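
  # Create a cluster named MyCluster and ask WAS to create
  # a replication domain for it as well (the default is false).
  AdminTask.createCluster('[-clusterConfig [-clusterName MyCluster] -replicationDomain [-createDomain true]]')

  # Nothing is permanent until you save the configuration.
  AdminConfig.save()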

Once you create a cluster, all you have is an empty list. There are no servers, no applications, and no message buses connected to the cluster. You will need to do three things (the last two are sketched just after this list):

  • Add members to the cluster
  • Deploy any applications you would like to run on the cluster
  • Add the cluster as a member of any Service Integration Bus you desire
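
Here is a sketch of the last two tasks. The application name, the EAR file path, and the bus name are all assumptions, and the bus must already exist. Adding members is covered next:

  # Deploy an application to every member of the cluster.
  AdminApp.install('/tmp/MyApp.ear', '[-appname MyApp -cluster MyCluster]')

  # Add the cluster as a member of an existing Service Integration Bus.
  AdminTask.addSIBusMember('[-bus MyBus -cluster MyCluster]')

  AdminConfig.save()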

To add members to a cluster, use AdminTask.createClusterMember(). This method accepts several parameters. You must identify the cluster that will contain the new member server, either by its configuration ID or by its name. If you pass the configuration ID, it becomes the target of this method; otherwise, the method takes no target. If you pass the name, it becomes the name parameter of this method. It is a syntax error to pass both a cluster configuration ID and a cluster name. In the sample code, we choose to pass the cluster name. After that, you must pass one or possibly two steps (a sketch of both follows the list):

  • memberConfig step. This step is always mandatory. (See lines 36, 101, and 109 of the sample code)
    • The name of the Node that will hold the new server. This information is mandatory
    • The name of the new server. This information is mandatory
    • Optionally, whether the new server will be part of a replication domain
  • firstMember step. This step is only required when you add your first member server to the cluster. (See line 38 of the sample code) Once you have added your first member server to the cluster, all future members will automatically be configured using this data.
    • The name of the application server template to use when creating this member and every member that will ever be created for this cluster. If you do not supply a template name, you must supply the node name and the server name of an application server to be used as a template for every cluster member that is ever added. Most folks supply a template name. We do on line 38 of the sample code
    • The name of the node that holds an application server that you want to use as a model, a pattern, a template for every cluster member that is ever added. Do not supply this parameter if you supplied the application server template listed above.
    • The name of the application server within the above node that you want to use as a model, a pattern, a template for every cluster member that is ever added. Do not supply this parameter if you supplied the application server template listed above.
    • The name of the node group to which each cluster member must belong. The node group must already exist. This parameter will have an effect when you try to add member servers to the cluster. Any node name you specify when you attempt to add member servers to this cluster must be a part of the node group for this cluster. If you do not choose to supply this optional parameter, all member servers must live in nodes that are part of the DefaultNodeGroup.
    • The name of the core group to which each cluster member must belong. The core group must already exist. This parameter will have an effect when you try to add member servers to the cluster. The member servers you add will automatically become part of the core group you specify here. If you choose not to supply this optional parameter, member servers will automatically be added to the DefaultCoreGroup.
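
Here is a sketch of both steps. Every name in it is an assumption: the cluster MyCluster, the node MyNode, the member servers member1 and member2, and the stock application server template named default:

  # The first member requires both the memberConfig and the firstMember steps.
  AdminTask.createClusterMember('[-clusterName MyCluster '
    + '-memberConfig [-memberNode MyNode -memberName member1 '
    + '-memberWeight 2 -genUniquePorts true -replicatorEntry false] '
    + '-firstMember [-templateName default -nodeGroup DefaultNodeGroup '
    + '-coreGroup DefaultCoreGroup]]')

  # Every later member needs only the memberConfig step.
  AdminTask.createClusterMember('[-clusterName MyCluster '
    + '-memberConfig [-memberNode MyNode -memberName member2 '
    + '-memberWeight 2 -genUniquePorts true -replicatorEntry false]]')

  AdminConfig.save()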

Your choice of core group for member servers of this cluster will have an effect on the scalability of your cluster. Clusters depend on the high availability manager for several services.

  • Memory-to-memory replication
  • Singleton failover
  • Workload management routing
  • On-demand configuration routing

See this InfoCenter article about the high availability manager for additional details. (Although the details are from the InfoCenter for WAS 7.0, they apply to any version of WAS after 6.0)

The communications and other system resources required to support those services rise significantly with each additional server. If all the servers in your cell are members of the DefaultCoreGroup, then as the number of servers in your cell grows, you will start to consume significant system resources. If you make each cluster a member of its own core group, the resource burden drops considerably. In addition, if you go a step further and group application servers that are not part of any cluster into a core group of their own, you can choose to turn off the high availability manager for servers in that core group.
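
Here is a sketch of that arrangement. The core group name MyClusterCG is an assumption, and the cluster's members must already exist:

  # Create a dedicated core group for the cluster.
  AdminTask.createCoreGroup('[-coreGroupName MyClusterCG]')

  # Move every member of MyCluster from DefaultCoreGroup into the new core group.
  AdminTask.moveClusterToCoreGroup('[-source DefaultCoreGroup -target MyClusterCG -clusterName MyCluster]')

  AdminConfig.save()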

For more detail about the cost of communications within a core group, see this InfoCenter article about the workings of the protocols that the high availability manager uses. This InfoCenter article about core group scaling considerations is also of interest.

Arthur Kevin McGrath

Bio:

The author is an engineer with the consulting firm Contract Engineers. He has consulted and lectured extensively since 1987 about the infrastructure that makes electronic commerce possible. His publications include Leading Practices for WebSphere Dynamic Process Edition V6.2 (SG24-7776-00) and WebSphere Application Server Administration Using Jython (ISBN 0137009526), the definitive book on WAS scripting.
