Use JMS Clients to Utilize Free Computer Resources

Message-driven bean recipient

The message-driven bean instances will listen on a reply queue for finished unit of work objects. A skeletal sample follows:

public class MessageWorkBean implements MessageDrivenBean,
                                        MessageListener {

  // Receives a finished unit of work object from the reply queue,
  // then prints and stores its result
  public void onMessage(Message msg) {
    ObjectMessage om = (ObjectMessage) msg;
    try {
      UnitOfWork unit = (UnitOfWork) om.getObject();
      unit.print();
      unit.store();
    } catch (JMSException ex) {
      log("Message Driven Bean: Could not retrieve Unit of Work.");
    }
  }

  // MessageDrivenBean lifecycle methods omitted for brevity
}

The interesting method here is onMessage(), which receives a finished unit of work object from the reply queue and calls its print() and store() methods. My goal was for the server to offload its processing of this unit of work onto other computers. I've accomplished this through the JMS clients, using the message-driven bean as the means to communicate the results back to the server.
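The UnitOfWork type itself is not shown above. A minimal sketch of what it might look like follows, assuming it simply carries an identifier and a result; the fields and method bodies here are illustrative assumptions, not the article's actual class:

```java
import java.io.Serializable;

// Sketch of a unit-of-work object. Serializable is required so it can
// travel inside a JMS ObjectMessage between the server and the clients.
public class UnitOfWork implements Serializable {
  private final String id;
  private String result;

  public UnitOfWork(String id) { this.id = id; }

  // Called by the remote JMS client once it finishes the computation
  public void setResult(String result) { this.result = result; }

  public String getResult() { return result; }

  // Called by the message-driven bean when the finished unit arrives
  public void print() {
    System.out.println("Unit " + id + " finished: " + result);
  }

  // Placeholder for persisting the result (e.g., to a database)
  public void store() {
    // illustrative only; a real bean would write to a datastore here
  }
}
```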

Scalability considerations

In a real implementation of this framework, several issues should be addressed to make the example scalable.

  • Consider using a sizable pool of message-driven beans to handle responses.
  • If no foreign consumers are available for the request queues, a few message-driven beans should be created on the server itself to consume the request queues. This goes against the spirit of this article, but it prevents the queues from overflowing, and the work from going unprocessed, when no remote consumers are available.
  • If there are multiple types of units of work, each should have its own request and response queues.
  • For WebLogic Server, consider using JMS paging to prevent out-of-memory problems when there are too many messages on the queues that are not being consumed in a timely manner.
  • For WebLogic Server, consider using the throttling features of WebLogic JMS if the producers (the servlets) are generating more work than the consumers can keep up with.
  • For WebLogic Server, consider using distributed destinations for the queues, as this distributes the queues across multiple servers. In this case, the servlets themselves should be clustered and coordinated so they do not create duplicate unit of work requests.
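To sketch the first bullet, the message-driven bean's pool size is configured in its weblogic-ejb-jar.xml deployment descriptor. The element names below follow the WebLogic 8.1-era descriptor; the ejb-name and the pool sizes are illustrative assumptions:

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <!-- Must match the ejb-name declared in ejb-jar.xml -->
    <ejb-name>MessageWorkBean</ejb-name>
    <message-driven-descriptor>
      <pool>
        <!-- Illustrative sizes: keep a few instances warm, allow growth -->
        <initial-beans-in-free-pool>5</initial-beans-in-free-pool>
        <max-beans-in-free-pool>50</max-beans-in-free-pool>
      </pool>
    </message-driven-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```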

The references at the end of the article should also be consulted. An additional consideration that goes beyond the server is how to deliver the client piece to the various machines. One way is voluntary: each machine owner downloads an installer that can be configured and run on the client machine. Another is to use a commercial software distribution package that automatically downloads the latest version of the client and installs it on the client machine.

Using WebLogic Integration Workflows to Distribute Work

The previous section presented a straightforward approach for distributing units of work to clients using servlets and message-driven beans. Although the approach is easy to implement, it leaves several issues unaddressed, such as how to kick off the process in a self-supporting manner that regularly delivers requests to the request queue(s) at set intervals. Surely an administrator is not expected to write a shell script that continuously calls the servlet. You should also address throttling the number of requests to be served, in a manner the application can control a priori. With this in mind, what follows is a more sophisticated example of distributing units of work to remote JMS clients, and responding to the results, to make use of underutilized computers.

This approach will have two BEA WebLogic Workshop-developed WebLogic Integration (WLI) workflows known as Java Process Definition (JPD) files, which are a precursor to BPEL/J (Business Process Execution Language for Java), specified in JSR 207. The first workflow starts in response to a Web service request and performs initialization to subscribe to a JMS request queue through a JMS control. It uses a Timer control that wakes a while loop at set intervals to place more units of work on the request queue. The workflow also uses a custom Java control, supplied in this article's associated code, to browse the request queue and determine whether more requests can be placed on it without overburdening it. Finally, the workflow waits for a stop message from a Web service to stop processing. The second workflow performs the same task as the message-driven bean in the previous example: it responds to messages on the response queue, calling print() and store() on each dequeued unit of work. This is a short-lived workflow, and WebLogic Integration will spawn as many instances as required.
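The first workflow's top-up decision can be expressed in plain Java, independent of the JPD machinery. The sketch below assumes a target queue depth and a pluggable depth check (in the real workflow the JMSBrowse control described below plays that role); the class name and threshold are my own illustrations:

```java
import java.util.function.IntSupplier;

// Sketch of the workflow's periodic logic: on each timer tick, browse
// the request queue's depth and enqueue only enough new units of work
// to reach a target depth, so the queue is never overburdened.
public class RequestTopUp {
  private final int targetDepth;

  public RequestTopUp(int targetDepth) { this.targetDepth = targetDepth; }

  // Returns how many new requests to place on the queue this tick.
  // In the workflow, currentDepth would be supplied by
  // JMSBrowse.numberOfElementsInQueue().
  public int requestsToEnqueue(IntSupplier currentDepth) {
    return Math.max(0, targetDepth - currentDepth.getAsInt());
  }
}
```

On each Timer control callback, the workflow would call requestsToEnqueue and send that many units of work through the JMS control; a stop message from the Web service breaks the loop.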

Browsing a JMS queue

WebLogic Integration is used as a mechanism for constructing and assembling services for remote processes. Its assembly of off-the-shelf components, known as Java controls, makes it quite easy to build composite applications without extensive development. Although WebLogic Integration provides JMS controls out of the box that abstract away the internal details of using JMS, in certain situations, when fine-grained access to lower-level methods is needed, it is best to create a reusable custom control. In this example framework, I need to browse the worker request queue and count the items pending on it, to determine whether I can place more items on the queue without overburdening it. To accomplish this, I wrote a custom Java control called JMSBrowse that has one method of interest:

public interface JMSBrowse extends Control {

  int numberOfElementsInQueue(String qFactory, String qName);
}
The implementation for this control uses the JMS QueueBrowser class to look into a given JMS queue through a given JMS connection factory. It returns the number of messages pending on the queue. The complete implementation is supplied in the accompanying code.
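The core of that implementation is simply walking the QueueBrowser's Enumeration and counting. That counting step can be isolated as below; the JMS wiring (JNDI lookups, session, and browser creation) is sketched in the comment because it needs a live provider, and the class and helper names are my own, not the article's:

```java
import java.util.Enumeration;

public class QueueDepth {

  // Counts the pending messages exposed by a QueueBrowser's enumeration.
  // In JMSBrowse, the enumeration would come from JMS wiring such as:
  //   QueueConnectionFactory qcf =
  //       (QueueConnectionFactory) ctx.lookup(qFactory);
  //   QueueConnection qc = qcf.createQueueConnection();
  //   QueueSession qs =
  //       qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
  //   QueueBrowser browser = qs.createBrowser((Queue) ctx.lookup(qName));
  //   int depth = QueueDepth.count(browser.getEnumeration());
  public static int count(Enumeration<?> pending) {
    int n = 0;
    while (pending.hasMoreElements()) {
      pending.nextElement();
      n++;
    }
    return n;
  }
}
```

Note that QueueBrowser is a snapshot mechanism: the count is approximate the moment it is taken, which is acceptable here since it only throttles how many new requests are enqueued.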

