BEA WebLogic Server 9.0 introduces a console application that allows JMS administrators to view the contents of a destination, create new messages, and delete in-flight messages. By itself, this feature is good enough reason to upgrade to WebLogic Server 9.0! Figure 2 shows the in-flight messages in our example queue.
Figure 2. Viewing messages in a JMS destination
As you can see in Figure 2, the contents of queues are now manageable. You can create new messages (which I have done for this example), import or export messages, delete individual messages, or drain an entire queue. While this functionality requires careful security controls in a production environment, it makes developing and testing applications much simpler.
Finally, you can also view the header and contents of individual messages. Figure 3 shows the results of displaying the second message in Figure 2.
Figure 3. Viewing message details
Two other details deserve some examination. The first is that JMS destinations can now be paused. Anyone who has ever had a problem with messages looping (a message-driven bean (MDB) retrieves a message, throws an exception, the message is rolled back, the next MDB instance retrieves the message, and so on) will welcome this functionality. Administrators can now pause a destination, view the offending messages, and take appropriate action. You can pause a JMS destination in one or more of three ways: production (no new messages can be produced to the destination), insertion (neither new messages nor in-flight messages can be placed on the destination), and consumption (no messages can be consumed from the destination).
All three pausing states can be activated at boot time or at runtime.
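To make the looping scenario concrete, here is a plain-Java sketch (deliberately not the WebLogic API; the class and method names are invented for illustration) of how a poison message keeps getting redelivered until consumption is paused, leaving the message in place for inspection:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simulation of the poison-message loop: a consumer keeps failing on the same
// message until the destination's consumption is paused, after which the
// message stays on the queue for an administrator to inspect.
public class PausableQueue {
    private final Deque<String> messages = new ArrayDeque<>();
    private boolean consumptionPaused = false;
    private int deliveryAttempts = 0;

    void produce(String msg) { messages.addLast(msg); }

    void pauseConsumption() { consumptionPaused = true; }

    // Returns the number of delivery attempts made before the pause took effect.
    int drainWithPoisonMessage(int pauseAfterAttempts) {
        while (!consumptionPaused && !messages.isEmpty()) {
            String msg = messages.peekFirst();          // MDB receives the message
            deliveryAttempts++;
            boolean failed = msg.startsWith("poison");  // processing throws an exception
            if (failed) {
                // Rollback: the message goes back to the queue and the loop repeats.
                if (deliveryAttempts >= pauseAfterAttempts) {
                    pauseConsumption();                 // administrator steps in
                }
            } else {
                messages.pollFirst();                   // commit: message removed
            }
        }
        return deliveryAttempts;
    }

    int depth() { return messages.size(); }
}
```

Without the pause, a poison message at the head of the queue would spin forever; pausing consumption stops redelivery while leaving the queue contents viewable in the console.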
The last tidbit is the advanced logging capability. WebLogic Server 9.0 includes the new WebLogic Diagnostic Service, in which JMS resources can take part. Logging can be turned on and off for each JMS entity, giving administrators very granular control. By default, logs are saved in the
<server_name> is the WebLogic Server name, and
<jms_server_name> is the name of the JMS server. Log headers can be set to include some or all of the standard JMS headers, as well as any custom properties. Below is a snippet from a log, showing the production and removal of a single message.
####<May 11, 2005 10:07:10 PM EDT> <> <> <1115863630224> <141490> <ID:<225269.1115863629999.0>> <test-message-6> <MikesTestModule!MIKES_TEST_QUEUE> <Produced> <weblogic> <> <<?xml version="1.0" encoding="UTF-8"?> <mes:WLJMSMessage xmlns:mes="http://www.bea.com/WLS/JMS/Message"><mes:Header/></mes:WLJMSMessage>>
####<May 11, 2005 10:08:07 PM EDT> <> <> <1115863687830> <202875> <ID:<225269.1115863629999.0>> <test-message-6> <MikesTestModule!MIKES_TEST_QUEUE> <Removed> <weblogic> <> <<?xml version="1.0" encoding="UTF-8"?> <mes:WLJMSMessage xmlns:mes="http://www.bea.com/WLS/JMS/Message"><mes:Header/></mes:WLJMSMessage>>
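The record format above is positional: each field sits in angle brackets, and some fields (such as the message ID) nest brackets of their own. As a quick illustration, a small splitter can pull the fields apart; this is a hypothetical helper that assumes the layout seen in the sample rather than a documented format:

```java
import java.util.ArrayList;
import java.util.List;

// Splits a "####<a> <b> <c> ..." JMS log record into its top-level
// angle-bracketed fields, tracking nesting depth so that fields like
// <ID:<225269...>> are kept intact.
public class JmsLogRecord {
    static List<String> fields(String record) {
        List<String> out = new ArrayList<>();
        int depth = 0;
        int start = -1;
        for (int i = 0; i < record.length(); i++) {
            char c = record.charAt(i);
            if (c == '<') {
                if (depth == 0) {
                    start = i + 1;   // beginning of a top-level field
                }
                depth++;
            } else if (c == '>') {
                depth--;
                if (depth == 0) {
                    out.add(record.substring(start, i));
                }
            }
        }
        return out;
    }
}
```

Given the first record above (minus the XML body), field 5 is the message ID and field 8 is the event type (Produced or Removed).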
WebLogic Server 9.0 can now forward messages to a remote WebLogic JMS server, even if that server is unavailable at the time the message is sent. The store-and-forward (SAF) service really looks like the WebLogic Message Bridge on steroids. SAF can be distinguished from the Message Bridge in three ways:
The SAF mechanism consists of three parts:
Figure 4 shows how the SAF service moves messages. In this example, the JMS producer places a message on a local JMS destination (queue or topic) and continues with its work. The message is picked up by the local SAF sending agent, which forwards it to the remote SAF receiving agent; if the remote agent is unavailable, messages are persisted on the local server until the remote system recovers. If the local server goes down before the SAF agent can forward a message, the message is persisted until the JMS server recovers. Once the message arrives on the remote imported destination, the JMS consumer retrieves it. Both the JMS destination and the imported destination are available through the WebLogic JNDI tree.
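The flow in Figure 4 can be sketched in plain Java (again, invented names for illustration, not the WebLogic SAF API): the sending agent persists each message locally and forwards the backlog only once the remote side is reachable.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simulation of store-and-forward: send() returns immediately after the
// message is persisted locally; forwarding happens whenever the remote
// side is available, including a catch-up drain after recovery.
public class SafAgentSketch {
    private final Deque<String> localStore = new ArrayDeque<>();  // local persistent store
    private final Deque<String> remoteQueue = new ArrayDeque<>(); // imported destination
    private boolean remoteAvailable = false;

    void send(String msg) {
        localStore.addLast(msg);  // producer continues with its work
        forwardPending();
    }

    void setRemoteAvailable(boolean up) {
        remoteAvailable = up;
        forwardPending();         // recovery drains the persisted backlog
    }

    private void forwardPending() {
        while (remoteAvailable && !localStore.isEmpty()) {
            remoteQueue.addLast(localStore.pollFirst()); // forwarded in order
        }
    }

    int pendingLocally() { return localStore.size(); }
    int deliveredRemotely() { return remoteQueue.size(); }
}
```

Note that the producer's call pattern is identical whether or not the remote server is up, which is the point of the architecture.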
Figure 4: Store-and-forward message flow
All persistent messages are delivered with a quality of service (QoS) of exactly-once, and are guaranteed to be delivered in order. Non-persistent messages that are sent in a unit-of-order are delivered with a QoS of at-most-once, and may arrive out of order if a failure occurs. SAF queues and topics can be defined with one of three QoS levels: exactly-once, at-least-once, or at-most-once.
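A brief illustration of why exactly-once delivery implies duplicate detection: if a forward is retried after an ambiguous failure (the sender cannot tell whether the first attempt arrived), the receiver must discard messages whose IDs it has already seen. This sketch is illustrative only, not WebLogic internals:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A receiver that achieves exactly-once semantics over an at-least-once
// transport by remembering message IDs and dropping duplicates.
public class ExactlyOnceReceiver {
    private final Set<String> seenIds = new HashSet<>();
    private final List<String> delivered = new ArrayList<>();

    void receive(String messageId, String body) {
        if (seenIds.add(messageId)) {   // first time this ID has been seen
            delivered.add(body);
        }                               // duplicate: silently dropped
    }

    List<String> delivered() { return delivered; }
}
```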
The key to the SAF architecture is that neither the JMS producer nor the consumer has any indication that the SAF service is involved.
WebLogic Server 9.0 comes with a new unified persistent store subsystem, used throughout WebLogic Server wherever persistence is required. The SAF agents, JMS servers, reliable Web services, the diagnostic service, and JTA transaction logs all use persistent stores.
Every WebLogic Server instance comes with a default file store, in the
data\store\default directory. Any subsystem requiring persistence that is not explicitly configured with a persistent store will use the default store. Multiple subsystems can share the same store, as long as they are all targeted to the same server instance. Stores can be either file-based or JDBC-based, with the exception that the default store must be a file store.
The ability to have multiple stores and allow multiple applications to access a single store gives administrators the ultimate flexibility in designing their WebLogic domains. For example, one store may be on a highly available, SAN-based disk, while another store may be a JDBC-based store. The first gives high performance but with manual failover (assuming a clustered environment). The second gives lower performance but allows for automatic failover. The administrator can determine what the appropriate stores are for each application, based on customer requirements.
One performance trick for getting the most out of the persistent stores is to take advantage of the store's transactional support. For example, assume a JMS server and an EJB timer are enlisted in the same transaction. If the same store is used for both the JMS server and the EJB timer, the transaction can be completed in one phase, requiring only a single write to the store. However, if separate stores are used, the transaction will be two-phase and will require two writes per store, plus one write to the transaction log, for a total of five writes. This may not sound dramatic, but the performance savings on a system processing many transactions per second can be substantial.
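The arithmetic above can be captured in a few lines, assuming one write per store per two-phase-commit phase plus one transaction-log write (the cost model described in the paragraph, not a measured figure):

```java
// Back-of-the-envelope write counts for a transaction spanning N stores:
// one store => one-phase commit, a single write;
// N stores  => two-phase commit: prepare + commit per store, plus the TLOG.
public class TxWriteCount {
    static int writes(int storeCount) {
        if (storeCount == 1) {
            return 1;                  // one-phase: single store write
        }
        return 2 * storeCount + 1;     // 2 writes per store + 1 transaction log write
    }
}
```

With one shared store the transaction costs a single write; with two separate stores it costs five, matching the example above.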