Workload Management in WebLogic Server 9.0

How Are WorkManagers Resolved During Runtime Execution?

As mentioned in the previous section, applications set a dispatch policy to associate request entry points with named WebLogic WorkManagers. During deployment, each module examines the configured dispatch policy and maps the servlet or EJB method call to a particular WorkManager instance. Multiple method calls can be mapped to a single WorkManager instance. All user requests made to that method call or servlet invocation are automatically dispatched to the associated WorkManager and therefore inherit its properties. Applications can be redeployed with a new dispatch policy.
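For example, a web application can associate its requests with a named WorkManager through the weblogic.xml deployment descriptor. The following is a sketch; the WorkManager name MyAppWorkManager and the fair-share value are hypothetical:

```xml
<weblogic-web-app>
  <!-- Define an application-scoped WorkManager (hypothetical name and values) -->
  <work-manager>
    <name>MyAppWorkManager</name>
    <fair-share-request-class>
      <name>MyFairShare</name>
      <fair-share>80</fair-share>
    </fair-share-request-class>
  </work-manager>
  <!-- Dispatch requests to this module to that WorkManager -->
  <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>
</weblogic-web-app>
```

EJB modules use the analogous dispatch-policy element in weblogic-ejb-jar.xml.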

There are plans to support changing the dispatch policy without redeployment in WebLogic Server 9.1. This is subject to change; check the 9.1 release notes, when they become available, for confirmation.

Using Execute Queues Instead of WorkManagers

It is possible to turn off WorkManagers and self-tuning mode in WebLogic Server 9.0 and use execute queues instead. Just start the server with -Dweblogic.Use81StyleExecuteQueues=true or set the corresponding KernelMBean property in config.xml. Setting the attribute in config.xml is recommended over using the system property, since it also turns on execute queue configuration and monitoring in the WebLogic console. Here is an example config.xml snippet:

<server>
  <name>myserver</name>
  <use81-style-execute-queues>true</use81-style-execute-queues>
  <listen-address/>
</server>

Turning on this property will disable all WorkManager configuration and the thread self-tuning mode. Users will get the exact WebLogic Server 8.1 execute queue behavior. When this property is enabled, WorkManager configurations are converted into execute queues using the following steps:

  1. If the WorkManager has a minimum threads constraint and/or maximum threads constraint, then a dedicated execute queue is created with the WorkManager name. The thread count of the execute queue is based on the constraint.
  2. If the WorkManager has no constraints, then the global default execute queue is used. A dedicated execute queue is not created.
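The two conversion rules above can be sketched as a small mapping function. This is a hypothetical illustration, not WebLogic's actual implementation; weblogic.kernel.Default is the name of the 8.1-style global default execute queue:

```java
// Hypothetical sketch of the two conversion rules; not WebLogic's actual code.
public class ExecuteQueueMapping {
    // Name of the 8.1-style global default execute queue
    static final String DEFAULT_QUEUE = "weblogic.kernel.Default";

    // Rule 1: a WorkManager with a min and/or max threads constraint gets a
    //         dedicated execute queue named after the WorkManager.
    // Rule 2: a WorkManager with no constraints uses the global default queue.
    static String queueFor(String workManagerName,
                           Integer minThreadsConstraint,
                           Integer maxThreadsConstraint) {
        if (minThreadsConstraint != null || maxThreadsConstraint != null) {
            return workManagerName;
        }
        return DEFAULT_QUEUE;
    }
}
```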

This flag is useful for applications migrated from version 8.1 that are not yet taking advantage of the self-tuning thread pool and WorkManagers. For example, customers may want to move their applications into production while they investigate the use of WorkManagers.

The server logs should have an entry like this when the flag is enabled:

<Sep 30, 2005 11:02:52 AM PDT>
 <Notice> <Kernel> <BEA-000805> <Self-tuning thread pool
 is disabled. An execute queue will be created for 
 each WorkManager definition.>

Migrating from WebLogic Server 8.1 to WebLogic Server 9.0

An application migrated from WebLogic Server 8.1 to WebLogic Server 9.0 will still have execute queues in the server configuration if they were present before the migration. Automatic conversion of execute queues to WorkManagers is not possible since in some cases WorkManagers are not required. If a WebLogic Server 8.1 application with execute queues is deployed in a WebLogic Server 9.0 server, then the configured execute queues are created and used by requests. Please note that requests without a dispatch-policy will continue to use the self-tuning thread pool. Only requests whose dispatch-policy maps to an execute queue will use the configured execute queue.

Runtime Monitoring Support

Extensive monitoring support is provided through RuntimeMBeans. Administrators can monitor individual WorkManagers, constraints, request classes, and the common self-tuning thread pool parameters. The WebLogic console also exposes a lot of this information under the appropriate tabs. Administrators can go to a particular application and then monitor all the WorkManager components related to that application. The common thread pool parameters can be monitored under the server monitoring console tab. Here are some of the RuntimeMBeans that provide information about workload management:

  1. ThreadPoolRuntimeMBean under the ServerRuntimeMBean: Provides monitoring information about the self-tuning thread pool.
  2. WorkManagerRuntimeMBeans under the ApplicationRuntimeMBean: Contain information about the individual WorkManagers in the application. The ApplicationRuntimeMBean is a child of ServerRuntimeMBean.
  3. RequestClassRuntimeMBeans, MinThreadsConstraintRuntimeMBeans, and MaxThreadsConstraintRuntimeMBeans under the ApplicationRuntimeMBean: Provide information about WorkManager components that are shared by multiple WorkManagers.
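As a sketch, the self-tuning pool statistics can be read through WLST. This assumes a running server; the credentials and URL are placeholders, and attribute names such as ExecuteThreadTotalCount and CompletedRequestCount come from the ThreadPoolRuntimeMBean:

```
# WLST sketch -- requires a running server; credentials and URL are placeholders
connect('weblogic', 'password', 't3://localhost:7001')
serverRuntime()
cd('ThreadPoolRuntime/ThreadPoolRuntime')
print get('ExecuteThreadTotalCount')  # current number of threads in the pool
print get('CompletedRequestCount')    # total requests completed
print get('Throughput')               # mean requests completed per second
```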

Thread Dump Changes

In WebLogic Server 8.1 and earlier releases, threads belonging to execute queues could be identified in a thread dump by the name of the execute queue. That has changed in WebLogic Server 9.0: since threads are shared by all WorkManagers in the common thread pool, a thread dump cannot show which threads belong to which WorkManager. All execute threads have the term "self-tuning" in their names to indicate that they belong to the common self-tuning pool. The thread names are also prepended with their states. The three states that appear in the thread names are:

  1. ACTIVE: The thread is active and is executing work or is ready to pick up work when it arrives.
  2. STANDBY: The thread is removed from the active thread pool and is not picking up work. It can still execute work from the minimum threads constraint work set if the constraint is not met. The self-tuning implementation has removed this thread from the active pool since it is not improving throughput. It can be moved back into the active thread pool if needed later.
  3. STUCK: The thread has been executing work for longer than the configured stuck thread interval. The thread could be stuck due to a deadlock or a slow-responding back-end connection.
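In a WebLogic Server 9.0 thread dump, the execute thread names typically look like the following (the thread numbers here are illustrative):

```
"[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'"
"[STANDBY] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'"
"[STUCK] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'"
```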

Timer and WorkManager Specification Support

The Timer and WorkManager specification standardizes how applications submit asynchronous work in an application server. WebLogic Server 9.0 implements this specification, giving applications a handle to the internal self-tuning thread pool implementation. Let's call the WorkManagers defined by the Timer and WorkManager specification TWM WorkManagers. Applications can look up a TWM WorkManager through JNDI and submit work for asynchronous execution. The important point is that a TWM WorkManager can be associated with any WebLogic WorkManager configured using deployment descriptors, by using the WorkManager name in the resource-ref entry. Here is an example:

<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>WorkEJB</ejb-name>
      ...  
      <resource-ref>
        <res-ref-name>MyAppScopedWorkManager-1</res-ref-name>
        <res-type>commonj.work.WorkManager</res-type>
        <res-auth>Container</res-auth>
        <res-sharing-scope>Shareable</res-sharing-scope>
      </resource-ref>
      ...
    </session>
  </enterprise-beans>
</ejb-jar>

Here the res-ref-name points to the name of the WebLogic WorkManager defined in weblogic-application.xml. If the WebLogic WorkManager cannot be found, then the application default is used. Application code can look up the TWM WorkManager as follows:

InitialContext ic = new InitialContext();
commonj.work.WorkManager mgr =
    (commonj.work.WorkManager)
    ic.lookup("java:comp/env/MyAppScopedWorkManager-1");
mgr.schedule(myWork);
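The myWork object passed to schedule() must implement commonj.work.Work, which extends java.lang.Runnable with isDaemon() and release() methods. Here is a minimal sketch; the class name MyWork is hypothetical, and compiling it requires the commonj classes shipped with WebLogic Server:

```java
import commonj.work.Work;

// Hypothetical Work implementation; compiles against the commonj
// classes shipped with WebLogic Server.
public class MyWork implements Work {
    public void run() {
        // the actual asynchronous task goes here
    }
    public boolean isDaemon() {
        return false;  // short-lived work rather than a long-running daemon
    }
    public void release() {
        // called by the container to ask run() to return early
    }
}
```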

Conclusion

BEA WebLogic Server 9.0 introduces advanced techniques in workload management. Instead of requiring administrators to set low-level kernel parameters such as thread counts and to configure static thread pools, it provides higher-level concepts like fair shares, response-time goals, and context-based shares for differentiated service. The thread count is self-tuned to maximize throughput. WebLogic Server 9.0 also provides a programmatic way to access the thread pool, so that application developers get the same benefits internal subsystems receive. Extensive monitoring support has also been added. We hope all these improvements result in a better user experience.

Naresh Revanuru is a senior software engineer working in the BEA WebLogic engineering team. Naresh is the team lead for the BEA WebLogic core/clustering subsystem.