by Naresh Revanuru
Workload management is the term used to describe how client-generated work is accepted and handled in an application server. BEA WebLogic Server 9.0 introduces new concepts for workload management, not available in previous releases. These concepts replace execute queues as defined in earlier releases, and include notions of work prioritization, thread pool management, and overload protection. This article describes how workload management is handled in WebLogic Server 9.0, and how it differs from previous releases.
Prior to WebLogic Server 9.0, customers had to configure execute queues with a fixed thread count and a pending work queue length. However, dealing directly with low-level kernel attributes such as thread counts is problematic. Here are some of the difficulties that administrators face:
With WebLogic Server 9.0 administrators can move away from configuring dedicated execute queues and start describing application requirements in a language they understand. Here are some of the salient features:
I will describe these features in more detail in the following sections.
WebLogic Server 9.0 has a single thread pool for requests from all applications. Similarly, all pending work is enqueued in a common priority-based queue. The thread count is tuned automatically to achieve maximum overall throughput. The priority of each request is dynamic and computed internally to meet the stated goals. Administrators state those goals simply, using application-level parameters such as fair shares and response-time goals.
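As a sketch of what stating such a goal looks like, a WorkManager with a fair-share request class can be declared in a deployment descriptor. The names used below ("trade-wm", "trade-fair-share") are hypothetical, and the exact element layout should be checked against the WebLogic Server 9.0 descriptor schema:

```xml
<!-- Illustrative WorkManager with a fair-share request class.
     A fair share of 80 entitles this application's requests to a larger
     relative share of threads when the server is under load. -->
<work-manager>
  <name>trade-wm</name>
  <fair-share-request-class>
    <name>trade-fair-share</name>
    <fair-share>80</fair-share>
  </fair-share-request-class>
</work-manager>
```

A response-time goal is expressed the same way, using a `response-time-request-class` with a `goal-ms` element instead of the fair-share request class.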
In earlier releases, each servlet or RMI request was associated with a dispatch policy that mapped to an execute queue; requests without an explicit dispatch policy used the server-wide default execute queue. In WebLogic Server 9.0, requests are still associated with a dispatch policy but are mapped to a WorkManager instead of an execute queue. Note that the WorkManager concept described here is completely different from the Timer and WorkManager specification described on page 5.
Requests without an explicit dispatch policy use the default WorkManager of the application. This means that each application has its own default WorkManager that is not shared with other applications. This distinction is important: execute queues are always global, whereas WorkManagers are always application scoped. Even WorkManagers defined globally in the console are application scoped at runtime. Each application gets its own runtime instance that is distinct from the others, but all instances share the same configured characteristics, such as fair-share goals. This is done to track work at the application level and to provide capabilities like graceful suspension of individual applications.
As mentioned earlier, each servlet or RMI request is associated with a WorkManager. By default, all requests are associated with the application's default WorkManager. The dispatch-policy element can be used to associate a request with a specific WorkManager, either one defined within an application scope or one defined globally at the server level. I will provide examples of how to use the dispatch policy in a later section.
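To make the association concrete, here is a sketch of how a dispatch policy might be wired up in the WebLogic 9.0 descriptors. The WorkManager name "trade-wm" and the EJB name are hypothetical, and the element names should be verified against the descriptor schemas:

```xml
<!-- weblogic.xml: route all requests for this web application
     to a named WorkManager ("trade-wm" is a made-up name). -->
<weblogic-web-app>
  <wl-dispatch-policy>trade-wm</wl-dispatch-policy>
</weblogic-web-app>
```

```xml
<!-- weblogic-ejb-jar.xml: route requests for a single EJB
     to the same WorkManager. -->
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>TradeBean</ejb-name>
    <dispatch-policy>trade-wm</dispatch-policy>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

If the named WorkManager is defined within the application, the application-scoped instance is used; otherwise the name is resolved against WorkManagers defined at the server level.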
One of the major differences between execute queues and the new thread scheduling model is that the thread count does not need to be set. In earlier releases, customers defined new thread pools and configured their size to avoid deadlocks and provide differentiated service. It is quite difficult to determine the exact number of threads needed in production to achieve optimal throughput and avoid deadlocks. WebLogic Server 9.0 is self-tuned, dynamically adjusting the number of threads to avoid deadlocks and achieve optimal throughput subject to concurrency constraints. It also meets objectives for differentiated service. These objectives are stated as fair shares and response-time goals as explained in the next section.
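The concurrency constraints mentioned above are declared alongside a WorkManager rather than as a fixed pool size. A sketch, with hypothetical names and counts, assuming the 9.0 descriptor schema:

```xml
<!-- Illustrative constraints: the self-tuning pool still decides the total
     thread count, but requests routed to this WorkManager never occupy more
     than 10 threads, and at least 2 threads stay available to them. -->
<work-manager>
  <name>db-wm</name>
  <max-threads-constraint>
    <name>db-max</name>
    <count>10</count>
  </max-threads-constraint>
  <min-threads-constraint>
    <name>db-min</name>
    <count>2</count>
  </min-threads-constraint>
</work-manager>
```

A max-threads constraint is typically sized to a bounded external resource, for example a JDBC connection pool, which is exactly the kind of case that forced a dedicated execute queue in earlier releases.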
The self-tuning thread pool monitors the overall throughput every two seconds and uses the collected data to determine whether the thread count needs to change. The present thread count, the measured throughput, and the past history are taken into account by the algorithm to decide whether the thread count should increase or decrease, and threads are automatically added to or removed from the pool as needed.
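The flavor of such a feedback loop can be sketched in a few lines of Java. This is an illustrative hill-climbing heuristic under my own assumptions, not WebLogic's actual algorithm: each sampling interval, keep adjusting the thread count in the same direction while throughput improves, and reverse direction when it drops.

```java
// Illustrative sketch (NOT WebLogic's implementation): a simple
// hill-climbing tuner that an external timer would call every
// sampling interval (e.g. two seconds) with the measured throughput.
public class ThreadCountTuner {
    private int threadCount;
    private int lastDelta = 1;          // direction of the previous adjustment
    private double lastThroughput = 0;  // throughput from the previous interval

    public ThreadCountTuner(int initialThreads) {
        this.threadCount = initialThreads;
    }

    /** Returns the new thread count given this interval's throughput. */
    public int adjust(double throughput) {
        if (throughput < lastThroughput) {
            // The last change hurt throughput: reverse direction.
            lastDelta = -lastDelta;
        }
        // Apply the adjustment, never dropping below one thread.
        threadCount = Math.max(1, threadCount + lastDelta);
        lastThroughput = throughput;
        return threadCount;
    }
}
```

A production algorithm would also weigh historical samples and honor the min- and max-threads constraints described earlier; the sketch only shows the core increase/decrease decision driven by measured throughput.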