This FAQ answers common performance-related questions about the Sun Java System Application Server. The questions and answers are divided into the following topics:
This document is not intended to replace the Sun Java System Application Server Performance Tuning Guide. That guide gives extensive performance tuning information covering both the application server and your own applications. This FAQ addresses only some of the most common tuning and performance-related questions we get about the Sun Java System Application Server, as well as some late-breaking tips that couldn't make it into that guide.
J2SE 5.0 provides two implementations of the HotSpot Java virtual machine (JVM):
By default, the Application Server uses the JVM setting appropriate to the purpose:
You can override the default by changing the JVM settings in the Admin Console under Configurations > config-name > JVM Settings (JVM Options).
The Application Server uses RMI in the Administration module for monitoring. Garbage cannot be collected in RMI-based distributed applications without occasional local collections, so RMI forces a periodic full collection. Control the frequency of these collections with the system property -Dsun.rmi.dgc.client.gcInterval.
For example, java -Dsun.rmi.dgc.client.gcInterval=3600000 specifies explicit collection once per hour instead of the default rate of once per minute. Alternately, you can disable these Full GCs altogether by specifying this option: -XX:+DisableExplicitGC
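To make the setting persistent in domain.xml rather than entering it through the Admin Console, the same options can be added as jvm-options elements. The following is a sketch; the enclosing java-config element already exists in your configuration, and the analogous server-side property is included as an assumption:

```xml
<java-config>
  <!-- Ask RMI to force a full collection only once per hour
       instead of the default once per minute -->
  <jvm-options>-Dsun.rmi.dgc.client.gcInterval=3600000</jvm-options>
  <!-- Assumption: the server-side interval is usually tuned to match -->
  <jvm-options>-Dsun.rmi.dgc.server.gcInterval=3600000</jvm-options>
</java-config>
```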
The security manager is expensive because calls to protected resources must go through the doPrivileged() method and must also be checked against the server.policy file. If you are sure that no malicious code will be run on the server and you do not use authentication within your application, then you can disable the security manager.
To disable use of the server.policy file, use the Admin Console: under Configurations > config-name > JVM Settings (JVM Options), delete the option that looks like this:
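The option itself was not captured here. In a default installation, the security manager is enabled by an option of the following form (this exact value is an assumption — check your own JVM options list before deleting anything):

```
-Djava.security.manager
```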
Release 8.1 adds four main new EJB features that improve performance: read-only EJBs, prefetching of Container-Managed Relationship (CMR) beans, version consistency, and the ability to run remote EJBs in different thread pools (request partitioning).
How do I use Read-Only EJBs?
Read-only beans allow you to cache data from the database. In the EJB lifecycle, the EJB container calls the ejbLoad() method of a read-only bean only once. The container makes multiple copies of the EJB component from that data, and since the beans never update the database, the container never calls the ejbStore() method. This greatly reduces database traffic for these beans.
If there is a bean that never updates the database, use a read-only bean in its place to improve performance.
A read-only bean is appropriate if either:
To create a read-only bean, add the following to the EJB deployment descriptor sun-ejb-jar.xml:
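The descriptor snippet was not captured above. The following sketch shows the elements typically used in sun-ejb-jar.xml; the bean name and refresh period are placeholders, and the element names should be verified against your version's DTD:

```xml
<sun-ejb-jar>
  <enterprise-beans>
    <ejb>
      <ejb-name>CatalogBean</ejb-name>
      <!-- Mark the bean read-only and refresh its cached state
           every 600 seconds (placeholder value) -->
      <is-read-only-bean>true</is-read-only-bean>
      <refresh-period-in-seconds>600</refresh-period-in-seconds>
    </ejb>
  </enterprise-beans>
</sun-ejb-jar>
```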
How do I use CMR prefetching?
If a container-managed relationship (CMR) exists in your application, using one bean often means loading its related beans as well. The canonical example of CMR is an Order-OrderLine relationship where you have one Order EJB component that has N related OrderLine EJB components. In previous releases of the application server, using all those beans required multiple database queries: one for the Order bean and one for each of the OrderLine beans in the relationship. In general, if a bean has n relationships, using all the data of the bean requires n+1 database accesses. Use CMR pre-fetching to retrieve all the data for the bean and all its related beans in one database access.
For example, you have this relationship defined in the ejb-jar.xml file:
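The ejb-jar.xml snippet was not captured here. A standard CMR declaration for the Order-OrderLine example looks roughly like this (bean and field names are illustrative):

```xml
<relationships>
  <ejb-relation>
    <ejb-relation-name>Order-OrderLine</ejb-relation-name>
    <ejb-relationship-role>
      <ejb-relationship-role-name>OrderHasOrderLines</ejb-relationship-role-name>
      <multiplicity>One</multiplicity>
      <relationship-role-source><ejb-name>OrderBean</ejb-name></relationship-role-source>
      <!-- Collection-valued side: one Order has many OrderLines -->
      <cmr-field>
        <cmr-field-name>orderLines</cmr-field-name>
        <cmr-field-type>java.util.Collection</cmr-field-type>
      </cmr-field>
    </ejb-relationship-role>
    <ejb-relationship-role>
      <ejb-relationship-role-name>OrderLineInOrder</ejb-relationship-role-name>
      <multiplicity>Many</multiplicity>
      <relationship-role-source><ejb-name>OrderLineBean</ejb-name></relationship-role-source>
      <cmr-field>
        <cmr-field-name>order</cmr-field-name>
      </cmr-field>
    </ejb-relationship-role>
  </ejb-relation>
</relationships>
```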
When a particular Order is loaded, you can load its related OrderLines by adding this to the sun-cmp-mapping.xml file for the application:
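The sun-cmp-mapping.xml snippet was not captured here. Prefetching is typically enabled on the relationship field's mapping with a fetched-with element, roughly as follows; the column names match the example above, but the exact element placement is a sketch to confirm against the sun-cmp-mapping DTD:

```xml
<cmr-field-mapping>
  <cmr-field-name>orderLines</cmr-field-name>
  <column-pair>
    <column-name>OrderTable.OrderID</column-name>
    <column-name>OrderLineTable.OrderLine_OrderID</column-name>
  </column-pair>
  <!-- default means: pre-fetch the related beans when the Order is loaded -->
  <fetched-with>
    <default/>
  </fetched-with>
</cmr-field-mapping>
```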
Now when an Order is retrieved, the CMP engine issues SQL to retrieve all related OrderLines with a SELECT statement whose WHERE clause includes:

OrderTable.OrderID = OrderLineTable.OrderLine_OrderID

This clause indicates an outer join. These OrderLines are pre-fetched.
Pre-fetching generally improves performance because it reduces the number of database accesses. However, if the business logic often uses Orders without referencing their OrderLines, pre-fetching carries a performance penalty: the system spends effort pre-fetching OrderLines that are never actually needed. You can avoid that penalty by disabling pre-fetching for specific finder methods. For example, consider an order bean with two finder methods: a findByPrimaryKey method that uses the orderlines, and a findByCustomerId method that returns only order information and hence doesn't use the orderlines. If you've enabled CMR pre-fetching for the orderlines, both finder methods will pre-fetch the orderlines. However, you can prevent pre-fetching for the findByCustomerId method by including this information in the sun-ejb-jar.xml descriptor:
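The descriptor snippet was not captured above. The relevant sun-ejb-jar.xml elements look roughly like this (a sketch — verify the element names against your version's DTD):

```xml
<ejb>
  <ejb-name>OrderBean</ejb-name>
  <cmp>
    <prefetch-disabled>
      <!-- Skip CMR pre-fetching for this finder method only -->
      <query-method>
        <method-name>findByCustomerId</method-name>
      </query-method>
    </prefetch-disabled>
  </cmp>
</ejb>
```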
How do I use Version Consistency?
Version consistency is a mechanism that can be used to protect the integrity of data in the database. To review: application servers can use multiple copies of the same EJB at the same time, potentially corrupting the state of that bean if its data is not protected from simultaneous access. In previous versions of the application server, this was accomplished by using a consistency level of lock-when-loaded, which locks the database row associated with a particular bean so that the bean cannot be accessed by two simultaneous transactions. This protects data but slows down access, since all access to the EJB is now effectively serialized.
Version consistency is another approach to protecting EJB data integrity. To use version consistency, you specify a column in the database to use as a version number. The EJB lifecycle then proceeds like this:
Version consistency is advantageous when you have EJB components that are rarely modified, because it allows two transactions to use the same EJB component at the same time. Because neither transaction modifies the data, the version number is unchanged at the end of both transactions, and both succeed. But now the transactions can run in parallel. If two transactions occasionally modify the same EJB component, one will succeed and one will fail and can be retried using the new values—which can still be faster than serializing all access to the EJB component if the retries are infrequent enough (though now your application logic has to be prepared to perform the retry operation).
To use version consistency, the database schema for a particular table must include a column where the version can be stored. You then specify that table in the sun-cmp-mapping.xml deployment descriptor for a particular bean:
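The sun-cmp-mapping.xml snippet was not captured above. The version column is typically identified within the bean's entity mapping, roughly as follows (table, bean, and column names follow the Order example; verify the element names against the sun-cmp-mapping DTD):

```xml
<entity-mapping>
  <ejb-name>OrderBean</ejb-name>
  <table-name>OrderTable</table-name>
  <!-- cmp-field-mapping elements omitted for brevity -->
  <consistency>
    <check-version-of-accessed-instances>
      <column-name>OrderTable.VC_VERSION_NUMBER</column-name>
    </check-version-of-accessed-instances>
  </consistency>
</entity-mapping>
```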
In addition, you must establish a trigger on the database to automatically update the version column when data in the specified table is modified. The Application Server requires such a trigger to use version consistency. Having such a trigger also ensures that external applications that modify the EJB data will not conflict with EJB transactions in progress.
For example, the following DDL illustrates how to create a trigger for the Order table:
CREATE TRIGGER OrderTrigger
BEFORE UPDATE ON OrderTable
FOR EACH ROW
WHEN (new.VC_VERSION_NUMBER = old.VC_VERSION_NUMBER)
BEGIN
    :NEW.VC_VERSION_NUMBER := :OLD.VC_VERSION_NUMBER + 1;
END;
Request partitioning enables you to assign a request priority to an EJB component. This gives you the flexibility to make certain EJB components execute with higher priorities than others. An EJB component which has a request priority assigned to it will have its requests (services) executed within an assigned threadpool. By assigning a threadpool to its execution, the EJB component can execute independently of other pending requests. In short, request partitioning enables you to meet service-level agreements that have differing levels of priority assigned to different services. Request partitioning applies only to remote EJB components (those that implement a remote interface). Local EJB components are executed in their calling thread (for example, when a servlet calls a local bean, the local bean invocation occurs on the servlet’s thread).
To enable request partitioning, create a thread pool for each priority level and assign the EJB component to the appropriate pool in its deployment descriptor.
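Request partitioning is configured in the deployment descriptor. As a sketch (the pool name is a placeholder, the pool itself must be defined separately in the server configuration, and the element name should be verified against the sun-ejb-jar DTD), a remote bean is bound to a dedicated pool in sun-ejb-jar.xml:

```xml
<ejb>
  <ejb-name>HighPriorityBean</ejb-name>
  <!-- Requests for this remote bean execute in the named thread pool,
       independently of requests queued for other beans -->
  <use-thread-pool-id>priority-pool-1</use-thread-pool-id>
</ejb>
```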
Yes! Add the following properties to the definition of the database connection pool:
<jdbc-connection-pool datasource-classname="oracle.jdbc.pool.OracleDataSource" ...>
    <property name="ImplicitCachingEnabled" value="true"/>
    <property name="MaxStatements" value="200"/>
</jdbc-connection-pool>
If you are using Solaris 8, use the mtmalloc library that provides a collection of malloc routines for concurrent access to heap space. To use mtmalloc:
The keep-alive (or HTTP/1.1 persistent connection handling) subsystem of the Sun Java System Application Server is designed to be massively scalable. However, the out-of-the-box configuration can be less than optimal if your clients are non-persistent (i.e., the clients send HTTP/1.0 requests without a KeepAlive header, or you serve lots of dynamic content without setting content-length headers). The default tunings are also not appropriate for a lightly loaded system primarily servicing keep-alive connections.
There are several tuning parameters that can help improve performance in these situations. Since HTTP/1.0 results in a large number of new incoming connections, the default of one acceptor thread per listen socket will be sub-optimal. Increasing this to a higher number should improve performance for HTTP/1.0-style workloads; you may want to try increasing it as high as the number of CPUs on your server. You can also change the Thread Count parameter, which specifies the maximum number of simultaneous requests the server can handle. Increasing this value will reduce HTTP response latency times. If your site is processing many requests that take many seconds, you might need to increase the number of maximum simultaneous requests.
Adjust the thread count value based on your load and the length of time for an average request. In general, increase this number if you have idle CPU time and requests that are pending; decrease it if the CPU becomes overloaded. If you have many HTTP 1.0 clients (or HTTP 1.1 clients that disconnect frequently), adjust the timeout value to reduce the time a connection is kept open.
Suitable Request Thread Count values range from 100 to 500, depending on the load. If your system has extra CPU cycles, keep incrementally increasing thread count and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing thread count.
You should also set the MaxConnections parameter equal to the number of connections you expect to sustain; this parameter controls the maximum number of keep-alive connections the server maintains. Adjust this setting based on the number of keep-alive connections the server is expected to service and the server's load, because maintaining these connections adds to resource utilization and might increase latency. The number of connections specified by MaxConnections is divided equally among the keep-alive threads. If MaxConnections is not evenly divisible by Thread Count, the server can allow slightly more than MaxConnections simultaneous keep-alive connections.
Can I increase the throughput of my web applications?
For HTTP/1.1 connections, there is a tradeoff between throughput and latency when tuning the server's persistent connection handling. The KeepAliveQueryMeanTime directive controls latency. Lowering KeepAliveQueryMeanTime is intended to lower latency on lightly loaded systems (e.g., reduce page load times). Raising KeepAliveQueryMeanTime is intended to raise the aggregate throughput on heavily loaded systems (e.g., increase the number of requests per second the server can handle). However, if there's too much latency and too few clients, aggregate throughput will suffer as the server sits idle unnecessarily. As a result, the general keep-alive subsystem tuning rules at a particular load are:
if there's idle CPU time, decrease KeepAliveQueryMeanTime
if there's no idle CPU time, increase KeepAliveQueryMeanTime
I have a lot of JSPs that never change. Should they be handled differently?
By default, the application server will periodically check to see if your JSPs have been modified and dynamically reload them; this allows you to deploy modifications without restarting the server. However, there is a small performance penalty for that checking. If you don't need it, then you can disable dynamic JSP reloading by editing the default-web.xml file in the config directory for each instance. Change the servlet definition for a JSP to look like this:
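The modified servlet definition was not captured above. The following sketch shows the Jasper init parameter commonly used for this purpose; the servlet name and class match the usual default-web.xml entry, but verify the parameter set against your own file:

```xml
<servlet>
  <servlet-name>jsp</servlet-name>
  <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
  <!-- development=false disables checking JSPs for modification,
       so changed JSPs are no longer reloaded automatically -->
  <init-param>
    <param-name>development</param-name>
    <param-value>false</param-value>
  </init-param>
  <load-on-startup>3</load-on-startup>
</servlet>
```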
Yes. If you spend a lot of time re-running the same servlet/JSP, you can cache its results and return results out of the cache the next time it is run. This is useful, for example, for common queries that all visitors to your site run: you want the results of the query to be dynamic because it might change day to day, but you don't need to run the logic for every user.
To enable caching, you turn on caching parameters in the sun-web.xml file for your application. See http://docs.sun.com/app/docs/doc/819-2556/6n4rap8qn?a=view#beagm for more details.
My JSPs send a large (greater than 8K) amount of data back to clients. Are there problems with that?
For HTTP/1.1 clients, sending large amounts of data requires a change to the useOutputStreamSize parameter in the obj.conf file of the application server. By default, this is set to 8K; responses greater than 8K will be sent in separate chunks. If you do a lot of big transfers, increase this number to decrease the amount of chunking the server must perform.
Can I improve SSL performance?
If you are using Solaris 8, optimize SSL by using the mtmalloc library that provides a collection of malloc routines for concurrent access to heap space. To use mtmalloc:
The exact syntax to define an environment variable depends on the shell you use.
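For example, under a Bourne-style shell the library can be preloaded before starting the server instance. The library path below is the standard Solaris 8 location, and the start script name is illustrative:

```sh
# Preload the multithreaded malloc library (assumed path on Solaris 8)
LD_PRELOAD=/usr/lib/libmtmalloc.so
export LD_PRELOAD
# Then start the server instance (script name is an assumption):
./startserv
```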