FAQ ABOUT SUN ONE[tm] APPLICATION SERVER PERFORMANCE

 
 

This FAQ answers common performance-related questions about the Sun Java System Application Server. The questions and answers are divided into the following topics:

Tuning the JVM for best performance

Getting the best performance for EJBs

Getting the best performance for servlets and JSPs

This document is not intended to be a replacement for the Sun Java System Application Server Performance Tuning Guide. That guide gives extensive performance tuning information that covers both the application server and your own applications. This FAQ addresses only some of the most common tuning and performance-related questions we get about the Sun Java System application server, as well as some late-breaking tips that couldn't make it into that guide.

Tuning the JVM

How do I use the -server option for the Java Virtual Machine?

What's a good way to size the heap?

I'm seeing a lot of Full garbage collections. What's going on?

Getting the best performance for EJBs

Should I turn off the security manager to improve performance?

Can I do anything to help the performance of EJBs?

I'm using an Oracle database and Oracle's JDBC drivers. Can I do anything to speed up JDBC performance?

I'm using a Type 2 JDBC driver on a multiprocessor system. Can I do anything to speed up JDBC performance?

Getting the best performance for servlets and JSPs

I'm having problems accepting lots of connections. What can I do?

Can I increase the throughput of my web applications?

My site runs the same servlet or JSP for every user. Can I speed this up?

I use a lot of JSPs that never change. Should they be handled differently?

My JSPs send a large (greater than 8K) amount of data back to clients. Are there problems with that?

Can I improve SSL performance?

How do I use the -server option for the Java Virtual Machine?

J2SE 5.0 provides two implementations of the HotSpot Java virtual machine (JVM):

  • The client VM is tuned for reducing start-up time and memory footprint. Invoke it by using the -client JVM command-line option.
  • The server VM is designed for maximum program execution speed. Invoke it by using the -server JVM command-line option.

By default, the Application Server uses the JVM setting appropriate to the purpose:

  • Platform Edition, targeted at application developers, uses the -client JVM flag to optimize startup performance and conserve memory resources.
  • Enterprise Edition, targeted at production deployments, uses the default JVM startup mode. With J2SE 5.0, the HotSpot VM provides server-class machine detection, which will use the server VM if it detects “server-class” hardware (at least two
    CPUs and two GB of physical memory).

You can override the default by changing the JVM settings in the Admin Console under Configurations > config-name > JVM Settings (JVM Options).
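If you prefer to edit the configuration file directly, the same flag appears as a <jvm-options> entry under the <java-config> element in domain.xml. The following fragment is only a minimal sketch (other attributes and options omitted); verify the element names against the domain.xml for your release before editing:

<java-config ...>
    ...
    <jvm-options>-server</jvm-options>
</java-config>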

What's a good way to size the heap?





To size the Java heap:
  • Decide the total amount of memory you can afford to give the JVM, then graph your own performance metric against young generation sizes to find the best setting.
  • Make plenty of memory available to the young generation. The default is calculated from NewRatio and the -Xmx setting.
  • Larger eden or young generation spaces increase the spacing between full GCs, but young space collections can take proportionally longer. In general, keep the eden size between one fourth and one third of the maximum heap size. The old generation must be larger than the new generation.

I'm seeing a lot of Full garbage collections (about one every minute). What's going on?

The Application Server uses RMI in the Administration module for monitoring. Garbage cannot be collected in RMI-based distributed applications without occasional local collections, so RMI forces a periodic full collection. Control the frequency of these collections with the sun.rmi.dgc.client.gcInterval system property. For example, java -Dsun.rmi.dgc.client.gcInterval=3600000 specifies explicit collection once per hour instead of the default rate of once per minute. Alternately, you can disable these full collections altogether by specifying this option: -XX:+DisableExplicitGC
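In the Application Server you would normally add the property as a JVM option rather than on a java command line, for example under JVM Settings (JVM Options) or as a <jvm-options> entry in domain.xml. A sketch (the companion server-side property, sun.rmi.dgc.server.gcInterval, can be set the same way):

<jvm-options>-Dsun.rmi.dgc.client.gcInterval=3600000</jvm-options>
<jvm-options>-Dsun.rmi.dgc.server.gcInterval=3600000</jvm-options>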



Should I turn off the security manager to improve performance?

The security manager is expensive because access to protected resources must go through the doPrivileged() method and must also be checked against the server.policy file. If you are sure that no malicious code will be run on the server and you do not use authentication within your application, you can disable the security manager.

To disable use of the server.policy file, use the Admin Console. Under
Configurations > config-name > JVM Settings (JVM Options) delete the option that
looks like this:
-Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
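If your version of the asadmin tool includes the delete-jvm-options subcommand (check the asadmin help to confirm), you can remove the entry from the command line instead; quote the value so the shell does not try to expand ${...}:

asadmin delete-jvm-options '-Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy'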

Can I do anything to help the performance of EJBs?

There are four new EJB features in the 8.1 release that can improve performance: read-only EJBs, prefetching of Container Managed Relationship (CMR) beans, version consistency, and the ability to run remote EJBs in different thread pools (request partitioning).

How do I use Read-Only EJBs?

Read-only beans allow you to cache data from the database. In the EJB lifecycle, the EJB container calls the ejbLoad() method of a read-only bean only once. The container makes multiple copies of the EJB component from that data, and because the beans never update the database, the container never calls the ejbStore() method. This greatly reduces database traffic for these beans.

If there is a bean that never updates the database, use a read-only bean in its place to improve performance.
A read-only bean is appropriate if either:

  • Database rows represented by the bean do not change.
  • The application can tolerate using out-of-date values for the bean.

To create a read-only bean, add the following to the EJB deployment descriptor sun-ejb-jar.xml:
<is-read-only-bean>true</is-read-only-bean>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
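For context, these elements go inside the <ejb> entry for the bean in sun-ejb-jar.xml. A minimal sketch with a hypothetical bean name (verify the element placement against the sun-ejb-jar.xml DTD for your release):

<sun-ejb-jar>
    <enterprise-beans>
        <ejb>
            <ejb-name>ProductCatalogBean</ejb-name>
            <is-read-only-bean>true</is-read-only-bean>
            <refresh-period-in-seconds>600</refresh-period-in-seconds>
        </ejb>
    </enterprise-beans>
</sun-ejb-jar>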



How do I use CMR prefetching?

If a container-managed relationship (CMR) exists in your application, loading one bean will load all its related beans. The canonical example of CMR is an Order-OrderLine relationship, where you have one Order EJB component that has N related OrderLine EJB components. In previous releases of the application server, using all those beans required multiple database queries: one for the Order bean and one for each of the OrderLine beans in the relationship. In general, if a bean has n relationships, using all the data of the bean requires n+1 database accesses. Use CMR prefetching to retrieve all the data for the bean and all its related beans in one database access.

For example, you have this relationship defined in the ejb-jar.xml file:

<relationships>
<ejb-relation>
<description>Order-OrderLine</description>
<ejb-relation-name>Order-OrderLine</ejb-relation-name>
<ejb-relationship-role>
<ejb-relationship-role-name>
Order-has-N-OrderLines
</ejb-relationship-role-name>
<multiplicity>One</multiplicity>
<relationship-role-source>
<ejb-name>OrderEJB</ejb-name>
</relationship-role-source>
<cmr-field>
<cmr-field-name>orderLines</cmr-field-name>
<cmr-field-type>java.util.Collection</cmr-field-type>
</cmr-field>
</ejb-relationship-role>
</ejb-relation>
</relationships>

When a particular Order is loaded, you can load its related OrderLines by adding this to the sun-cmp-mapping.xml file for the application:

<entity-mapping>
<ejb-name>Order</ejb-name>
<table-name>...</table-name>
<cmp-field-mapping>...</cmp-field-mapping>
<cmr-field-mapping>
<cmr-field-name>orderLines</cmr-field-name>
<column-pair>
<column-name>OrderTable.OrderID</column-name>
<column-name>OrderLineTable.OrderLine_OrderID</column-name>
</column-pair>
<fetched-with>
<default/>
</fetched-with>
</cmr-field-mapping>
</entity-mapping>

Now when an Order is retrieved, the CMP engine issues SQL to retrieve all related OrderLines with a SELECT statement whose WHERE clause is OrderTable.OrderID = OrderLineTable.OrderLine_OrderID. This clause indicates an outer join, and the related OrderLines are pre-fetched.

Pre-fetching generally improves performance because it reduces the number of database accesses. However, if the business logic often uses Orders without referencing their OrderLines, pre-fetching carries a penalty: the system spends effort fetching OrderLines that are never needed. You can often avoid that penalty by disabling pre-fetching for specific finder methods. For example, consider an Order bean with two finder methods: a findByPrimaryKey method that uses the OrderLines, and a findByCustomerId method that returns only order information and hence doesn’t use the OrderLines. If you’ve enabled CMR pre-fetching for the OrderLines, both finder methods will pre-fetch them. However, you can prevent pre-fetching for the findByCustomerId method by including this information in the sun-ejb-jar.xml descriptor:

<ejb>
<ejb-name>OrderBean</ejb-name>
...
<cmp>
<prefetch-disabled>
<query-method>
<method-name>findByCustomerId</method-name>
</query-method>
</prefetch-disabled>
</cmp>
</ejb>

How do I use Version Consistency?

Version consistency is a mechanism that can be used to protect the integrity of data in the database. To review: application servers can use multiple copies of the same EJB component at the same time, and can potentially corrupt the state of that bean if its data is not protected from simultaneous access. In previous versions of the application server, this was accomplished by using a consistency level of lock-when-loaded, which locks the database row associated with a particular bean so that the bean cannot be accessed by two simultaneous transactions. This protects the data but slows down access, since all access to the EJB component is effectively serialized.

Version consistency is another approach to protecting EJB data integrity. To use version consistency, you specify a column in the database to use as a version number. The EJB lifecycle then proceeds like this:

  • The first time the bean is used, the ejbLoad() method loads the bean as normal, including loading the version number from the database.
  • The ejbStore() method checks the version number in the database versus its value when the EJB component was loaded.
  • If the version number has been modified, it means that there has been simultaneous access to the EJB component and ejbStore() throws a ConcurrentModificationException.
  • Otherwise, ejbStore() stores the data and completes as normal.

Version consistency is advantageous when you have EJB components that are rarely modified, because it allows two transactions to use the same EJB component at the same time. Because neither transaction modifies the data, the version number is unchanged at the end of both transactions, and both succeed. But now the transactions can run in parallel. If two transactions occasionally modify the same EJB component, one will succeed and one will fail and can be retried using the new values—which can still be faster than serializing all access to the EJB component if the retries are infrequent enough (though now your application logic has to be prepared to perform the retry operation).

To use version consistency, the database schema for a particular table must include a column where the version can be stored. You then specify that table in the sun-cmp-mapping.xml deployment descriptor for a particular bean:

<entity-mapping>
<cmp-field-mapping>
...
</cmp-field-mapping>
<consistency>
<check-version-of-accessed-instances>
<column-name>OrderTable.VC_VERSION_NUMBER</column-name>
</check-version-of-accessed-instances>
</consistency>
</entity-mapping>

In addition, you must establish a trigger on the database to automatically update the version column when data in the specified table is modified. The Application Server requires such a trigger to use version consistency. Having such a trigger also ensures that external applications that modify the EJB data will not conflict with EJB transactions in progress.

For example, the following DDL illustrates how to create a trigger for the Order table:

CREATE TRIGGER OrderTrigger
BEFORE UPDATE ON OrderTable
FOR EACH ROW
WHEN (new.VC_VERSION_NUMBER = old.VC_VERSION_NUMBER)
DECLARE
BEGIN
:NEW.VC_VERSION_NUMBER := :OLD.VC_VERSION_NUMBER + 1;
END;

What is Request Partitioning ?

Request partitioning enables you to assign a request priority to an EJB component. This gives you the flexibility to make certain EJB components execute with higher priorities than others. An EJB component which has a request priority assigned to it will have its requests (services) executed within an assigned threadpool. By assigning a threadpool to its execution, the EJB component can execute independently of other pending requests. In short, request partitioning enables you to meet service-level agreements that have differing levels of priority assigned to different services. Request partitioning applies only to remote EJB components (those that implement a remote interface). Local EJB components are executed in their calling thread (for example, when a servlet calls a local bean, the local bean invocation occurs on the servlet’s thread).

To enable request partitioning:

1. Configure additional threadpools for EJB execution using the Admin Console.
2. Add the additional threadpool IDs to the Application Server’s ORB.
You can do this by editing the domain.xml file or through the Admin Console.

For example, add thread pools named priority-1 and priority-2 to the <orb> element as follows:

<orb max-connections="1024" message-fragment-size="1024"
use-thread-pool-ids="thread-pool-1,priority-1,priority-2">

3. Include the threadpool ID in the use-thread-pool-id element of the EJB component’s sun-ejb-jar.xml deployment descriptor.

For example, the following sun-ejb-jar.xml deployment descriptor assigns an EJB component named “TheGreeter” to a thread pool named priority-1:

<sun-ejb-jar>
    <enterprise-beans>
         <unique-id>1</unique-id>
         <ejb>
            <ejb-name>TheGreeter</ejb-name>
            <jndi-name>greeter</jndi-name>
            <use-thread-pool-id>priority-1</use-thread-pool-id>
        </ejb>
   </enterprise-beans>
</sun-ejb-jar>

4. Restart the Application Server.
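Putting steps 1 and 2 together, the relevant domain.xml fragments might look like the sketch below. The attribute names follow the 8.x domain.xml format and the pool sizes are illustrative assumptions only; size the pools for your own load:

<thread-pools>
    <thread-pool thread-pool-id="thread-pool-1" min-thread-pool-size="0"
        max-thread-pool-size="200" idle-thread-timeout-in-seconds="120" num-work-queues="1"/>
    <thread-pool thread-pool-id="priority-1" min-thread-pool-size="10"
        max-thread-pool-size="100" idle-thread-timeout-in-seconds="120" num-work-queues="1"/>
    <thread-pool thread-pool-id="priority-2" min-thread-pool-size="10"
        max-thread-pool-size="100" idle-thread-timeout-in-seconds="120" num-work-queues="1"/>
</thread-pools>

<orb max-connections="1024" message-fragment-size="1024"
    use-thread-pool-ids="thread-pool-1,priority-1,priority-2">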

I'm using an Oracle database and Oracle's JDBC drivers. Can I do anything to speed up JDBC performance?

Yes! Add the following properties to the definition of the database connection pool:

<jdbc-connection-pool datasource-classname="oracle.jdbc.pool.OracleDataSource" ...>
<property name="ImplicitCachingEnabled" value="true"/>
<property name="MaxStatements" value="200"/>
</jdbc-connection-pool>

I'm using a Type 2 JDBC driver on a multiprocessor system. Can I do anything to speed up JDBC performance?

If you are using Solaris 8, use the mtmalloc library that provides a collection of malloc routines for concurrent access to heap space.  To use mtmalloc:

  • Get patch 111308-03 from SunSolve Online and install it.
  • Edit the startserv script located in bin/startserv for your domain, and define the LD_PRELOAD environment variable to be /usr/lib/libmtmalloc.so.

I'm having problems accepting lots of connections. What can I do?

The keep-alive (HTTP/1.1 persistent connection handling) subsystem of the Sun Java System Application Server is designed to be massively scalable. However, the out-of-the-box configuration can be less than optimal if your clients are non-persistent (i.e., the clients send HTTP/1.0 requests without a Keep-Alive header, or you serve lots of dynamic content without setting content-length headers). The default tunings are also not appropriate for a lightly loaded system primarily servicing keep-alive connections.

There are several tuning parameters that can help improve performance in these situations. Since HTTP/1.0 results in a large number of new incoming connections, the default of one acceptor thread per listen socket will be sub-optimal. Increasing this should improve performance for HTTP/1.0-style workloads; you may want to try increasing it as high as the number of CPUs on your server. You can also change the Thread Count parameter, which specifies the maximum number of simultaneous requests the server can handle. Increasing this value will reduce HTTP response latency. If your site processes many requests that each take several seconds, you might need to increase the maximum number of simultaneous requests.

Adjust the thread count value based on your load and the length of time for an average request. In general, increase this number if you have idle CPU time and requests that are pending; decrease it if the CPU becomes overloaded. If you have many HTTP 1.0 clients (or HTTP 1.1 clients that disconnect frequently), adjust the timeout value to reduce the time a connection is kept open.
Suitable Request Thread Count values range from 100 to 500, depending on the load. If your system has extra CPU cycles, keep incrementally increasing thread count and monitor performance after each incremental increase. When performance saturates (stops improving), then stop increasing thread count.

You should also set the Max Connections parameter to the number of keep-alive connections you expect to sustain; this parameter controls the maximum number of keep-alive connections the server maintains. Adjust this setting based on the number of keep-alive connections the server is expected to service and on the server's load, because keeping many connections open consumes resources and can increase latency. The number of connections specified by Max Connections is divided equally among the keep-alive threads. If Max Connections is not equally divisible by Thread Count, the server can allow slightly more than Max Connections simultaneous keep-alive connections.
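These settings correspond to the acceptor-threads attribute on each <http-listener> and to the <request-processing> and <keep-alive> elements under <http-service> in domain.xml (or to the HTTP Service page in the Admin Console). The fragment below is a sketch only; the element and attribute names follow the 8.x domain.xml format, and the values are illustrative rather than recommendations:

<http-service>
    ...
    <http-listener id="http-listener-1" port="8080" acceptor-threads="4" ... />
    <request-processing thread-count="128" initial-thread-count="48" request-timeout-in-seconds="30"/>
    <keep-alive max-connections="500" thread-count="1" timeout-in-seconds="30"/>
    ...
</http-service>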

Can I increase the throughput of my web applications?

For HTTP/1.1 connections, there is a tradeoff between throughput and latency when tuning the server's persistent connection handling. The KeepAliveQueryMeanTime directive controls latency. Lowering KeepAliveQueryMeanTime is intended to lower latency on lightly loaded systems (e.g., reduce page load times). Raising KeepAliveQueryMeanTime is intended to raise aggregate throughput on heavily loaded systems (e.g., increase the number of requests per second the server can handle). However, if there's too much latency and too few clients, aggregate throughput will suffer as the server sits idle unnecessarily. As a result, the general keep-alive subsystem tuning rules at a particular load are:

  • if there's idle CPU time, decrease KeepAliveQueryMeanTime

  • if there's no idle CPU time, increase KeepAliveQueryMeanTime


I use a lot of JSPs that never change. Should they be handled differently?

By default, the application server will periodically check to see if your JSPs have been modified and dynamically reload them; this allows you to deploy modifications without restarting the server. However, there is a small performance penalty for that checking. If you don't need it, then you can disable dynamic JSP reloading by editing the default-web.xml file in the config directory for each instance. Change the servlet definition for a JSP to look like this:

<servlet>
<servlet-name>jsp</servlet-name>
<servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
...
<load-on-startup>3</load-on-startup>
</servlet>


My site runs the same servlet or JSP for every user. Can I speed this up?

Yes. If you spend a lot of time re-running the same servlet/JSP, you can cache its results and return results out of the cache the next time it is run. This is useful, for example, for common queries that all visitors to your site run: you want the results of the query to be dynamic because it might change day to day, but you don't need to run the logic for every user.

To enable caching, you turn on caching parameters in the sun-web.xml file for your application. See http://docs.sun.com/app/docs/doc/819-2556/6n4rap8qn?a=view#beagm  for more details.
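As an illustrative sketch only (the servlet name is hypothetical, and the element and attribute names should be confirmed against the caching section of the documentation linked above), a sun-web.xml caching entry might look like this:

<sun-web-app>
    <cache max-entries="4096" timeout-in-seconds="60" enabled="true">
        <cache-mapping>
            <servlet-name>FrontPageServlet</servlet-name>
        </cache-mapping>
    </cache>
</sun-web-app>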

My JSPs send a large (greater than 8K) amount of data back to clients. Are there problems with that?

For HTTP/1.1 clients, sending large amounts of data requires a change to the useOutputStreamSize parameter in the obj.conf file of the application server. By default, this is set to 8K; amounts greater than 8K will be sent in separate chunks. If you do a lot of big transfers, increase this number to decrease the amount of chunking the server must perform.

Can I improve SSL performance?

If you are using Solaris 8, optimize SSL by using the mtmalloc library that provides a collection of malloc routines for concurrent access to heap space. To use mtmalloc:

  • Get patch 111308-03 from SunSolve Online and install it.
  • Edit the startserv script located in bin/startserv for your domain, and define the LD_PRELOAD environment variable to be /usr/lib/libmtmalloc.so.

The exact syntax to define an environment variable depends on the shell you use.
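For example, in a Bourne-style shell (adjust to the syntax your startserv script actually uses), the variable can be set and exported near the top of the script:

LD_PRELOAD=/usr/lib/libmtmalloc.so
export LD_PRELOAD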
