Tuning Garbage Collection
The graph below models an ideal system that is perfectly scalable with the exception of GC. The top line (red) is an application spending only 1% of the time in GC on a uniprocessor; this translates into more than 20% loss in throughput at 32 processors. At 10%, not considered an outrageous amount of time in GC in uniprocessor applications, more than 75% of throughput is lost when scaling up.
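These figures follow from Amdahl's law: if a fraction f of the work is serial (here, time spent in GC), the attainable speedup on N processors is 1/(f + (1-f)/N), and throughput relative to the ideal is that speedup divided by N. A quick sketch of the arithmetic behind the graph, using awk for the computation:

```shell
# Throughput retained on N processors when a fraction f of the work (GC) is
# serial, per Amdahl's law: speedup = 1/(f + (1-f)/N); throughput = speedup/N.
for f in 0.01 0.10; do
  awk -v f="$f" -v n=32 'BEGIN {
    s = 1 / (f + (1 - f) / n)    # attainable speedup on n processors
    printf "GC %2.0f%%: %2.0f%% of ideal throughput on %d processors\n",
           f * 100, 100 * s / n, n
  }'
done
```

At 32 processors this yields roughly 76% of ideal throughput for 1% GC and roughly 24% for 10% GC, matching the "more than 20% loss" and "more than 75% loss" figures above.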
This demonstrates that issues that appear lost in the noise when developing on small systems may become principal bottlenecks when scaling up. The silver lining is that small improvements in such a bottleneck can produce large gains in performance. For a sufficiently large system it becomes well worthwhile to tune garbage collection.
This document is written from the perspective of 1.3.1 JVM on the Solaris (SPARC Platform Edition) operating environment, because that platform provides the most scalable hardware/software Java 2 platform today. However, the descriptive text applies to other supported platforms, including Linux, Microsoft Windows, and the Solaris (Intel Architecture) operating environment, to the extent that scalable hardware is available. Although command line options are consistent across platforms, some platforms may have different defaults than described here.
Some objects do live longer, and so the distribution stretches out to the right. For instance, there are typically some objects allocated at initialization that live until the process exits. Between these two extremes are objects that live for the duration of some intermediate computation, seen here as the lump to the right of the infant mortality peak. Some applications have very different looking distributions, but a surprisingly large number possess this general shape. Efficient collection is made possible by focusing on the fact that a majority of objects die young.
To do this, memory is managed in generations: memory pools holding objects of different ages. Garbage collection occurs in each generation when it fills up; these collections are represented on the diagram above with vertical bars. Objects are allocated in eden, and because of infant mortality most objects die there. When eden fills up it causes a minor collection, in which some surviving objects are moved to an older generation. When older generations need to be collected there is a major collection that is often much slower because it involves all living objects.
The diagram shows a well-tuned system in which most objects die before they survive to the first garbage collection. The longer an object survives, the more collections it will endure and the slower GC becomes. By arranging for most objects to survive less than one collection, garbage collection can be very efficient. This happy situation can be upset by applications with unusual lifetime distributions, or by poorly sized generations that cause collections to be too frequent.
The default garbage collection parameters were designed to be effective for most small applications. They aren't optimal for many server applications. This leads to the central tenet of this document:
If GC has become a bottleneck, you may wish to customize the generation sizes. Check the verbose GC output, and then explore the sensitivity of your individual performance metric to the GC parameters.
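The simplest way to get that output is the -verbose:gc flag. A minimal sketch of an invocation (MyApp is a hypothetical main class, and the heap sizes are illustrative); the bracketed line shows the general report format, with heap occupancy before and after the collection, committed heap size, and pause time:

```shell
# -verbose:gc prints one line per collection; MyApp is a hypothetical main class.
java -verbose:gc -Xms256m -Xmx256m MyApp
# Illustrative output line: [GC 325407K->83000K(776768K), 0.2300771 secs]
```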
At initialization, a maximum address space is virtually reserved but not allocated to physical memory unless it is needed. The complete address space reserved for object memory can be divided into the young and old generations.
The young generation consists of eden plus two survivor spaces. Objects are initially allocated in eden. One survivor space is empty at any time, and serves as the destination of the next copying collection of any living objects in eden and the other survivor space. Objects are copied between survivor spaces in this way until they age enough to be tenured (copied to the old generation).
(Other virtual machines, including the production JVM version 1.2 for the Solaris operating environment, used two equally sized spaces for copying rather than one large eden plus two small spaces. This means the options for sizing the young generation are not directly comparable; see the Performance FAQ for an example.)
The old generation is collected in place by mark-compact. One portion called the permanent generation is special because it holds all the reflective data of the JVM itself, such as class and method objects.
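Applications that load many classes can fill the permanent generation itself; it can be sized independently of the rest of the old generation. A sketch, assuming the usual -XX option name (MyApp is a hypothetical main class, and -XX options are implementation-specific and may vary by release):

```shell
# Raise the permanent generation ceiling for a class-heavy application.
java -XX:MaxPermSize=64m MyApp
```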
Unless you have problems with pauses, try granting as much memory as possible to the JVM. The default size (64MB) is often too small.
Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the JVM. On the other hand, the JVM can't compensate if you make a poor choice.
Be sure to increase the memory as you increase the number of processors, since allocation can be parallelized, but GC is not parallel.
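Following the advice above, a server invocation might pin the total heap to a single value by setting the initial and maximum sizes equal. A sketch with an illustrative size, not a recommendation (MyApp is a hypothetical main class):

```shell
# Fix the heap at 512MB: setting -Xms (initial) equal to -Xmx (maximum)
# removes the JVM's grow/shrink decisions from the picture.
java -Xms512m -Xmx512m MyApp
```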
First decide the total amount of memory you can afford to give the JVM. Then graph your own performance metric against young generation sizes to find the best setting.
Unless you find problems with excessive major collection or pause times, grant plenty of memory to the young generation. The default MaxNewSize (32MB) is generally too small.
Increasing the young generation becomes counterproductive at half the total heap or less.
Be sure to increase the young generation as you increase the number of processors, since allocation can be parallelized, but GC is not parallel.
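The young generation bounds are set separately from the total heap. A hedged sketch using the -XX options of this JVM generation (all values illustrative, MyApp a hypothetical main class): with -XX:SurvivorRatio=6, eden is six times the size of one survivor space, so each survivor space occupies one eighth of the young generation.

```shell
# Fix the young generation at 128MB inside a 512MB heap, with each survivor
# space sized at 1/8 of the young generation. Values are illustrative.
java -Xms512m -Xmx512m \
     -XX:NewSize=128m -XX:MaxNewSize=128m \
     -XX:SurvivorRatio=6 MyApp
```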
For example, java -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 specifies explicit collection once per hour instead of the default rate of once per minute. However, this may also cause some objects to take much longer to be reclaimed. These properties can be set as high as Long.MAX_VALUE to make the time between explicit collections effectively infinite, if there is no desire for an upper bound on the timeliness of DGC activity.
As used on the web site, the terms "Java Virtual Machine" and "JVM" mean a virtual machine for the Java platform.
Copyright © 1999 Sun Microsystems, Inc. All Rights Reserved.
Please send comments to: firstname.lastname@example.org