Profiling WebLogic Servers with Sun Studio Performance Tools

By Marty Itzkowitz, Project Lead, Sun Studio Performance Tools, Sun Microsystems, August 25, 2006  
This article describes how to profile servers being run under BEA's WebLogic® system using the Sun Studio Performance Tools. The instructions are based on WebLogic 8.1, and were developed with LiYu Yi, of Boldtech Systems, Dallas, TX.

To profile WebLogic servers, you will need the Sun Studio 11 Performance Tools running on a supported version of the Solaris OS or Linux, including any required patches, and a supported version of Java installed. Running the collect command with no arguments runs a script that verifies that all the appropriate patches are installed. (See also the article Profiling Java Applications With Sun Studio Performance Tools, and the Sun Studio Performance Analyzer documentation.)

A server run under BEA's WebLogic is a Java application that you launch by running a script to invoke the JVM. To profile a server, you must ensure that the JVM command launching the server is prepended with a collect command, with appropriate arguments, to invoke the Sun Studio Collector. In the discussion below, the shell variable ${COLLECTOR} is used to refer to that command and arguments.

In the next section, we describe setting ${COLLECTOR} for the data collection options you will want to use. In the second section, we describe the scripts used to launch a server, and how to edit them to insert ${COLLECTOR}. In the third section, we describe navigating through the measured performance data. Finally, in the last section, we give links to the BEA WebLogic documentation.

Setting Data Collection Options


Data Options


The default experiment is a clock-profiling experiment. You may also use hardware counter profiling or synchronization tracing. The experiment begins when the server is launched, and terminates when the server exits. If the profiling session is longer than about 20 minutes, use low-resolution profiling to avoid recording very large volumes of data.

Since WebLogic servers are Java-based, the -j on option to collect is always needed.
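As a concrete starting point, here is a minimal sketch of a ${COLLECTOR} setting for a long session, using -p lo for low-resolution clock profiling; the experiment path /var/tmp/wlserver.er is an assumption, not a required location:

```shell
# Minimal sketch: low-resolution clock profiling of a Java server.
# -j on enables Java profiling; the output path is an assumption.
COLLECTOR="collect -j on -p lo -o /var/tmp/wlserver.er"
echo "${COLLECTOR}"
```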


Experiment Naming and Storage

You may want to use the date command to generate a string representing the current date and time and embed that string in the experiment name.
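For example, a timestamped experiment name can be built like this; the directory /var/tmp and the server name "myserver" are assumptions for illustration:

```shell
# Sketch: embed the current date and time in the experiment name.
# The directory and server name are hypothetical.
STAMP=`date '+%m%d_%H%M%S'`
EXPNAME="/var/tmp/myserver.${STAMP}.er"
echo "${EXPNAME}"
```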


Signal Controls

The -y <signal> option to collect lets you pause and resume data recording by sending the named signal to the process. By default, recording starts in the paused state: launch the server, send the signal once a load is applied to begin recording, and send it again to pause recording when the load completes.

To profile server initialization, you would use -y <signal>,r, which starts with recording enabled; send the signal after initialization to pause, and then send it again before and after a load is applied, as above.

If data volume is not a problem, but you will be running multiple loads in a single session, you may pass the -l <signal> option to collect, and send the signal to insert sample markers after initialization and then after each benchmark load is applied.

When using either of these techniques, you should disable periodic sampling, using the -S off option to collect. WebLogic does not itself use SIGPROF, so you can use that signal either to toggle pause/resume or to generate sample markers, but not both; such use does not interfere with the Collector's own use of SIGPROF for clock-profiling. If you want to both control pause/resume and insert sample markers in the experiment, use a second signal, SIGUSR1 for example.
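A sketch combining the two mechanisms: SIGPROF toggles pause/resume while SIGUSR1 inserts sample markers, with periodic sampling disabled; the experiment path is an assumption:

```shell
# Sketch: -y PROF for pause/resume, -l USR1 for sample markers,
# -S off to disable periodic sampling. The path is an assumption.
COLLECTOR="collect -j on -y PROF -l USR1 -S off -o /var/tmp/myserver.er"
echo "${COLLECTOR}"
```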


Descendant Process Controls

The -F option to collect controls whether descendant processes have their data recorded: with -F on, experiments are also recorded for processes the server creates with fork and exec; with -F off, data is recorded only for the founder process.


Archiving Controls

The -A copy option to collect copies the load objects used by the server into the experiment, so that the experiment can be examined on a machine other than the one on which it was recorded.

Initiating Data Collection

This section describes how you can modify the scripts used to launch WebLogic servers to enable data collection.


Simply-Launched Servers

To profile WebLogic servers launched directly from the standard start scripts (startWebLogic.sh or startManagedWebLogic.sh), edit the script to prepend ${COLLECTOR} (as determined above) to the line that launches the JVM.
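The edit amounts to something like the following; the JAVA_OPTIONS value and paths here are hypothetical stand-ins for whatever your start script actually sets:

```shell
# Sketch of the edited launch line; all values here are hypothetical.
JAVA_HOME=/usr/java
JAVA_OPTIONS="-server -Dweblogic.Name=myserver"
COLLECTOR="collect -j on -o /var/tmp/myserver.er"
# Original line:  "$JAVA_HOME/bin/java" $JAVA_OPTIONS weblogic.Server
# Modified line, with the Collector prepended:
LAUNCH="${COLLECTOR} $JAVA_HOME/bin/java $JAVA_OPTIONS weblogic.Server"
echo "$LAUNCH"
```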


Node-Manager Launched Servers

In this case, the Node Manager must be told to launch servers with a script rather than with its native launcher. Two Node Manager properties must be set: the property naming the start script to use, and:

Property               Value
NativeVersionEnabled   false

To create a very simple version of such a script, find the sample start script in the WebLogic installation and use it as a template. Edit the copy to have it use a ${COLLECTOR} variable to control launching the JVM, as above. Then, set the properties for the Node Manager to tell it to use the new script, and restart the Node Manager. When a server is started by the Node Manager with ${COLLECTOR} set as above, the experiment is started; when the server is stopped by the Node Manager, the experiment is terminated.

The simple script above will use the same profiling settings for all servers it launches. For selective control over which servers are profiled, you may set up two domains with two Node Managers, and control which servers are profiled by moving servers between the Node Managers.

Alternatively, a more complicated script can be created to read a simple configuration file, and use it to decide which servers are profiled, and with which arguments. The launch script used by the Node Manager is invoked with four arguments:

  • The arguments to pass to the JVM
  • The name of a file into which output from the run should be written
  • The name of the file into which stderr from the run is written
  • The name of the file into which the PID of the server is written

One of the tokens in the arguments passed to the JVM is -Dweblogic.Name=<name>. You may use sed to extract the server name from it, and use the name to format the experiment name or directory, or to look up specific collection parameters for that server in a configuration file.
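For example, with a hypothetical argument string, the extraction looks like this:

```shell
# Sketch: pull the server name out of the JVM arguments (arg 1).
# The argument string here is a hypothetical example.
ARGS="-server -Xms256m -Dweblogic.Name=myserver -Dweblogic.RootDirectory=/bea"
APPNAME=`echo $ARGS | sed 's/^.*-Dweblogic.Name=\([^ ]*\).*/\1/'`
echo "$APPNAME"
```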

The second argument is a full path to the file to be used for stdout. You may change the script to extract the directory from that path, and use it to put experiments in the same place the Node Manager logs are written.
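With dirname, that extraction is a one-liner; the path below is a hypothetical example:

```shell
# Sketch: derive the log directory from the stdout path (arg 2).
# The path is a hypothetical example.
OUTFILE="/bea/domains/mydomain/logs/myserver.out"
APPDIR=`dirname $OUTFILE`
echo "$APPDIR"
```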

The Node Manager launch script is expected to write the PID of the launched server to the file named by the fourth argument. You may also write the PID into a small script that sends the pause/resume or sample-control signal to the process, or edit a more general script that sends the signal to all monitored processes simultaneously.

An example of such a complex script appears below. The script also automatically creates a shell-script file named kill.<name> that can be executed to send SIGPROF to the target server, simplifying use of the signal controls described above.

Server Profiles

This section describes tips for examining the server profiles.

Unless your server has created additional descendant processes that are profiled, a single run creates a single experiment, and no experiment filtering or selectivity is needed.

If you have used either of the signal mechanisms described above, you may use sample filtering in the Analyzer or in er_print to examine the profile for only part of the run. For example, you may want to look at the startup only, or at the data for the individual benchmark loads that were run.
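A command line like the following restricts er_print to the samples covering one load; the experiment name and sample numbers are assumptions for your particular run:

```shell
# Sketch: build an er_print command that looks only at samples 2-4
# (say, the first benchmark load). Name and range are assumptions.
CMD="er_print -sample_select 2-4 -functions myserver.er"
echo "${CMD}"
```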

When looking at the Timeline in the Analyzer, you may want to color all methods from the WebLogic infrastructure one color, and use other colors to examine specific sections of your own code. You can also show or hide clock-profiling events that do not represent User CPU time (that is, System CPU time, or any of the wait states).

WebLogic Documentation

Collect Launch Script


# @(#) 1.1 06/01/20
#
# This script is used by the WebLogic NodeManager to start up Managed servers
# on Unix systems under the control of an Admin server. The Admin
# server supplies the arguments to this script.
# The script is invoked with 4 arguments:
#   Arg1: the command line used to start up a Managed server
#   Arg2: the file to which stdout is to be redirected
#   Arg3: the file to which stderr is to be redirected
#   Arg4: the file into which the process id of the Managed server is saved
# This script uses just one variable:
#   JAVA_HOME - used to determine the Java version that is
#   to be used to start up the WebLogic Managed server.

# set up WL_HOME, the root directory of your WebLogic installation

# set up common environment
. "${WL_HOME}/common/bin/"

# verify that JAVA_HOME points to a real Java
# ?? But why check for javac, as opposed to java? Only java is needed
if [ ! -f "$JAVA_HOME/bin/javac" ]; then
    echo "The JDK wasn't found in directory $JAVA_HOME." > $3
    exit 1      # fail if not found
fi

# --- last steps -- commented out here, and repeated below
# Spawn the Java
# "$JAVA_HOME/bin/java" $1 >$2 2>$3 &
#
# Save its PID, write it to the file named in the fourth argument
# PID=$!
# echo $PID > $4

# Begin customization, to enable Sun Studio data collection on the launched server

# Set NEWARG1 to the argument to be used for invoking the JVM
NEWARG1=`echo $1`

# At this point, massage NEWARG1 if there are any arguments to be removed
# when profiling, or extra arguments to add for profiling.

# use sed on arg1 to extract the token beginning with -Dweblogic.Name= ; the string
# following the = is the name of the application being launched; set APPNAME to it
APPNAME=`echo $NEWARG1 | sed 's/^.*-Dweblogic.Name=\([^ $]*\).*/\1/'`

# use dirname on arg2 to remove the trailing basename, yielding the directory in
# which the log files are being written; set APPDIR to it
APPDIR=`dirname $2`

# construct a name for the experiment, EXPNAME, as ${APPDIR}/${APPNAME}.mmdd_HHMMSS.er
# where mmdd_HHMMSS is the current time stamp
# use that EXPNAME as a -o argument to collect
EXPNAME=${APPDIR}/${APPNAME}.`date '+%m%d_%H%M%S'`.er

# construct a name for a script file to send SIGPROF to the process
SIG_SCRIPT=${APPDIR}/kill.${APPNAME}

# Set $COLLECTOR to command and arguments -- in this case:
#   Default clock profiling
#   Signal-controlled pause and resume
#   Archive copying for portability
#   experiment name constructed above
COLLECTOR="collect -j on -y PROF -S off -A copy -o ${EXPNAME}"

# Or, use APPNAME to extract COLLECTOR from a configuration file
# e.g., grep for $APPNAME.COLLECTOR, and then sed to extract
# the remainder of the line and set COLLECTOR to it

# Or, use APPNAME to extract components for construction of
# COLLECTOR for that application from a configuration file
# Also, could grep for additional Java arguments, etc.

# create a log of the information processed in this script
echo "arg1= " $1 > ${APPDIR}/col.log
echo "arg2= " $2 >> ${APPDIR}/col.log
echo "arg3= " $3 >> ${APPDIR}/col.log
echo "arg4= " $4 >> ${APPDIR}/col.log
echo "" >> ${APPDIR}/col.log
echo "NEWARG1= " ${NEWARG1} >> ${APPDIR}/col.log
echo "" >> ${APPDIR}/col.log
echo "APPNAME= "${APPNAME} >> ${APPDIR}/col.log
echo "APPDIR= "${APPDIR} >> ${APPDIR}/col.log
echo "SIG_SCRIPT= "${SIG_SCRIPT} >> ${APPDIR}/col.log
echo "EXPNAME= "${EXPNAME} >> ${APPDIR}/col.log
echo "" >> ${APPDIR}/col.log
echo "COLLECTOR= "${COLLECTOR} >> ${APPDIR}/col.log
echo "" >> ${APPDIR}/col.log
echo "COMMAND= ${COLLECTOR} \"${JAVA_HOME}/bin/java\" ${NEWARG1} >$2 2>$3 &" >> ${APPDIR}/col.log
echo "" >> ${APPDIR}/col.log

# Now actually spawn the JVM under COLLECTOR
${COLLECTOR} "${JAVA_HOME}/bin/java" ${NEWARG1} >$2 2>$3 &

# Save the PID, write it to the fourth argument (as in original script)
PID=$!
echo $PID > $4

# and write it to the collector log file
echo "PID= " $PID >> ${APPDIR}/col.log

# write a script to send SIGPROF to the process
# and make the script executable by anyone
echo "#!/bin/sh" > ${SIG_SCRIPT}
echo "kill -PROF $PID" >> ${SIG_SCRIPT}
chmod 777 ${SIG_SCRIPT}

