As Published In
Oracle Magazine
July/August 2006


Improve ODP.NET Performance

By Mark A. Williams, Oracle ACE

Take advantage of connection pooling, fetch size controls, and statement caching.

For features and performance, Oracle Data Provider for .NET (ODP.NET) is your best choice for connecting .NET applications with Oracle Database.

Performance-related features that enable your applications to consume fewer database resources and therefore perform and scale better are built into the provider. Three of these features are connection pooling, controlling fetch size, and statement caching.

I ran some tests to confirm that these features could improve application performance. I used the same releases of ODP.NET and Oracle Database for all tests in this article. Note that the statement caching feature was added in a later release of ODP.NET than the other two features, so confirm that the release you are using supports it.

Connection Pooling Overview

Creating a physical connection to a database can be an expensive operation in terms of time and system resources. During the busiest times, applications with precreated connections that can be allocated when needed perform better than when applications create the connections on demand. Using the connection pooling feature of ODP.NET, your application can create and maintain a ready pool of precreated connections.

By default, connection pooling is enabled in the provider. You can turn connection pooling on or off by setting Pooling—a Boolean connection string attribute—to either TRUE or FALSE, respectively. (Other connection string parameters that control various aspects of connection pooling are fully documented in the ODP.NET documentation.)

When your application uses connection pooling, the provider creates a pool of connections based on the connection string signature. The signature is based on the connection string parameter values. If any one of the connection string parameters is modified, ODP.NET will create a new connection pool for your application when the next connection is requested.

Subsequent connection requests can use a connection from the pool rather than go through the process of creating a new connection. When your application is finished using a connection, the Close or Dispose method returns the connection to the pool rather than physically closing or destroying it. Note that for nonpooled connections, it is best to call the Dispose method, because it explicitly frees system resources.
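In application code, the close-returns-to-pool behavior looks like this (a minimal sketch; it assumes the Oracle.DataAccess.Client provider and a pooled connection string such as the with_pool string from the sample code):

```csharp
// Minimal sketch: with Pooling=true in the connection string,
// Dispose (called implicitly at the end of the using block)
// returns the connection to the pool instead of destroying it.
using (OracleConnection con = new OracleConnection(with_pool))
{
    con.Open();   // after the first iteration, served from the pool

    // ... execute commands here ...
}   // connection returned to the pool here
```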

Connection Pooling in Action

The sample code for this column demonstrates one technique for measuring the performance benefit of using connection pooling. The code in JulyAug2006.cs—available for download and shown in Listing 1—creates two connection strings: one that uses connection pooling and one that does not. The code then opens and closes the connections in a loop. The number of iterations is controlled by a parameter passed into the test method.

The connection string with no connection pooling—where Pooling=FALSE—uses the sample HR user and the default password: 

string no_pool =
  "User Id=hr;
   Password=hr;
   Data Source=oramag;
   Pooling=false";

The connection string with connection pooling—where Pooling=TRUE—also uses the sample HR user and the default password: 

string with_pool =
  "User Id=hr;
   Password=hr;
   Data Source=oramag;
   Pooling=true";

The main method calls each of the member methods to perform the tests. One member method—ConnectionPoolTest—captures the current time, executes the connection testing loop, and captures the current time again. The code then calculates the difference between the two time intervals.
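The timing pattern those member methods follow can be sketched as follows (an approximation of the downloadable code, not the exact JulyAug2006.cs source; the method and variable names are illustrative):

```csharp
static void ConnectionPoolTest(string constr, int iterations)
{
    DateTime start = DateTime.Now;   // capture the current time

    for (int i = 0; i < iterations; i++)
    {
        OracleConnection con = new OracleConnection(constr);
        con.Open();
        con.Close();   // pooled: returned to pool; nonpooled: destroyed
    }

    // capture the current time again and compute the difference
    TimeSpan elapsed = DateTime.Now - start;
    Console.WriteLine("{0} total seconds.", elapsed.TotalSeconds);
}
```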

On my test system, I achieved the following results with an iteration value of 100: 

Beginning Connection Pool Test with 100 iterations...
No Pooling: 7.8312608 total seconds.
With Pooling: 0.1301872 total seconds.

As you can see, with no connection pooling, the loop executed in almost 8 seconds. With connection pooling, the loop executed in approximately 0.13 seconds—dramatically faster.

Making specific connection pooling parameter recommendations is difficult, because workloads vary greatly. One general recommendation is to set a sufficiently high minimum pool size. I have often seen connection pools drained to the minimum during a lull in activity. When activity then picks up rapidly, the pool must create new connections quickly rather than hand out connections it already holds. In most cases, the application server remains relatively idle outside of peak periods, so letting the pool shrink gains little; thus, there is no reason to set a low minimum pool size.
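For example, a pooled connection string with an explicit minimum and maximum might look like this (the pool-size values here are illustrative only; choose values based on your own workload):

```csharp
string sized_pool =
  "User Id=hr;
   Password=hr;
   Data Source=oramag;
   Pooling=true;
   Min Pool Size=10;
   Max Pool Size=100";
```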

Controlling Fetch Size

Retrieving data from the database tier to the middle tier (or client tier) is one of the most expensive operations with respect to performance. If the end user consumes a lot of data, you will want your application to minimize the number of round-trip data fetches.

By default, ODP.NET reads 64KB of data from a result set at a time. You can change this value by setting the FetchSize property of an OracleDataReader object. Rather than setting the fetch size to an arbitrary number of bytes, however, you can use ODP.NET to control the number of rows fetched from a result set per round trip. The two properties you use to do this are RowSize and FetchSize.

RowSize is a property of the OracleCommand object, and the value for this property is assigned by ODP.NET when the statement associated with the command object is executed. You explicitly set the value of the FetchSize property. For example, if you want to retrieve 100 rows at a time from the result set per round trip, you set the FetchSize property: 

dr.FetchSize = cmd.RowSize * 100;

Here, dr represents an OracleDataReader and cmd represents an OracleCommand object.

Fetch Size Benefits

To illustrate the benefits of controlling fetch size, I used SQL*Plus to create a simple new table—fetch_test—in the HR sample schema: 

create table fetch_test (
  id   number primary key,
  data varchar2(32)
);

and populated it with 100,000 rows:

begin
  for i in 1..100000 loop
    insert into fetch_test
    values (i, to_char(i, '000000'));
  end loop;
end;
/

You could also use a tool suite such as Oracle Developer Tools for Visual Studio .NET to accomplish this. Be sure to issue a COMMIT after populating the table if you are using a tool that does not do so by default.

After populating the table with data, I gathered statistics so the optimizer could determine the appropriate way to access the data. I used the dbms_stats package: 

begin
  dbms_stats.gather_table_stats(
    ownname => 'HR',
    tabname => 'FETCH_TEST',
    estimate_percent => null,
    method_opt => 'for all indexed columns size 1',
    cascade => TRUE);
end;
/

Next I performed a test similar to that used for connection pooling: capture the current time, set the fetch size to an appropriate value, read the data from the result set, capture the current time again, and compute the difference. I created a test method called FetchTest to set the fetch size and read the data—it accepts an integer parameter that controls the number of rows to be fetched. To set the fetch size and read the data, I used the following: 

OracleCommand cmd = con.CreateCommand();
cmd.CommandText = "select * from fetch_test";

OracleDataReader dr = cmd.ExecuteReader();
dr.FetchSize = cmd.RowSize * numRows;

while (dr.Read())
{
  // fetch all rows; no per-row work is needed for the timing test
}

I then executed the test method, using five different row values. The results of the tests are summarized here: 

Rows per Round Trip  Elapsed Time
___________________  _________________

1                    5.4378192 seconds
10                   1.1115984 seconds
100                  0.2703888 seconds
1000                 0.200288 seconds
10000                0.1902736 seconds

As you can see, if you are tempted to fetch only a single row at a time to "save resources and increase performance" (I have sometimes heard this advice), you will actually be dramatically reducing, rather than increasing, runtime performance. In addition, for this set of data, there is not much performance increase when fetching more than 1,000 rows at a time. Of course, this is also a narrow data set (with only two small columns). Apply this technique to a result set representative of your environment to determine optimal values for setting your fetch size.

Of course, there is a cost if the fetch size is arbitrarily large. More client-side memory and processor cycles will be needed to store and manage a larger amount of data. The goal is to find a high-performing balance between the number of round trips and the amount of data retrieved per trip.

Statement Caching

Using bind variables in your application is a good practice. Bind variables can help ensure that Oracle Database can reuse SQL statements in your session and in other sessions that may use the same statement. (See the ODP.NET column in the September/October 2005 issue of Oracle Magazine for more details on the benefits of bind variables.) If you know at design time which statements your application will reuse, you can further increase the performance benefits of bind variables by using an ODP.NET feature known as statement caching.

When you use statement caching, you are telling Oracle Database to keep a copy of the statement cached in memory on the server, because you will be reusing that statement in your session. This allows Oracle Database to more quickly use that statement upon subsequent executions. Note that statement caching does not increase the runtime performance of the statement itself. Statement caching allows Oracle Database to find the statement in a memory area that is associated with your session, rather than all sessions, and avoid reparsing the statement. Keep in mind that the statement, not the data, is cached. Executing a cached statement always returns the latest database results.

You enable the statement caching feature by setting the Statement Cache Size connection string parameter. The default value of this parameter is 0, meaning that statement caching is disabled by default. Set this parameter to the number of statements you plan to keep cached during your application's lifetime. For example, if you plan to cache 16 statements, you set this value to 16. ODP.NET will then cache the 16 most recently used statements. When a 17th unique statement is executed, the least recently used statement in the cache is aged out to make room for it.

With statement caching enabled, ODP.NET will automatically cache any statement you execute. If you have enabled statement caching and you do not want to place a statement in the cache, set the AddToStatementCache property on OracleCommand to false: 

OracleCommand cmd = new OracleCommand();
cmd.Connection = con;
cmd.AddToStatementCache = false;
cmd.CommandText =
  "select data from fetch_test where id = :1";

SQL and PL/SQL statements can be stored in the statement cache.

Statement Caching Results

I employed the same pattern to test statement caching as I did for connection pooling and controlling the fetch size. I used the fetch_test table to repeatedly select data without using statement caching and then repeated the same test with statement caching.

Here is the connection string for the test without statement caching: 

string constr =
  "User Id=hr;
   Password=hr;
   Data Source=oramag;
   Statement Cache Size=0";

And here is the connection string for the test with statement caching: 

string constr =
  "User Id=hr;
   Password=hr;
   Data Source=oramag;
   Statement Cache Size=1";

Note that I set the cache size to only 1 because I knew I would be caching only a single statement. The bind variable value may change from execution to execution, but the statement text remains the same, so the statement stays cached. If a literal value were used instead of a bind variable, Oracle Database would need to reparse the statement every time the literal value changed, instead of using the cache. In addition, each new statement would replace the statement currently in the cache.
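The cached-statement loop can be sketched as follows (illustrative names, assuming the Oracle.DataAccess.Client provider; only the bind value changes between executions, so the single cache slot is reused every time):

```csharp
// Sketch: the statement text never changes, so with
// Statement Cache Size=1 every execution after the first
// reuses the cached (already parsed) statement.
OracleCommand cmd = con.CreateCommand();
cmd.CommandText = "select data from fetch_test where id = :1";

OracleParameter id = new OracleParameter();
cmd.Parameters.Add(id);

for (int i = 1; i <= 1000; i++)
{
    id.Value = i;   // only the bind value changes
    using (OracleDataReader dr = cmd.ExecuteReader())
    {
        dr.Read();
    }
}
```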

I executed the sample statement 1,000 times without statement caching and 1,000 times with statement caching, which produced these results on my system: 

No Statement Caching: 
0.6409216 total seconds.
With Statement Caching: 
0.3905616 total seconds.


Next Steps

VISIT the .NET Developer Center

DOWNLOAD
the sample code for this column
Oracle Developer Tools for Visual Studio .NET

The latest ODP.NET releases use a Windows Registry parameter that turns on statement caching by default and sets the cache size to 10. The registry setting applies to all applications using this ODP.NET instance. You can override the registry value for each connection pool by setting Statement Cache Size to another value.

Going Further

I performed all of these tests on a single laptop hosting both the database and the client application. Although it is convenient to test features in such an environment, for best results test your applications in an environment that accurately reflects your production environment. When the client and database server are on separate machines, the performance improvement should be even greater. Of course, your tests are almost certain to show different numbers in the results than those presented here.

This investigation limited performance observations to time only. However, other performance benefits include reduced network traffic and reduced parsing in the database. I encourage you to work with your system administrators to develop tests and measurement techniques applicable to your environment. By using the features offered by ODP.NET, you can increase performance in your applications with a minimum of effort. 

Mark A. Williams is an Oracle ACE, an Oracle Certified Professional DBA, and the author of Pro .NET Oracle Programming (Apress, 2004). He focuses on solutions for Oracle on Windows and contributes to the Oracle Data Provider for .NET forums on Oracle Technology Network.
