As Published In
Oracle Magazine
May/June 2010


Time for a .NET Tune-Up

By Christian Shay

Tune your .NET application’s use of Oracle Database with Oracle Developer Tools for Visual Studio.

Developing a .NET and Oracle application in Microsoft Visual Studio follows a typical lifecycle. It includes many steps, such as creating users and roles, assigning privileges, creating tables and stored procedures, generating SQL scripts, writing .NET code with Oracle Data Provider for .NET (ODP.NET), and checking everything into source control.

Another important phase of the development lifecycle involves tuning your application's use of Oracle Database to maximize performance. Luckily, tools that enhance performance are among the major new features in this release of Oracle Developer Tools for Visual Studio. Oracle Performance Analyzer analyzes your application's use of Oracle Database under load and provides recommendations for resolving performance problems. SQL Tuning Advisor tunes SQL—both ad hoc SQL typed in at development time and SQL flagged by Oracle Performance Analyzer as poorly performing. These tools leverage Oracle Database's Automatic Workload Repository (AWR) feature, which captures database statistics, and Automatic Database Diagnostic Monitor (ADDM) feature, which analyzes these statistics and generates recommendations for fixing problems.

In this article, we will use these tools to help us clean up a poorly performing application, aptly named ReallyBadCode.


To follow along with the example in this article, you will need a recent release of Oracle Data Access Components, a bundle that includes both Oracle Developer Tools for Visual Studio and ODP.NET. You will also need Microsoft Visual Studio 2008 or 2005, Oracle Database 10g or later with the sample database (and HR schema) installed, and licenses for both Oracle Diagnostics Pack and Oracle Tuning Pack. You'll also need SYSDBA privileges to use Oracle Performance Analyzer, and the HR user will need the ADVISOR privilege to use SQL Tuning Advisor.

The application we will develop requires two new tables based on the HR schema. From Visual Studio, right-click the root node of Server Explorer and create a new Oracle connection, connecting as HR. Next, right-click the HR connection and choose Query Window. In the query window that opens, enter create table employees2 as select * from employees and click the execute button (the green arrow in the query window toolbar). Next, erase that statement, enter create table departments2 as select * from departments, and again execute the statement.
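For convenience, here are the two table-creation statements from this step in full, exactly as entered in the query window:

```sql
-- Copy the sample tables so we can index and analyze them freely
create table employees2 as select * from employees;
create table departments2 as select * from departments;
```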

Tuning SQL

To follow along, download the Visual Studio project associated with this article. Open the project; the bulk of the code in the main source file (ReallyBadCode.cs) appears in Listing 1. Take a close look at it. It creates a connection pool and then repeatedly executes a query, with the WHERE clause in the query changing dynamically with every loop. It then fetches all the data for each query.

Code Listing 1: Main() function of the ReallyBadCode application


static void Main(string[] args)
{
        string constr = "User Id=hr; Password=hr; Data Source=; " +
                        "Max Pool Size = 50; Connection Lifetime = 1";

        int sal = 3000;
        while (true)
        {
                OracleConnection con = new OracleConnection(constr);
                con.Open();

                StringBuilder sbSQL = new StringBuilder();
                sbSQL.Append("select employees2.salary from employees2, departments2 where ");
                sbSQL.Append("employees2.department_id = departments2.department_id and ");
                sbSQL.Append("employees2.salary < 3000 ");
                sbSQL.Append("union select employees.salary from employees, departments where ");
                sbSQL.Append("employees.department_id = departments.department_id and ");
                sbSQL.Append("employees.salary = ");
                sbSQL.Append(sal);

                OracleCommand cmd = new OracleCommand();
                cmd.Connection = con;
                cmd.CommandText = sbSQL.ToString();
                OracleDataReader dr = cmd.ExecuteReader();

                dr.FetchSize = 10;
                while (dr.Read()) { }

                sal++;
        }
}


Consider the large query created against the HR.EMPLOYEES, HR.DEPARTMENTS, HR.EMPLOYEES2, and HR.DEPARTMENTS2 tables. Given that we're still in the development phase without access to real-world production data, how can we tell whether that query will perform well? One way is to put it into SQL Tuning Advisor and see what comes back.

Copy and paste the query from the code into the HR query window you opened earlier. The query in the .NET code is missing a value at the end of the WHERE clause, so just add the number 0 at the end. Your query should look like this:


select employees2.salary 
from employees2, departments2 
where employees2.department_id = departments2.department_id 
and employees2.salary < 3000 
union 
select employees.salary 
from employees, departments 
where employees.department_id = departments.department_id 
and employees.salary = 0;

Locate the wrench icon in the query window toolbar, and click it to launch SQL Tuning Advisor (see Figure 1). Alternatively, you can right-click anywhere in the query window and select Tune SQL from the menu.


Figure 1: SQL Tuning Advisor

The exact result you get may vary, depending on your database configuration. But, as shown in Figure 1, you will most likely see three results in the Tune SQL tab at the bottom of the query window: one RESTRUCTURE SQL and two STATISTICS finding types. You can read the detailed result descriptions by hovering your cursor over the Recommendations column for each finding. Alternatively, you can click the View Report button, in the lower right corner of the window, to open a text file that contains additional details in an easy-to-read format.

In this example, the STATISTICS findings warn that the DEPARTMENTS2 and EMPLOYEES2 tables have not been analyzed because statistics are not being collected for them. To collect statistics, simply highlight each STATISTICS recommendation and click the Implement Recommendation button for it.

Next, look at the RESTRUCTURE SQL recommendation. The text indicates a warning that UNION is an expensive operation and that you should use UNION ALL if possible. So modify the SQL to use UNION ALL, and then click the wrench icon again.
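With the recommendation applied, the modified query reads:

```sql
-- UNION ALL skips the duplicate-elimination sort that UNION requires
select employees2.salary
from employees2, departments2
where employees2.department_id = departments2.department_id
and employees2.salary < 3000
union all
select employees.salary
from employees, departments
where employees.department_id = departments.department_id
and employees.salary = 0;
```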

This time you should see some INDEX warnings, which mean that you are missing indexes. Again, the number of index recommendations you see here can vary, depending on your system configuration. In my case, I see four of them. Click the View Report button to easily view all your INDEX recommendations at once.

Next, go to the query window and paste the CREATE INDEX statements suggested in your report. Don’t erase the original SQL statement from the query window, because you will want to re-execute it later. By highlighting each CREATE statement and then clicking the execute arrow, you can have the query window execute only the SQL you highlight.

Finally, highlight the original SQL statement and click Tune SQL one more time. This time, SQL Tuning Advisor should respond with a “no recommendations” message. So although it still isn’t a pretty piece of SQL, the performance is now probably as good as we are going to get at design time.

Tuning the Running Application

Now, modify the SQL statement in the actual application code to use the UNION ALL operator recommended by SQL Tuning Advisor. Change the password and data source parameters in the connection string if needed, build the project, and then choose Start Without Debugging from the Visual Studio Debug menu.

Back in Server Explorer, create a new Oracle connection with SYSDBA privileges (if you don't already have one). While the application is still running, right-click the SYSDBA connection and select the Oracle Performance Analyzer menu item to launch the Oracle Performance Analyzer interface (see Figures 2 and 3). Set the duration of the analysis in hours and minutes; I recommend 20 to 30 minutes for this particular test. Five minutes of database time (the actual time Oracle Database spends processing our requests) is required to receive any results, and more database time often results in better analysis. Generally, database time is only a fraction of the wall-clock time you set here.

Finally, click the Start button to begin the timer countdown. When the timer reaches 0, the results (if any) will appear in the panel below. Don’t stop your application until you’ve verified that you have received good results as described below.

If sufficient time was spent in analysis, you should see several findings listed in the tree control on the left side of the lower pane of Oracle Performance Analyzer (see Figure 3). The key findings you should see are Hard Parse Due to Literal Usage and Session Connect and Disconnect. Sometimes both findings will appear, and other times you need to fix one finding and then rerun Oracle Performance Analyzer to see the other.


Figure 3: Oracle Performance Analyzer

(In some cases, due to your specific database or network configuration, you may be missing one of these findings entirely.)

If you didn't run the analysis long enough to receive either finding, you can rerun Oracle Performance Analyzer for a longer period; later in the article, you'll learn how to extend the analysis time without starting over. You might also see additional findings, such as CPU Usage and Soft Parse, but as we shall see, these turn out to be side effects of the two primary findings and will disappear once those issues are fixed.

For each main finding, click its child nodes in the tree control to read more about it. Key information is usually found in the Rationales pane of the Recommendations nodes (which happens to be blank for these findings) and in the Description field of the Action nodes. The Action node for Hard Parse Due to Literal Usage has this description: “Investigate application logic for possible use of bind variables instead of literals.” The Action node for Session Connect and Disconnect states, “Investigate application logic for possible reduction of connect and disconnect calls. For example, you might use a connection pool.”

The code makes it clear that we have violated one of the most sacrosanct database programming best practices: we have built the SQL statement with literal values rather than bind variables! Without bind variables, Oracle Database is generally forced to reparse the entire SQL statement—a very costly operation—and each distinct statement consumes server resources in the form of an open cursor. By using a bind variable, we can cut this down to a single parse and one open cursor.

To add a bind variable, first modify the SQL statement to include the bind variable placeholder ":salary" by replacing sbSQL.Append(sal); with sbSQL.Append(":salary");.

Next, add these lines of code just before the call to ExecuteReader:


OracleParameter p_salary = new OracleParameter();
p_salary.OracleDbType = OracleDbType.Decimal;
p_salary.Value = sal;
cmd.Parameters.Add(p_salary);

The Session Connect and Disconnect finding is a little more difficult to understand. After all, we are using connection pooling, which by definition keeps connections open for us—excessive connecting and disconnecting should not be an issue. However, given this clue, we can take advantage of ODP.NET's performance counters to examine the connection pool in action. To enable all performance counters, run Registry Editor (regedit.exe), navigate to HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\ODP.NET\<your ODP.NET version>, and set the PerformanceCounters value to 4095. (For more information, see the ODP.NET online help.)

Next, with the .NET application running, run Windows Performance Monitor (perfmon.exe). Click the + sign to add counters to watch, choose Oracle Data Provider for .NET in the Performance Object list, and add HardConnectsPerSecond and SoftConnectsPerSecond from the list of available counters.

Watching these counters makes it clear that hundreds of soft connects (requests from the pool for an already existing connection) are occurring each second. Every few seconds, though, several hard connections (real Oracle Database authentications) occur. These hard connections are what Oracle Performance Analyzer is complaining about. Even a few unneeded hard connections per second can spoil performance, and with a properly tuned connection pool, this should not happen.

To resolve the problem, go back to the code and examine the connection string's pooling attributes—starting with the Connection Lifetime attribute. By setting this attribute to one second, we have unwittingly specified that any connection returned to the pool that has been alive for more than one second will be hard-disconnected from Oracle Database and destroyed rather than put back into the pool.

Furthermore, because we are not calling Close() and Dispose() on the connection objects when we are done with them (another ODP.NET best practice), it is left to the garbage collector to return each connection to the pool, and that takes longer than one second. When this happens, one of the next requests for a connection from the pool will necessarily result in a hard connect. Not good.
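The idiomatic way to guarantee deterministic cleanup in C# is to scope each connection, command, and reader in using blocks, which call Dispose() as soon as the block exits. A sketch of how the loop body could be restructured (an illustration of the pattern, not the article's exact fix):

```csharp
// Hedged sketch: using blocks dispose ODP.NET objects deterministically,
// so each connection returns to the pool immediately after the iteration.
using (OracleConnection con = new OracleConnection(constr))
{
    con.Open();
    using (OracleCommand cmd = new OracleCommand(sbSQL.ToString(), con))
    using (OracleDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read()) { }
    }
}   // Dispose() closes the connection and hands it back to the pool
```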

We can fix this problem by either greatly increasing the Connection Lifetime value or removing the attribute entirely. Because it's not really needed in this example, delete the Connection Lifetime attribute from the connection string. Next, add con.Close() and con.Dispose() at the end of each loop iteration to address the garbage collector issue mentioned above. Finally, reset the PerformanceCounters registry value to 0 to turn the counters off, lest they themselves contribute to bad performance.

Tuning is an iterative process. When you fix one problem, you often uncover others. So it is important to repeatedly tune, fix, and repeat until no more problems are found. For this example, build and run the modified application and then run Oracle Performance Analyzer again.

This time when the timer counts down and the results are displayed, you should see an Unusual Network Wait Event finding with the following description: “Investigate the cause for high ‘SQL*Net message to client’ waits.” (Note: depending on your specific database and networking configuration, you may not receive this finding.) Looking up more information about these waits reveals that the application is likely suffering from low SQL*Net throughput. Now look at the code and note the FetchSize setting: 10. That setting allows only 10 bytes of data per round-trip, whereas the default fetch size is about 128K (131,072 bytes)! It looks as if someone dropped a zero or two.
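For reference, FetchSize is specified in bytes. If you ever need explicit control rather than the default, a common ODP.NET pattern is to size the buffer as a multiple of the row size; a sketch, assuming the open OracleDataReader named dr from Listing 1:

```csharp
// Hedged sketch: buffer roughly 100 rows per database round-trip
// instead of a fixed byte count.
dr.FetchSize = dr.RowSize * 100;
```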

Remove the FetchSize code (so the application uses the default), and run Oracle Performance Analyzer again. This time you should see no significant results, so you are done tuning (for now). If you would like to see the results of prior Oracle Performance Analyzer runs, look in the ADDM Tasks node (found in Server Explorer under the schema of the user running Oracle Performance Analyzer) and double-click the task you are interested in to redisplay its results.

Performance Analysis Without Using the Timer Interface

So far we have used only the Oracle Performance Analyzer timer interface to do the performance analysis. Internally, this interface first creates what is called an AWR snapshot when the timer is started. This snapshot is a collection of statistics. Then a second snapshot is created when the timer expires and an ADDM task is created, with the two snapshots defining the analysis period. The actual analysis of the snapshots occurs within the ADDM task. The results of the ADDM task analysis are immediately displayed in the Oracle Performance Analyzer window. The AWR snapshots and ADDM tasks are then listed in Server Explorer (see Figure 2).


Figure 2: Launching Oracle Performance Analyzer from Server Explorer



Using the Server Explorer nodes, menus, and dialog boxes, you can perform these steps manually instead of using the timer. To create an AWR snapshot, right-click the AWR Snapshots node in Server Explorer and choose New AWR Snapshot. The dialog box that appears offers two levels of statistics gathering: Typical or All. One benefit of manually creating snapshots is that you can collect a larger set of statistics if you so choose. (Gathering all statistics does, however, affect database performance.)

After creating one snapshot, wait until the desired analysis time has elapsed and then create a second one. Then right-click the new snapshot node in Server Explorer and choose New ADDM Task. The dialog box that appears will ask for two snapshots, identified by the times they were taken. After you provide them and click OK, the analysis will be displayed in the Oracle Performance Analyzer window.
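The same snapshot-and-analyze steps can also be scripted directly in the database; a hedged sketch, assuming Oracle Database 11g's DBMS_ADDM package (the task name is hypothetical, and the same licensing requirements noted earlier apply):

```sql
DECLARE
  begin_snap NUMBER;
  end_snap   NUMBER;
  task_name  VARCHAR2(30) := 'MANUAL_ADDM_TASK';  -- hypothetical name
BEGIN
  -- Take a snapshot before and after the workload period
  begin_snap := DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
  -- ... run the application workload for the desired period ...
  end_snap := DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
  -- Analyze the interval between the two snapshots
  DBMS_ADDM.ANALYZE_DB(task_name, begin_snap, end_snap);
  -- Findings can then be retrieved with DBMS_ADDM.GET_REPORT(task_name)
END;
/
```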

This manual method lets you avoid waiting out the entire period again if you used the timer but ended up with too little database time to get the desired results. If that happens, simply keep the application running, wait a bit longer, and then create a new AWR snapshot and a new ADDM task. In the New ADDM Task dialog box, select the original snapshot created by the timer as the starting time and the new snapshot you just created as the ending time. Another benefit of this manual method is that you have the flexibility to shut down or restart Visual Studio during the analysis period if necessary, because no timer is running.


In this article, we simulated part of the development lifecycle of an Oracle Database and .NET application by testing ad hoc SQL as well as running performance tests of the application under load. The new SQL Tuning Advisor and Oracle Performance Analyzer features of Oracle Developer Tools for Visual Studio make it easy for Visual Studio developers to handle these key tasks as part of the overall job of building an Oracle Database and .NET application. 

Christian Shay is a principal product manager at Oracle.
