
Oracle Database 11g:
The Top Features for DBAs and Developers

by Arup Nanda, Oracle ACE Director

Caching and Pooling

Explore how to enhance performance by using SQL Result Cache, PL/SQL Function Cache and Client Side Caches, and Database Resident Connection Pooling.


SQL Result Cache

Accessing memory is far quicker than accessing hard drives, and that will most likely remain the case for the next several years unless we see some major improvements in hard drive architecture. This fact gives rise to caching: the process of storing data in memory instead of on disks. Caching is a common principle of Oracle database architecture, in which users are fed data from the buffer cache instead of from the disks on which the database resides.

The advantage of caching is most visible with relatively small tables that hold static data (for example, reference tables such as STATES and PRODUCT_CODES). However, consider a large table named CUSTOMERS that stores the customers of a company. The list is relatively static but not entirely so; the table changes rarely, when customers are added or removed from the list.

Caching would probably provide some value here. But if you were to cache the table somehow, how would you make sure you get the correct data when something changes?

Oracle Database 11g has the answer: the SQL Result Cache. Consider the following query. Run it to get the execution statistics and the response time:

SQL> set autot on explain stat

select
  state_code,
  count(*),
  min(times_purchased),
  avg(times_purchased)
from customers
group by state_code
/

The results are:

ST   COUNT(*) MIN(TIMES_PURCHASED) AVG(TIMES_PURCHASED)
-- ---------- -------------------- --------------------
NJ          1                   15                   15
NY     994898                    0           15.0052086
CT       5099                    0           14.9466562
MO          1                   25                   25
FL          1                    3                    3

5 rows selected.

Elapsed: 00:00:02.57

Execution Plan
----------------------------------------------------------
Plan hash value: 1577413243

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     5 |    30 |  1846  (25)| 00:00:23 |
|   1 |  HASH GROUP BY     |           |     5 |    30 |  1846  (25)| 00:00:23 |
|   2 |   TABLE ACCESS FULL| CUSTOMERS |  1000K|  5859K|  1495   (7)| 00:00:18 |
--------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       5136  consistent gets
       5128  physical reads
          0  redo size
        760  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

Note a few things:
  • The explain plan shows that a full table scan was performed.
  • There are 5,136 consistent gets (logical I/Os).
  • It took 2.57 seconds to execute.
Since the table is almost never changed, you can use a hint that caches the results of the query in memory:

select /*+ result_cache */
        state_code,
        count(*),
        min(times_purchased),
        avg(times_purchased)
from customers
group by state_code
/

The query is identical to the first one except for the hint. The result (the second execution of this query):

ST   COUNT(*) MIN(TIMES_PURCHASED) AVG(TIMES_PURCHASED)
-- ---------- -------------------- --------------------
NJ          1                   15                   15
NY     994898                    0           15.0052086
CT       5099                    0           14.9466562
MO          1                   25                   25
FL          1                    3                    3

5 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1577413243

--------------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     5 |    30 |  1846  (25)| 00:00:23 |
|   1 |  RESULT CACHE       | gk69saf6h3ujx525twvvsnaytd |       |       |            |          |
|   2 |   HASH GROUP BY     |                            |     5 |    30 |  1846  (25)| 00:00:23 |
|   3 |    TABLE ACCESS FULL| CUSTOMERS                  |  1000K|  5859K|  1495   (7)| 00:00:18 |
--------------------------------------------------------------------------------------------------


Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=4; dependencies=(ARUP.CUSTOMERS); parameters=(nls);name="select /*+ result_cache */
state_code,
count(*),
min(times_purchased),
avg(times_purchased)
from customers
group by state_c"



Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        760  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

Note several differences from the first case.
  • The response time is now 0.01 seconds, compared to almost 3 seconds earlier.
  • Consistent gets is now 0; no logical I/O was performed for this query. (Actually, the first time the query runs with the hint in place, the logical I/O stays the same, because the database has to perform the I/O to build the cache. Subsequent invocations serve data from the cache, bringing the logical I/O down to zero.)
  • The explain plan shows RESULT CACHE as an operation.
  • The note after the explain plan shows what type of caching was performed and on which result.
The savings in time is phenomenal: from almost 3 seconds to virtually nothing! With the second query, the results came back straight from database memory (the result cache), not from executing the query.


The SQL Result Cache is another cache in the SGA, carved out of the shared pool, alongside structures such as the buffer cache. When you execute a query with the result_cache hint, Oracle performs the operation just like any other, but the results are stored in the SQL Result Cache. Subsequent invocations of the same query do not actually go to the table(s) but get the results from the cache. The size of the cache is determined by several initialization parameters:

Parameter Description
result_cache_max_size Maximum size of the result cache (5M for 5 MB, for example). If you set this to 0, result caching will be completely turned off.
result_cache_max_result Specifies the percentage of result_cache_max_size that any single result can use
result_cache_mode If set to FORCE, all the queries are cached if they fit in the cache. The default is MANUAL, which indicates that only queries with the hint will be cached.
result_cache_remote_expiration Specifies the number of minutes that a cached result that accesses a remote object will remain valid. The default is 0.
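As a sketch of how you might adjust these settings (the values are illustrative, not recommendations):

```sql
-- Size the result cache; this parameter is modifiable with ALTER SYSTEM.
alter system set result_cache_max_size = 32M scope=both;

-- Allow any single result to use up to 10% of the cache.
alter system set result_cache_max_result = 10 scope=both;

-- Keep the default MANUAL mode so only hinted queries are cached.
alter system set result_cache_mode = MANUAL scope=both;
```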

Now, a logical question follows: What happens when a table row changes? Will the query get a new value or the old one? Well, let's see what happens. From another SQL*Plus session, update a row in the table:

SQL> update customers set times_purchased = 4
  2  where state_code = 'FL';
 
1 row updated.

but do not commit. In the original session where you ran the query the first time, run it again. The cached result is still used, because the change has not been committed: the committed state of the data is unchanged, so the session running the query still sees the most up-to-date version of the data and the cache remains valid.


Now, from the session where you did the update, issue a commit and run the query.

ST   COUNT(*) MIN(TIMES_PURCHASED) AVG(TIMES_PURCHASED)
-- ---------- -------------------- --------------------
NJ          1                   15                   15
NY     994898                    0           15.0052086
CT       5099                    0           14.9466562
MO          1                   25                   25
FL          1                    4                    4

Note that the data for FL was updated to 4 automatically. The change in the underlying table invalidated the cache, which triggered a dynamic refresh the next time the query ran. You are guaranteed correct results whether or not you use the SQL Result Cache.
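You can observe this invalidation yourself through the V$RESULT_CACHE_OBJECTS view. A sketch (the exact status values may vary slightly by release):

```sql
-- After the committed update, the cached result's status flips from
-- a published (usable) state to an invalid one until it is rebuilt.
select status, creation_timestamp, name
from   v$result_cache_objects
where  type = 'Result';
```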


Differences vs. Materialized Views

Those familiar with Materialized Views (MVs) may wonder how this functionality differs from that of MVs. The answer is: a lot. On the surface they appear similar: both preserve results in some way and answer queries from that preserved set, but that's where the similarity ends. MVs preserve the data in database storage, whereas SQL Result Cache results live in memory; they consume no extra disk space, and they disappear when the database instance is shut down or when space in the result cache is exhausted.

MVs are also static; when the data in the underlying tables changes, the MV does not know about it. Unless you refresh the MV, users either get stale data (if you set query_rewrite_integrity to stale_tolerated) or the base query is re-run against the underlying tables, which takes much longer. With SQL Result Cache, you don't need to explicitly refresh the cache; it is automatically refreshed the next time the query is run.

MVs also provide a much more sophisticated rewrite algorithm. A cached result is reused only if the same query or query fragment is run again after it was first cached (and no underlying data has changed). Queries that benefit from query rewrite against MVs, in contrast, may roll up data from the MV, join back to tables or other materialized views, and apply additional predicates, a trait very much prized in data warehouse environments.

So, MVs and SQL Result Caches are not necessarily comparable or interchangeable; each has its unique advantages.

Subqueries

You can also use the SQL Result Cache in subqueries. Consider the following query:

select prod_subcategory, revenue
from (
   select /*+ result_cache */ p.prod_category,
      p.prod_subcategory,
      sum(s.amount_sold) revenue
   from products p, sales s
   where s.prod_id = p.prod_id 
   and s.time_id between to_date('01-jan-1990','dd-mon-yyyy')
   and to_date('31-dec-2007','dd-mon-yyyy')
   group by rollup(p.prod_category, p.prod_subcategory)
)
where prod_category = 'software/other'
/

In the above query, the caching occurs in the subquery in the inline view. As long as the inner query remains the same, the outer query can change and still use the cache.
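For example, filtering the same inline view on a different category still reuses the cached rollup, because the inner query text is unchanged (the category value below is hypothetical):

```sql
select prod_subcategory, revenue
from (
   select /*+ result_cache */ p.prod_category,
      p.prod_subcategory,
      sum(s.amount_sold) revenue
   from products p, sales s
   where s.prod_id = p.prod_id
   and s.time_id between to_date('01-jan-1990','dd-mon-yyyy')
   and to_date('31-dec-2007','dd-mon-yyyy')
   group by rollup(p.prod_category, p.prod_subcategory)
)
where prod_category = 'hardware/other'   -- hypothetical value; only the outer filter changed
/
```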


To check how much memory is used for the SQL Result Cache in the database, you can use a supplied package
dbms_result_cache, as shown below:

SQL> set serveroutput on size 999999
SQL> execute dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1K bytes
Maximum Cache Size = 2560K bytes (2560 blocks)
Maximum Result Size = 128K bytes (128 blocks)
[Memory]
Total Memory = 126736 bytes [0.041% of the Shared Pool]
... Fixed Memory = 5132 bytes [0.002% of the Shared Pool]
... Dynamic Memory = 121604 bytes [0.040% of the Shared Pool]
....... Overhead = 88836 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 21 blocks
........... Used Memory = 11 blocks
............... Dependencies = 4 blocks (4 count)
............... Results = 7 blocks
................... SQL = 5 blocks (4 count)
................... Invalid = 2 blocks (2 count)

PL/SQL procedure successfully completed.

If you want to flush the cache (both the result cache and the function cache, described below) for some reason, you can use:

begin 
   dbms_result_cache.flush;
end;
/

After you execute the above, when you run the original query against CUSTOMERS with the result_cache hint, you will see that the query again takes about three seconds to complete.


Of course, after the first execution, the results will be cached again and the subsequent executions will get the values from the result cache and hence they will execute much faster. If you want to invalidate the cache of only one table, not the entire cache, use the following:

begin
  dbms_result_cache.invalidate('ARUP','CUSTOMERS');
end;
/

There are several data dictionary views that show the statistics on the SQL Result Cache:

View Description
V$RESULT_CACHE_STATISTICS Shows various settings, especially memory consumption
V$RESULT_CACHE_MEMORY Shows various pieces of the memory that make up the SQL Result Cache
V$RESULT_CACHE_OBJECTS Shows the objects that make up the SQL Result Cache
V$RESULT_CACHE_DEPENDENCY Shows the dependencies between the various objects that make up the SQL Result Cache
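For instance, a quick sketch of checking cache activity from V$RESULT_CACHE_STATISTICS:

```sql
-- "Create Count Success" = results built; "Find Count" = cache hits;
-- "Invalidation Count" = results invalidated by table changes.
select name, value
from   v$result_cache_statistics
where  name in ('Create Count Success', 'Find Count', 'Invalidation Count');
```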

The SQL Result Cache enables you to cache the results of queries that access a lot of data. The cache is automatically invalidated when the underlying table changes without your intervention or writing any additional code.


PL/SQL Function Result Cache

Suppose you have a PL/SQL function, rather than a SQL query, that returns the values; it's a common practice to use a function to make the code modular. Consider a case of two tables: CUSTOMERS, which stores information on all customers along with their state_code, and SALES_TAX_RATE, which stores the tax rate of each state. To get the tax rate applicable to a customer, you have to join the tables in a query. So, to make it simple, you decide to write the function shown below, which accepts the customer ID as a parameter and returns the tax rate based on state_code:

create or replace function get_tax_rate
(
        p_cust_id       customers.cust_id%type
)
return sales_tax_rate.tax_rate%type
is
        l_ret   sales_tax_rate.tax_rate%type;
begin
        select tax_rate
        into l_ret
        from sales_tax_rate t, customers c
        where c.cust_id = p_cust_id
        and t.state_code = c.state_code;
        -- simulate some time consuming
        -- processing by sleeping for 1 sec
        dbms_lock.sleep (1);
        return l_ret;
exception
        when NO_DATA_FOUND then
                return NULL;
        when others then
                raise;
end;
/

Execute the function a few times as shown below. Remember to set timing on to record the elapsed time in each case.

SQL> select get_tax_rate(1) from dual;
 
GET_TAX_RATE(1)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:01.23
SQL> select get_tax_rate(1) from dual;
 
GET_TAX_RATE(1)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:01.17

It consistently took pretty much the same time for each execution. (I have deliberately placed a sleep statement to delay the processing inside the function; otherwise it would have returned too quickly.) If you examine the code, you will notice that the function will most likely return the same value every time it is called. A customer does not change states frequently and the tax rate of a state rarely changes, so for a given customer, the tax rate will most likely be the same across executions. The rate changes if and only if the tax rate of a state changes or the customer moves out of the state. So, how about caching the results of this function?


Oracle Database 11g allows you to do exactly that. You can enable the results of a function to be cached as well, simply by adding the result_cache clause. But what about the scenario where the state actually changes the tax rate or the customer moves out of the state? The feature allows you to specify the dependence on the underlying tables, so that any data change in those tables will trigger invalidation and subsequent rebuilding of the cache in the function. Here is the same function with the result cache clause added:

create or replace function get_tax_rate
(
        p_cust_id       customers.cust_id%type
)
return sales_tax_rate.tax_rate%type
result_cache
relies_on (sales_tax_rate, customers)
is
        l_ret   sales_tax_rate.tax_rate%type;
begin
        select tax_rate
        into l_ret
        from sales_tax_rate t, customers c
        where c.cust_id = p_cust_id
        and t.state_code = c.state_code;
        -- simulate some time consuming
        -- processing by sleeping for 1 sec
        dbms_lock.sleep (1);
        return l_ret;
exception
        when NO_DATA_FOUND then
                return NULL;
        when others then
                raise;
end;
/
After creating the function with this change, execute it in the same way:

SQL> select get_tax_rate(1) from dual;
 
GET_TAX_RATE(1)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:01.21

It took 1.21 seconds, just as it did earlier without caching, but look at the subsequent executions:

SQL> select get_tax_rate(1) from dual;
 
GET_TAX_RATE(1)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:00.01

The elapsed time was merely 0.01 seconds! What happened? The function executed normally the first time, taking 1.21 seconds, but the important difference this time was that it cached the results as it executed. The subsequent invocations didn't execute the function; they merely got the results from the cache, skipping the 1-second sleep in the function code.


The cache was for the customer_id 1 only. What if you executed the function for a different customer?

SQL> select get_tax_rate(&n) from dual;
Enter value for n: 5
old   1: select get_tax_rate(&n) from dual
new   1: select get_tax_rate(5) from dual
 
GET_TAX_RATE(5)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:01.18
SQL> /
Enter value for n: 5
old   1: select get_tax_rate(&n) from dual
new   1: select get_tax_rate(5) from dual
 
GET_TAX_RATE(5)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:00.00
SQL> /
Enter value for n: 6
old   1: select get_tax_rate(&n) from dual
new   1: select get_tax_rate(6) from dual
 
GET_TAX_RATE(6)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:01.17

As you can see, the first time the function is executed for a given parameter, it caches the result. Subsequent invocations retrieve the value from the cache. As you keep executing the function for each customer, the cache builds up.


Notice the relies_on clause in the function code. It tells the function that the cache depends on two tables: customers and sales_tax_rate. If the data in those tables changes, the cache needs to be refreshed. The refresh happens automatically, without your intervention. If the data does not change, the cache continues to provide the cached values as quickly as possible. (In Oracle Database 11g Release 2, the function automatically tracks the tables it gets its data from and checks them for changes; the RELIES_ON clause is no longer necessary. The syntax still allows the table names, but they are redundant.)

If you need to bypass the cache for some reason, you can call a procedure in the supplied package DBMS_RESULT_CACHE:

SQL> exec dbms_result_cache.bypass(true);
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:00:00.01

SQL> select get_tax_rate(&n) from dual;
Enter value for n: 6
old   1: select get_tax_rate(&n) from dual
new   1: select get_tax_rate(6) from dual
 
GET_TAX_RATE(6)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:01.18

The cache was not used, as you can see from the execution time.
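To resume using the cache, turn the bypass off again with the same procedure:

```sql
begin
   dbms_result_cache.bypass(false);
end;
/
```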


Caching vs. Package Variables

You can also accomplish in-memory caching of values using a package variable (either a scalar datatype or a PL/SQL collection) instead of the SQL Result Cache or the PL/SQL Function Result Cache. Applications can then read the variable rather than the table row or the function. Being memory based, a package variable acts sort of like a cache, so what value does the SQL Result Cache add?

The differences are many. After you executed the function for a customer, say cust_id = 5, from another session, execute the function for the same customer:

SQL> select get_tax_rate(&n) from dual
  2  /
Enter value for n: 5
old   1: select get_tax_rate(&n) from dual
new   1: select get_tax_rate(5) from dual
 
GET_TAX_RATE(5)
       ---------------
              6
 
1 row selected.
 
Elapsed: 00:00:00.00


Note the execution time; it indicates that the result came from the cache, not from executing the function. So even though the result was cached by a different session, any session that called the function could use it.

The cache belongs to the database instance, not to a session. This ability to make results cached in one session available to all others is very different from package variables, which hold values in memory visible to a single session only.


Furthermore, package variables have no idea that the underlying tables have changed. You have to refresh them manually when the data changes; otherwise the apps get stale data. Both SQL Result Cache and PL/SQL Function Result Cache automatically refresh the cache when the underlying table data changes; no user intervention is required.
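For contrast, here is a minimal sketch of the package-variable approach (all names are hypothetical). Note that the cache is private to each session and must be cleared by hand whenever the underlying data changes:

```sql
create or replace package tax_rate_cache as
   -- Hypothetical wrapper that caches rates in a package variable
   function get_tax_rate (p_cust_id customers.cust_id%type)
      return sales_tax_rate.tax_rate%type;
   procedure clear_cache;   -- must be called manually after data changes
end tax_rate_cache;
/
create or replace package body tax_rate_cache as
   type rate_tab is table of sales_tax_rate.tax_rate%type
      index by pls_integer;
   g_rates rate_tab;   -- session-private memory; invisible to other sessions

   function get_tax_rate (p_cust_id customers.cust_id%type)
      return sales_tax_rate.tax_rate%type
   is
      l_ret sales_tax_rate.tax_rate%type;
   begin
      if g_rates.exists(p_cust_id) then
         return g_rates(p_cust_id);   -- "hit": no query, but possibly stale
      end if;
      select tax_rate
      into   l_ret
      from   sales_tax_rate t, customers c
      where  c.cust_id = p_cust_id
      and    t.state_code = c.state_code;
      g_rates(p_cust_id) := l_ret;
      return l_ret;
   end get_tax_rate;

   procedure clear_cache is
   begin
      g_rates.delete;   -- the only way to get rid of stale values
   end clear_cache;
end tax_rate_cache;
/
```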

Client Query Result Cache

Consider a situation where a client has to fetch the same data repeatedly over a slow network link. Although the database can send the result almost instantaneously from its cache, the result must still travel over the wire to the client, adding to the overall execution time. There are specialized middleware frameworks, such as Oracle Coherence, for caching data in Java, PHP, and Ruby, but what if there were a generic way to cache the data at the client level?

Oracle Database 11g provides the Client Query Result Cache for that purpose. All database client stacks that use the OCI8 drivers (C, C++, JDBC-OCI, and so on) can use this new feature, which allows clients to cache the results of SQL queries locally rather than on the server. In summary, Client Query Result Cache furnishes the following benefits by:

  • freeing application developers from building a consistent but per-process SQL Result Cache shared by all sessions
  • extending server-side query caching to client side memory, by leveraging cheaper client-side memory and caching each application's working set locally
  • ensuring better performance by eliminating round trips to the server
  • improving server scalability by saving server resources
  • offering transparent cache management: memory management, concurrent access of result-sets, and so on
  • transparently maintaining cache consistency with server-side changes
  • offering consistency in RAC environment
To use this feature, all you have to do is set an initialization parameter:

CLIENT_RESULT_CACHE_SIZE = 1G

This defines the client cache to be 1GB, the combined size of the caches across all clients. (This is a static parameter, so you have to restart the database to set it.) You can also configure the cache at each client by specifying parameters in the SQLNET.ORA file at the client location:

Parameter Description
OCI_RESULT_CACHE_MAX_SIZE Specifies the size of the cache on that particular client, in bytes
OCI_RESULT_CACHE_MAX_RSET_SIZE Specifies the maximum size, in bytes, of any single result set
OCI_RESULT_CACHE_MAX_RSET_ROWS Same as above, but expressed as the number of rows in a result set
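A client-side SQLNET.ORA fragment might look like the following (the values are illustrative only, not recommendations):

```
# Client result cache limits for this client installation
OCI_RESULT_CACHE_MAX_SIZE = 10485760        # 10 MB total client cache
OCI_RESULT_CACHE_MAX_RSET_SIZE = 1048576    # 1 MB per result set
OCI_RESULT_CACHE_MAX_RSET_ROWS = 5000       # at most 5,000 rows per result set
```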

Let's see this in action. Here is some simple Java code that connects to the database using the OCI8 driver and executes the SQL statement: select /*+ result_cache */ * from customers. The hint causes the statement to cache the result (as other parameters have been set).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CacheTest {
    private String jdbcURL = "jdbc:oracle:oci8:@PRONE3";
    private Connection conn = null;

    public CacheTest() throws ClassNotFoundException {
        Class.forName("oracle.jdbc.driver.OracleDriver");
    }

    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        CacheTest check = new CacheTest();
        check.dbconnect();
        check.doSomething();
    }

    public void dbconnect() throws SQLException {
        System.out.println("Connecting with URL=" + jdbcURL + " as arup/arup");
        try {
            conn = DriverManager.getConnection(jdbcURL, "arup", "arup");
            System.out.println("Connected to Database");
        } catch (SQLException sqlEx) {
            System.out.println("Error connecting to database: " + sqlEx.toString());
        }
    }

    public void doSomething() throws SQLException {
        Statement stmt = null;
        ResultSet rset = null;
        try {
            stmt = conn.createStatement();
            System.out.println("Created Statement object");
            rset = stmt.executeQuery("select /*+ result_cache */ * from customers");
            System.out.println("Retrieved ResultSet object");
            if (rset.next())
                System.out.println("Result:" + rset.getString(1));
        } catch (SQLException sqlEx) {
        } finally {
            try {
                System.out.println("Closing Statement & ResultSet Objects");
                if (rset != null) rset.close();
                if (stmt != null) stmt.close();
                if (conn != null) {
                    System.out.println("Disconnecting...");
                    conn.close();
                    System.out.println("Disconnected from Database");
                }
            } catch (Exception e) { }
        }
    }
}

Save the file as CacheTest.java and then compile the code:

$ORACLE_HOME/jdk/bin/javac CacheTest.java

Now execute the compiled class:

$ORACLE_HOME/jdk/bin/java -classpath .:$ORACLE_HOME/jdbc/lib/ojdbc5.jar CacheTest
Connecting with URL=jdbc:oracle:oci8:@PRONE3 as arup/arup
Connected to Database
Created Statement object
Retrieved ResultSet object
Result:M
Closing Statement & ResultSet Objects
Disconnecting...
Disconnected from Database

Execute this a few times. After a few executions, you can see that the client has cached the values via the dynamic views shown below:

select * from client_result_cache_stats$
/
select * from v$client_result_cache_stats
/

Client Query Result Cache comes in very handy for lookup tables that usually do not change. (Even if they do change, the cache is refreshed.) This is different from SQL Result Cache, where the cache is on the server. Since the client caches the results, the client need not make a round trip to the server to get the data, which saves not only network bandwidth but server CPU cycles as well. For more information, refer to the Oracle Call Interface Programmer's Guide.


Database Resident Connection Pooling

In traditional client/server architectures, there is a one-to-one correspondence between a user session and a database connection. In Web-based systems however, this may not be the case.

Web-based systems are "stateless" in nature: when you visit a page, a connection is established with the database, and when the page finishes loading, the connection is severed. Later, when the user clicks the page again, a new connection is established and then severed once the work is done. This model makes it unnecessary to maintain a large number of simultaneous connections.

Establishing a connection carries significant overhead, so connection pooling is an important requirement in these apps. In this model, when a page needs database access, it grabs one of the already established connections from the pool. After the work is done, the Web session returns the connection to the pool.

The problems with traditional client-side or middle-tier connection pooling, however, are:
  • Each pool is confined to a single middle-tier node.
  • A proliferation of pools results in excessive number of pre-allocated database servers and excessive database server memory consumption.
  • Workload is distributed unequally across pools.
To obviate these problems, Oracle Database 11g provides a server-side pool called Database Resident Connection Pool (DRCP). DRCP is available to all database clients that use the OCI driver, including C, C++, and PHP.

Oracle Database 11g comes preinstalled with a default connection pool, but it is initially shut down. To start it, use:

execute dbms_connection_pool.start_pool;
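You can then verify the pool's state and settings from the data dictionary; for example:

```sql
-- The preinstalled pool is named SYS_DEFAULT_CONNECTION_POOL;
-- STATUS should show it as active once started.
select connection_pool, status, minsize, maxsize
from   dba_cpool_info;
```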

Now, to use pooled connections instead of regular sessions, all you have to do is add a line (SERVER=POOLED) to the TNS entry, as shown below:

PRONE3_POOL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prolin3.proligence.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = POOLED)
      (SID = PRONE3)
    )
  )
Clients can then connect to the connection pool using the connect string PRONE3_POOL. That's it. Now your apps will connect to the pool instead of a dedicated server. If you use the easy connect string without a TNSNAMES.ORA file, you can append the POOLED clause. For instance, in PHP, you would connect as:

$c = oci_pconnect('myuser', 'mypassword','prolin3.proligence.com/PRONE3:POOLED');

or

$c = oci_pconnect('myuser', 'mypassword','PRONE3_POOL');

In the description above, you started the default pool that comes with Oracle, with its default options. To change them, use the procedure CONFIGURE_POOL in the supplied package DBMS_CONNECTION_POOL:

Parameter Description
POOL_NAME Name of the pool. Use '' (two single quotes) for the default pool
MINSIZE The minimum number of pooled servers to keep in the pool
MAXSIZE The maximum number of pooled servers allowed in the pool
INCRSIZE When a pooled server is not available, the pool creates this many new servers
SESSION_CACHED_CURSORS Turns on session cached cursors for the pooled sessions
INACTIVITY_TIMEOUT If a pooled server is idle for this long, it is disconnected
MAX_THINK_TIME After a client gets a server from the pool, it must issue a SQL statement within this time; otherwise the client loses the server
MAX_USE_SESSION The maximum number of times a pooled server can be taken from and released back to the pool
MAX_LIFETIME_SESSION The maximum time to live for a pooled server
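As a sketch of reconfiguring the default pool (the values are illustrative, not recommendations):

```sql
begin
   dbms_connection_pool.configure_pool(
      pool_name          => 'SYS_DEFAULT_CONNECTION_POOL',  -- the default pool
      minsize            => 10,
      maxsize            => 100,
      incrsize           => 5,
      inactivity_timeout => 300,   -- seconds
      max_think_time     => 120    -- seconds
   );
end;
/
```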


DRCP functionality is important because a single pool can sustain tens of thousands of simultaneous users on a commodity platform. Furthermore, a single pool can be shared across multiple clients or middle-tier nodes, and in RAC and Data Guard environments, the combination of DRCP and FAN events furnish fast connection failover.


For an excellent treatment of Database Resident Connection Pooling, with examples in PHP, refer to the Oracle technical white paper PHP Scalability and High Availability.
