Oracle Database 10g: The Top 20 Features for DBAs

Join Oracle Magazine's 2003 "DBA of the Year" Arup Nanda over the next 20 weeks as he presents his list of the top Oracle Database 10g features for database administrators

Week 20
Best of the Rest

Automatic statistics collection, guaranteed retention of Undo data, and easier and more secure encryption are some of the other features DBAs always wanted and now have in Oracle Database 10g

Wow! This is the final week of our amazing journey into the most important new features for DBAs in Oracle Database 10g. Over these last 19 installments I have attempted to cover all the tools, tips, and techniques that have this fundamental appeal: making our jobs easier and more satisfying.

If a feature has "star" quality but still doesn't do much to help me as a DBA, it didn't make this list. Even so, 20 articles are not enough to explore all that Oracle 10g has to offer—I would probably need a hundred more. So in this final installment of the series, I will describe a few of the remaining new features of Oracle 10g that deserve a mention.

Are Your Stats Stale? Don't Leave it to Chance

As most of you know, the Rule-Based Optimizer (RBO) is finally desupported (not merely deprecated) as of Oracle 10g. In anticipation of that long-awaited development, many Oracle9i Database shops migrated to the Cost-Based Optimizer (CBO) to stay in the support loop and to take advantage of advanced features such as query rewrite and partition pruning. The problem, however, is statistics—or rather, the absence of them.

Because the CBO depends on accurate (or reasonably accurate) statistics to produce optimal execution paths, DBAs need to ensure that statistics are gathered regularly, adding yet another item to the enforcement checklist. Prior to 10g, this process could prove futile for various reasons, giving rise to the theory that the CBO has a "mind of its own"—implying behavior such as changing execution paths at will!

Many of these worries have been put to rest in 10g, in which statistics can be collected automatically. In Oracle9i, you could check whether the data in a table had changed significantly by turning on the table monitoring option (ALTER TABLE ... MONITORING) and then checking the view DBA_TAB_MODIFICATIONS for those tables.
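In Oracle9i that looked something like this (a minimal sketch; the ARUP.REVENUE table is a hypothetical example):

ALTER TABLE arup.revenue MONITORING;

-- after some DML activity; note the view is refreshed
-- periodically, not in real time
SELECT table_name, inserts, updates, deletes
FROM   dba_tab_modifications
WHERE  table_owner = 'ARUP';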

In 10g, the MONITORING statement is no longer needed. Instead, statistics are collected automatically if the initialization parameter STATISTICS_LEVEL is set to TYPICAL or ALL. (The default value is TYPICAL, so automatic statistics gathering is enabled out of the box.) Oracle Database 10g has a predefined Scheduler (you learned about the Scheduler in Week 19) job named GATHER_STATS_JOB, which is activated by the appropriate value of the STATISTICS_LEVEL parameter.
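You can confirm that the job is in place and enabled with a quick query against the Scheduler dictionary (a sketch):

SELECT job_name, enabled, state
FROM   dba_scheduler_jobs
WHERE  job_name = 'GATHER_STATS_JOB';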

The collection of statistics is fairly resource-intensive, so you may want to ensure it doesn't affect the regular operation of the database. In 10g, this is handled automatically: a predefined resource consumer group named AUTO_TASK_CONSUMER_GROUP is used for automatically executed tasks such as statistics gathering. This consumer group keeps the priority of the stats collection jobs below that of the default consumer group, reducing or eliminating the risk of automatic tasks taking over the machine.

What if you want to set the parameter STATISTICS_LEVEL to TYPICAL but don't want statistics collection to be automatic? Simple. Just disable the Scheduler job by issuing the following:

BEGIN 
   DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
END;

And why would you want to do that? There are a variety of legitimate reasons. One is that although most of a table's rows may have changed, the distribution of the data may not have, a common situation in data warehouses; in that case you don't want to collect statistics again, just reuse the old ones. Another reason could be that you are using partition exchange to refresh a materialized view (MV) and don't want to collect statistics on the MV, as the statistics on the exchanged table will be imported as well. However, you can also exclude specific tables from the automatic stats collection job, eliminating the need to disable the entire job.
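One way to exclude a table is to lock its statistics; the automatic job skips tables with locked statistics. A minimal sketch, again using the hypothetical ARUP.REVENUE table:

BEGIN
   -- lock the statistics so GATHER_STATS_JOB leaves this table alone
   DBMS_STATS.LOCK_TABLE_STATS ('ARUP', 'REVENUE');
   -- later, to let automatic collection resume:
   -- DBMS_STATS.UNLOCK_TABLE_STATS ('ARUP', 'REVENUE');
END;
/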

Statistics History

One of the complications that can occur during optimizer statistics collection is a changed execution plan—that is, the old optimization works fine until the statistics are collected, but thereafter the queries suddenly go awry due to bad plans generated by the newly collected statistics. This problem is not infrequent.

To protect against such mishaps, the statistics collection saves the present statistics before gathering the new ones. In the event of a problem, you can always go back to the old statistics, or at least examine the differences between them to get a handle on the problem.

For example, let's imagine that the statistics collection job on the table REVENUE runs at 10:00 PM on May 31, and that the queries subsequently perform badly. Oracle saves the pre-collection statistics, which you can restore by issuing:

begin
   dbms_stats.restore_table_stats (
      'ARUP',
      'REVENUE',
      '31-MAY-04 10.00.00.000000000 PM -04:00');
end;

This command restores the statistics as of 10:00 PM on May 31, the point in time being passed as a TIMESTAMP value. With a single call you have undone the changes made by the new statistics gathering.
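If you are not sure which points in time are available to restore, you can check the statistics history for the table first (a sketch, again using the hypothetical ARUP.REVENUE):

SELECT stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = 'ARUP'
AND    table_name = 'REVENUE'
ORDER  BY stats_update_time;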

How far back you can restore statistics is determined by the retention setting. To check the current retention, use the query:

SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

which in this case shows that 31 days' worth of statistics are retained, though not guaranteed. To discover the exact date and time to which the available statistics extend, simply use the query:

SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;

GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------
17-MAY-04 03.21.33.594053000 PM -04:00

which reveals that the oldest available statistics date to 3:21 PM on May 17.

You can set the retention period to a different value by executing a built-in procedure. For example, to set it to 45 days, issue:

execute DBMS_STATS.ALTER_STATS_HISTORY_RETENTION (45)

Guaranteed Undo Retention

Automatic Undo Retention, introduced in Oracle9i, was of significant help in reducing the chances of the dreaded ORA-1555 "Snapshot Too Old" error. But these errors still appeared, albeit in vastly reduced numbers. Why?

To answer this question, you need to understand how undo segments work. When data changes inside Oracle, the blocks in the cache (in the SGA) are changed immediately and the past images are stored in the undo segments. When the transaction commits, the old images are no longer necessary, so their space can be reused. If all the space in the undo segment is used up, Oracle tries to reuse the oldest extent of the segment (a process known as "wrapping," reflected in the WRAPS column of the V$ROLLSTAT view). In some cases, especially with long-running transactions, the segment instead extends to make room for the active transactions, as shown in the EXTENDS column of V$ROLLSTAT. If a query needs data from an extent of the undo segment to create a consistent view of the data but that extent has already been reused, the query fails with the ORA-1555 error.

The initialization parameter UNDO_RETENTION specifies how long undo data should be retained, in seconds. With this parameter set, Oracle avoids reusing old undo extents, even inactive ones, if they were changed within the undo retention period. This approach reduces the chance that an inactive extent needed by a later query has already been reused, and hence the occurrence of ORA-1555 errors.

However, although UNDO_RETENTION specifies how long undo data should be kept, it does not guarantee it. When the undo segments can't extend, the oldest inactive extent is reused to satisfy the current transaction. Consequently, a long-running query unrelated to the modifying transaction may fail with ORA-1555 when it needs data from those reused extents.
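The parameter is dynamic, so you can change it on a running instance. For example, to ask for one hour of retention (a sketch):

ALTER SYSTEM SET UNDO_RETENTION = 3600;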

This problem is solved in 10g: when you create the undo tablespace, you can now specify an undo retention "guarantee." Here's an example:

CREATE UNDO TABLESPACE UNDO_TS1
DATAFILE '/u01/oradata/proddb01/undo_ts1_01.dbf'
SIZE 1024M 
RETENTION GUARANTEE;

Note the concluding clause, which causes the undo tablespace to guarantee the retention of unexpired undo extents. Existing undo tablespaces can also be made to comply with the guarantee by altering them, as in:

ALTER TABLESPACE UNDO_TS2 RETENTION GUARANTEE;
What if you don't want to guarantee the retention (the Oracle9i behavior)? Well, do this:
ALTER TABLESPACE UNDO_TS2 RETENTION NOGUARANTEE;
You can verify that the tablespace has guaranteed undo retention with:
SELECT RETENTION
FROM DBA_TABLESPACES
WHERE TABLESPACE_NAME = 'UNDO_TS1';

The RETENTION column shows GUARANTEE or NOGUARANTEE for undo tablespaces (and NOT APPLY for other tablespace types).

End-to-End Tracing

A common approach to diagnosing performance problems is to enable sql_trace to trace database calls and then analyze the output later with a tool such as tkprof. However, this approach has a serious limitation in databases that use the shared server architecture. In that configuration, several shared server processes service the requests of the users. When user BILL connects to the database, the dispatcher passes the connection to an available shared server; if none is available, a new one is created. If the session starts tracing, the calls made by the shared server process are traced.

Now suppose that BILL's session becomes idle and LORA's session becomes active. The shared server originally servicing BILL is then assigned to LORA's session, and from that point the tracing information emitted comes from LORA's session, not BILL's. When LORA's session becomes inactive, the shared server may be assigned to yet another active session, which will mix in completely different information.

In 10g, this problem has been effectively addressed through the use of end-to-end tracing. In this case, tracing is not done only by session, but by an identifiable name such as a client identifier. A new package called DBMS_MONITOR is available for this purpose.

For instance, you may want to trace all sessions with the identifier account_update. To set up the tracing, you would issue:

exec DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('account_update');
This command enables tracing on all sessions with the identifier account_update. When BILL connects to the database, he can issue the following to set the client identifier:
exec DBMS_SESSION.SET_IDENTIFIER ('account_update')
Because tracing is active for the identifier account_update, the above session will be traced and a trace file will be generated in the user dump destination directory. If another user connects to the database and sets her client identifier to account_update, that session will be traced as well, automatically, without any additional command in the code. All sessions with the client identifier account_update will be traced until tracing is disabled by issuing:
exec DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE('account_update');
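To see which end-to-end traces are currently in effect, you can query the DBA_ENABLED_TRACES view (a quick sketch; only a few of its columns are shown):

SELECT trace_type, primary_id, waits, binds
FROM   dba_enabled_traces;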

The resulting trace files can be analyzed with tkprof. However, each session produces a different trace file. For proper problem diagnosis, we are interested in a consolidated trace file, not individual ones. How do we achieve that?

Simple. Using a tool called trcsess, you can extract the information relevant to client identifier account_update into a single file that you can run through tkprof. In the above case, go to the user dump destination directory and run:

trcsess output=account_update_trc.txt clientid=account_update *
This command creates a file named account_update_trc.txt that looks like a regular trace file but has information on only those sessions with client identifier account_update. This file can be run through tkprof to get the analyzed output.
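For example (the report file name is an arbitrary choice):

tkprof account_update_trc.txt account_update_rpt.txt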

Contrast this approach with the previous, more difficult method of collecting trace information. Tracing is enabled and disabled by an attribute such as the client identifier, without calling alter session set sql_trace = true from the session itself. Another procedure in the same package, SERV_MOD_ACT_TRACE_ENABLE, can enable tracing for other combinations, such as a specific service, module, or action, which can be set through the DBMS_APPLICATION_INFO package.
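For instance, to trace every session of a given service that runs a given module, you might issue something like the sketch below. The service and module names are hypothetical; the module would be set by the application through DBMS_APPLICATION_INFO.SET_MODULE.

BEGIN
   -- trace all sessions of service PRODDB_SVC running module PAYROLL
   DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE (
      service_name => 'PRODDB_SVC',
      module_name  => 'PAYROLL');
END;
/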

Database Usage

As your Oracle sales representative will confirm, partitioning is an extra-cost option, and in this age of cost control, you may wonder if users ever use it—and if so, how often.

Instead of relying on replies from the users, ask the database. Automatic Workload Repository (introduced in Week 6) collects usage information on all installed features, albeit only once per week.

Two very important views display the usage pattern of the database. One, DBA_HIGH_WATER_MARK_STATISTICS, shows the maximum value of each of the features used in the present database. Here is an example output.

NAME             HIGHWATER LAST_VALUE DESCRIPTION
--------------- ---------- ---------- ----------------------------------------------------------
USER_TABLES            401        401 Number of User Tables
SEGMENT_SIZE    1237319680 1237319680 Size of Largest Segment (Bytes)
PART_TABLES             12          0 Maximum Number of Partitions belonging to an User Table
PART_INDEXES            12          0 Maximum Number of Partitions belonging to an User Index
USER_INDEXES           832        832 Number of User Indexes
SESSIONS                19         17 Maximum Number of Concurrent Sessions seen in the database
DB_SIZE         7940079616 7940079616 Maximum Size of the Database (Bytes)
DATAFILES                6          6 Maximum Number of Datafiles
TABLESPACES              7          7 Maximum Number of Tablespaces
CPU_COUNT                4          4 Maximum Number of CPUs
QUERY_LENGTH          1176       1176 Maximum Query Length
As you can see, this view shows several valuable pieces of information on the usage of the database—such as the fact that the users created a maximum of 12 partitioned tables but that none are in use now (LAST_VALUE = 0). This information persists across shutdowns and can prove very useful when planning operations such as migration to a different host.

The above view still does not answer all our questions, however. It tells us that only 12 partitioned tables were ever created, but not the last time this feature was used. Another view, DBA_FEATURE_USAGE_STATISTICS, which shows the usage of the various features of the database, can answer that question. Here is how the view looks for the partitioning feature with columns shown in vertical format.

DBID                          : 4133493568
NAME                          : Partitioning
VERSION                       : 10.1.0.1.0
DETECTED_USAGES               : 12
TOTAL_SAMPLES                 : 12
CURRENTLY_USED                : FALSE
FIRST_USAGE_DATE              : 16-oct-2003 13:27:10
LAST_USAGE_DATE               : 16-dec-2003 21:20:58
AUX_COUNT                     :
FEATURE_INFO                  :
LAST_SAMPLE_DATE              : 23-dec-2003 21:20:58
LAST_SAMPLE_PERIOD            : 615836
SAMPLE_INTERVAL               : 604800
DESCRIPTION                   : Oracle Partitioning option is being used -
                                there is at least one partitioned object created.
As this view shows, the partitioning feature is not being used in the database now (column CURRENTLY_USED is FALSE), and the last time it was used was Dec 16, 2003, at 9:20 PM. The usage sampling is done every 604,800 seconds, or 7 days, as shown in the column SAMPLE_INTERVAL. The column LAST_SAMPLE_DATE shows the last time the usage was sampled, indicating how current the information is.
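For a quick overview of every feature that has ever been used in the database, a query along these lines does the job (column list abbreviated):

SELECT name, detected_usages, currently_used, last_usage_date
FROM   dba_feature_usage_statistics
WHERE  detected_usages > 0
ORDER  BY name;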

In addition to the command-line interface, Enterprise Manager 10g also shows this information. In EM, go to the Administration tab and click on the "Database Usage Statistics" link under Configuration Management. (See Figures 1 and 2.)

Figure 1: Database Usage Statistics page

Figure 2: Database Usage Statistics; feature drill-down


Easier and More Secure Encryption

Remember the package DBMS_OBFUSCATION_TOOLKIT (DOTK)? It was the only way to achieve encryption inside the database in Oracle9i and below. While the package was sufficient for most databases, like most security products it was quickly rendered inadequate against sophisticated attacks on highly sensitive information. Most notable among the missing functionality was support for the Advanced Encryption Standard (AES), a more powerful successor to the older Data Encryption Standard (DES) and Triple DES (DES3).

In 10g, a more sophisticated encryption apparatus, DBMS_CRYPTO, comes to the rescue. This built-in package offers all the functionality lacking in DOTK, in addition to enhancing existing functions and procedures. For example, DBMS_CRYPTO can encrypt with the 256-bit AES algorithm. The function ENCRYPT (which is also overloaded as a procedure) accepts a few parameters:

Parameter   Description
---------   ------------------------------------------------------------------
SRC         The input to be encrypted. It must be of the RAW datatype; any
            other datatype must be converted first. For instance, the
            character variable l_inp is converted by:

               utl_i18n.string_to_raw (l_inp, 'AL32UTF8');

            Because the string must be converted to RAW in the character set
            AL32UTF8, a new package called UTL_I18N is used. Unlike DOTK,
            DBMS_CRYPTO does not accept character variables as parameters.
            Also note that you do not have to pad the input to a multiple of
            16 bytes, as you did with the DOTK package; the function (or
            procedure) pads it automatically.

KEY         The encryption key. The key must be of the appropriate length
            for the algorithm used.

TYP         The type of encryption and padding. For example, to use the AES
            256-bit algorithm, Cipher Block Chaining, and PKCS#5 padding,
            you would combine the built-in constants:

               typ => dbms_crypto.encrypt_aes256 +
                      dbms_crypto.chain_cbc +
                      dbms_crypto.pad_pkcs5

The ENCRYPT function returns the encrypted value as a RAW, which can be converted back into a string using
utl_i18n.raw_to_char (l_enc_val, 'AL32UTF8')
the reverse of the conversion to RAW.

The opposite of encryption is decryption, provided by the function (and overloaded as a procedure) DECRYPT, which accepts analogous parameters. Using this new package, you can build sophisticated security models inside your database applications.
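Putting the pieces together, here is a minimal sketch of a round trip through ENCRYPT and DECRYPT. The variable names are illustrative, and the hardcoded key is for demonstration only; a real application must generate and store its keys securely.

DECLARE
   -- 256-bit (32-byte) key, hardcoded here purely for illustration
   l_key  RAW (32) := UTL_RAW.CAST_TO_RAW ('01234567890123456789012345678901');
   l_typ  PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_AES256 +
                         DBMS_CRYPTO.CHAIN_CBC +
                         DBMS_CRYPTO.PAD_PKCS5;
   l_enc  RAW (2000);
   l_dec  RAW (2000);
BEGIN
   -- convert the input string to RAW, then encrypt it
   l_enc := DBMS_CRYPTO.ENCRYPT (
               src => UTL_I18N.STRING_TO_RAW ('Meet me at dawn', 'AL32UTF8'),
               typ => l_typ,
               key => l_key);
   -- decrypt with the same type constants and key, then convert back
   l_dec := DBMS_CRYPTO.DECRYPT (
               src => l_enc,
               typ => l_typ,
               key => l_key);
   DBMS_OUTPUT.PUT_LINE (UTL_I18N.RAW_TO_CHAR (l_dec, 'AL32UTF8'));
END;
/

Note that DECRYPT must be called with the same algorithm, chaining, and padding constants that were used to encrypt.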

Conclusion

As I mentioned earlier, it would be impossible to cover all the new features relevant to DBAs in Oracle Database 10g, but I have attempted to present a select few over these 20 weeks, with this week devoted to important but miscellaneous tidbits that could not be covered in the previous 19 installments. I hope you found the series informative and helpful. Once again, please feel free to offer your comments, and don't forget to drop me a line to let me know which feature you like best.


