Oracle Database 11g:
The Top Features for DBAs and Developers

by Arup Nanda


Learn how the Health Monitor monitors the health of some database components automatically and then records its findings in Automatic Diagnostic Repository, how to create a package to be sent to Oracle Support in case of problems, and how to read the new version of the alert log, listener log, and other logs.


Automatic Health Monitor

How do you know whether your database is humming along smoothly? Well, one way is to check "everything"—a rather time-consuming and error-prone process. In some shops dedicated DBAs perform the same task over and over to assess and report the health of the database, but most shops can't afford full-time staff for this effort. The alternative is to have the regular DBA staff perform health checks, but the result is usually not very encouraging: dividing one's attention across too many things makes it easy to miss something potentially dangerous.

In Oracle Database 11g the effort has become somewhat simpler with the introduction of the Automatic Health Monitor. Similar to the Advisors introduced in Oracle Database 10g, the Automatic Health Monitor "checkers" monitor (automatically after a failure or on demand) various components, such as the datafiles and dictionary, to make sure they are not corrupted physically or logically. When the checkers find something, the information is reported and then may be fed to various recovery advisors. At a minimum, the new Incident Packaging Service (described later) allows you to make a bundle of all issues and supporting files for easier reporting to Oracle Support.

Like many other features of Oracle Database 11g, this process can be managed either from the command line or via the Oracle Enterprise Manager GUI. Here you'll see how it's done with the latter.

On the main Database page, scroll all the way down to the section marked Related Links as shown below.

From this list of hyperlinks, click on Advisor Central, which brings up the Advisors and Checkers screen. Click on the tab labeled Checkers. The top portion of the screen is shown below.

This is a very important screen that shows the multiple checkers available as well as the automated checker runs that have been executed.

First, let's focus on the multiple checkers available.

DB Structure Integrity Check. This checker is best explained through an example. First click on DB Structure Integrity Check, which brings up a small screen where you can name the run—"DB_Struct_Int1", for example. The only other input is an optional time limit for the run; leave it blank to impose no limit.

After the health check runs successfully, a confirmation appears in the top portion of the screen, as shown below:

The lower portion of the screen also shows the run that was just made. The name is what you entered earlier: DB_Struct_Int1. The important difference is the column Run Type, which shows "Manual" for this run, as opposed to "Reactive" for others. You can choose this run by ticking the radio button to the left and then clicking on the button labeled Details. The resulting screen shows the details of the run, such as what type of damage was detected and so on.

In this case, a datafile was somehow corrupted and this checker will identify that. The information is then fed to the Data Recovery Advisor (discussed in the installment on RMAN) to take appropriate action. You can invoke this checker any time to check the datafile's integrity.
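If you prefer the command line, the same check can be invoked from SQL*Plus through the DBMS_HM package. The following is a minimal sketch (the run name is arbitrary; it assumes a user with EXECUTE privilege on DBMS_HM, and the exact check names for your release can be confirmed in V$HM_CHECK):

```sql
-- List the checks that can be invoked manually
SELECT name FROM v$hm_check WHERE internal_check = 'N';

-- Run the structure check; "DB_Struct_Int1" is just an arbitrary run name
BEGIN
   dbms_hm.run_check(
      check_name => 'DB Structure Integrity Check',
      run_name   => 'DB_Struct_Int1');
END;
/

-- Read the findings of the run as a text report
SELECT dbms_hm.get_run_report('DB_Struct_Int1') FROM dual;
```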

This will most likely be your most frequently used checker. On the screen that lists the past runs, choose any run of this type and click on the Details button, and you will see the screen shown below:

This screen shows all the findings on this issue: datafile #7 has become corrupt. You can launch the Recovery Advisor if you wish to get advice on what needs to be done next.

Data Block Integrity Check. This checker is similar to the DB Structure Integrity Check but examines only specific blocks rather than entire files. As before, you give the run a name and other relevant details. Here is what the screen looks like:

Note that you have to enter the datafile number and the block number. (I entered 7 and 20 respectively.) After entering the details, press OK. This starts the check process and the lower portion will reflect the status of the run as shown below:

Again, if there were a problem with the block, the checker would have found it. Using the hyperlink you could navigate to the details page to learn about the issues.
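The same block-level check can be run from SQL*Plus, with the datafile and block passed as input parameters. The parameter names below (BLC_DF_NUM for the file, BLC_BL_NUM for the block) are the ones the 11g documentation uses for this check; confirm them for your release in V$HM_CHECK_PARAM. A sketch mirroring the EM example:

```sql
-- Which input parameters does this check accept?
SELECT c.name check_name, p.name parameter
FROM   v$hm_check c, v$hm_check_param p
WHERE  p.check_id = c.id
AND    c.name = 'Data Block Integrity Check';

-- Check block 20 of datafile 7, as in the EM example above
BEGIN
   dbms_hm.run_check(
      check_name   => 'Data Block Integrity Check',
      run_name     => 'Blk_Int1',
      input_params => 'BLC_DF_NUM=7;BLC_BL_NUM=20');
END;
/
```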

Redo Integrity Check. This checker scans the contents of the redo and archive logs for accessibility and corruption.

Undo Segment Integrity Check. This check finds logical undo corruptions, which are sometimes identified during rollback operations. After locating an undo corruption, this check uses PMON and SMON to try to recover the corrupted transaction. If this recovery fails, then Automatic Health Monitor stores information about the corruption in V$CORRUPT_XID_LIST. Most undo corruptions can be resolved by forcing a commit.

Transaction Integrity Check. Transaction Integrity Check is almost identical to Undo Segment Check except that it checks only one specific transaction, which is passed to the check as an input parameter. After locating an undo corruption, this check uses PMON and SMON to try to recover the corrupted transaction. If this recovery fails, then Automatic Health Monitor stores information about the corruption in V$CORRUPT_XID_LIST. Most undo corruptions can be resolved by forcing a commit.
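From the command line, the transaction to examine is passed the same way. The sketch below assumes the documented input parameter name TXN_ID and uses a made-up transaction ID in the usual undo-segment.slot.sequence format; unrecoverable corruptions can then be inspected in V$CORRUPT_XID_LIST:

```sql
-- Check one specific transaction (the XID here is only an example)
BEGIN
   dbms_hm.run_check(
      check_name   => 'Transaction Integrity Check',
      run_name     => 'Txn_Int1',
      input_params => 'TXN_ID=7.33.65');
END;
/

-- Transactions the checker could not recover
SELECT * FROM v$corrupt_xid_list;
```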

Dictionary Integrity Check. This check examines the integrity of core dictionary objects, such as tab$ and col$. It verifies the contents of dictionary entries for each dictionary object, that logical constraints on rows in the dictionary are enforced, and that parent-child relationships between dictionary objects are enforced.

Automatic Health Checks

Remember the Checkers main screen? Note the list of Checker Runs at the bottom of the page, which shows the various checker runs that have occurred along with their Run Type. If you ran a checker manually, as you did earlier in this section, the Run Type shows "Manual". Runs listed as "Reactive" were executed automatically when an error was detected somewhere. If a run finds something, it is recorded, and you can access the findings by clicking on these hyperlinks. For instance, clicking on the first run, HM_RUN_140013, will show the details of that checker run:

The screen clearly shows the cause of the failure. Actually, there are at least two failures: a corrupt online redo log file as well as a corrupt datafile. The first thing you would want to do is ask for advice by clicking the button Launch Recovery Advisor. After continuing through the wizard-based interface, you will come to a screen where the advisor directs you to perform a specific action:

The action would be to run the SQL file reco_767216331.hm in the directory /home/oracle/diag/rdbms/odel11/ODEL11/hm. If you open the file, you will see the contents:

/*Clear the Log Group*/

The corrupt logfile belongs to a group that is not active, so it is safe to clear it—and that was the advice from the Recovery Advisor. If you decide to follow the advice, press the Continue button and the recovery will proceed. After it completes, if you go back to Support Workbench, you will see that the redo corruption is gone but a new corruption, in an archived log, has been detected.
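Clearing an inactive log group, which is what a generated repair script of this kind typically does, amounts to a statement like the following sketch (group 2 is only an example; clear only a group that is not CURRENT or ACTIVE):

```sql
-- Confirm the group's status first; it must not be CURRENT or ACTIVE
SELECT group#, status FROM v$log;

-- Re-initialize the corrupt, inactive group. Use CLEAR UNARCHIVED LOGFILE
-- if the group was never archived; note that this makes earlier backups
-- unusable for recovery through this point.
ALTER DATABASE CLEAR LOGFILE GROUP 2;
```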

As usual, you can launch the advisor to fix those errors.
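All of these checker runs, manual and reactive alike, are also visible from SQL*Plus in the V$HM_RUN view; a simple query such as the following (column names can be confirmed in the reference for your release) lists them without going through Enterprise Manager:

```sql
SELECT run_id, name, check_name, status
FROM   v$hm_run
ORDER  BY run_id;
```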

Automatic Diagnostic Repository

When the checkers find something, they need to record it somewhere for further analysis and subsequent processing. All this metadata is recorded in a new facility called Automatic Diagnostic Repository that records all critical events, not just those detected by checkers. It's like the SYSTEM tablespace of the critical events in the database. Let's see how you can use it via the Enterprise Manager.

From the Enterprise Manager main Database page, click on the tab named Software and Support and then click on Support Workbench, which brings up a screen similar to the one shown below.

This shows just a summary as reported by the checkers: ORA-603 errors have occurred. Click on the + sign to the left of the error and the details of those errors show up.

If you click on the link corresponding to each incident id under the Incidents heading, you can view the incidents. For instance, take the example of the incident 14435. Clicking on the link brings up the details of the incident in a screen shown below:

The top portion of the screen shows the details that are probably self-explanatory. The lower portion of the screen shows the supporting details of the incident such as trace files. These files are sent over to Oracle Support for analysis (but only if you use the Incident Packaging Service to package them and have Oracle Configuration Manager configured with proper credentials). Click on the eyeglass icon next to the first trace file, which brings up the trace file as shown below:

Note how the lines in the file are parsed and presented in a very user-friendly manner. Enterprise Manager reads each line, identifies the dependent lines, and presents them properly indented. If you click on the hyperlinks in the file, you will see the raw section from the trace file.

You can combine the incidents into one "envelope" to be sent to Oracle Support and that's what you should do next: package them. To do this, click inside the check box under Select and click on the button labeled Package. This brings up a screen similar to one shown below:

For now, ignore Custom Packaging. Click on Quick Packaging and press Continue, which brings up a screen such as the following:

Enter all the details and press Next. The subsequent screens confirm what is being packaged, the manifest of the package, and so on. Here you see that all relevant trace files associated with the error have been identified and added to the package. This process saves you from the error-prone task of identifying the correct set of trace files for a particular error, and the tool also has the intelligence to gather trace files of related errors in the same package. This is important because trace files of all related errors are needed by Oracle to determine the root cause of the problem; the error that you decided to package may be a symptom and not the root cause. Finally, you can press Submit to send it to Oracle. Once you submit, the screen looks a bit different, as shown below:

Note how there is an entry under the Packaged column now ("Yes"). This confirms that the incidents were packaged. Note one more section:

This shows that the upload file was generated along with corresponding details like when it was generated, the main issue that is in the package, and so on. However the file was not uploaded to Oracle Support because you have either not installed or not configured the Configuration Manager yet. You will learn about that feature later; for now, if you click on the name, you will see more details about the package:

The details are all self-explanatory. The incidents are all shown with hyperlinks, which, when clicked, will go to the incident page you have seen earlier. Clicking on the tab labeled Files brings you to the page that shows all the files contained in the package. A little eyeglass icon on the far right means that you can view the files by clicking on them.

Click on the Activity Log tab now, which will show you the history of the package, when it was created, when it was sent to Oracle (if it was), and so on.
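The packaging workflow you just walked through in Enterprise Manager can also be driven from the adrci command line through the IPS (Incident Packaging Service) commands. A sketch; the problem, incident, and package numbers below are examples only, taken from your own show problem output:

```
adrci> show problem                      -- note the PROBLEM_ID to package
adrci> ips create package problem 1      -- returns a package number, say 1
adrci> ips add incident 14435 package 1  -- optionally add related incidents
adrci> ips generate package 1 in /tmp    -- writes the upload zip file to /tmp
```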

Customizing the Package

One of the most useful features of this tool is the ability to customize the package contents to be sent to Oracle. To do so, click on the button labeled Customize, which brings up the screen shown below:

Note the panel at the right hand side, Packaging Tasks. In this panel, you will see the hyperlinks to perform various tasks related to the package. Here are some key tasks:

Edit Contents
  1. Add Problems: lets you add more problems to the same package. Useful if you think those problems could be related.
  2. Exclude Problems: allows you to remove problems; helpful if you think those problems are not related to the other problems in the package.
  3. View Package Manifest: the manifest is a document that describes the contents of the package (what it is about, what problems it covers, what are the various trace files included, and so on). This is a text file. Clicking on this link brings up the manifest for you.
Scrub User Data

  1. Copy out Files to Edit Contents: here you can copy traces and other files included in the package to a local OS folder and edit them, if desired. You may want to do this if the files contain sensitive data.
  2. Copy in Files to Replace Contents: after you edit the files, you can replace the copy in the package with the edited copy.
Additional Diagnostic Data
  1. Gather Additional Dumps: allows you to add several other dumps, if not included already (such as a trace file generated as a result of setting sql_trace = true) and test cases.
  2. Add External Files: you can add any other file (init.ora or listener.ora, say) to the package if required by Support (or otherwise relevant).
Send to Oracle Support
  1. Finish Contents Preparation: as the name suggests, you can confirm the contents are fine and ready to be sent to Oracle Support.
  2. Generate Upload File: allows you to generate the upload file once again, along with all the additions and deletions you made to the list of files.
  3. View/Send Upload Files: lets you upload the file to Support, if you have configured the Configuration Manager.

Configuration Manager

Wouldn't it be nice if the package were created and sent to Oracle Support, and a draft Service Request (TAR) created with the relevant details such as your CSI and MetaLink userid? It's possible, using a tool called Oracle Configuration Manager.

During Oracle Database 11g installation, you were asked if you wanted to install and configure that tool. If you answered "yes", then you have already done it, but if you answered "no", you can configure it now.

Go to the Oracle Home directory and cd to ccr/bin. Then run the script setupCCR to set up Configuration Manager for the very first time. It displays a license message, which you are asked to accept or reject, and then asks a few questions, such as your CSI number, MetaLink userid, and so on. The following text shows the interactive session.

*** Do you accept this license agreement? (Y/N) [Y]:
Configuration requires the following piece(s) of information.
    Customer Support Identifier (CSI):
    Oracle MetaLink User Name:
    The two character country code:
** Installing base package **
  Deploying core - Version
** Registering installation with Oracle Configuration Manager server(s) **
** Deploying engines - Version
   Deploying metricdata - Version
   Deploying scripts - Version
** Getting package updates from ContentServer **
** Starting the Oracle Configuration Manager Scheduler **
Oracle Configuration Manager - Release:  - Production
Copyright (c) 2005, 2007, Oracle.  All rights reserved.
------------------------------------------------------------------
Starting Oracle Configuration Manager...
Waiting for status from Oracle Configuration Manager....
Start Date               16-Oct-2007 09:28:46
Last Collection Time     -
Next Collection Time     17-Oct-2007 09:28:00
Collection Frequency     Daily at 09:28
Collection Status        scheduled collection running
Log Directory            /home/oracle/app/oracle/product/11.1/db_1/ccr/log
Registered At            16-Oct-2007 09:28:17
Automatic Update         On
Collector Mode           Connected
Oracle Configuration Manager successfully started.
Once Configuration Manager is set up, you can change the parameters using the script configCCR found in the same directory.

ADR Home

Since all the focus is on the diagnostic ability of the database, shouldn't Oracle Database store all the trace files, log files, and so on organized in a structured way?

It does, in Oracle Database 11g. The Automatic Diagnostic Repository (ADR) files are located in directories under a common directory specified as the Diagnostic Destination (or ADR Base). This directory is set by an initialization parameter (diagnostic_dest). By default it is set to $ORACLE_BASE, but you could explicitly set it to some other, dedicated directory. (This is not recommended, however.) Under this directory there is a subdirectory called diag, under which you will find the subdirectories where the diagnostic files are stored.
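You can check, and if necessary change, the setting with ordinary parameter commands; the path below is only an example:

```sql
SQL> show parameter diagnostic_dest

-- diagnostic_dest can be changed at runtime with ALTER SYSTEM
-- (confirm the modifiability in the reference for your release)
ALTER SYSTEM SET diagnostic_dest = '/u01/app/oracle' SCOPE=BOTH;
```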

The ADR houses logs and traces of all components—ASM, CRS, listener, and so on—in addition to those of the database itself. This makes it convenient for you to look for a specific log at a single location.

Inside the ADR Base, there could be several ADR Homes, one for each component and instance. For example, if the server has two Oracle instances, there will be two ADR Homes. Here is the directory structure of the ADR Homes for the database instances.

<Directory mentioned in the DIAGNOSTIC_DEST parameter>
   → diag
      → rdbms
         → <Name of the Database>
            → <Name of the Instance>
               → alert      ... the alert log in XML format is stored here
               → cdump      ... core dumps are stored here, the equivalent
                                of core_dump_dest in earlier versions
               → hm         ... the Health Monitor runs checks on many
                                components, and it stores some files here
               → incident   ... all incident dumps are stored here
                  → <one directory per incident>
               → incpkg     ... when you package incidents (packaging is
                                described earlier in this article), certain
                                supporting files are stored here
               → metadata   ... metadata about problems, incidents,
                                packages, and so on is kept here
               → trace      ... user traces and background traces are kept
                                here, along with the text version of the
                                alert log
For instance, if your database name is ODEL11 and the instance name is ODEL11 (in uppercase) as well, the path of the ADR Home is /home/oracle/diag/rdbms/odel11/ODEL11. You can now see the different subdirectories under this ADR Home:

$ ls
alert  cdump  hm  incident  incpkg  ir  lck  metadata  stage  sweep  trace

To support this new structure, the *_dest parameters in previous releases (background_dump_dest and user_dump_dest) are ignored. (core_dump_dest is not ignored; in fact Oracle recommends that you set it as core dumps can be very large.) You shouldn't set them at all and if you are upgrading from 10g to 11g, you should remove them from the initialization parameter file to avoid confusion later.
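If the deprecated parameters are still present in an spfile after an upgrade, they can be removed with ALTER SYSTEM RESET; a sketch:

```sql
-- Remove the now-ignored parameters from the spfile
-- (takes effect at the next instance startup)
ALTER SYSTEM RESET background_dump_dest SCOPE=SPFILE SID='*';
ALTER SYSTEM RESET user_dump_dest       SCOPE=SPFILE SID='*';
```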

The ADR directory structure for other components is similar. For instance, for an ASM instance, the directory under diag is named asm instead of rdbms. The rest of the directory structure remains the same. The name of the target in the case of ASM is +asm. For instance, here is how my ADR Home for ASM looks:

$ pwd
$ ls
alert  cdump  hm  incident  incpkg  ir  lck  metadata  stage  sweep  trace

For the listener, the directory under diag is called tnslsnr, under which is a directory named for the host, and under that a directory named for the listener. Under that you will see the other directories.

<Directory mentioned in the DIAGNOSTIC_DEST parameter>
   → diag
      → tnslsnr
         → <hostname of the server>
             → <name of the listener>
               → alert
               → trace ...

For instance, for a host named oradba3, and a listener named "listener" (the default name), the directory will look like /home/oracle/diag/tnslsnr/oradba3/listener. Under this directory all the others (alert, trace, metadata, and so on) are created. Like the alert log, the listener log file is also stored as XML entries, under the subdirectory alert. The usual text listener log file is still produced, under the directory trace.

A new view V$DIAG_INFO shows all the details about the ADR Homes. In my RDBMS home, it appears like this:

SQL> select * from v$diag_info;
 INST_ID NAME                           VALUE
-------- ------------------------------ -----------------------------------------------------------------
       1 Diag Enabled                   TRUE
       1 ADR Base                       /home/oracle
       1 ADR Home                       /home/oracle/diag/rdbms/odel11/ODEL11
       1 Diag Trace                     /home/oracle/diag/rdbms/odel11/ODEL11/trace
       1 Diag Alert                     /home/oracle/diag/rdbms/odel11/ODEL11/alert
       1 Diag Incident                  /home/oracle/diag/rdbms/odel11/ODEL11/incident
       1 Diag Cdump                     /home/oracle/diag/rdbms/odel11/ODEL11/cdump
       1 Health Monitor                 /home/oracle/diag/rdbms/odel11/ODEL11/hm
       1 Default Trace File             /home/oracle/diag/rdbms/odel11/ODEL11/trace/ODEL11_ora_3908.trc
       1 Active Problem Count           3
       1 Active Incident Count          37
11 rows selected.

This shows the ADR information for this instance only. To see that for another instance, simply connect to that instance and select from v$diag_info. The columns are self-explanatory. The Default Trace File entry indicates the trace file for your current session. The Active Problem and Active Incident counts refer to the problems and incidents described earlier.
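Individual entries are handy in scripts; for example, to find the trace file for your current session:

```sql
SELECT value
FROM   v$diag_info
WHERE  name = 'Default Trace File';
```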

You can access the files and perform other operations on the ADR in two ways. The easiest way is via Enterprise Manager, as you saw earlier. The other option is a command line tool called adrci. Let's see how you can use it. From the UNIX (or Windows) command prompt, type "adrci":

$ adrci 
ADRCI: Release - Beta on Sun Sep 23 23:22:24 2007
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
ADR base = "/home/oracle"

As you learned earlier, there are several ADR Homes, one for each instance of the Oracle components. So, the first task is to show how many homes exist. The command is show homes.

adrci> show homes
ADR Homes: 

As you can see, there can be several homes. To operate on a specific home, use the set homepath command:

adrci> set homepath diag/rdbms/odel11/ODEL11

Once set, you can issue many commands at the prompt. The first command you may try is help, which will show all the available commands. Here is a brief excerpt of the output:

adrci> help
 HELP [topic]
   Available Topics:
If you want to know more about a specific command, issue help <command>. For instance, to get help on the usage of the show incident command, issue:

adrci> help show incident            
  Usage: SHOW INCIDENT [-p <predicate_string>] 
                       [-mode BASIC|BRIEF|DETAIL]
                       [-last <num> | -all] 
                       [-orderby (field1, field2, ...) [ASC|DSC]]
  Purpose: Show the incident information. By default, this command will
           only show the last 50 incidents which are not flood controlled.
    [-p <predicate_string>]: The predicate string must be double-quoted.
    [-mode BASIC|BRIEF|DETAIL]: The different modes of showing incidents.
[... and so on ...]


From the output you know the usage. Now to know how many incidents have been recorded, you can issue:

adrci> show incident -mode basic  

ADR Home = /home/oracle/diag/rdbms/odel11/ODEL11:
INCIDENT_ID          PROBLEM_KEY                                         CREATE_TIME
-------------------- --------------------------------------------------- ----------------------------------------
14556 ORA 600 [KSSRMP1] 2007-10-17 04:01:57.725620 -04:00
14555 ORA 600 [KSSRMP1] 2007-10-16 18:45:03.970884 -04:00
14435 ORA 603 2007-10-16 06:06:46.705430 -04:00
14427 ORA 603 2007-10-16 06:06:42.007937 -04:00
14419 ORA 603 2007-10-16 06:06:30.069050 -04:00
6001 ORA 4031 2007-08-28 14:50:01.355783 -04:00
5169 ORA 4031 2007-09-04 19:09:36.310123 -04:00
5121 ORA 4031 2007-09-03 14:40:14.575457 -04:00
5017 ORA 4031 2007-09-04 19:09:30.969226 -04:00
4993 ORA 4031 2007-09-04 19:09:33.179857 -04:00
4945 ORA 4031 2007-09-04 19:09:30.955524 -04:00
4913 ORA 4031 2007-09-04 19:09:31.641990 -04:00

This shows a list of all incidents. Now, you can get the details of a specific incident as shown below:

adrci> show incident -mode detail -p "incident_id=14556"

ADR Home = /home/oracle/diag/rdbms/odel11/ODEL11:

STATUS ready
CREATE_TIME 2007-10-17 04:01:57.725620 -04:00
[... and so on ...]
INCIDENT_FILE /home/oracle/diag/rdbms/odel11/ODEL11/trace/ODEL11_mmon_14831.trc
INCIDENT_FILE /home/oracle/diag/rdbms/odel11/ODEL11/incident/incdir_14556/ODEL11_mmon_14831_i14556.trc
1 rows fetched

The information shown on the adrci command line is analogous to what you see in the Enterprise Manager screens. The latter may, however, be simpler and much more user friendly. adrci is very helpful when you don't have access to the EM Support Workbench for some reason. You can also use adrci to do things like tailing the alert log file or searching a log (listener, css, crs, alert, and so on) for specific patterns. adrci is also helpful if you want to work on the ADR programmatically.
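For example, a small shell wrapper (a hypothetical monitoring snippet; the home path is the one used throughout this article, so substitute your own) can pull the tail of the alert log for use in a cron job or monitoring script:

```sh
#!/bin/sh
# Print the last 20 alert-log entries for one ADR home,
# then filter them for ORA- errors
adrci exec="set homepath diag/rdbms/odel11/ODEL11; show alert -tail 20" \
  | grep "ORA-"
```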


New Alert Log

In Oracle Database 11g, the alert log is written in XML format. For the sake of compatibility with older tools, the traditional alert log is also available in the ADR Home under the trace directory. For instance, in my example shown above, the directory is /home/oracle/diag/rdbms/odel11/ODEL11/trace, where you can find the alert_ODEL11.log. However, the other alert logs are in XML format, and are located in the alert subdirectory under ADR Home. Let's see the files:

$ pwd
$ ls -ltr
total 60136
-rw-r-----  1 oracle oinstall 10485977 Sep 13 17:44 log_1.xml
-rw-r-----  1 oracle oinstall 10486008 Oct 16 06:35 log_2.xml
-rw-r-----  1 oracle oinstall 10485901 Oct 16 07:27 log_3.xml
-rw-r-----  1 oracle oinstall 10485866 Oct 16 08:12 log_4.xml
-rw-r-----  1 oracle oinstall 10486010 Oct 17 23:56 log_5.xml
-rw-r-----  1 oracle oinstall  9028631 Oct 21 20:07 log.xml

Note that there are several files: log_1.xml, log_2.xml, and so on. When the log.xml reaches a certain size, the file is renamed to log_?.xml and a new file is started. This prevents the alert log from becoming too large and unmanageable.

The new alert log is accessed via the adrci utility, the ADR command line tool you learned about in the previous section. From adrci, issue:

adrci> show alert
Choose the alert log from the following homes to view:
1: diag/rdbms/odel11/ODEL11
2: diag/clients/user_oracle/host_1967384410_11
3: diag/clients/user_unknown/host_411310321_11
4: diag/tnslsnr/oradba3/listener
Q: to quit
Please select option:

You can choose one from the menu or you can supply a specific home:

adrci> set homepath diag/rdbms/odel11/ODEL11
adrci> show alert 
ADR Home = /home/oracle/diag/rdbms/odel11/ODEL11:

[... and the whole alert log show up here ...]

Instead of viewing the entire alert log, you may want to see only a few lines at the end, e.g. the last 10 lines (similar to the tail -10 command in UNIX):

adrci> show alert -tail 10
2007-09-23 19:57:44.502000 -04:00
Errors in file /home/oracle/diag/rdbms/odel11/ODEL11/trace/ODEL11_arc1_20810.trc:
[... the rest of the 10 lines ...]

Perhaps the most frequent use will be to continuously display the last lines of the alert log as they arrive, similar to the tail -f command in UNIX:

adrci> show alert -tail -f

You can execute scripts from the adrci command line prompt. Here is an example of a Windows script that sets the home and displays the last 10 lines of the alert log:

C:\>type show_alert_10lines.cmd
set homepath diag\rdbms\lapdb11\lapdb11
show alert -tail 10

You can call this script as shown below:

adrci script=show_alert_10lines.cmd

A similar capability is the exec parameter, which allows you to run commands directly from the OS command line:

adrci exec="show homes; show catalog"

At the adrci prompt, you can also run a script using the run command or the @ sign:

adrci> @show_alert_10lines.cmd

One of the best things about the alert log being an XML file is that the information is written in a structured way. Gone are the days when the alert log was a repository of unstructured data. The XML format makes the file viewable as a table in adrci. To see the fields of this "table", use the describe command:

adrci> describe alert_ext
Name                          Type            NULL?
----------------------------- --------------- -----------
NORMALIZED_TIMESTAMP          timestamp
ORGANIZATION_ID               text(65)
COMPONENT_ID                  text(65)
HOST_ID                       text(65)
HOST_ADDRESS                  text(17)
MESSAGE_TYPE                  number
MESSAGE_LEVEL                 number
MESSAGE_ID                    text(65)
MESSAGE_GROUP                 text(65)
CLIENT_ID                     text(65)
MODULE_ID                     text(65)
PROCESS_ID                    text(33)
THREAD_ID                     text(65)
USER_ID                       text(65)
INSTANCE_ID                   text(65)
DETAILED_LOCATION             text(161)
UPSTREAM_COMP_ID              text(101)
DOWNSTREAM_COMP_ID            text(101)
EXECUTION_CONTEXT_ID          text(101)
ERROR_INSTANCE_ID             number
MESSAGE_TEXT                  text(2049)
MESSAGE_ARGUMENTS             text(129)
SUPPLEMENTAL_DETAILS          text(129)
PARTITION                     number
RECORD_ID                     number
FILENAME                      text(513)
PROBLEM_KEY                   text(65)

Now that the information is structured, you can search with precision. Suppose you want to search for lines in the alert logs that match a specific value in a field. Here is an example:

adrci> show alert -p "module_id='DBMS_SCHEDULER'"

This shows all the lines written by processes with the module id dbms_scheduler. You can also use the inequality operator (not containing DBMS_SCHEDULER):

adrci> show alert -p "module_id != 'DBMS_SCHEDULER'"

Likewise you can use the pattern-matching operators:

adrci> show alert -p "module_id like '%SCHEDULER'"

The spool command works just like its namesake command in SQL*Plus. You can spool the output to a file:

adrci> spool a
adrci> show alert -tail 50
adrci> spool off

This creates a file (a.ado) containing the last 50 lines of the alert log. A great use of spooling is to extract specific types of messages from the alert log. For example, to extract the Streams-related statements, you would use:

adrci> show alert -p "message_text like '%STREAM%'"

You can see all the trace files generated in the ADR base directory from the adrci command prompt as well.

adrci> show tracefile

The above command shows a list of all the trace files generated in the ADR directory. To show specific types of trace files ("reco", for example) in reverse chronological order:

adrci> show tracefile %reco% -rt
   18-JUL-07 22:59:50  diag\rdbms\lapdb11\lapdb11\trace\lapdb11_reco_4604.trc
   12-JUL-07 09:48:23  diag\rdbms\lapdb11\lapdb11\trace\lapdb11_reco_4236.trc
   11-JUL-07 10:30:22  diag\rdbms\lapdb11\lapdb11\trace\lapdb11_reco_3256.trc

adrci offers many more options to view the alert log and related files in the most efficient manner. For a complete description of adrci commands, see the documentation.
