
Oracle Database 11g: The Top Features for DBAs and Developers

by Arup Nanda

Backup and Recovery

Advice on data recovery, parallel backup of the same file, virtual catalogs for security, duplicate database from backup, undrop a tablespace, and secure backup to the cloud are just a few of the new gems available from RMAN in Oracle Database 11g.

See Series TOC

Data Recovery Advisor

Consider the error shown below:

SQL> conn scott/tiger
SQL> create table t (col1 number);
create table t (col1 number)
ERROR at line 1:
ORA-01116: error in opening database file 4
ORA-01110: data file 4: '/home/oracle/oradata/PRODB3/users01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3

Does it look familiar? Regardless of your experience as a DBA, you have probably seen this message more than once. The error occurs because the datafile in question is not available: it could be corrupt, or perhaps someone removed the file while the database was running. In any case, you need to take prompt action before the problem has a more widespread impact.

In Oracle Database 11g, the new Data Recovery Advisor makes this operation much easier. The advisor comes in two flavors: a command line mode and a screen in Oracle Enterprise Manager Database Control. Each has its advantages in specific situations. The former comes in handy when you want to automate the identification of such files via shell scripting and schedule recovery through a utility such as cron or at. The latter is helpful for novice DBAs who might want the assurance of a GUI that guides them through the process. I'll describe both here.

Command Line Option

The command line option is executed through RMAN. First, start the RMAN process and connect to the target.

$ rman target=/
Recovery Manager: Release - Beta on Sun Jul 15 19:43:45 2007
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: PRODB3 (DBID=3132722606)

Assuming that some error has occurred, you want to find out what happened. The list failure command tells you that in a jiffy.

RMAN> list failure;

If there is no error, this command will come back with the message:

no failures found that match specification

If there is an error, a more explanatory message will follow:

using target database control file instead of recovery catalog

List of Database Failures
Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
142        HIGH     OPEN   15-JUL-07     One or more non-system datafiles are missing

This message shows that some datafiles are missing. Because the missing datafiles belong to a tablespace other than SYSTEM, the database stays up, with that tablespace offline. The error is fairly critical, so the priority is set to HIGH. Each failure gets a Failure ID, which makes it easier to identify and address individual failures. For instance, you can issue the following command to get the details of failure 142:

RMAN> list failure 142 detail;

This command will show you the exact cause of the error.

Now comes the fun part: How do you rectify the error? Seasoned DBAs will probably ace this without further help but novice DBAs (and even experienced but tired ones) will welcome some guidance here. They can turn to Data Recovery Advisor for assistance:

RMAN> advise failure;

It responds with a detailed explanation of the error and how to correct it:

List of Database Failures
Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
142        HIGH     OPEN   15-JUL-07     One or more non-system datafiles are missing
analyzing automatic repair options; this may take some time
using channel ORA_DISK_1
analyzing automatic repair options complete
Mandatory Manual Actions
no manual actions available
Optional Manual Actions
1. If file /home/oracle/oradata/PRODB3/users01.dbf was unintentionally renamed or moved, restore it
Automated Repair Options
Option Repair Description
------ ------------------
1      Restore and recover datafile 4  
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /home/oracle/app/diag/rdbms/prodb3/PRODB3/hm/reco_3162589478.hm

This output has several important parts. First, the advisor analyzes the error. In this case, it's pretty obvious: the datafile is missing. Next, it suggests a strategy. In this case that is fairly simple as well: restore and recover the file. (Note that I have deliberately chosen a simple example to focus attention on the usage of the tool, not on the many ways a database can fail and be recovered. The dynamic performance view V$IR_MANUAL_CHECKLIST also shows this information.)

However, the most useful task Data Recovery Advisor does is shown in the very last line: it generates a script that can be used to repair the datafile or resolve the issue. The script does all the work; you don't have to write a single line of code.

Sometimes the advisor doesn't have all the information it needs. For instance, in this case, it does not know whether someone moved the file to a different location or renamed it. If that is what happened, it advises you to move the file back to the original location and name (under Optional Manual Actions).

OK, so the script is prepared for you. Are you ready to execute it? I don't know about you, but I would verify what the script actually does first. So, I issue the following command to "preview" the actions the repair task will execute:

RMAN> repair failure preview;
Strategy: The repair includes complete media recovery with no data loss
Repair script: /home/oracle/app/diag/rdbms/prodb3/PRODB3/hm/reco_741461097.hm
contents of repair script:
   # restore and recover datafile
   sql 'alter database datafile 4 offline';
   restore datafile 4;
   recover datafile 4;
   sql 'alter database datafile 4 online';

This is good; the repair seems to be doing the same thing I would have done myself using RMAN. Now I can execute the actual repair by issuing:

RMAN> repair failure;
Strategy: The repair includes complete media recovery with no data loss
Repair script: /home/oracle/app/diag/rdbms/prodb3/PRODB3/hm/reco_3162589478.hm
contents of repair script:
   # restore and recover datafile
   sql 'alter database datafile 4 offline';
   restore datafile 4;
   recover datafile 4;
   sql 'alter database datafile 4 online';
Do you really want to execute the above repair (enter YES or NO)?

Assuming I'm OK, I answer YES and the action goes on:

executing repair script
sql statement: alter database datafile 4 offline
Starting restore at 15-JUL-07
using channel ORA_DISK_1
channel ORA_DISK_1: restoring datafile 00004
input datafile copy RECID=5 STAMP=628025835 file name=/home/oracle/flasharea/PRODB3/datafile/o1_mf_users_39ocxbv3_.dbf
destination for restore of datafile 00004: /home/oracle/oradata/PRODB3/users01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00004
output file name=/home/oracle/oradata/PRODB3/users01.dbf RECID=0 STAMP=0
Finished restore at 15-JUL-07
Starting recover at 15-JUL-07
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 51 is already on disk as file /home/oracle/flasharea/PRODB3/archivelog/2007_07_15/o1_mf_1_51_39ocxxdw_.arc
and so on ...
name=/home/oracle/flasharea/PRODB3/archivelog/2007_07_15/o1_mf_1_55_39ocy9ox_.arc thread=1 sequence=55
media recovery complete, elapsed time: 00:00:01
Finished recover at 15-JUL-07
sql statement: alter database datafile 4 online
repair failure complete

Note how RMAN prompts you before attempting the repair. When scripting, you may not want that prompt; rather, you would want RMAN to go ahead and repair the failure without asking. In that case, just use repair failure noprompt at the RMAN prompt.
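For the cron-style automation mentioned earlier, the whole detect-advise-repair cycle can be scripted. The sketch below is illustrative only: the command file and log paths are hypothetical, and in practice you would review the advisor output before repairing blindly.

```shell
#!/bin/sh
# Illustrative sketch: automate failure detection and repair via cron.
# The command file and log paths are hypothetical; adjust for your site.
CMDFILE=/tmp/auto_repair.rman
LOGFILE=/tmp/auto_repair.log

cat > "$CMDFILE" <<'EOF'
list failure;
advise failure;
repair failure noprompt;
EOF

# Invoke RMAN only if it is installed on this host; cron would call this script.
if command -v rman >/dev/null 2>&1; then
    rman target=/ cmdfile="$CMDFILE" log="$LOGFILE"
fi
cat "$CMDFILE"
```

Scheduled from cron, a script like this repairs any advised failure without prompting, thanks to the noprompt clause.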

Proactive Health Checks

It helps you sleep better at night knowing that the database is healthy and has no bad blocks. But how can you ensure that? Bad blocks show themselves only when they are accessed, so you want to identify them early and, ideally, repair them with simple commands before users hit an error.

The dbverify tool can do the job, but it can be inconvenient to use because it requires a script file containing all the datafiles and a lot of parameters, and its output needs scanning and interpretation. In Oracle Database 11g, a new RMAN command, VALIDATE DATABASE, makes this operation trivial by checking database blocks for physical corruption. If corruption is detected, it is logged in the Automatic Diagnostic Repository. RMAN then produces output like that partially shown below:

RMAN> validate database;
Starting validate at 09-SEP-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=110 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00002 name=/home/oracle/oradata/ODEL11/sysaux01.dbf
input datafile file number=00001 name=/home/oracle/oradata/ODEL11/system01.dbf
input datafile file number=00003 name=/home/oracle/oradata/ODEL11/undotbs01.dbf
input datafile file number=00004 name=/home/oracle/oradata/ODEL11/users01.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:02:18
List of Datafiles
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1    OK     0              12852        94720           5420717   
  File Name: /home/oracle/oradata/ODEL11/system01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- --------------          ----------------
  Data       0              65435           
  Index      0              11898           
  Other      0              4535            
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     0              30753        115848          5420730   
  File Name: /home/oracle/oradata/ODEL11/sysaux01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- --------------          ----------------
  Data       0              28042           
  Index      0              26924           
  Other      0              30129           
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
3    OK     0              5368         25600           5420730   
  File Name: /home/oracle/oradata/ODEL11/undotbs01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- --------------      ----------------
  Data       0              0               
  Index      0              0               
  Other      0              20232           
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4    OK     0              2569         12256           4910970   


Otherwise, in case of a failure, you will see something like this in the corresponding part of the output:

List of Datafiles
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
7    FAILED 0              0            128             5556154   
  File Name: /home/oracle/oradata/ODEL11/test01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- --------------        ----------------
  Data       0              108             
  Index      0              0               
  Other      10             20              

You can also validate a specific tablespace:

RMAN> validate tablespace users;

Or a specific datafile:

RMAN> validate datafile 1;

Or even a single block in a datafile:

RMAN> validate datafile 4 block 56;

The VALIDATE command extends well beyond datafiles, however. You can validate the spfile, controlfile copies, recovery files, the Flash Recovery Area, and so on.
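A nightly health check based on VALIDATE can be scripted in the same spirit. This is a sketch under the assumption that rman is on the PATH; the paths and the grep pattern for the Status column are illustrative:

```shell
#!/bin/sh
# Illustrative proactive check: run VALIDATE DATABASE and flag any
# datafile whose Status column reads FAILED. Paths are hypothetical.
CMDFILE=/tmp/validate_db.rman
LOGFILE=/tmp/validate_db.log

printf 'validate database;\n' > "$CMDFILE"

if command -v rman >/dev/null 2>&1; then
    rman target=/ cmdfile="$CMDFILE" log="$LOGFILE"
    if grep -q 'FAILED' "$LOGFILE"; then
        echo "corruption detected; run advise failure in RMAN"
    fi
fi
cat "$CMDFILE"
```

Run from cron, this gives you the early warning the section describes, before a user ever touches the bad block.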

Enterprise Manager Interface

Let's see how failures can also be detected and corrected via Enterprise Manager.

First, go to the Database homepage; the top part is shown below.

Scroll down further until you see the Alerts section, as shown below:

One of the alerts has a severity level of critical, marked by a red X. The screen also shows you that the alert is that of a data failure. This failure was detected by a data integrity check, which was invoked in reaction to the failure. If you click on the hyperlink under the Message column, you will see the alert in more detail as shown in the screen below:

Now, go back to the Database homepage and choose the tab Software and Support, which brings up a screen that resembles the figure below:

Click on Support Workbench. On the top of the screen that comes next, you can see a small menu as shown below:

Click on Checker Findings, which shows the findings in more detail.

Fill in the username and password for the oracle user in the Host Credentials fields. Then click on the button Advise. The Advise screen comes up as shown below:

Oracle thinks, quite justifiably, that someone may have moved the file by mistake. If that's what happened, replace the file manually and click on Re-assess Failure. That might resolve the issue. Otherwise, click on the button Continue. The advice will come up as shown in the screen below:

As you can see, Data Recovery Advisor advised you to restore the datafile and recover it, which would be the most appropriate action in this case. Press Continue.

You can click on the button Submit Recovery Job to start the recovery process using RMAN. You can do all that without writing a single RMAN command; you don't have to know RMAN to recover from the failure.

GUI Interface to RMAN

RMAN has been around for a long time. Many people say that they don't use it due to the perceived complexity and the need to learn a language. But in some cases, RMAN is the only tool—or at least the best tool—for the job. Consider the case of block corruption where just one or two blocks of a huge datafile are corrupt. There is no need to restore the entire datafile; you could perform block media recovery to repair those corrupt blocks. But without using RMAN, you would miss out on those hidden gems.

But what if the need to learn a language were gone? Fortunately, you now have full access to RMAN functionality through a GUI. In this section you will see how to use RMAN in Enterprise Manager to fix corrupted blocks.

From the Database homepage, click on the Availability tab, shown below:

Click on Perform Recovery, which brings up the main recovery page:

Examine the screen carefully; it reports some failures. It detected no critical errors but one "High Severity" error (indicated by the red arrow). If you click on that, you will see the exact error that is considered highly severe.

Instead of using the Data Recovery Advisor in this case, we will perform a "User Directed Recovery," which allows you to choose the desired recovery procedure. The User Directed Recovery section contains links to all the necessary activities and the option to choose the scope of the recovery in a drop-down menu, which by default shows Whole Database. Let's see that section a little bit more clearly:

This shows the options for many types of recovery. In this case we are interested in block media recovery so choose the Block Recovery radio button, which brings up the screen shown below:

Note the options here. When a block gets corrupted, the data integrity check verifies the blocks and records any corrupt blocks in a "corruption list." You can select this method to identify which blocks need recovery. Of course, if you wish, you can also choose your own list of blocks to recover by choosing one of the other two options. When you know a certain set of blocks is corrupted, you can attempt the block media recovery; otherwise just attempt the other recovery method proposed by Oracle.

This screen shows the list of corrupt blocks as identified by the database. Click on Next.

Click on the button Submit to bring up the recovery window shown below:

This window shows the actual RMAN command to be issued. At this time you can press Submit to execute the RMAN job. Note the contents of the window: It's a real RMAN command, which you can copy and paste into an RMAN prompt.

The Enterprise Manager interface to RMAN provides the best of both worlds: the power of RMAN without the complexities of its command language. Advanced users of RMAN may not find it that useful, but for a novice, it's a lifesaver—especially when considering how a relatively complicated block media recovery was done using a simple interface.

Flashback Logs to the Rescue

Remember Flashback Logging, introduced in Oracle Database 10g? Provided Flashback is enabled in the database, it records optimized versions of the before-images of changed blocks into flashback logs generated in the Flash Recovery Area. These logs let you flash the database back to a point in time in the past without doing a point-in-time recovery from backups.

Well, since these flashback logs contain past images of the blocks, why not use them for recovery as well? Oracle Database 11g does exactly that. When you recover a specific block (or blocks), Oracle looks in the flashback logs (instead of the backups) to find a good past image of that block and then applies the archived logs to roll it forward. This technique saves a lot of time by avoiding a trip to the backups, especially if the backup is on tape.

ZLIB Compression

RMAN offered compression of backup pieces in Oracle Database 10g to conserve network bandwidth but many people were slow to use it. Why? Because third-party compression utilities provided faster alternatives to RMAN's own. Nevertheless, RMAN 10g compression has some neat features that third-party ones do not provide. For instance, when RMAN 10g restores datafiles, it doesn't need to uncompress the files first, provided it performed the compression. This approach offers significant bandwidth savings during restores.

In Oracle Database 11g, RMAN offers another algorithm, ZLIB, in addition to the previously available BZIP2. ZLIB is a much faster algorithm, but it does not compress as well. On the other hand, it does not consume much CPU either, so if you are CPU starved, ZLIB compression is a blessing. (Note that BZIP2 is the default in version 11.1; you need to license a new option, called Advanced Compression Option, to use ZLIB.)

To use ZLIB compression, merely set the RMAN configuration parameter:

RMAN> configure compression algorithm 'ZLIB' ;

To change the algorithm back to BZIP2, issue:

RMAN> configure compression algorithm 'bzip2';

All compressed backups will now use the newly configured algorithm.
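To confirm which algorithm is in effect before or after switching, RMAN's SHOW COMPRESSION ALGORITHM command can be scripted; a minimal sketch (the command file path is illustrative):

```shell
#!/bin/sh
# Minimal sketch: display the currently configured compression algorithm.
# The command file path is hypothetical; rman is run only if installed.
CMDFILE=/tmp/show_comp.rman
printf 'show compression algorithm;\n' > "$CMDFILE"
if command -v rman >/dev/null 2>&1; then
    rman target=/ cmdfile="$CMDFILE"
fi
cat "$CMDFILE"
```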

Parallel Backup of the Same Datafile

You probably already know that you can parallelize a backup by allocating more than one channel, each channel becoming an RMAN session. However, few realize that each channel can back up only one datafile at a time. So even though there are several channels, each datafile is backed up by only one channel, somewhat contrary to the perception that the backup is truly parallel.

In Oracle Database 11g RMAN, the channels can break the datafiles into chunks known as "sections." You can specify the size of each section. Here's an example:

RMAN> run {
2>      allocate channel c1 type disk format '/backup1/%U';
3>      allocate channel c2 type disk format '/backup2/%U';
4>      backup 
5>      section size 500m 
6>      datafile 6;
7> }

This RMAN command allocates two channels and backs up datafile 6 on both of them in parallel, each channel working on 500MB sections of the file. This makes backups of large files faster.
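To estimate how many sections (and hence backup pieces) a given SECTION SIZE produces, simple ceiling division suffices. A quick sketch with illustrative figures, not taken from the example database:

```shell
#!/bin/sh
# Ceiling division: number of 500MB sections in a hypothetical 2000MB datafile.
file_mb=2000
section_mb=500
sections=$(( (file_mb + section_mb - 1) / section_mb ))
echo "$sections sections"   # the allocated channels divide these among themselves
```

With two channels, the four resulting sections would be backed up two per channel, in parallel.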

When backed up this way, the backups show up as sections as well.

RMAN> list backup of datafile 6;
    List of Backup Pieces for backup set 901 Copy #1
    BP Key  Pc# Status      Piece Name
------  --- ----------- ----------
    2007    1   AVAILABLE   /backup1/9dhk7os1_1_1
    2008    2   AVAILABLE   /backup2/9dhk7os1_1_1
    2009    3   AVAILABLE   /backup1/9dhk7os1_1_3
    2009    3   AVAILABLE   /backup2/9dhk7os1_1_4

Note how the pieces of the backup show up as sections of the file. As each section goes to a different channel, and the channels can write to different mount points (such as /backup1 and /backup2), you can back the sections up to tape in parallel as well.

However, if the large datafile 6 resides on only one disk, there is no advantage to using parallel backups. If you section the file anyway, the disk head has to move constantly between different sections of it, outweighing the benefits of sectioning.

Backup Committed Undo? Why?

You already know what undo data is used for. When a transaction changes a block, the past image of the block is kept in the undo segments. The data is kept there even after the transaction commits, because a long-running query that started before the block was changed may still ask for that block; such a query should get the pre-commit image of the block, not the current one. Therefore, undo data is retained in the undo segments even after the commit, and is flushed out in due course to make room for newly generated undo data.

When an RMAN backup runs, it backs up all the data in the undo tablespace. But during recovery, the undo data for committed transactions is no longer needed: it is already in the redo log stream, or even in the datafiles (provided the dirty blocks have been flushed from the buffer cache and written to disk), and can be recovered from there. So why bother backing up the committed undo data?

In Oracle Database 11g, RMAN does the smart thing: it bypasses backing up the committed undo data that is not required in recovery. The uncommitted undo data that is important for recovery is backed up as usual. This reduces the size and time of the backup (and the recovery as well).

In many databases, especially OLTP ones where transactions are committed frequently and undo data stays longer in the undo segments, most of the undo data is actually committed. Thus RMAN has to back up only a few blocks from the undo tablespace.

The best part is that you needn't do anything to achieve this optimization; Oracle does it by itself.

Virtual Private Catalog

You are most likely using a catalog database for the RMAN repository. If you are not, you should seriously consider using one. There are several advantages, such as reporting, simpler recovery in case the controlfile is damaged, and so on.

Now comes the next question: how many catalogs? Generally it makes sense to have a single catalog database as the repository for all databases. However, that might not be a good approach for security: a catalog owner can see the repositories of all databases. Since each database to be backed up may have a separate DBA, making the entire catalog visible to each of them may not be acceptable.

So, what's the alternative? Of course, you could create a separate catalog database for each target database, which is probably impractical due to cost considerations. The other option is to create only one database for catalog yet create a virtual catalog for each target database. Virtual catalogs are new in Oracle Database 11g. Let's see how to create them.

First, you need to create a base catalog that contains all the target databases. The owner is, say, "RMAN". From the target database, connect to the catalog database as the base user and create the catalog.

$ rman target=/ rcvcat rman/rman@catdb
Recovery Manager: Release - Production on Sun Sep 9 21:04:14 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ODEL11 (DBID=2836429497)
connected to recovery catalog database
RMAN> create catalog;
recovery catalog created
RMAN> register database;
database registered in recovery catalog

starting full resync of recovery catalog
full resync complete

This is called the base catalog, owned by the user named "RMAN". Now let's create two additional users who will own the respective virtual catalogs. For simplicity, let's give these users the same names as their target databases. While still connected as the base catalog owner (RMAN), issue this statement:

RMAN> grant catalog for database odel11 to odel11;
Grant succeeded.

Now connect using the virtual catalog owner (odel11), and issue the statement create virtual catalog:

$ rman target=/ rcvcat odel11/odel11@catdb

RMAN> create virtual catalog;
found eligible base catalog owned by RMAN
created virtual catalog against base catalog owned by RMAN

Now, register a different database (PRONE3) to the same RMAN repository and create a virtual catalog owner "prone3" for its namesake database.

RMAN> grant catalog for database prone3 to prone3;
Grant succeeded.

$ rman target=/ rcvcat prone3/prone3@catdb

RMAN> create virtual catalog;
found eligible base catalog owned by RMAN
created virtual catalog against base catalog owned by RMAN

Now, if you connect as the base catalog owner (RMAN) and list the registered databases, you will see:

$ rman target=/ rcvcat=rman/rman@catdb

RMAN> list db_unique_name all;
List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------  -------  ---------------  ---------------  --------------
285     PRONE3   1596130080       PRIMARY          PRONE3              
1       ODEL11   2836429497       PRIMARY          ODEL11   

As expected, it showed both the registered databases. Now, connect as ODEL11 and issue the same command:

$ rman target=/ rcvcat odel11/odel11@catdb
RMAN> list db_unique_name all;
List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------  -------  ---------------  ---------------  --------------
1       ODEL11   2836429497       PRIMARY          ODEL11  

Note how only one database was listed, not both. This user (odel11) is allowed to see only one database (ODEL11), and that's what it sees. You can confirm this by connecting to the catalog as the other owner, PRONE3:

$ rman target=/ rcvcat prone3/prone3@catdb
RMAN> list db_unique_name all;
List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------  -------  ---------------  ---------------  --------------
285     PRONE3   1596130080       PRIMARY          PRONE3              

Virtual catalogs allow you to maintain only one database for the RMAN repository catalog yet establish secure boundaries for individual database owners to manage their own virtual repositories. A common catalog database makes administration simpler, reduces costs, and enables the database to be highly available, again, at less cost.

Merging Catalogs

While on the subject of multiple catalogs, let's consider another issue. Now that you've learned how to create virtual catalogs on the same base catalog, you may see the need to consolidate several independent repositories into a single one.

One option is to deregister the target databases from their respective catalogs and re-register them in the new central catalog. However, doing so means losing all the valuable information stored in those repositories. You can, of course, sync from the controlfiles and then resync to the new catalog, but that would inflate the controlfile and be impractical.

Oracle Database 11g offers a new feature: merging the catalogs. Actually, it's importing a catalog from one database to another, or in other words, "moving" catalogs.

Let's see how it is done. Suppose you want to move the catalog from database CATDB1 to another database called CATDB2. First, you connect to the catalog database CATDB2 (the target):

$ rman target=/ rcvcat rman/rman@catdb2
Recovery Manager: Release - Production on Sun Sep 9 23:12:07 2007
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
connected to target database: ODEL11 (DBID=2836429497)
connected to recovery catalog database

If this database already has a catalog owned by the user "RMAN", then go on to the next step of importing; otherwise, you will need to create the catalog:

RMAN> create catalog;
recovery catalog created

Now, you import from the remote catalog (catdb1):

RMAN> import catalog rman/rman@catdb1;
Starting import catalog at 09-SEP-07
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 09-SEP-07
starting full resync of recovery catalog
full resync complete

There are several important pieces of information in the above output. Note how the target database was deregistered from its original catalog database. Now check the database names in this new catalog:

RMAN> list db_unique_name all;
List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------  -------  ---------------  ---------------  --------------
286     PRONE3   1596130080       PRIMARY          PRONE3              
2       ODEL11   2836429497       PRIMARY          ODEL11              

You will notice that the DB Key has changed. ODEL11 was 1 earlier; it's 2 now.

The above operations will import the catalogs of all target databases registered to the catalog database. Sometimes you may not want that—rather, you may want to import only one or two databases. Here is a command to do that:

RMAN> import catalog rman/rman@catdb3 db_name = odel11;

Doing so changes the DB Key again.

What if you don't want to deregister the imported database from the source database during import? In other words, you want to keep the database registered in both catalog databases. You will need to use the "no unregister" clause:

RMAN> import catalog rman/rman@catdb1 db_name = odel11 no unregister;

This will make sure the database ODEL11 is not unregistered from catalog database catdb1 but rather registered in the new catalog.

Duplicate Database from Backup (Release 2 Only)

You need to duplicate a database for various reasons – for example, setting up a Data Guard environment, establishing a staging or QA database from production, or moving the database to a new platform. The DUPLICATE command in RMAN makes that activity rather trivial. But where does RMAN duplicate the database from?

The most obvious choice is the main database itself. This is the most up-to-date version and has all the information needed to duplicate the database. But while this approach is convenient, it also puts some stress on the main database. Additionally, it requires a dedicated connection to the main database, which may not always be possible.

The other possible source is the backup of the production database. This does not affect the production database, since we read from the backup alone. Duplicating a database from its backup has been possible since Oracle9i Database, but there was a catch: although the source of the duplicate was the backup, the process still needed a connection to the main database. So there is a monkey wrench here: what if your main database is not available because it is down for maintenance? Or you are duplicating the database on a different server, from which you can’t connect to the main database for security or other logistical reasons?

Oracle Database 11g Release 2 solves that problem. In this version, you can perform a duplicate database task without needing a connection to the main database. All you need is the backup files. Let’s see how it is done through an example.

First of all, to demonstrate the concept, we need to take a backup from the main database. Let’s start by kicking off an RMAN job.

# $ORACLE_HOME/bin/rman target=/ rcvcat=rman_d112d1/rman_d112d1@d112d2
Recovery Manager: Release - Production on Sun Aug 8 10:55:05 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: D112D1 (DBID=1718629572)

A connection to a catalog database makes things simpler but is not absolutely necessary. I want to show you the steps with a catalog connection first.

RMAN> backup database plus archivelog format '/u01/oraback/%U.rmb';

Starting backup at 08/08/10 12:08:29
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=631 RECID=344 STAMP=726057709
input archived log thread=1 sequence=632 RECID=345 STAMP=726058637
… output truncated …

The controlfile backup is also required. If you have configured controlfile autobackup, the backup will contain the controlfile as well. If you want to be sure, or you have not configured controlfile autobackup, you can back up the controlfile explicitly.

RMAN> backup current controlfile format '/u01/oraback/%U.rmb';

These commands create the backup files in the directory /u01/oraback. Of course you don’t need to perform this step if you have a backup somewhere. Copy these backup files to the server where you want to create the duplicate copy.

# scp *.rmb oradba2:`pwd`

You need to know one piece of information before proceeding – the DBID of the source database. You can get it in one of three ways:

  • From the data dictionary
    SQL> select dbid from v$database;
  • From the RMAN repository (catalog or the controlfile)
    RMAN> list db_unique_name all;
    List of Databases
    DB Key  DB Name  DB ID       Database Role  Db_unique_name
    ------- -------- ----------- -------------- --------------
    2       D112D1   1718629572  PRIMARY        D112D1
  • Querying the Recovery Catalog tables on the catalog database.

The DBID in this case is 1718629572; make a note of it. (The DBID is not strictly required for the effort, but you will see later why it may be important.)

You also need to know another very important fact: when the backup was completed. You can get that time from many sources, the RMAN logfile being the most obvious one. Otherwise just query the RMAN repository (catalog or the controlfile). Here is how:

# $ORACLE_HOME/bin/rman target=/ rcvcat=rman_d112d1/rman_d112d1@d112d2

Recovery Manager: Release - Production on Mon Aug 9 12:25:36 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: D112D1 (DBID=1718629572)
connected to recovery catalog database

RMAN> list backup of database; 

List of Backup Sets
BS Key  Type LV Size       Device Type Elapsed Time Completion Time 
------- ---- -- ---------- ----------- ------------ -----------------
716     Full    2.44G      DISK        00:03:58     08/09/10 10:44:52
       BP Key: 720   Status: AVAILABLE  Compressed: NO  Tag: TAG20100809T104053
       Piece Name: /u01/oraback/22lktatm_1_1.rmb
 List of Datafiles in backup set 716
 File LV Type Ckp SCN    Ckp Time          Name
 ---- -- ---- ---------- ----------------- ----
 1       Full 13584379   08/09/10 10:40:55 +DATA/d112d1/datafile/system.256.696458617
… output truncated …

The NLS date format setting was required since we need to know the specific time, not just the date (for example, by exporting NLS_DATE_FORMAT='mm/dd/yy hh24:mi:ss' in the shell before starting RMAN). From the output we know that the backup completed on Aug. 9 at 10:44:52 AM.

The rest of the steps occur on the target host. Here the main database is named D112D1 and the duplicate database will be called STG.

Add a line in the file /etc/oratab to reflect the database instance you are going to copy; with the Oracle home shown below, it would read:

STG:/opt/oracle/product/11.2.0/db1:N
Now set the Oracle SID to the SID of the duplicate database:

# . oraenv
The Oracle base for ORACLE_HOME=/opt/oracle/product/11.2.0/db1 is /opt/oracle

Copy the initialization parameter file from the main database. Edit it to reflect the locations appropriate for the new host, such as audit dump destinations, controlfile and datafile locations, and so on. Create the password file as well.
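For reference, a minimal parameter file for the auxiliary instance might look like the following sketch; every path and size here is an assumption for this example and should be adjusted to your environment:

```
db_name=STG
memory_target=800M
control_files='/u01/oradata/stg/control01.ctl'
audit_file_dest='/opt/oracle/admin/STG/adump'
```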

# orapwd file=orapwSTG password=oracle entries=20

When the pfile and the password file are ready, start the instance in NOMOUNT mode. It's important to start just the instance, since the duplication process will create the controlfile and mount it.

SQL> startup nomount
ORACLE instance started.
Total System Global Area  744910848 bytes
Fixed Size                  1339120 bytes
Variable Size             444596496 bytes
Database Buffers          293601280 bytes
Redo Buffers                5373952 bytes

While not required, it is easier to put the commands in a script file and execute that from the RMAN command line instead of entering each command line by line. Here are the contents of the script file:

connect auxiliary sys/oracle
connect catalog rman_d112d1/rman_d112d1@d112d2
duplicate database 'D112D1' DBID 1718629572 to 'STG'
until time "to_date('08/09/10 10:44:53','mm/dd/yy hh24:mi:ss')"
   db_file_name_convert = ("+DATA/D112D1","/u01/oradata/stg")
backup location '/u01/oraback' ;

The script is pretty self-explanatory. The first two lines establish the connections to the auxiliary instance (the database we are going to create as a duplicate of the main database) and to the catalog. The third line states that we are going to duplicate the database D112D1 to STG. The timestamp up to which the database should be recovered is shown as well. The fifth line is there because the database file locations differ between the hosts: on the main database the datafiles are on ASM, in diskgroup DATA, whereas the staging database will be created in the directory /u01/oradata, so we have to apply a naming convention change. A datafile on the main database named +DATA/somefile.dbf will be called /u01/oradata/somefile.dbf. Finally, we have provided the location where the backup files will be found.

Here we have used the timestamp Aug. 9 10:44:53, just a second after the backup completed. Of course we could have used any other time here, as long as the archived logs are available. You could also have given an SCN instead of a timestamp.
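As a variation, the same script could end with an UNTIL SCN clause instead of UNTIL TIME; the SCN below is the checkpoint SCN from the earlier LIST BACKUP output and is shown purely as an illustration. Note that a catalog connection is required for this form:

```
duplicate database 'D112D1' DBID 1718629572 to 'STG'
until scn 13584379
   db_file_name_convert = ("+DATA/D112D1","/u01/oradata/stg")
backup location '/u01/oraback' ;
```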

Let’s name this script file duplicate.rman. After creation, call this script from RMAN directly:

#$ORACLE_HOME/bin/rman @duplicate.rman

Here is the resultant output. If things don’t go well in your experiment, comparing this output to your case may provide you with valuable clues.

That's it; the staging database STG is now up and running. You can connect to it now and query the tables. Nowhere in this process did you have to connect to the main database, and only a few commands were needed.

In summary, as you can glean from the output, the command performs the following steps:

  • Creates an SPFILE
  • Shuts down the instance and restarts it with the new spfile
  • Restores the controlfile from the backup
  • Mounts the database
  • Restores the datafiles, creating the files under the converted names
  • Recovers the datafiles up to the time specified and opens the database

If you check the DBID of the database that was just created:

SQL> select dbid from v$database;

      DBID
----------
 844813198

The DBID is different from the main database's, so the duplicate can be backed up independently, even using the same catalog. Speaking of the DBID, remember we used it during the duplication even though it was not absolutely necessary? The reason is the possibility of two databases bearing the same name: the recovery catalog could hold two databases named D112D1 (the source). How would the duplication process know which one to duplicate? This is where the DBID comes in to make the identification definitive.

On a similar note, if you have multiple backups, RMAN automatically chooses which backup to duplicate from based on the UNTIL TIME clause. Finally, we have used the catalog database here, but it is not required. If you don't specify the catalog, you must use the "until time" clause, not "until SCN".

Undrop a Tablespace (Release 2 Only)

Let’s say you were in the mood to clean up junk in the database, so off you went to drop all the small and large tablespaces created for the users who are probably long gone. While dropping those tablespaces, inadvertently you dropped a very critical tablespace. What are your options?

In previous versions the options amounted to a sum total of one. Here are the steps you would have followed:

  • Create another instance called, say, TEMPDB
  • Restore the datafiles of the dropped tablespace and other mandatory ones such as SYSTEM, SYSAUX and UNDO
  • Recover it to the very moment of the drop, taking care not to roll it forward to a time beyond the drop
  • Transport the tablespace from TEMPDB and plug it into the main database
  • Drop the TEMPDB instance

Needless to say, these are complex steps for anyone – except perhaps seasoned DBAs in the habit of dropping tablespaces often. Don't you wish for a simple "undrop tablespace", similar to the undrop table (flashback table) functionality?

In this version of the database you get your wish. Let's see how it is done. To demonstrate, we will create a tablespace and put a table or two in it to see the effect of the "undrop":

SQL> create tablespace testts
  2  datafile '/u01/oradata/testts_01.dbf' size 1M;

Tablespace created.

SQL> conn arup/arup


SQL> create table test_tab1 (col1 number) tablespace testts

  2  /

Table created. 

SQL> insert into test_tab1 values (1);

1 row created.

SQL> commit;

Commit complete.

After taking the backup, let's create a second table in the same tablespace:

SQL> create table test_tab2 tablespace testts as select * from test_tab1;

Table created.

Before actually dropping the tablespace, let me introduce you to a view - TS_PITR_OBJECTS_TO_BE_DROPPED - which shows the objects in a tablespace that will be dropped if a tablespace is dropped:

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                     NOT NULL VARCHAR2(30)
 NAME                                      NOT NULL VARCHAR2(30)
 CREATION_TIME                             NOT NULL DATE
 TABLESPACE_NAME                                    VARCHAR2(30)

Checking the view:

SQL> select owner, name, tablespace_name,
  2         to_char(creation_time, 'yyyy-mm-dd:hh24:mi:ss')
  3  from ts_pitr_objects_to_be_dropped
  4  where creation_time > sysdate - 1
  5  order by creation_time
  6  /

OWNER      NAME         TABLESPACE_NAME   TO_CHAR(CREATION_TI
---------- ------------ ----------------- -------------------
ARUP       TEST_TAB1    TESTTS            2010-08-03:15:31:16
ARUP       TEST_TAB2    TESTTS            2010-08-03:15:33:09

The view shows the two tables we created earlier. Now drop the tablespace with the including contents clause, which will drop the tables as well.

SQL> drop tablespace testts including contents;

Tablespace dropped.

If you check the view mentioned earlier:

sql> select owner, name, tablespace_name,
  2         to_char(creation_time, 'yyyy-mm-dd:hh24:mi:ss')
  3         from ts_pitr_objects_to_be_dropped
  4  where creation_time > sysdate -1
  5* order by creation_time

The two tables will be gone.

Now you need to undrop the tablespace. To do that, you have to know when the tablespace was dropped. One easy way is to check the alert log. Here is an excerpt (note that the first attempt, without the INCLUDING CONTENTS clause, failed with ORA-1549):

Tue Aug 03 15:35:54 2010
drop tablespace testts
ORA-1549 signalled during: drop tablespace testts...
drop tablespace testts including contents
Completed: drop tablespace testts including contents

To recover the tablespace back into the database, we will use a timestamp just before the drop tablespace command was issued.

RMAN> recover tablespace testts
2> until time "to_date('08/03/2010 15:35:53','mm/dd/yyyy hh24:mi:ss')"
3> auxiliary destination '/u01/oraux';

The auxiliary destination is where the files of the auxiliary instance will be created. You can use any available space, even space you plan to use for something else, because it is required only temporarily. (Here is the output of the RMAN command.)

That's it; now the tablespace is available once again. Let's see what the command actually does:

  • Creates a database instance called Dvlf. The instance name is deliberately spelled in such a way that it is least likely to clash with an existing instance name.
  • Identifies all the tablespaces that contain undo segments
  • Restores the necessary tablespaces (which include the tablespace that was dropped, SYSTEM, SYSAUX and the undo tablespaces)
  • Transports the tablespace testts (the one that was dropped)
  • Plugs the tablespace back into the main database

When the tablespace becomes available, it is brought back in offline mode; you have to bring it online yourself.

SQL> alter tablespace testts online;

Tablespace altered.

Let’s make sure that we have got the data right as well:

SQL> conn arup/arup
SQL> select count(1) from test_tab1;  


The table TEST_TAB1 was brought back as expected; but what about TEST_TAB2?

SQL> select count(1) from test_tab2;  


It came back as well. How come? The table was created after the backup was taken. Shouldn’t it have been excluded?

No. The tablespace recovery applied redo up to the last point we asked for. The backup of the tablespace was restored, and the archived logs (and online redo logs) were applied to make it consistent all the way up to the moment right before the drop, since that is what we put in the recovery clause.

If you check the above mentioned view now:

SQL> select owner, name, tablespace_name,
  2         to_char(creation_time, 'yyyy-mm-dd:hh24:mi:ss')
  3  from ts_pitr_objects_to_be_dropped
  4  where creation_time > sysdate - 1
  5  order by creation_time;

OWNER      NAME         TABLESPACE_NAME   TO_CHAR(CREATION_TI
---------- ------------ ----------------- -------------------
ARUP       TEST_TAB1    TESTTS            2010-08-03:15:31:16
ARUP       TEST_TAB2    TESTTS            2010-08-03:15:33:09

That's it; the tablespace is now "undropped" and all the data is available. You accomplished that with just a few lines of RMAN commands, as opposed to drawing up a complex plan of activity.

Another beauty of this approach is that you are not required to recover the tablespace to the very moment before the drop. Suppose you want to restore the tablespace to a specific point in time in the past. You can do that by using a different time in the until clause, and later you can recover it again to yet another point in time. This can be repeated as many times as you want. Previously, once you recovered a tablespace to a point in time, you couldn't recover it to another point earlier than that.
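For example, having recovered the tablespace once, you could run another point-in-time recovery of the same tablespace to an earlier timestamp; the time shown below is purely illustrative:

```
RMAN> recover tablespace testts
2> until time "to_date('08/03/2010 15:20:00','mm/dd/yyyy hh24:mi:ss')"
3> auxiliary destination '/u01/oraux';
```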

Remember, in previous versions you had to use an AUXNAME parameter for the datafiles while doing tablespace point-in-time recovery. It allowed you to recover a tablespace, but the datafile names were different, so the tablespace had to be plugged back into the database. The new process does not require an AUXNAME parameter. Note, however, that AUXNAME was never always necessary; it is needed when the restored datafile names would otherwise be the same as the backup's, typically in the case of image copies.

Set NEWNAME Flexibility (Release 2 Only)

Suppose you are restoring datafiles from a backup, either on the same server or a different one, such as staging. If the filesystem (or diskgroup) names are identical, you won't have to change anything. But that is hardly ever the case: in staging the filesystems may be different, or perhaps you are restoring a production database to an ASM diskgroup different from where it was originally created. In that case you have to let RMAN know the new names of the datafiles, using the SET NEWNAME command. Here is an example where the restored files go to /u02 instead of /u01, where they were previously:

   set newname for datafile 1 to '/u02/oradata/system_01.dbf';
   set newname for datafile 2 to '/u02/oradata/sysaux_01.dbf';
   …
   restore database;
   …

Here there are just two datafiles, but what if you have hundreds or even thousands? Entering all that information would not only be a herculean task but error-prone as well. Instead of naming each datafile, you can now use a single set newname clause for a whole tablespace. Here is how:

 set newname for tablespace examples to '/u02/examples%b.dbf';
 … rest of the commands come here …

If the tablespace has more than one datafile, they will all be uniquely created. You can use this clause for the entire database as well:

   set newname for database to '/u02/oradata/%b';

The term %b specifies the base filename without the path; e.g. /u01/oradata/file1.dbf is represented as file1.dbf by %b. This is very useful when you are moving files to a different directory. You can also use it for creating image copies, where you create the backup in a different location with the same names as the parent files, making identification easy.

One caveat: Oracle Managed Files don't have a specific basename, so %b can't be used for them. Here are some more of the placeholders:

%f is the absolute file number
%U is a system generated unique name similar to the %U in backup formats
%I is the Database ID
%N is the tablespace name

Using these placeholders you can use just one SET NEWNAME command for the entire database – making the process not only easy but more accurate as well.
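Putting this together, a restore script using the database-wide form might look like the following sketch; the paths are assumptions, and the SWITCH command is what makes the controlfile point at the restored copies:

```
run {
   set newname for database to '/u02/oradata/%b';
   restore database;
   switch datafile all;
   recover database;
}
```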

Auto Block Repair (Release 2 Only)

When a block gets corrupted in the database, what are your options? The only option before Oracle9i was to restore the entire datafile. In Oracle9i, the Block Media Recovery feature allowed us to repair a specific block from the backup rather than the entire datafile – saving a lot of time.

Data Recovery Advisor can show very clearly which blocks may be corrupt. However, until Release 2, the block still had to be repaired from the backup. What if the backup is located on some slow drive? That is most often the case, since you probably won't want to place backups on the same type of expensive disk the database itself is on. If you have a physical standby database, it holds an exact copy of the datafile, most likely on fast storage. If you can repair the block from there, it will be much faster.

You can now repair the block from the physical standby database. If you have multiple physical standby databases, how do you know which one to get the block from? The obvious choice is the one with the most recent updates. RMAN can automatically pick the most suitable source by checking all the physical standby databases. Of course, the databases must be open for query in that case, which means you must have the Active Data Guard option.
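As an illustration, once corrupt blocks appear in V$DATABASE_BLOCK_CORRUPTION, the repair commands look like these (the file and block numbers are hypothetical); when a suitable standby is available, RMAN fetches the good block from it automatically:

```
RMAN> recover datafile 6 block 24;
RMAN> recover corruption list;
```

The second form repairs every block currently listed in V$DATABASE_BLOCK_CORRUPTION in one pass.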

TO DESTINATION Clause (Release 2 Only)

Are you familiar with Oracle Managed Files (OMF) – datafiles, logfiles and controlfiles managed by Oracle without your intervention? They are neatly organized in their own folders, with names that probably mean nothing to you but everything to the Oracle database. Either you love it or hate it; there is no shade of emotion in between. There is plenty to love it for: it frees you from worrying about file names, locations and related issues such as name clashes. Since the locations are predefined – e.g. DATAFILE for datafiles, ONLINELOG for redo log files and so on – other tools can easily use them. If you are using ASM, you are using OMF – probably not something you knew.

You might want to extend the same structure to RMAN backups as well: all you have to do is define a location, and the files simply go there, all neatly organized. In this version of the database you can use a new clause in the BACKUP command to specify the location. Here is how you use it:

RMAN> backup tablespace abcd_data to destination '/u01/oraback';

Note there is no format string like %U in the above command as we have been using in the backup commands earlier. Here is the output:

Starting backup at 08/09/10 16:42:15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=+DATA/d112d1/datafile/abcd_data.272.697114011
channel ORA_DISK_1: starting piece 1 at 08/09/10 16:42:17
channel ORA_DISK_1: finished piece 1 at 08/09/10 16:44:22
piece handle=… tag=TAG20100809T164216 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:05
Finished backup at 08/09/10 16:44:22

This clause creates backup files in an organized manner. The above command creates a directory D112D1 (the name of the database), under which it creates a directory called backupset, under which comes another directory named after the date of the file creation. Finally the backuppiece is created with a system-generated tag. When you use this to back up archived logs, the backuppiece goes under a subdirectory for archived logs, and so on.
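The resulting layout looks something like the path below; the piece name itself is system generated in the OMF style, so treat the final component as an illustration only:

```
/u01/oraback/D112D1/backupset/2010_08_09/o1_mf_nnndf_TAG20100809T164216_66kx2b8q_.bkp
```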

You can also use this clause in the ALLOCATE CHANNEL command:

RMAN> run {
2> allocate channel c1 type disk to destination '/u01/oraback';
3> }

More Compression Choices (Release 2 Only)

Compression in RMAN is not new; it has been around for some time. Here is how you create a compressed backupset of the tablespace ABCD_DATA:

RMAN> backup as compressed backupset
2> format '/u01/oraback/%U.rmb'
3> tablespace abcd_data
4> ;

Oracle Database 11g Release 1 introduced a new compression algorithm called ZLIB that is quite fast (and consumes less CPU) but has a lower compression ratio. In the current version there are several options for compression.

The default compression is called BASIC, which does not require any extra-cost option. With the Advanced Compression Option, you now have the ability to specify different compression levels: LOW, MEDIUM and HIGH – with the compression ratio and CPU consumption increasing (and, conversely, RMAN throughput decreasing) from LOW to HIGH. Here is how you set the compression level to high:

RMAN> configure compression algorithm 'high';

In a test, a backupset compressed with HIGH came to 118,947,840 bytes, compared to 1,048,952,832 bytes uncompressed – an almost 9X reduction. Of course, the ratio will vary from database to database.

A high setting for the compression level creates smaller backupsets, which are great for slow networks, but it consumes more CPU cycles.
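The other levels are configured the same way; note that LOW, MEDIUM and HIGH assume the Advanced Compression Option is licensed:

```
RMAN> configure compression algorithm 'low';
RMAN> configure compression algorithm 'medium';
```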

Backup to the Cloud (Release 2 Only)

We will close this installment by talking about one of the most exciting advances in RMAN backup destinations. In this day and age of cloud computing, when corporations are moving to cloud-based service providers rather than investing in hardware of their own, one function stands out above the others: backup. A backup, by its very definition, should be offsite, and what better place than the cloud? Amazon provides the Simple Storage Service (S3), essentially a large pool of storage that can grow as much as you want. The customer – that is, you – pays for only what is actually used. Amazon takes care of the reliability of the storage.

This version of the database comes with the tools (libraries and software) to back up an Oracle database with RMAN to Amazon S3, using a specially developed Media Management Library (MML). Instead of describing it here, I would like to point you to the step-by-step guide at http://download.oracle.com/docs/cd/E11882_01/backup.112/e10643/web_services001.htm#RCMRF90490

This guide is so well written and comprehensive that I find it redundant to reproduce it here.

Back to Series TOC