
Oracle Database 11g: The Top Features for DBAs and Developers

by Arup Nanda

Data Guard

Learn how Active Data Guard will make your investment in a standby environment worthwhile via real time queries while applying the archived logs, converting the physical standby database to snapshot standby, and a host of new improvements in infrastructure.

Back to Series TOC

There are enough Data Guard enhancements in Oracle Database 11g to fill a book, so covering every possible enhancement in detail is not possible. Rather, I will cover the ones I consider most interesting here.

Easier Creation of Standby Database

Let's start at the beginning: the creation of a physical standby database. In Oracle Database 11g, that process has become a whole lot easier, with just one RMAN command that does it all. Previously, you could use the Grid Control wizard interface to build a Data Guard setup between two machines. As of this writing, Oracle Enterprise Manager Grid Control 11g is not yet available and Database Control does not have a wizard for Data Guard. But even with nothing more than SQL and RMAN commands, you will find setting up a Data Guard environment in Oracle Database 11g a breeze. It's so simple that I can show you all the steps right here.

Suppose your primary database is called prolin11 running on a server called prolin1. You want to set up the standby database on a server named prolin2. The name of the standby database instance should be pro11sb. Here are the steps:

1. On prolin1, first create a spfile if you don't already have one.

SQL> create spfile from pfile;

2. This step is not absolutely necessary, but it makes the process easier. After creating the spfile, restart the prolin11 database so that it runs off the spfile.

3. While it is not necessary to create standby redo logs, it's a very good practice to do so. Standby redo logs enable the changes occurring on the primary database to be reflected almost in real time on the standby, a concept known as Real Time Apply (RTA). So, here we create the standby redo logs on the primary database (note carefully: the standby redo logs are created on the primary; RMAN will create them on the standby during duplication):

SQL> alter database add standby logfile group 4 
  2> ('+DG1/sby_redo01.rdo') size 50M;
SQL> alter database add standby logfile group 5 
  2> ('+DG1/sby_redo02.rdo') size 50M;
SQL> alter database add standby logfile group 6 
  2> ('+DG1/sby_redo03.rdo') size 50M;
SQL> alter database add standby logfile group 7 
  2> ('+DG1/sby_redo04.rdo') size 50M;

4. This creates the four standby redo log groups.
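Before moving on, it's worth confirming that the groups exist. A quick sketch against the standard v$standby_log view (assuming the four 50MB groups created above):

```sql
SQL> select group#, thread#, bytes/1024/1024 as mb, status
  2  from v$standby_log;
```

Each of groups 4 through 7 should show up with a status of UNASSIGNED until the standby starts using them.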

5. Create an entry for pro11sb in the listener.ora file on the prolin2 server:

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prolin2)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = pro11sb)
      (ORACLE_HOME = /opt/oracle/product/11g/db1)
      (SID_NAME = pro11sb)))

6. Reload the listener for it to take effect.

7. On prolin1, create an entry for the pro11sb database in the file tnsnames.ora under $ORACLE_HOME/network/admin:

pro11sb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prolin2)(PORT = 1521))
    (CONNECT_DATA =
      (SID = pro11sb)))


8. On prolin2, in the $ORACLE_HOME/dbs directory, create a file initpro11sb.ora containing just one line:

db_name=pro11sb
9. This will serve as the initialization file for the standby instance; the rest of the parameters will be populated automatically by the RMAN command you will see later.

10. On prolin2, go to the directory $ORACLE_BASE/admin. Create a directory called pro11sb there and then a directory within pro11sb called adump, to hold the audit files for the standby instance.

11. On prolin1, under the $ORACLE_HOME/dbs directory, you will find the password file for the instance, usually named orapwprolin11. If that file is not present (most unlikely), create it. Then copy that file to $ORACLE_HOME/dbs on prolin2, renaming it orapwpro11sb. This makes sure the SYSDBA connection passwords of the primary database apply to the standby as well.

12. On prolin2 start the instance pro11sb in NOMOUNT state:

$ sqlplus / as sysdba
SQL> startup nomount

13. This will start the instance but mount nothing.

14. Now that all initial preparation is completed, it's time to call the all-powerful RMAN script that creates the standby database. On prolin1, start RMAN and run the following script. You will find it easier to save this in a file and run the script from the RMAN prompt.

connect target sys/oracle123@prolin11
connect auxiliary sys/oracle123@pro11sb
run {
   allocate channel c1 type disk;
   allocate auxiliary channel s1 type disk;
   duplicate target database
        for standby
        from active database
        parameter_value_convert 'prolin11','pro11sb'
        set db_unique_name='pro11sb'
        set db_file_name_convert='/prolin11/','/pro11sb/'
        set log_file_name_convert='/prolin11/','/pro11sb/'
        set control_files='/oradata/pro11sb/control01.ctl'
        set fal_client='pro11sb'
        set fal_server='prolin11'
        set standby_file_management='AUTO'
        set log_archive_config='dg_config=(prolin11,pro11sb)'
        set log_archive_dest_2='service=prolin11 LGWR ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=prolin11'
        set log_archive_dest_state_2='enable'
        set log_archive_format='pro11sb_%t_%s_%r.arc'
  sql channel c1 "alter system archive log current";
  sql channel s1 "alter database recover managed standby database using current logfile disconnect";
}

15. This script creates the standby database, places the appropriate parameters in the spfile for the standby instance, creates the diagnostic destination for the standby database, and restarts the standby. To help you understand the exact mechanics of this operation, you might want to see the output of the RMAN command here. The two lines below connect to the primary and the standby instances:

connect target sys/oracle123@prolin11; connect auxiliary sys/oracle123@pro11sb;

16. Because you copied the password file to the standby host, the password for SYS remains the same and hence the connection to the standby instance (with no mounted database, yet) is successful. Next, the following set of lines are executed:

duplicate target database for standby from active database
        parameter_value_convert 'prolin11','pro11sb'
        set 'db_unique_name'='pro11sb'
        set 'db_file_name_convert'='/prolin11/','/pro11sb/'
        ... and so on ...

17. The duplicate target database command creates the standby database from the primary by first taking an image copy of the primary database over SQL*Net to the remote server. Once the copy is complete, it internally issues a command (switch clone datafile all;), which brings up the standby database as a clone. The set commands in the script set the parameters in the spfile for the standby instance, and the database comes up as a standby database. Again, an examination of the RMAN output gives you all the information on the behind-the-scenes activities.

See how easy building the physical standby database is? It's as simple as executing the script!

Active Data Guard

One of the traditional objections to building a Data Guard environment with a physical standby database is the passiveness of the standby database. In Oracle Database 10g and below you could open the physical standby database for read-only activities (say, to offload some reporting), but only after stopping the recovery process. With these releases, if Data Guard is part of your DR solution, you really can't afford to pause the recovery process for long for fear of falling behind, so the physical standby database is essentially useless for any read-only activity.

With Oracle Database 11g, that situation changes: You can open the physical standby database in read-only mode and restart the recovery process. This means you can continue to be in sync with primary but can use the standby for reporting. (As in previous versions, you can take the backup from the standby as well.) Let's see how it is done.

First, cancel the managed standby recovery:

SQL> alter database recover managed standby database cancel;

Database altered.

Then, open the database as read only:

SQL> alter database open read only;
Database altered.

Up until this point, the process is identical to that in pre-11g versions. Now, the 11g feature shows its advantage: while the standby database is open in read-only mode, you can resume the managed recovery process.

SQL> alter database recover managed standby database disconnect;
Database altered.

Now the standby database has been placed in managed recovery mode applying the log files while it is open. How do you confirm that? It's quite simple; just check the maximum log sequence number on the primary and compare that to the standby. On the primary, do a log switch and check the maximum log sequence number:

SQL> alter system switch logfile;
System altered.

SQL> select max(sequence#) from v$log;

MAX(SEQUENCE#)
--------------
            79

The log switch occurred while the standby was opened in read only mode. Check the maximum log sequence in standby:

SQL> select max(sequence#) from v$log;

MAX(SEQUENCE#)
--------------
            79

It's also 79, the same value as on the primary: confirmation that log application is still going on. Well, you might ask, this merely confirms that the logs are being applied; will the changes occurring on the primary be visible in this mode? Let's see. On the primary, create a table:

SQL> create table test2 (col1 number);
Table created.

...then do a few log switches and wait until those logs are applied to standby. Then check the standby database:

SQL> desc test2
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER


Presto! The table appears in standby, and is ready to be queried.
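Another way to confirm that the apply is progressing, sketched here against the standard v$archived_log view, is to check which archived logs have been marked as applied on the standby:

```sql
SQL> select max(sequence#) as last_applied
  2  from v$archived_log
  3  where applied = 'YES';
```

If this value keeps pace with the maximum sequence on the primary, the logs are being both shipped and applied.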

Remember that we could have used Real Time Apply here, which makes changes on the primary appear on the standby almost instantly, provided the network is available. RTA is not an absolute necessity for ADG, but it makes ADG even more useful, since you can expect to see the latest changes from the primary.

Security-conscious readers might be a little concerned, however. The database is in read-only mode, so nothing can be written to it. If the audit_trail parameter is set to DB on the primary (the default in Oracle Database 11g), it will be the same on the standby as well, but the audit trail can't be written to the database since it's read only. So where does it go?

Note a line that shows up innocuously in alert log:

AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access

Aha! The audit trail doesn't stop; rather, it automatically switches to OS files while the database is open read-only. When you activate the standby database, the audit_trail parameter automatically resets to DB.
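You can verify the switch on the standby itself with a quick check (per the alert log message above, the effective value on the open read-only standby should be reported as OS):

```sql
SQL> show parameter audit_trail
```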

Snapshot Standby

Here's a typical scenario: say a new application is being deployed on the database and you are wondering about the impact on database performance. Oracle Database 11g offers the perfect tool for this (Database Replay), which captures SQL statements and replays them, but there is a caveat: you have to run the replay somewhere to see the impact. Replaying on the production system is not feasible: first, the app is not deployed there; and second, even if it were deployed, you couldn't afford to have the app making changes to production tables. So what do you do to see the impact of the app?

The perfect answer awaits you in Oracle Database 11g, where the physical standby database can be temporarily converted into an updateable one called a snapshot standby database. In that mode, you can run your app (which may modify a lot of tables) and gauge its impact. Once the impact is assessed, you can convert the database back into a standby undergoing normal recovery. This is accomplished by creating a restore point in the database and using the Flashback Database feature to flash back to that point and undo all the changes. Let's see how it is done.

First, start recovery on the standby, if not going on already:

SQL> alter database recover managed standby database disconnect;

Database altered.

Wait until the recovery picks up a few log files. Then stop the recovery.

SQL> alter database recover managed standby database cancel;
Database altered.

At this point, you may create the snapshot standby database. Remember, it enables Flashback logging, so if you haven't configured the flash recovery area, you will get a message like this:

ORA-38784: Cannot create restore point 'SNAPSHOT_STANDBY_REQUIRED_01/12/2008'
ORA-38786: Flash recovery area is not enabled.

To avoid that, you should already have created the flash recovery area. If you didn't, don't worry; you can create it now:

SQL> alter system set db_recovery_file_dest_size = 2G;
System altered.
SQL> alter system set db_recovery_file_dest= '/db_recov';
System altered.

Now that the formalities are completed, you can convert this standby database to snapshot standby using this simple command:

SQL> alter database convert to snapshot standby;

Database altered.

Now recycle the database:

SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Now the database is open for read/write operations:

SQL> select open_mode, database_role
  2  from v$database;

OPEN_MODE  DATABASE_ROLE
---------- ----------------
READ WRITE SNAPSHOT STANDBY

You can make changes in this database now, which makes it a perfect place to replay a captured workload using Database Replay. You can then perform the system changes in this database and replay the workload several times to see the impact of the changes. As this is a copy of the production database, the replay will be an accurate representation of the workload.

After your testing is completed, you would want to convert the snapshot standby database back to a regular physical standby database. Just follow the steps shown below:

SQL> connect / as sysdba
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Database mounted.
SQL> alter database convert to physical standby; 
Database altered.

Now shut down the database, mount it, and start managed recovery.

SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Database mounted.

Start the managed recovery process:

SQL> alter database recover managed standby database disconnect;

Now the standby database is back in managed recovery mode. Needless to say, while the database was in snapshot standby mode, the archived logs from the primary were not applied to it. They will be applied now, and it may take some time before the standby completely catches up.

The snapshot standby database allows you to use the standby to accurately assess the impact of changes to the production database before they are actually made there. But that's not all; there is another advantage as well. Recall that with RTA, changes made to the primary appear almost instantly on the standby, provided the network is available. So what if someone makes a mistake on the primary database, such as running a massive update or changing some code? In previous versions, we deliberately configured a delay on the standby database to stop such errors from propagating to the standby. But that delay also means the standby can't be activated quickly or used as an up-to-date copy of production.

Not anymore. Since you can flash back the standby database, you need not keep the delay. If there is a problem, you can always flash back to a previous state.
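For example, if you want an explicit safety net before a risky change on the primary, you could take a guaranteed restore point on the standby (with recovery stopped and the database mounted) and flash back to it only if the mistake propagates. This is a sketch; the restore point name before_batch is hypothetical:

```sql
SQL> alter database recover managed standby database cancel;
SQL> create restore point before_batch guarantee flashback database;
-- ... later, only if the mistake has reached the standby ...
SQL> flashback database to restore point before_batch;
-- otherwise simply resume recovery and drop the restore point
SQL> alter database recover managed standby database disconnect;
```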

Conversion from Physical to Logical Standby

You can now easily convert a physical standby database to a logical one. Here are the steps:

1. The standby database will need to get the data dictionary information from somewhere. The dictionary information should be put in the redo stream that comes from the primary. So, on the primary database, issue the following to build the LogMiner tables for dictionary:

SQL> begin
  2     dbms_logstdby.build;
  3  end;
  4  /
PL/SQL procedure successfully completed.

2. On the standby database, stop the managed recovery process:

SQL> alter database recover managed standby database cancel;

Database altered.

3. Now, issue the command in standby side to convert it to logical:

SQL> alter database recover to logical standby pro11sb;
Database altered.

4. If you didn't execute Step 1, the above command will wait, since the dictionary information is not found. Don't worry; just execute Step 1 at this point. If you have enabled RTA, the information will immediately appear on the standby database.

5. Issue a few log switches on primary to make sure the archived logs are created and sent over to the standby:

SQL> alter system switch logfile;
System altered.

6. On the standby side, after some time, you will see that the alter database command has completed. Now the standby is a logical one. You will see the following line in the alert log:

RFS[12]: Identified database type as 'logical standby'

7. Recycle the database:

SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1071333376 bytes
Database mounted.
SQL> alter database open resetlogs;
Database altered.

8. Now that this is a logical standby database, you should start the SQL Apply process:

SQL> alter database start logical standby apply immediate;

The logical standby database is now fully operational! Once you convert the physical standby to a logical one, you can't convert it back to a physical one unless you use the special clause ("keep identity"), described in the section below.

Rolling Upgrade

It's no secret that one of the pain points in a DBA's job is to justify the need to shut down the database for a reasonably long period of time to perform upgrades. In Oracle Database 11g, this has become considerably easier if you have a standby database of any type via this rolling upgrade process:

  1. Upgrade the standby.
  2. Move the apps to the standby.
  3. Upgrade the primary.
  4. Move the apps back to the original primary.

If it's a logical standby, the process is fairly straightforward, because the standby merely applies the SQL statements mined from the primary. As the SQLs are applied, the upgrade can easily be done on that database: you stop the recovery, upgrade the standby, resume the recovery to catch up, and then convert the standby to primary. Later, you make the original primary the standby and upgrade it. Finally, you reverse the roles again to make the original primary the new primary.

However, many standby databases are physical in nature for ease of use and management. If the standby is not logical but physical, then the steps are pretty much the same, with a slight difference: you need to convert the standby to logical temporarily and then convert back to physical standby. The keyword is temporarily, not permanently; therefore you issue the conversion command with the new clause "keep identity", as shown below:

SQL> alter database recover to logical standby keep identity;
Database altered.

See more detailed step-by-step instructions in the doc.

Other Improvements

There are some significant improvements in the Data Guard process itself:

Redo Compression

Data Guard is premised on shipping the archived logs from the primary to the standby database server and applying them to the database there. One key component of the time lag between the primary and standby is the time needed to transport the archived logs. This can be somewhat expedited if the redo stream is compressed.

In Oracle Database 11g you can compress the redo stream that goes across to the standby server via SQL*Net by setting the compression attribute of the archive destination to enable. This works only for the logs shipped during gap resolution. Here is the command you can use to enable compression in the example shown at the beginning of this installment:

alter system set log_archive_dest_2 = 'service=pro11sb LGWR ASYNC 
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=pro11sb compression=enable'
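To check whether compression is configured for the destination, you can query v$archive_dest, which in 11g exposes a COMPRESSION column (a sketch; verify the column in your release):

```sql
SQL> select dest_id, status, compression
  2  from v$archive_dest
  3  where dest_id = 2;
```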

Net Timeout

The Data Guard environment works by sending the redo data to the standby server through a connection to the database instance there. If the instance does not respond in time, the log shipping service waits for a specified timeout value and then gives up. This timeout value can be set in Oracle Database 11g using the net_timeout attribute of the archive destination. In maximum protection mode, the log shipping service retries 20 times before giving up.

But first you have to know how much delay is currently present in the log shipping. A new view, v$redo_dest_resp_histogram, shows that time as a histogram of values:

SQL> desc v$redo_dest_resp_histogram
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DEST_ID                                            NUMBER
 TIME                                               VARCHAR2(20)
 DURATION                                           NUMBER
 FREQUENCY                                          NUMBER

The view shows how many log shipments fell into each response-time bucket. If you examine the view after a few days of operation, you will be able to get an idea of the timeout value to set. Then you can set the timeout value by issuing:

alter system set log_archive_dest_2 = 'service=pro11sb LGWR ASYNC 
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=pro11sb compression=enable net_timeout=20'

Again, this is from the example shown above. Note the clause "net_timeout=20" in the parameter value.
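If you pull the (duration, frequency) rows out of v$redo_dest_resp_histogram, choosing a net_timeout that covers, say, 99% of observed response times is a small calculation. The sketch below is illustrative only; the function name and the sample bucket data are made up:

```python
def suggest_net_timeout(buckets, coverage=0.99):
    """Pick the smallest duration (in seconds) whose cumulative frequency
    covers the requested fraction of all observed responses.

    buckets: list of (duration_seconds, frequency) tuples, e.g. rows
    fetched from v$redo_dest_resp_histogram for one destination.
    """
    total = sum(freq for _, freq in buckets)
    running = 0
    for duration, freq in sorted(buckets):
        running += freq
        if running / total >= coverage:
            return duration
    # Fallback: the largest observed bucket
    return max(duration for duration, _ in buckets)

# Hypothetical histogram: most shipments finish within 1-2 seconds,
# a few stragglers take up to 30 seconds.
sample = [(1, 500), (2, 300), (5, 150), (10, 40), (30, 10)]
print(suggest_net_timeout(sample))        # -> 10
print(suggest_net_timeout(sample, 0.80))  # -> 2
```

Here 99% of the 1,000 observed shipments completed within 10 seconds, so a net_timeout around 10 would avoid giving up on the slow tail while still bounding the wait.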

Dynamically Alterable Parameters

In the process of operating a logical standby database environment, you will need to tune the process and tweak some parameter values. In Oracle Database 11g, most of these parameters can be updated online. You can find them by querying the view dba_logstdby_parameters.

col name format a30
col value format a10
col unit format a10
col setting format a6
col dynamic format a7
select *
from dba_logstdby_parameters
order by name;

NAME                           VALUE      UNIT       SETTIN DYNAMIC
------------------------------ ---------- ---------- ------ -------
APPLY_SERVERS                  5                     SYSTEM YES
EVENT_LOG_DEST                 DEST_EVENT            SYSTEM YES
LOG_AUTO_DELETE                TRUE                  SYSTEM YES
MAX_EVENTS_RECORDED            10000                 SYSTEM YES
MAX_SERVERS                    9                     SYSTEM YES
MAX_SGA                        30         MEGABYTE   SYSTEM YES
PREPARE_SERVERS                1                     SYSTEM YES
PRESERVE_COMMIT_ORDER          TRUE                  SYSTEM NO
RECORD_APPLIED_DDL             FALSE                 SYSTEM YES
RECORD_SKIP_DDL                TRUE                  SYSTEM YES
RECORD_SKIP_ERRORS             TRUE                  SYSTEM YES

Note the column DYNAMIC, which shows whether the parameter is dynamically alterable. Almost all the parameters are dynamic. For instance, to modify the parameter APPLY_SERVERS without stopping the standby, you can issue:

SQL> begin
  2     dbms_logstdby.apply_set('APPLY_SERVERS',2);
  3  end;
  4  /


This sets the value of apply_servers to 2, which can be done without shutting down the standby.
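You can confirm that the change took effect immediately by re-querying the same dba_logstdby_parameters view shown above; a quick sketch:

```sql
SQL> select name, value
  2  from dba_logstdby_parameters
  3  where name = 'APPLY_SERVERS';
```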

SQL Apply Event Table

In Oracle Database 10g, the events related to SQL Apply are written to the alert log, which is not very useful if you want to write scripts that check them for alerting or reporting. In Oracle Database 11g, the events are by default written to a new table, LOGSTDBY$EVENTS, in the SYSTEM schema. Here is a sample query:

select event_time, error
from system.logstdby$events
order by 1;

The output:

EVENT_TIME                    ERROR
----------------------------- -------------------------------------------------
13-JAN-08 PM  ORA-16111: log mining and apply setting up
13-JAN-08 PM  Apply LWM 2677727, HWM 2677727, SCN 2677727
14-JAN-08 PM  APPLY_SET: APPLY_SERVERS changed to 2
14-JAN-08 PM  APPLY_SERVERS changed to 2
14-JAN-08 PM  EVENT_LOG_DEST changed to DEST_ALL


It's very useful to have the events in a table, for one reason because they're easier to manipulate and report on. But sometimes it's also useful to see them in the alert log, especially if you have built a monitoring tool that scans the alert log for errors and messages. You can set the logical standby database apply parameter event_log_dest to DEST_ALL to accomplish that:

SQL> execute dbms_logstdby.apply_set('EVENT_LOG_DEST', 'DEST_ALL')
This can be done dynamically, and from then on the events will go to both the table and the alert log. After this command, check the alert log; in addition to possibly a large number of SQL Apply events, it will record the parameter change itself.

First, you saw how trivial it is to build a physical standby database from an active primary database. You also saw how easy it is to convert that physical standby to a logical one. The biggest advantage comes from the fact that the standby database can now be used productively to support the business in some way. The Active Data Guard feature allows you to open the standby database for queries while the archived logs are still being applied. The snapshot standby database allows you to run production-like workloads against the database and then flash it back to the starting point to resume normal managed recovery. These two features let you leverage the processing power available on the standby server and should be a great catalyst for the upgrade to 11g.

Back to Series TOC