
Oracle Database 11g: The Top Features for DBAs and Developers

by Arup Nanda

Transaction Management with LogMiner and Flashback Data Archive

Learn how to identify specific transactions and roll back time by backing out those transactions and their dependents.

LogMiner is an often ignored yet very powerful tool in the Oracle Database. It is used to extract DML statements from the redo log files—the original SQL that caused the transaction and even the SQL that can undo it. (For an introduction to LogMiner and how it works, refer to my Oracle Magazine article "Mining for Clues.") Until now, this powerful tool was commonly under-appreciated due to the lack of a simpler interface. In Oracle Database 11g, however, Oracle Enterprise Manager has a graphical interface to extract transactions from the redo logs using LogMiner, which makes it extremely easy to use the tool to examine and roll back transactions. (Note: As in previous versions, you can continue to use the DBMS_LOGMNR package to perform command line-driven log mining if you wish.)
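For those who prefer the command line, a mining session with DBMS_LOGMNR can be sketched as below. The archived log file name is hypothetical; the package calls (ADD_LOGFILE, START_LOGMNR, END_LOGMNR) and the V$LOGMNR_CONTENTS view are the standard interface.

```sql
-- Register a redo log (file name is illustrative) and start the miner
-- using the online catalog as the dictionary source.
begin
   dbms_logmnr.add_logfile (
      logfilename => '/u01/arch/arch_1_100.arc',
      options     => dbms_logmnr.new);
   dbms_logmnr.start_logmnr (
      options => dbms_logmnr.dict_from_online_catalog);
end;
/

-- Examine the mined transactions, including the undo SQL.
select xidusn, xidslt, xidsqn, sql_redo, sql_undo
  from v$logmnr_contents
 where username = 'SCOTT';

-- Always end the session to release resources.
begin
   dbms_logmnr.end_logmnr;
end;
/
```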

Let's see an example of how this is done. To enable log mining, you need only minimal supplemental logging enabled, either for the entire database or at least for the table in question. Flashback Transaction additionally requires primary key supplemental logging. To enable both for the entire database, issue the following commands:

SQL> alter database add supplemental log data;
Database altered.
SQL> alter database add supplemental log data (primary key) columns;
Database altered.
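If you would rather not enable primary key logging database-wide, you can scope it to a single table instead. A sketch using the RES table from the example that follows (assuming it lives in the SCOTT schema):

```sql
SQL> alter table scott.res add supplemental log data (primary key) columns;

Table altered.
```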

Now, consider the following statements issued by an application against your database:

SQL> insert into res values (100002,sysdate,12,1);
1 row created.
SQL> commit;
Commit complete.

SQL> update res set hotel_id = 13 where res_id = 100002;

1 row updated.
SQL> commit;
Commit complete.

SQL> delete res where res_id = 100002;
1 row deleted.
SQL> commit;
Commit complete.

Note the statements carefully: each one is followed by a commit, which means each statement is a separate transaction. Now let's see how you can examine these transactions with LogMiner in Oracle Database 11g Database Control.

In the Enterprise Manager screen, from the Database homepage, go to the tab labeled Availability.

Click View and Manage Transactions, listed under Manage. This brings up the main LogMiner interface, as shown below:

You can enter specific ranges of time or SCNs to search for transactions. In the figure above, I have entered a range of time in the Query Time Range section. In the Query Filter, I have restricted the search to the user SCOTT's transactions, because that account was used to perform all the DMLs. In the Advanced Query section, you can enter any additional filter. After all fields are entered, click Continue.

This kicks off the log mining process, which searches through the redo logs (both online and archived, if needed) and finds the transactions issued by the user SCOTT. After the process completes, you will see the results screen.

The top portion of the results screen looks like this:

The results indicate that the search found two transactions by SCOTT, which affected two records.

The bottom portion of the screen shows the details of those transactions. Here is a partial view of the screen. You can see a transaction shown as 1 ins (meaning "1 insert statement"). The leftmost column shows the transaction identifier (XID), a number that uniquely identifies each transaction.

If you click on that transaction identifier, you can see the details of that transaction as shown in the screen below:

As you can see, you can use Database Control to search and identify the transactions. Click the buttons Previous Transaction and Next Transaction to scroll through all the transactions found by the search.

Use Cases

How can you use this feature? Well, several ways. The most important use may be to find out "who" did "what." If you don't have auditing enabled for performance reasons, or just haven't kept the audit records, all you have to do is search for the clues in the LogMiner interface by mining the redo logs—online as well as archived ones. In the search screen, you can enter additional filtering conditions in the Advanced Query field under Query Filter.

Suppose you want to find a transaction where the record for the RES_ID = 100002 was inserted, deleted, or updated. You can search for a specific value in the redo stream by using the function column_present in the dbms_logmnr package as shown below:
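A sketch of such a filter, suitable for the Advanced Query field or for a WHERE clause against V$LOGMNR_CONTENTS. The schema, table, and column names come from the running example; the exact expression in the original screen may differ:

```sql
-- True (1) for redo records that touch the RES_ID column of SCOTT.RES;
-- combined with MINE_VALUE to test for the specific value 100002.
dbms_logmnr.column_present(redo_value, 'SCOTT.RES.RES_ID') = 1
and dbms_logmnr.mine_value(redo_value, 'SCOTT.RES.RES_ID') = '100002'
```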

This function will extract all the transactions that involved 100002 in the RES_ID column in RES table under the SCOTT schema.

You can also use this feature to unearth the DDL commands issued against the database. To do that, select the radio button View DDL Only in the Query Filter section.
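The command-line equivalent of that DDL filter is a simple predicate on V$LOGMNR_CONTENTS, assuming a mining session has already been started as shown earlier:

```sql
-- Show only DDL statements captured in the mined redo.
select username, timestamp, sql_redo
  from v$logmnr_contents
 where operation = 'DDL';
```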

Backout of Selected Transactions

When you examine a transaction, what do you want to do with it? One thought—perhaps the reason you are looking into the transaction in the first place—is that the transaction was made in error and you want to undo it. That's fairly simple: if the transaction is an insert, the undo is a delete; if it is an update, the undo is an update that sets the row back to its older values.

However, note the transactions used in the example carefully. The first transaction inserts a row, the second updates the row just inserted, and the third deletes that very row. The first one (the insert) is the transaction you want to back out. But here is a problem: the row has already been deleted by the subsequent transactions, so what should the undo transaction be in this case?

This is where the Dependent Transaction viewing feature in Oracle Database 11g comes in handy. Click Flashback Transaction. After some searching, it presents a screen similar to the one below:

This screen shows you the dependent transactions, the subsequent update and delete, as well. Now when you back out the original transaction, you can back out its dependents too. To do so, choose the Cascade radio button from the list below and click OK.

It shows you the transactions to be backed out; click the Transaction IDs to see what SQL statements Oracle will issue to undo each transaction.

For instance, to undo the insert, it has to issue a delete, as shown above. If you click on the next transaction (just below it), you will see the details of what needs to be done to back that one out:

You get the idea. Click Submit and all these transactions will be rolled back, in one sweep. This is the cleanest way to undo a transaction and its dependents.

Command Line Interface

What if you don't have access to Enterprise Manager, or perhaps you want this done through a script? The package DBMS_FLASHBACK, which also exists in Oracle Database 10g, has a new procedure in 11g called TRANSACTION_BACKOUT. The procedure is overloaded, so you should pass values to the named parameters, as shown below.

declare
   trans_arr xid_array;
begin
   trans_arr := xid_array('030003000D040000','F30003000D04010');
   dbms_flashback.transaction_backout (
        numtxns         => 2,   -- number of transactions in the array
        xids            => trans_arr,
        options         => dbms_flashback.cascade
   );
end;
/

The type xid_array is also new in Oracle Database 11g. It exists so that a series of transaction identifiers can be passed to the procedure.

Other LogMiner Improvements

If you have been using XMLType as a data type—and you have more reasons to use it in Oracle Database 11g—you will be happy to learn that LogMiner now mines XML data as well. It shows up in both the SQL_REDO and SQL_UNDO columns.

You can set an option called SKIP_CORRUPTION while starting LogMiner, which skips corrupt blocks in the redo logs. So you can still salvage valid data from a redo log even if it is partially damaged. Here is how you use the improved syntax:

begin
   dbms_logmnr.start_logmnr (
        options => dbms_logmnr.skip_corruption
   );
end;
/

Flashback Data Archive

Oracle9i Database Release 2 introduced the proverbial time machine in the form of Flashback Query, which allows you to select the pre-change version of the data. For example, had you changed a value from 100 to 200 and committed, you could still select the value as of two minutes earlier, even though the change was committed. This technology used the pre-change data from the undo segments. In Oracle Database 10g, this facility was enhanced with the introduction of Flashback Versions Query, which even lets you track the changes made to a row, as long as the changes are still present in the undo segments.

However, there was a little problem: When the database is recycled, the undo data is cleaned out and the pre-change values disappear. Even if the database is not recycled, the data may be aged out of the undo segments to make room for new changes.

Since pre-11g flashback operations depend on the undo data, which is available only for a short duration, you can't really use it over an extended period of time or for more permanent recording such as for auditing. As a workaround, we resorted to writing triggers to make more permanent records of the changes to the database.

Well, don't despair. In Oracle Database 11g, Flashback Data Archive combines the best of both worlds: it offers the simplicity and power of flashback queries but does not rely on transient storage like undo. Rather, it records changes in a more permanent location: a Flashback Data Archive, stored in a tablespace you designate.

Let's look at an example. (Note: you need to activate Automatic Undo Management for Flashback Data Archive to work.) First, you create a Flashback Data Archive, as shown below:

SQL> create flashback archive near_term
  2  tablespace far_near_term
  3  retention 1 month
  4  /
Flashback archive created.

The archive is created in the tablespace far_near_term; this is the location where the changes will be recorded. For the time being, ignore the meaning of the retention clause; we will revisit it later.

Assume you have to record changes to a table called TRANS. All you need to do is enable the Flashback Data Archive status of the table to start recording the changes in that archive.

SQL> alter table trans flashback archive near_term;

Table altered.

This puts the table into the Flashback Data Archive mode. All the changes to the rows of the table will be now tracked permanently. Let's see a demonstration.

First, select a specific row of the table.

SQL> select txn_amt from trans where trans_id = 2;

   TXN_AMT
----------
  19325.67
SQL> update trans set txn_amt = 2000 where trans_id = 2;

1 row updated.

SQL> commit;

Commit complete.

Now, if you select the row, it will always display 2000 in this column. To find out the older value as of a certain time, you can use the Flashback query as shown below:

select txn_amt
from trans
as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
where trans_id = 2;


Now, after some time, when the undo data has been purged out of the undo segments, query the flashback data again:

select txn_amt
from trans
as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
where trans_id = 2;

It comes back with the result: 19325.67. The undo is gone, so where did the data come from?

Let's ask Oracle. You can do that using autotrace and see the execution plan:

SQL> set autotrace traceonly explain
SQL> select txn_amt
  2  from trans
  3  as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
  4  where trans_id = 2;

Execution Plan
Plan hash value: 535458644


| Id  | Operation                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop
|   0 | SELECT STATEMENT          |                    |     2 |    52 |    10  (10)| 00:00:01 |       |
|   1 |  VIEW                     |                    |     2 |    52 |    10  (10)| 00:00:01 |       |
|   2 |   UNION-ALL               |                    |       |       |            |          |       |
|*  3 |    FILTER                 |                    |       |       |            |          |       |
|   4 |     PARTITION RANGE SINGLE|                    |     1 |    52 |     3   (0)| 00:00:01 |     1 |     1
|*  5 |      TABLE ACCESS FULL    | SYS_FBA_HIST_68909 |     1 |    52 |     3   (0)| 00:00:01 |     1 |     1
|*  6 |    FILTER                 |                    |       |       |            |          |       |
|*  7 |     HASH JOIN OUTER       |                    |     1 |  4053 |    10  (10)| 00:00:01 |       |
|*  8 |      TABLE ACCESS FULL    | TRANS              |     1 |    38 |     6   (0)| 00:00:01 |       |
|   9 |      VIEW                 |                    |     2 |  8030 |     3   (0)| 00:00:01 |       |
|* 10 |       TABLE ACCESS FULL   | SYS_FBA_TCRV_68909 |     2 |  8056 |     3   (0)| 00:00:01 |       |

Predicate Information (identified by operation id):

   3 - filter(NULL IS NOT NULL)
   5 - filter("TRANS_ID"=2 AND "ENDSCN">161508784336056 AND "ENDSCN"<=1073451 AND ("STARTSCN" IS NULL
              OR "STARTSCN"<=161508784336056))
   6 - filter("F"."STARTSCN"<=161508784336056 OR "F"."STARTSCN" IS NULL)
   7 - access("T".ROWID=("F"."RID"(+)))
  10 - filter(("ENDSCN" IS NULL OR "ENDSCN">1073451) AND ("STARTSCN" IS NULL OR "STARTSCN"<1073451))

   - dynamic sampling used for this statement

This output answers the riddle "Where did the data come from?": it came from the table SYS_FBA_HIST_68909, which lives in the Flashback Data Archive you defined earlier for that table. You can look at the table, but directly querying that data is not supported by Oracle. Anyway, I don't see a reason you would want to do that.

How long is the data inside the archive retained? This is where the retention period comes into play: the data is retained for that period. After that, as new data comes in, the older data is purged. You can also purge it yourself, e.g.

alter flashback archive near_term purge before scn 1234567;
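Purging by SCN is shown above; the same statement also accepts a timestamp, or can purge everything. A sketch (the two-day window is arbitrary):

```sql
-- Purge history older than two days.
alter flashback archive near_term
  purge before timestamp (systimestamp - interval '2' day);

-- Purge all history in the archive.
alter flashback archive near_term purge all;
```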

Release 2 Addendum:

In Release 2, Flashback Data Archive functionality is expanded to cover DDL operations such as adding, dropping, renaming, or modifying columns; dropping or truncating partitions and subpartitions; truncating tables; renaming tables; and adding, dropping, or renaming constraints. When you perform these DDL operations, the history tables are updated automatically; there is no need for manual intervention.

Let’s see that in action. We will need to enable flashback for some table. First we will create a tablespace to hold the flashback data (the history tables) and create a flashback archive in that tablespace:

SQL> create tablespace fda datafile '+DATA' size 100M;

Tablespace created.

SQL> create flashback archive near_term
  2  tablespace fda
  3  retention 1 month
  4  /

Flashback archive created.

Now let’s create an example table.

SQL> conn arup/arup
SQL> create table trans
  2  (
  3     trans_id        number,
  4     trans_dt        date,
  5     product_id      number(10),
  6     quantity        number(5),
  7     trans_amt       number(15,2)
  8  )
  9  /

Table created.
Let’s enable flashback for this table using the flashback archive created earlier and enable row movement as well:

SQL> alter table trans flashback archive near_term;

Table altered.

SQL> alter table trans enable row movement
  2  /

Table altered.

Now that this table is flashback enabled, let’s perform a simple DML operation:

SQL> insert into trans values (1,sysdate,1,1,1000);

1 row created.

SQL> commit;

Commit complete.

Inserts do not create archive records, because they do not change existing data. Let's perform an update:

SQL> update trans set trans_amt = 2000;

1 row updated.

SQL> commit;

Commit complete.

Now, let's perform a DDL operation to alter the table's structure:

SQL> alter table trans add (price number(15,2));

Table altered.

Note that the table was altered without complaint, even though it is flashback enabled. How does Oracle Database account for the new column in the history data? To track such alterations, it actually maintains several history tables. Let's take a look at them:

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
SYS_FBA_HIST_79410             TABLE
SYS_FBA_TCRV_79410             TABLE
TRANS                          TABLE

Note the new tables with FBA in their names; these are the special history tables. The number 79410 is the object_id of the table TRANS. The table SYS_FBA_TCRV_79410 records the operations performed on the table, as you can see below. The O (operation) column shows I (for insert) and U (for update).

SQL> select * from SYS_FBA_TCRV_79410;

  STARTSCN     ENDSCN XID              O
---------- ---------- ---------------- -
  13585319   13585472 09001B00CF430000 I

  13585472            05001B000C440000 U

The table SYS_FBA_HIST_79410 records the changes to the rows across DDL operations.

SQL> select * from SYS_FBA_HIST_79410;

  STARTSCN     ENDSCN XID              O   TRANS_ID TRANS_DT  PRODUCT_ID
---------- ---------- ---------------- - ---------- --------- ----------
  QUANTITY  TRANS_AMT      PRICE
---------- ---------- ----------
  13585319   13585472 09001B00CF430000 I          1 11-AUG-10          1
         1       1000

Note that there is a PRICE column. That's because the column was added to the main table, and Oracle automatically added it to the history table as well. But when the update was made, there was no PRICE column; therefore its value is null in this tracking table.

Finally, the table SYS_FBA_DDL_COLMAP_79410 tracks the column additions. If some column is renamed, this table tracks the old and new names:

SQL> select * from SYS_FBA_DDL_COLMAP_79410;

  STARTSCN     ENDSCN COLUMN_NAME  TYPE          HISTORICAL_COLUMN_NAME
---------- ---------- ------------ ------------- ----------------------
  13585094            TRANS_ID     NUMBER        TRANS_ID
  13585094            TRANS_DT     DATE          TRANS_DT
  13585094            PRODUCT_ID   NUMBER(10)    PRODUCT_ID
  13585094            QUANTITY     NUMBER(5)     QUANTITY
  13585094            TRANS_AMT    NUMBER(15,2)  TRANS_AMT
  13585719            PRICE        NUMBER(15,2)  PRICE

6 rows selected.

Let’s see the effect of another modification – renaming the column TRANS_AMT to TOT_AMT:

SQL> alter table trans rename column trans_amt to tot_amt;

Table altered.

Checking the column mapping table:

SQL> select * from SYS_FBA_DDL_COLMAP_79410;

  STARTSCN     ENDSCN COLUMN_NAME  TYPE          HISTORICAL_COLUMN_NAME
---------- ---------- ------------ ------------- ----------------------
  13585094            TRANS_ID     NUMBER        TRANS_ID
  13585094            TRANS_DT     DATE          TRANS_DT
  13585094            PRODUCT_ID   NUMBER(10)    PRODUCT_ID
  13585094            QUANTITY     NUMBER(5)     QUANTITY
  13585094   13587833 TOT_AMT      NUMBER(15,2)  TRANS_AMT
  13585719            PRICE        NUMBER(15,2)  PRICE
  13587833            TOT_AMT      NUMBER(15,2)  TOT_AMT

7 rows selected.

You can see the new row there that shows at what SCN the column was renamed. The column TOT_AMT shows its historical name was TRANS_AMT.

You still can't drop a table that is flashback enabled (though if you were about to drop the table, keeping its history may not be relevant anyway). But one common operation you may need is alter table ... exchange partition. How do you perform it? By temporarily disassociating the table from the archive and then reassociating it:

SQL> begin
  2     dbms_flashback_archive.disassociate_fba('ARUP','TRANS');
  3  end;
  4  /

PL/SQL procedure successfully completed.

After the operation, you can reassociate:

SQL> begin
  2      dbms_flashback_archive.reassociate_fba('ARUP','TRANS');
  3  end;
  4  /

PL/SQL procedure successfully completed.

With this, the flashback archive status of a table need only be suspended briefly, rather than disabled outright, to allow such DDL operations, which are expected to happen quite frequently in normal database operation.

Managing Flashback Archives

You can add more than one tablespace to an archive; conversely, you can remove a tablespace from one too. If you plan to use a tablespace that holds other user data as well, you run the risk of the Flashback Data Archive data crowding the tablespace and leaving no space for the user data. To reduce that risk, you can establish a quota on how much space the archive can occupy inside the tablespace. You set the quota like this:

alter flashback archive near_term modify tablespace far_near_term quota 10M;
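The add and remove operations mentioned above look like this (a sketch; the tablespace name fda_overflow is hypothetical):

```sql
-- Give the archive a second tablespace, with its own quota.
alter flashback archive near_term add tablespace fda_overflow quota 50M;

-- Later, take that tablespace away from the archive again.
alter flashback archive near_term remove tablespace fda_overflow;
```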

You can check which tables have Flashback Data Archive turned on by querying the dictionary view:

SQL> select * from user_flashback_archived_tables;

TABLE_NAME                     OWNER_NAME
------------------------------ ------------------
TRANS                          ARUP

You can find out about the archives by querying the dictionary view:

SQL> select * from flashback_archives;

NAME            FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_SCN STATUS
--------------- ------------------ ----------------- ---------- -------
NEAR_TERM                        1                30    1042653
MED_TERM                         2               365    1042744
LONG_TERM                        3              1825    1042838

Using multiple archives lets you use them creatively in different situations. For instance, a hotel company's database may need one year of reservation data but three years of payments. So you can define multiple archives with different retention policies and assign them to the tables. Or if you have a uniform retention policy, you can define only one archive and make it the default.

alter flashback archive near_term set default;
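With a default archive in place, a table can be enabled without naming an archive at all. A sketch (note that setting the default archive requires administrative privileges):

```sql
-- No archive name given: the table uses the default flashback archive.
SQL> alter table trans flashback archive;

Table altered.
```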

When you don't need an archive for a table, you can turn it off with:

alter table trans no flashback archive;

As you can see, you just enabled a powerful change recording system without writing a single line of code.

Differences vs. Regular Auditing

How does Flashback Data Archive differ from regular auditing? First, the latter requires the audit_trail parameter to be set to DB or DB_EXTENDED, and the trail is written to the table AUD$ in the SYSTEM tablespace. Flashback Data Archives can be defined on any tablespace (or more than one, even tablespaces where user data exists) and therefore can be placed on cheaper storage.

Second, auditing is based on autonomous transactions, which carry some performance overhead. Flashback Data Archives are written by a dedicated background process called FBDA, so there is less impact on performance.

Finally, Flashback Data Archives can be purged at regular intervals automatically. Audit trails must be manually maintained.

Use Cases

Flashback Data Archive is handy for many purposes. Here are some ideas:

  • To audit by recording how data changed
  • To enable an application to undo changes (correct mistakes)
  • To debug how data has been changed
  • To comply with regulations that require data not be changed after some time. Flashback Data Archives are not regular tables, so they can't be changed by typical users.
  • To record audit trails on cheaper storage, thereby allowing longer retention at less cost


Mistakes happen, but now you can assert with authority that you will be able to identify the specific changes that caused a mistake, and you have the tools to roll back those mistakes in their entirety using transaction backout. And you are no longer limited to mining the changes from the online and archived redo logs: with Flashback Data Archive, the changes are recorded for as long as you need them. For all practical purposes, you can now audit the changes to your tables using just a few commands.

See Series TOC