
Oracle Database 11g:
The Top Features for DBAs and Developers

by Arup Nanda Oracle ACE Director

Schema Management

Manage database objects more efficiently with new functionality that makes many common operations incredibly fast and simple.


Oracle Database 11g includes a wealth of features that not only make jobs simpler, but in some cases reduce common, time-consuming operations to virtually one line. In this installment you will learn about some of those features.

DDL Wait Option

Jill the DBA at Acme Retailers is trying to alter the table called SALES to add a column, TAX_CODE. It's pretty routine stuff; she issues the following SQL statement:

SQL> alter table sales add (tax_code varchar2(10));

But instead of getting something like "Table altered", she gets:


alter table sales add (tax_code varchar2(10))
            *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

The error message says it all: the table is being used right now, probably by a transaction, so getting an exclusive lock on it may be next to impossible. Of course, the rows of the table are not locked forever; when sessions commit, the locks on those rows are released. But before that window lasts very long, other sessions may update some other rows of the table, and the slice of time in which the exclusive table lock could be acquired vanishes. In a typical business environment the window for locking the table exclusively does open periodically, but the DBA may not be able to issue the ALTER command at exactly that moment.


Of course, Jill can just keep on typing the same command over and over again until she gets an exclusive lock—or goes nuts, whichever comes first.

In Oracle Database 11g, Jill has a better option: the DDL Wait option. She issues:

SQL> alter session set ddl_lock_timeout = 10;
 
Session altered.

Now, when a DDL statement in the session does not get the exclusive lock, it will not error out immediately. Instead, it will wait up to 10 seconds, continually re-trying the DDL operation until it succeeds or the time expires, whichever comes first. When she issues:


SQL> alter table sales add (tax_code varchar2(10));

the statement hangs and does not error out. So, instead of Jill trying repeatedly to get the elusive fraction of time when the exclusive lock is available, she outsources repeated trials to Oracle Database 11g, somewhat like a telephone programmed to re-try a busy number.

Now, Jill likes this feature so much that she shares it with all the other DBAs. Since everyone faces the same issue when altering a table on a busy system, they all find the new feature very helpful. So Jill wonders: can this behavior be made the default, so that they don't need to issue the ALTER SESSION statement every time?

Yes, it can. If you issue ALTER SYSTEM SET DDL_LOCK_TIMEOUT = 10, sessions automatically wait for that period during DDL operations. Just like any other ALTER SYSTEM setting, this can be overridden by an ALTER SESSION statement.
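A minimal sketch (the timeout values are arbitrary):

SQL> alter system set ddl_lock_timeout = 10;

System altered.

SQL> -- a session that needs a longer wait can still override the system value
SQL> alter session set ddl_lock_timeout = 60;

Session altered.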

Adding Columns with a Default Value

Although happy with this feature, Jill ponders another, somewhat related issue. She wants to add the column TAX_CODE, but it has to be NOT NULL. Obviously, when she adds a NOT NULL column to a non-empty table, she also has to specify a default value, 'XX'. So she writes the following SQL:


alter table sales add tax_code varchar2(20) default 'XX' not null;

But she stops there. The table SALES is huge, about 400 million rows. She knows that when she issues the statement, Oracle will add the column all right, but it will also update the value 'XX' in every row before returning control to her. Updating 400 million rows will not only take a very long time; it will also fill up the undo segments, generate an enormous amount of redo, and create massive performance overhead. So Jill would have to ask for a "quiet period" (an outage) to make this change. But is there a better approach in Oracle Database 11g?

There is. In Oracle Database 11g, the above statement does not update all the existing records of the table. That's clearly not a problem for new records, where the value of the column is set to 'XX' automatically; but when a user selects this column for an existing record, won't that return NULL?

Wrong, actually. When a user selects the column for an existing record, Oracle gets the default value from the data dictionary and returns it to the user. So you kill two birds with one stone: you can define a new column as NOT NULL with a default value, and still not incur any penalty for redo and undo generation. Nice.
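Here is a minimal sketch of how Jill's session might look; the statement returns quickly even on the 400-million-row table, because no rows are physically updated:

SQL> alter table sales add tax_code varchar2(20) default 'XX' not null;

Table altered.

SQL> -- existing rows show the default, fetched from the data dictionary
SQL> select tax_code from sales where rownum = 1;

TAX_CODE
--------------------
XX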

Virtual Columns

Acme's database contains a table called SALES, as you saw earlier. The table has the following structure:


SALES_ID NUMBER
CUST_ID NUMBER
SALES_AMT NUMBER

Some users want to add a column called SALE_CATEGORY, which identifies the type of the sale: LOW, MEDIUM, HIGH, or ULTRA, depending on the amount of the sale. This column will help them identify records for appropriate action and route them to the concerned employee for handling. Here is the logic for the values in the column:


If sale_amt is more than:   And sale_amt is less than or equal to:   Then sale_category is:
0                           10000                                    LOW
10000                       100000                                   MEDIUM
100000                      1000000                                  HIGH
1000000                     Unlimited                                ULTRA

Although this column is a crucial business requirement, the development team does not want to change the code to implement the necessary logic. Of course, you could add a real column called SALE_CATEGORY to the table and write a trigger to populate it with the logic shown above, a fairly trivial exercise. But performance issues would arise from the context switching into and out of the trigger code.
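For reference, the pre-11g approach might have looked like this hypothetical trigger, shown only to illustrate the per-row context switching you now avoid (it assumes SALE_CATEGORY is a real, stored column):

create or replace trigger tr_sales_cat
before insert or update of sales_amt on sales
for each row
begin
   -- one context switch into PL/SQL for every row changed
   :new.sale_category :=
      case
         when :new.sales_amt <= 10000 then 'LOW'
         when :new.sales_amt <= 100000 then 'MEDIUM'
         when :new.sales_amt <= 1000000 then 'HIGH'
         else 'ULTRA'
      end;
end;
/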

In Oracle Database 11g, you do not need to write a single line of code in any trigger. All you have to do instead is add a virtual column. Virtual columns offer the flexibility to add columns that convey business sense without adding any complexity or performance impact.

Here's how you would create this table:


SQL> create table sales
  2  (
  3     sales_id      number,
  4     cust_id       number,
  5     sales_amt     number,
  6     sale_category varchar2(6)
  7     generated always as
  8     (
  9        case
 10           when sales_amt <= 10000 then 'LOW'
 11           when sales_amt > 10000 and sales_amt <= 100000 then 'MEDIUM'
 12           when sales_amt > 100000 and sales_amt <= 1000000 then 'HIGH'
 13           else 'ULTRA'
 14        end
 15      ) virtual
 16  );

Note lines 6 and 7: the column is specified as "generated always as", meaning the column values are generated at runtime, not stored as part of the table. That clause is followed by the CASE expression that calculates the value. Finally, in line 15, "virtual" is specified to reinforce the fact that this is a virtual column. Now, if you insert some records:

SQL> insert into sales (sales_id, cust_id, sales_amt) values (1,1,100);
 
1 row created.
 
SQL> insert into sales (sales_id, cust_id, sales_amt) values (2,102,1500);
 
1 row created.
 
SQL> insert into sales (sales_id, cust_id, sales_amt) values (3,102,100000);
 
1 row created.
 
SQL> commit;
 
Commit complete.
 
SQL> select * from sales;
 
  SALES_ID    CUST_ID  SALES_AMT SALE_C
---------- ---------- ---------- ------
         1          1        100 LOW
         2        102       1500 LOW
         3        102     100000 MEDIUM

3 rows selected.

The virtual column values are all populated as expected. Even though this column is not stored, you can refer to it like any other column of the table. You can even create indexes on it.


SQL> create index in_sales_cat on sales (sale_category);
 
Index created.

The result will be a function-based index.


SQL> select index_type
2 from user_indexes
3 where index_name = 'IN_SALES_CAT';

INDEX_TYPE
---------------------------
FUNCTION-BASED NORMAL

SQL> select column_expression
2 from user_ind_expressions
3 where index_name = 'IN_SALES_CAT';

COLUMN_EXPRESSION
--------------------------------------------------------------------------------
CASE WHEN "SALES_AMT"<=10000 THEN 'LOW' WHEN ("SALES_AMT">10000 AND "SALES_AMT"
<=100000) THEN CASE WHEN "CUST_ID"<101 THEN 'LOW' WHEN ("CUST_ID">=101 AND "CUS
T_ID"<=200) THEN 'MEDIUM' ELSE 'MEDIUM' END WHEN ("SALES_AMT">100000 AND "SALES
_AMT"<=1000000) THEN CASE WHEN "CUST_ID"<101 THEN 'MEDIUM' WHEN ("CUST_ID">=101
AND "CUST_ID"<=200) THEN 'HIGH' ELSE 'ULTRA' END ELSE 'ULTRA' END


You can even partition on this column, as you saw in the Partitioning installment of this series. You can't, however, enter a value for this column. If you try to, you won't get far:


insert into sales values (5,100,300,'HIGH')
            *
ERROR at line 1:
ORA-54013: INSERT operation disallowed on virtual columns

 

Invisible Indexes

Do you often wonder whether an index will truly benefit your users' queries? It might help one query but hurt ten others. Indexes definitely affect INSERT statements negatively, and potentially DELETEs and UPDATEs as well, depending on whether the WHERE condition includes the indexed column.

A related question: is the index being used at all, and what happens to a query's performance if the index is dropped? Sure, you can drop the index and see the impact on the query, but that's easier said than done. What if the index actually did help? You would have to reinstate it, and to do that you would have to recreate it; until the recreation completes, no one can use it. Recreating an index is also an expensive process that takes up a lot of database resources you would rather put to better use.

What if you had an option to make an index sort of unusable for certain queries while not affecting others? Prior to Oracle Database 11g, issuing ALTER INDEX ... UNUSABLE was not an option, as it would make all DML on the table fail. But now you have precisely that option, via invisible indexes. Simply stated, you can make an index "invisible" to the optimizer so that no query uses it; for a query to use the index, it must name the index explicitly in a hint.

Here's an example. Suppose there is a table called RES and you created an index as shown below:

SQL> create index in_res_guest on res (guest_id);

After analyzing this table and index, if you issue:

SQL> select * from res where guest_id = 101;

you'll find that the index is being used:

Execution Plan
----------------------------------------------------------
Plan hash value: 1519600902

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |     1 |    28 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| RES          |     1 |    28 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IN_RES_GUEST |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("GUEST_ID"=101)

Now make the index invisible:

SQL> alter index in_res_guest invisible;
 
Index altered.

The execution plan now shows that the index is not being used:

SQL> select * from res where guest_id = 101
2 /

Execution Plan
----------------------------------------------------------
Plan hash value: 3824022422

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    28 |   140   (2)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| RES  |     1 |    28 |   140   (2)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("GUEST_ID"=101)

To make the optimizer use the index again, you have to name the index explicitly in a hint:

SQL> select /*+ INDEX (res IN_RES_GUEST) */ res_id from res where guest_id = 101;

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |     1 |    28 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| RES          |     1 |    28 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IN_RES_GUEST |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Presto! The index is used by the optimizer again.


Alternatively, you can set a session-level parameter to use the invisible indexes:

SQL> alter session set optimizer_use_invisible_indexes = true;

This feature is very useful when you can't modify the code, as in third-party applications. When you create indexes, you can append the clause INVISIBLE at the end to build the index as invisible to the optimizer. You can also see the current setting for an index using the dictionary view USER_INDEXES.

SQL> select visibility
  2  from user_indexes
  3 where index_name = 'IN_RES_GUEST';

VISIBILITY
---------
INVISIBLE

Note that when you rebuild this index, it will become visible again. You have to explicitly make it invisible afterward.
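If you know up front that an index should be invisible, you can create it that way with the INVISIBLE clause and toggle it after a rebuild; a minimal sketch, re-creating the same index for illustration:

SQL> drop index in_res_guest;

Index dropped.

SQL> create index in_res_guest on res (guest_id) invisible;

Index created.

SQL> alter index in_res_guest rebuild;

Index altered.

SQL> -- the rebuild made the index visible; hide it again
SQL> alter index in_res_guest invisible;

Index altered.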

So, to "what" exactly is this index invisible? Well, it's not invisible to the user. It's invisible to the optimizer only. Regular database operations such as inserts, updates, and deletes will continue to update the index. Be aware of that when you create invisible indexes; you will not see the performance gain due to the index while at the same time you may pay a price during DML operations.

Read-Only Tables

Robin, a developer for the Acme data warehouse system, ponders a classic problem. As part of the ETL process, several tables are updated with different periodicities. Once updated, the tables are opened up to the users per business rules, even though the users shouldn't modify them; revoking DML privileges from the users on these tables, however, is not an option.

What Robin needs is functionality that acts as a switch, to make a table update-able and then not so. The implementation of this trivial-sounding operation is actually quite difficult. What options does Robin have?

One option is to create a trigger on the table that raises an exception on INSERT, DELETE, and UPDATE; but executing a trigger involves context switching, which is not good for performance. The other option is to create a Virtual Private Database (VPD) policy that always returns a false predicate, such as "1=2". When a VPD policy on the table uses this function, the DML fails. This may perform better than the trigger option but is definitely less desirable, as the users will see an error message like "policy function returned error".
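For comparison, the trigger-based workaround might look like this hypothetical sketch (the trigger name and error text are invented for illustration):

create or replace trigger tr_trans_ro
before insert or update or delete on trans
begin
   -- every DML statement against TRANS fails with a custom error
   raise_application_error(-20001, 'TRANS is currently read only');
end;
/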

In Oracle Database 11g, however, you have a much better way to achieve that goal. All you do is make the table read only as shown below:

SQL> alter table TRANS read only;
 
Table altered.

Now when a user tries to issue a DML such as that shown below:

SQL> delete trans;

Oracle Database 11g throws an error right away:

delete trans
       *
ERROR at line 1:
ORA-12081: update operation not allowed on table "SCOTT"."TRANS"


The error message does not reflect the operation to the letter but conveys the message as intended, without the overhead of a trigger or VPD policy.

When you want to make the table update-able, you will need to make it read/write, as shown below:

SQL> alter table trans read write;
 
Table altered.

Now DMLs will be no problem:

SQL> update trans set amt = 1 where trans_id = 1;
 
1 row updated.

While a table is in read-only mode, only DML is disallowed; you can perform all DDL operations (create indexes, maintain partitions, and so on). So a very useful application of this feature is table maintenance: you can make the table read only, perform the necessary DDL, and then make it read/write again.
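That maintenance pattern, in a minimal sketch (the index name is illustrative):

SQL> alter table trans read only;

Table altered.

SQL> create index in_trans_amt on trans (amt);

Index created.

SQL> alter table trans read write;

Table altered.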

To see the status of the table, look at the READ_ONLY column in the data dictionary view USER_TABLES (or DBA_TABLES).

SQL> select read_only from user_tables where table_name = 'TRANS';
 
REA
---
NO

 

Fine-Grained Dependency Tracking

This feature is best explained through an example. Consider a table called TRANS, created as:

create table trans
(
    trans_id        number(10),
    trans_amt       number(12,2),
    store_id        number(2),
    trans_type      varchar2(1)
)

Users are not supposed to get the data from this table directly; they get it through a view, VW_TRANS, created as shown below:


create or replace view vw_trans
as
select trans_id, trans_amt  
from trans;

Now, the view VW_TRANS depends on the table TRANS. You can check the dependencies using this query:


select d.referenced_name, o.status
from user_dependencies d, user_objects o
where d.name = o.object_name
and d.name = 'VW_TRANS'
/

REFERENCED_NAME                  STATUS
-------------------------------- -------
TRANS                            VALID

The status of the view VW_TRANS, as shown, is VALID. Then, modify the underlying table in some way, such as by adding a column:


alter table trans
add (trans_date date);

Because the view depends on the table that was altered, the view now gets invalidated in Oracle Database 10g and previous releases. You can check the dependencies and status now using the query shown above:


REFERENCED_NAME                  STATUS
-------------------------------- -------
TRANS                            INVALID

The status shows as INVALID. Nothing has changed fundamentally that would make the view permanently invalid, and it can be recompiled easily by:


alter view vw_trans compile;

So, why was that view invalidated? The answer is simple: When a parent object changes, the child objects are automatically placed under scrutiny because something there may need to change as well. However, in this case, the change is the addition of a new column. The view does not use that column, so why should it be invalidated?

It isn't, in Oracle Database 11g. The dependency is still on TRANS, of course, but the status is not INVALID; it's still VALID!

REFERENCED_NAME                  STATUS
-------------------------------- -------
TRANS                            VALID

Since the view is not invalidated, the dependent objects of the view, such as other views, packages, and procedures, are not invalidated either. This behavior adds tremendous value to the availability of the application, which in turn enhances the availability of the entire stack: you do not need to stop the apps while making such database changes.

Had you altered a column that the view actually uses, such as TRANS_AMT, the view would have been invalidated, and desirably so, since the change could affect the view.
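For instance, here is a sketch of such a change, and the status you would then expect from the dependency query shown earlier (the precision change is arbitrary):

alter table trans modify (trans_amt number(14,2));

REFERENCED_NAME                  STATUS
-------------------------------- -------
TRANS                            INVALID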

But high availability does not stop at views and tables alone; you need it for other stored objects, such as procedures and packages, as well. Consider the package shown below:

create or replace package pkg_trans
is
        procedure upd_trans_amt
        (
                p_trans_id      trans.trans_id%type,
                p_trans_amt     trans.trans_amt%type
        );
end;
/

create or replace package body pkg_trans
is
        procedure upd_trans_amt
        (
                p_trans_id      trans.trans_id%type,
                p_trans_amt     trans.trans_amt%type
        ) is
        begin
                update trans
                set trans_amt = p_trans_amt
                where trans_id = p_trans_id;
        end;
end;
/

Now suppose you want to write a function that increases the amount of a transaction by a specified percentage. This function uses the package pkg_trans.


create or replace function adjust
(
        p_trans_id      number,
        p_percentage    number
)
return boolean
is
        l_new_trans_amt number(12,2);
begin
        select trans_amt * (1 + p_percentage/100)
        into l_new_trans_amt
        from trans
        where trans_id = p_trans_id;
        pkg_trans.upd_trans_amt (
                p_trans_id,
                l_new_trans_amt
        );
        return TRUE;
exception
        when OTHERS then
                return FALSE;
end;
/

If you check the status of the function, it should be valid:


select status
from user_objects
where object_name = 'ADJUST'
/

STATUS
-------
VALID

Suppose you want to modify the package pkg_trans by adding a new procedure to update the TRANS_TYPE column. Here is the new package definition:


create or replace package pkg_trans
is
        procedure upd_trans_amt
        (
                p_trans_id      trans.trans_id%type,
                p_trans_amt     trans.trans_amt%type
        );
        procedure upd_trans_type
        (
                p_trans_id      trans.trans_id%type,
                p_trans_type    trans.trans_type%type
        );
                
end;
/

create or replace package body pkg_trans
is
        procedure upd_trans_amt
        (
                p_trans_id      trans.trans_id%type,
                p_trans_amt     trans.trans_amt%type
        ) is
        begin
                update trans
                set trans_amt = p_trans_amt
                where trans_id = p_trans_id;
        end;
        procedure upd_trans_type
        (
                p_trans_id      trans.trans_id%type,
                p_trans_type    trans.trans_type%type
        ) is
        begin
                update trans
                set trans_type = p_trans_type
                where trans_id = p_trans_id;
        end;
end;
/

After this package is recompiled, what will be the status of the function ADJUST? In Oracle Database 10g and below, the function, being a dependent one, will be invalidated, as shown in the status:


STATUS
-------
INVALID

It can easily be recompiled with alter function adjust compile, but in Oracle Database 11g the function is not invalidated in the first place:


STATUS
-------
VALID

This is a tremendous boost to the notion of high availability. The function ADJUST does not call the changed portion of the package PKG_TRANS, so there is no need for it to be invalidated; rightfully, in Oracle Database 11g, it is not.

But that's not always the case. If the package is modified so that the new sub-component goes at the end, as in the example above, the dependent stored code is not invalidated. If the sub-component is added at the beginning, as shown below:

create or replace package pkg_trans
is
        procedure upd_trans_type ...
        procedure upd_trans_amt ...
end;

The dependent stored code, ADJUST, is invalidated, just as in Oracle Database 10g and below. This occurs because the new procedure, inserted before the existing ones, changes the slot numbers of the package's components, thereby triggering invalidation. When the procedure was added after the existing ones, the slot numbers did not change; a new slot number was merely added.


Here are some common guidelines for reducing dependency-related invalidations.

  • Add components such as functions and procedures to the end of a package.
  • A common cause of invalidation is a change in datatypes. If you don't specify column names, the procedure implicitly depends on all the columns of the table, and any change will invalidate it even if the changed column is not used. For instance, when you use select * from sometable, all the columns of the table are assumed. Avoid constructs like select *, datatypes like sometable%rowtype, and insert into sometable values (...) where no column list is mentioned; see the sketch after this list.
  • If possible, use views on tables in the stored code. This lets you add columns to the table that the stored code does not use; since the view is not invalidated, as shown above, the stored code is not invalidated either.
  • In the case of synonyms, use
    
    create or replace synonym ...;
    
    instead of
    
    drop synonym ...;
    create synonym ...;
    
    The former does not invalidate dependent procedures.
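Here is the sketch referenced in the second guideline, contrasting a fragile construct with a robust one (the anonymous blocks are illustrative):

-- fragile: %rowtype and select * implicitly depend on every column of TRANS
declare
   l_row trans%rowtype;
begin
   select * into l_row from trans where trans_id = 1;
end;
/

-- robust: an explicit column list depends only on what it names
declare
   l_amt trans.trans_amt%type;
begin
   select trans_amt into l_amt from trans where trans_id = 1;
end;
/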


Also, if you have used online redefinition before, you might have seen that redefinition invalidates some dependent objects. No longer so in Oracle Database 11g: online redefinition will not invalidate objects if the columns they reference keep the same name and type. If a column was dropped during redefinition but a procedure was not using that column, the procedure is not invalidated.

Note: In Oracle Database 11g Release 2, the invalidation behavior described above is refined further. To demonstrate, let's create a table in a Release 1 database:

create table trans
(
    trans_id        number(10),
    trans_amt       number(12,2),
    store_id        number(2),
    trans_type      varchar2(1)
)

Next, create a trigger:

create trigger tr_trans
before insert on trans
for each row
declare
    l_store_id number(2);
begin
    l_store_id := :new.store_id;
end;
/

Check the status of the trigger:

SQL> select status from user_objects where object_name = 'TR_TRANS';

STATUS
-------
VALID

Modify the table in some way:

SQL> alter table trans add (col1 number);

Table altered.

Now if you check the status of the trigger:

SQL> select status from user_objects where object_name = 'TR_TRANS';

STATUS
-------
INVALID

In Release 1, the trigger was invalidated even though the change had nothing to do with the trigger. In Release 2, it is not invalidated, because the trigger did not depend on the modification made to the table. (It was a new column; an existing trigger could never have referred to it.)


If we recreate the scenario on a Release 2 database and check the status:

SQL> select status from user_objects where object_name = 'TR_TRANS';

STATUS
-------
VALID

1 row selected.

SQL> alter table trans add (col1 number);

Table altered.

SQL> select status from user_objects where object_name = 'TR_TRANS';

STATUS
-------
VALID

The trigger is still valid. The story is different, of course, when something changes that does affect the trigger. Take, for instance, a change to the datatype of a column the trigger references:

SQL> alter table trans modify (store_id number(3));

Table altered.

SQL> select status from user_objects where object_name = 'TR_TRANS';

STATUS
-------
INVALID

So, in many cases of table modification, such as adding new columns, no dependent objects are invalidated, making the database truly highly available.


Foreign Keys on Virtual Columns (Release 2 Only)

In 11g Release 1 we saw the introduction of two very important and useful features. One is virtual columns, described above. The second is partitioning by referential integrity constraints (so-called REF partitioning), which allows you to partition a child table exactly as its parent even when the partitioning column is not present in the child.

These two features offer distinct advantages: virtual columns let you extend a table without incurring the overhead of storing the columns or changing the application to incorporate them, while REF partitioning lets you take advantage of partition pruning in parent-child relationships without adding the partitioning columns to the child tables. But what if you want to take advantage of both features on the same set of tables? In 11g Release 2 you can easily do that.

Here's an example: table CUSTOMERS has two virtual columns, CUST_ID, which is also the primary key, and CATEGORY, which is the partitioning column. The table SALES is a child of CUSTOMERS, joined on CUST_ID. Let's see the code in action.

create table customers
(
    cust_id     number(14)
        generated always as
        (
            DECODE(plan_id, 'MEDICAL',100, 'DENTAL',200, 'HOSPITAL ONLY',300, 999)
                || ssn ||
            DECODE(member_type, 'SELF','01', 'SPOUSE','02', 'CHILD','03', '99')
        ) virtual,
    cust_name   varchar2(20),
    ssn         varchar(9),
    plan_id     varchar2(15),
    member_type varchar2(10),
    category    varchar2(1)
        generated always as
        (case
            when member_type = 'SELF' then
                 case when plan_id = 'MEDICAL' then 'A' else 'B' end
            when member_type = 'SPOUSE' then
                 case when plan_id = 'MEDICAL' then 'B' else 'C' end
            when member_type = 'CHILD' then 'C' else 'X'
         end) virtual,
    constraint pk_customers primary key (cust_id)
)
partition by list (category)
(
        partition A values ('A'),
        partition B values ('B'),
        partition C values ('C'),
        partition DEF values (default)
)
/


Let’s insert some rows taking care not to assign a specific value to the virtual columns. We want the virtual columns to be generated.

 

insert into customers (cust_name, ssn, plan_id, member_type) values
  ('Jill','123456789','MEDICAL','SELF')
/
insert into customers (cust_name, ssn, plan_id, member_type) values
  ('John','123456789','MEDICAL','SPOUSE')
/
insert into customers (cust_name, ssn, plan_id, member_type) values
  ('Charlie','123456789','MEDICAL','CHILD')
/

 

Will the virtual columns return the proper data? We can check by selecting rows from the table:

 

select * from customers;

       CUST_ID CUST_NAME            SSN       PLAN_ID         MEMBER_TYP C
-------------- -------------------- --------- --------------- ---------- -
10012345678901 Jill                 123456789 MEDICAL         SELF       A
10012345678902 John                 123456789 MEDICAL         SPOUSE     B
10012345678903 Charlie              123456789 MEDICAL         CHILD      C


Now that the parent table is ready, let’s create the child table:


create table sales
(
   sales_id    number primary key,
   cust_id     number not null,
   sales_amt   number,
   constraint  fk_sales_01
   foreign key (cust_id)
      references customers
)
partition by reference (fk_sales_01)
/


In 11g Release 1, this would have failed with:

ERROR at line 6:
ORA-14663: reference partitioning parent key is not supported


In 11g Release 2, this operation is possible; the statement creates the child table. You can now harness the power of these two pretty useful features together to build better data models.

IPv6 Formatting in JDBC (Release 2 Only)

IP addressing has gone through an overhaul in the last few years. The traditional manner of addressing is a set of four numbers separated by periods, e.g. 192.168.1.100. This scheme, called IPv4, is a 32-bit addressing scheme that allows for only about four billion addresses. With the explosion in demand for IP addresses from not only Websites but devices such as IP-capable phones and PDAs, this scheme will run out of addresses before long. To address the problem, a new generation of IP addressing called IPv6 has been introduced; it is a 128-bit scheme, capable of handling vastly more addresses.

In Oracle Database 11g Release 2, you can start using IPv6 addresses. To find out a machine's IPv6 address, here is a simple example command (from the Linux command line):

# /sbin/ifconfig -a
   eth0      Link encap:Ethernet  HWaddr 00:19:21:BB:9A:A5  
             inet addr:10.14.104.253  Bcast:10.14.107.255  Mask:255.255.252.0
             inet6 addr: fe80::219:21ff:febb:9aa5/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   … output truncated …


Note the traditional IPv4 address (shown against the "inet addr" header): 10.14.104.253. You can use the Easy Connect (EZCONNECT) method to connect to a database called D112D1 running on the default port 1521:

SQL> connect arup/arup@10.14.104.253/D112D1
   Connected.


Note the output of the ifconfig command again. In addition to the IPv4 address, you can see the IPv6 address (shown against the "inet6 addr" header): fe80::219:21ff:febb:9aa5. You can use this address instead of the IPv4 address; you just have to enclose the IPv6 address inside square brackets.

SQL> connect arup/arup@[fe80::219:21ff:febb:9aa5]/D112D1
   Connected.


IPv6 support is not limited to SQL*Plus; you can use it in JDBC as well, as shown below:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)
   (HOST=[fe80::219:21ff:febb:9aa5]) (PORT=1526))
   (CONNECT_DATA=(SERVICE_NAME=D112D1)))


Of course, you can use IPv4 as well as IPv6. Just make sure to enclose an IPv6 address in square brackets.

Segment-less Objects (Release 2 Only)

Consider a situation where a third-party application, or even your own app, deploys several thousand tables. Each table gets at least one segment, and even if the tables are all empty, each segment occupies at least one extent. Many of these tables may never be filled, so it may not make sense to pre-allocate all that space now. This inflates the footprint of the overall database and increases the time to install the application. You could defer creating the tables, but that would still preclude you from deploying dependent objects such as procedures and views without errors.

In 11g Release 2, there is a quite elegant solution: in this release, segments are not created by default when a table is created, but rather when the first row is inserted. Let's see an example:

SQL> create table test (col1 number);
Table created.
SQL> select bytes from user_segments where segment_name = 'TEST';
no rows selected


There is no segment for the newly created table. Now insert a row into the table:

SQL> insert into test values (1);
1 row created.
SQL> select bytes from user_segments where segment_name = 'TEST';
     BYTES
   ----------
     65536


The segment is created with the initial extent. This is an irreversible process; the extent will be present even if the action is rolled back.

SQL> roll
Rollback complete.
SQL> /

     BYTES
----------
     65536


However, you may not want this default behavior; for example, you may want the segment to be created when the table is created. The parameter DEFERRED_SEGMENT_CREATION controls this behavior: to create segments at table creation time, set it to FALSE. You can even control it at the session level:

SQL> alter session set deferred_segment_creation = false;
Session altered.
SQL> create table test1 (col1 number);
Table created.
SQL> select bytes from user_segments where segment_name = 'TEST1';

     BYTES
----------
     65536


Once created, the segment stays. If you truncate the table, the segment is not dropped. Note also that the feature does not apply to LOB segments, which are created regardless, even when the table segment is not.

SQL> create table testlob (col1 clob);
Table created.
SQL> select segment_name from user_lobs where table_name = 'TESTLOB';

   SEGMENT_NAME
   ------------------------------
   SYS_LOB0000079350C00001$$


However, if you look at the table segment:

SQL> select bytes from user_segments where segment_name = 'TESTLOB';
no rows selected


You can see that it was not created.

When this feature first appeared, deferred segment creation worked only for nonpartitioned objects. That limitation has since been lifted, so you now get this functionality for partitioned objects as well.
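Note that you can also control segment creation per table, rather than per session or system, via the SEGMENT CREATION clause of CREATE TABLE; a minimal sketch (table names are illustrative):

SQL> create table test2 (col1 number) segment creation immediate;

Table created.

SQL> create table test3 (col1 number) segment creation deferred;

Table created.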

Having said that, there are still a number of limitations for various object types, but these are fortunately documented in the SQL Language Reference.

Unusable Indexes Do Not Consume Space (Release 2 Only)

Now that you have seen the power of deferred segment creation, you might be tempted to enforce that behavior universally. However, if you check the documentation, you will see that there is no way to declare the deferred segment creation attribute for indexes.

There is a perfect explanation for this: as secondary structures (you always need a table before you can create an index), indexes simply follow the table's attributes. If you create an empty table with deferred segment creation, its indexes have deferred segments as well. You insert the first record and, voila, you get a segment for both the table and the index.

Consider the following example where we create an index on a table with data (or where the deferred segment creation was disabled).

SQL> create index in_test on test (col1);
Index created.
SQL> select bytes from user_segments where segment_name = 'IN_TEST';
   
   BYTES
   ----------
   65536


The segment was created. Now suppose you make this index unusable. Prior to this release, its segment would continue to exist; but what would be the point of keeping that chunk of data in the database? You cannot access the data anymore, you cannot use it for a rebuild, and you cannot use it for anything other than occupying space. It's useless.

There is a feature in 11g Release 2 that proves invaluable here (and takes advantage of deferred segment creation under the covers). In this release, when you make the index unusable, the corresponding useless segment is dropped:

SQL> alter index in_test unusable;
Index altered.
SQL> select bytes from user_segments where segment_name = 'IN_TEST';
no rows selected


When you rebuild the index (ostensibly to start using it), the segment is created:

SQL> alter index in_test rebuild;
Index altered.
SQL> select bytes from user_segments where segment_name = 'IN_TEST';

     BYTES
----------
     65536

This feature is particularly useful with partitioning, where you may choose to selectively make index partitions unusable to save space. Let's look at an example: the SALES table from the SH schema.

SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'SALES';
PARTITION_NAME  HIGH_VALUE
------------------------------ --------------------------------------------------------------------------------
SALES_1995      TO_DATE(' 1996-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_1996      TO_DATE(' 1997-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_H1_1997   TO_DATE(' 1997-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_H2_1997   TO_DATE(' 1998-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_1998   TO_DATE(' 1998-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_1998   TO_DATE(' 1998-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_1998   TO_DATE(' 1998-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_1998   TO_DATE(' 1999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_1999   TO_DATE(' 1999-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_1999   TO_DATE(' 1999-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_1999   TO_DATE(' 1999-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_1999   TO_DATE(' 2000-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_2000   TO_DATE(' 2000-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_2000   TO_DATE(' 2000-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_2000   TO_DATE(' 2000-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_2000   TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_2001   TO_DATE(' 2001-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_2001   TO_DATE(' 2001-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_2001   TO_DATE(' 2001-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_2001   TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_2002   TO_DATE(' 2002-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_2002   TO_DATE(' 2002-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_2002   TO_DATE(' 2002-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_2002   TO_DATE(' 2003-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_2003   TO_DATE(' 2003-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_2003   TO_DATE(' 2003-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_2003   TO_DATE(' 2003-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_2003   TO_DATE(' 2004-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


Let’s get the indexes equi-partitioned with the table:

SQL> select index_name, partitioning_type
     2  from user_part_indexes
     3  where table_name = 'SALES';

INDEX_NAME                     PARTITIONING_TYPE
------------------------------ -----------------
SALES_PROD_BIX                 RANGE
SALES_CUST_BIX                 RANGE
SALES_TIME_BIX                 RANGE
SALES_CHANNEL_BIX              RANGE
SALES_PROMO_BIX                RANGE


Take a specific index, say, SALES_CUST_BIX, and check its partitions and how much space they occupy:

SQL> select partition_name, bytes
     2  from user_segments    
     3  where segment_name = 'SALES_CUST_BIX';

   PARTITION_NAME                  BYTES
   ------------------------------ ----------
   SALES_1995                          65536
   SALES_1996                          65536
   SALES_H1_1997                       65536
   SALES_H2_1997                       65536
   SALES_Q1_1998                      327680
   SALES_Q1_2000                      327680
   SALES_Q1_2001                      327680
   SALES_Q1_2002                       65536
   SALES_Q1_2003                       65536
   SALES_Q2_1998                      262144
   SALES_Q2_1999                      327680
   SALES_Q2_2000                      327680
   SALES_Q2_2001                      327680
   SALES_Q2_2002                       65536
   SALES_Q2_2003                       65536
   SALES_Q3_1998                      327680
   SALES_Q3_1999                      327680
   SALES_Q3_2000                      327680
   SALES_Q3_2001                      327680
   SALES_Q3_2002                       65536
   SALES_Q3_2003                       65536
   SALES_Q4_1998                      327680
   SALES_Q4_1999                      327680
   SALES_Q4_2000                      327680
   SALES_Q4_2001                      327680
   SALES_Q4_2002                       65536
   SALES_Q4_2003                       65536


This index has partitions going all the way back to 1995. In typical applications, very old data such as that from 1995 may be rarely accessed, so the index partition may be rarely, if ever, used. Regardless, it takes up a significant amount of space. If that index partition could somehow be dropped, the space could be reclaimed. However, you can't drop the table partition, because the data needs to remain available.

Again, in Release 2, the solution is quite simple: Make the index partition unusable (which makes the segment disappear) while leaving the table partition intact:

SQL> alter index SALES_CUST_BIX modify partition SALES_1995 unusable;
Index altered.
SQL> select partition_name, bytes
     2  from user_segments    
     3  where segment_name = 'SALES_CUST_BIX';
   
   PARTITION_NAME                  BYTES
   ------------------------------ ----------
   SALES_1996                          65536
   SALES_H1_1997                       65536
   … output truncated …


Note that there is no segment called SALES_1995 anymore; the segment was dropped when the index partition was made unusable. If you do this for many of the older partitions across several indexes, you may be able to reclaim a lot of space without losing any data, old or new.

You may be wondering what will happen if you make a partition unusable and someone queries the data in that partition. Will that fail? Let’s see with an example.

Here is the execution plan for a query that accesses a partition whose index partition is still usable:

set autot on explain
select count(1)
from sales
where cust_id = 611
and time_id between to_date('01-jan-1999','dd-mon-yyyy') and to_date('01-mar-1999','dd-mon-yyyy');

Execution Plan
----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                |     1 |    13 |     5   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE               |                |     1 |    13 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE      |                |    19 |   247 |     5   (0)| 00:00:01 |     9 |     9 |
|   3 |    BITMAP CONVERSION COUNT    |                |    19 |   247 |     5   (0)| 00:00:01 |       |       |
|   4 |     BITMAP AND                |                |       |       |            |          |       |       |
|*  5 |      BITMAP INDEX SINGLE VALUE| SALES_CUST_BIX |       |       |            |          |     9 |     9 |
|   6 |      BITMAP MERGE             |                |       |       |            |          |       |       |
|*  7 |       BITMAP INDEX RANGE SCAN | SALES_TIME_BIX |       |       |            |          |     9 |     9 |
----------------------------------------------------------------------------------------------------------------
   Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("CUST_ID"=611)
   7 - filter("CUST_ID"=611 AND "TIME_ID">=TO_DATE(' 1999-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "TIME_ID"<=TO_DATE(' 1999-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


Now issue the same query against the partition whose index partition was made unusable:

select count(1)
from sales
where cust_id = 611
and time_id between to_date('01-jan-1995','dd-mon-yyyy') and to_date('01-mar-1995','dd-mon-yyyy');

Execution Plan
----------------------------------------------------------
Plan hash value: 642363238
-------------------------------------------------------------------------------------------------
| Id  | Operation               | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |       |     1 |    13 |    36   (3)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE         |       |     1 |    13 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|       |    19 |   247 |    36   (3)| 00:00:01 |     1 |     1 |
|*  3 |    TABLE ACCESS FULL    | SALES |    19 |   247 |    36   (3)| 00:00:01 |     1 |     1 |
-------------------------------------------------------------------------------------------------
   Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("CUST_ID"=611 AND "TIME_ID">=TO_DATE(' 1995-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "TIME_ID"<=TO_DATE(' 1995-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


The query ran fine; the optimizer did not return an error. Since the index partition was not usable, it simply reverted to a full scan of the table partition.

This is a very valuable feature for databases that are legally required to hold records for a long period of time. It helps you keep storage to the bare minimum required, saving space and money.


Conclusion


As you can see, not only are previously laborious commands now radically simpler, but in some cases completely new avenues have opened up in how you conduct day-to-day operations.


I have seen many changes in Oracle Database functionality over the course of my career, and there are some landmark ones that redefine how business is done. The ones described here belong in that category.

