Oracle Database 11g: by Arup Nanda
Oracle Database 11g delivers rich new security functionality—from case-sensitive passwords, to Transparent Tablespace Encryption, to Access Control Lists for UTL_TCP/HTTP/SMTP.
Oracle Database 11g now offers a way to quickly identify users with default passwords, implemented in a remarkably simple way: checking a single data dictionary view, DBA_USERS_WITH_DEFPWD. (Note that DBA_ is a standard prefix; the view does not list only DBA users with default passwords.) You can identify these users by issuing:
select * from dba_users_with_defpwd
USERNAME
------------------------------
DIP
MDSYS
WK_TEST
CTXSYS
OLAPSYS
OUTLN
EXFSYS
SCOTT
MDDATA
ORDPLUGINS
ORDSYS
XDB
LBACSYS
SI_INFORMTN_SCHEMA
WMSYS
SQL> alter user scott identified by tiger1;

User altered.

SQL> select * from dba_users_with_defpwd;

SCOTT no longer appears in the output, because its password is no longer the default.
In Oracle Database prior to release 11g, user passwords were case insensitive. For example:
SQL> conn scott/tiger
Connected.
SQL> conn scott/TIGER
Connected.
In Oracle Database 11g, passwords can be case sensitive as well. While creating the database via DBCA, you are prompted whether you want to upgrade to the "new security standards," one of which is the case-sensitive password. If you accept, passwords are recorded in the same case in which they were created. Here is the resulting behavior, assuming you have accepted the new standard:
SQL> conn scott/tiger
Connected.
SQL> conn scott/TIGER
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
Some of your apps may not be passing the password in the proper case right now. A typical example is a user input form: many forms accept passwords with no case conversion. With Oracle Database 11g, that login may fail unless the user enters the password in the correct case or the developer changes the app to convert to upper or lower case (which may not be possible quickly).
If you wish, however, it is still possible to revert to case insensitivity by altering a system parameter, SEC_CASE_SENSITIVE_LOGON, as shown in the example below.
SQL> conn / as sysdba
Connected.
SQL> alter system set sec_case_sensitive_logon = false;

System altered.

SQL> conn scott/TIGER
Connected.
select username, password, password_versions from dba_users;

USERNAME                  PASSWORD                       PASSWORD_VERSIONS
------------------------- ------------------------------ -----------------
SYSTEM                                                   10G 11G
SYS                                                      10G 11G
MGMT_VIEW                                                10G 11G
The first thing you notice is that the PASSWORD column is NULL, not populated with the hashed value as it is in Oracle Database 10g and prior versions. So what happened to the password? It's still stored in the database (in the table USER$), but it is not visible in the DBA_USERS view. When a user is created as either globally or externally authenticated, the status is indicated—GLOBAL or EXTERNAL—but the hash value of the password is never displayed.
Next, note the column PASSWORD_VERSIONS, which is new in Oracle Database 11g. This column signifies the case sensitivity of the password. The value "10G 11G" signifies that the user was either created in 10g and migrated to 11g or created in 11g directly.
You can enforce case sensitivity of the SYSDBA password as well by specifying a new parameter, ignorecase, while creating the password file, as shown below:
$ orapwd file=orapwPRODB3 password=abc123 entries=10 ignorecase=n
In the above example the SYSDBA password will be abc123, not ABC123 or any other variation in case.
Enforcing case-sensitive passwords not only makes brute-force password cracking more difficult but also helps you meet many more compliance requirements. Even more important, you can toggle the requirement dynamically, without a database shutdown, which comes in handy during upgrades and while debugging login issues in legacy apps.
Remember the password verification function in Oracle Database? Many of you may not even be aware of its existence, let alone use it. The function is a quick and easy way to enforce quality of database passwords—for example, they should contain a certain number of characters, should not be identical to the username, and so on. Perhaps its best feature is that it is built in; all you have to do is turn it on. More likely than not, you didn't.
In Oracle Database 11g, the password verification function has new and improved checking logic. If you examine the script utlpwdmg.sql in $ORACLE_HOME/rdbms/admin, you will notice that it creates a new password function called verify_function_11G. At the end, the script has the following lines:
ALTER PROFILE DEFAULT LIMIT
   PASSWORD_LIFE_TIME 180
   PASSWORD_GRACE_TIME 7
   PASSWORD_REUSE_TIME UNLIMITED
   PASSWORD_REUSE_MAX UNLIMITED
   FAILED_LOGIN_ATTEMPTS 10
   PASSWORD_LOCK_TIME 1
   PASSWORD_VERIFY_FUNCTION verify_function_11G;
The script attaches the function to the profile DEFAULT, which is the default profile for all users, unless something else is explicitly assigned. This makes the authentication compliant with many regulations. All you have to do is run this script to create the 11g version of the password checking function, and the script will enable the password verification feature by attaching itself to the default profile.
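The DEFAULT profile covers all users, but you can also attach the verification function to a custom profile for a specific set of users. Here is a minimal sketch; the profile name app_users is hypothetical, and the limits shown are examples, not recommendations:

```sql
-- Create a profile that enforces the 11g verification function
-- with a stricter lockout than the default
create profile app_users limit
   password_verify_function verify_function_11G
   failed_login_attempts    5
   password_lock_time       1;

-- Assign the profile to a user; the password rules take effect
-- the next time the password is changed
alter user scott profile app_users;
```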
Auditing is another common pain point. Oracle Database includes powerful auditing features that can be used for tracking user activities. Most people, fearing an I/O contention issue, do not take advantage of them. But the truth is that some auditing can be safely turned on with little risk.
An example is auditing CREATE SESSION, which writes a record when a session starts and then updates that record when the session ends. This audit has minimal impact on I/O but provides powerful benefits.
In Oracle Database 11g, two simple changes have been made to provide an even more powerful auditing solution. First, the database parameter audit_trail is now set to DB by default, not NONE, as it was in previous versions. This allows you to turn on auditing on any object, statement, or privilege without recycling the database.
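Because audit_trail is already set to DB, enabling an audit on a specific statement is a single command, with no restart needed. A small sketch, assuming the SCOTT schema exists and you have the AUDIT SYSTEM privilege:

```sql
-- Audit every table creation by SCOTT, one record per occurrence
audit create table by scott by access;

-- Later, inspect the trail (stored in SYS.AUD$, exposed via DBA_AUDIT_TRAIL)
select username, action_name, owner, obj_name, timestamp
from   dba_audit_trail
where  username = 'SCOTT'
order  by timestamp desc;
```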
The second change is more statements have been placed under audit by default. Here is the list:
ALTER SYSTEM
SYSTEM AUDIT
CREATE SESSION
CREATE USER
ALTER USER
DROP USER
ROLE
CREATE ANY TABLE
ALTER ANY TABLE
DROP ANY TABLE
CREATE PUBLIC DATABASE LINK
GRANT ANY ROLE
ALTER DATABASE
CREATE ANY PROCEDURE
ALTER ANY PROCEDURE
DROP ANY PROCEDURE
ALTER PROFILE
DROP PROFILE
GRANT ANY PRIVILEGE
CREATE ANY LIBRARY
EXEMPT ACCESS POLICY
GRANT ANY OBJECT PRIVILEGE
CREATE ANY JOB
CREATE EXTERNAL JOB
As you can see, auditing these activities would not cause significant I/O issues, making it possible to maintain some acceptable level of auditing with minimal performance impact.
These two changes create some powerful auditing capabilities out of the box. Of course, they are just database parameters and audit settings; if you want, you can turn them off easily. But if you look at the list of statements, you may actually find them worth auditing, even in development databases. You may want to fine-tune them, however. (For example, in data warehouses users create and drop a lot of temporary tables, so auditing CREATE/DROP TABLE might flood the audit trail.)
Caution: When you upgrade to Oracle Database 11g, auditing is turned on by default for the above-mentioned statements. Audit trails are written to the table AUD$ in the SYSTEM tablespace, which may fill up quickly. Watch this space closely.
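One way to keep an eye on it is to check the space occupied by the audit tables periodically; a simple sketch:

```sql
-- How much space do the audit trail tables occupy?
select segment_name, tablespace_name, round(bytes/1024/1024, 1) as mb
from   dba_segments
where  segment_name in ('AUD$', 'FGA_LOG$');
```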
Encryption is getting more and more attention these days, thanks to myriad new laws and regulations. You need to encrypt data somehow but the big question is, how?
For those still on Oracle Database 10g Release 1 and earlier, the DBMS_CRYPTO and DBMS_OBFUSCATION_TOOLKIT packages let you build your own encryption framework. In Oracle Database 10g Release 2, this framework is built in via the great Transparent Data Encryption feature.
Transparent Data Encryption lets you encrypt specific columns, which is adequate for most requirements. However, performance can be an issue with this feature (or any other encryption solution, for that matter): index range scans cannot be applied to encrypted columns, which creates a major drag on performance.
This is where Transparent Tablespace Encryption in Oracle Database 11g really shines. When a tablespace is declared encrypted, any data in that tablespace (including transportable tablespaces, backups, and so on), not just individually declared tables, is encrypted. But index scanning occurs in memory, where the data is already decrypted, so there is no performance impact.
Excited yet? Let's see how it is done. The encryption procedure is identical to that of Transparent Data Encryption: You need to create a wallet where the master encryption key is stored. If you don't have Transparent Data Encryption set up already, you will need to create the wallet and the key.
First, create the file location for the wallet; the default location is $ORACLE_BASE/admin/<SID>/wallet. The wallet subdirectory does not exist by default; you need to create it. So, in my example, this directory will be /home/oracle/app/admin/PRODB3/wallet.
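If you want the wallet somewhere other than the default location, the database finds it through sqlnet.ora. A sketch of the entry, assuming the directory used in this example (this is a config fragment, not SQL):

```text
# In $ORACLE_HOME/network/admin/sqlnet.ora
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /home/oracle/app/admin/PRODB3/wallet)))
```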
Next, create the encryption key in the wallet by issuing the following statement:
alter system set encryption key identified by "abcd1234!";
If you check the wallet directory now, you will see that the wallet file has been created:

$ cd /home/oracle/app/admin/PRODB3/wallet
$ ls
ewallet.p12
The wallet can be opened only with a password, which in this case is abcd1234!. The statement also opens the wallet. You do not need to create the wallet again; after each database startup, all you have to do is open it by issuing:
alter system set wallet open identified by "abcd1234!"
For a more detailed discussion of wallets, read this Oracle Magazine article.
Now create the tablespace:
create tablespace secure1
datafile '/home/oracle/oradata/PRODB3/secure1_01.dbf'
size 1M
encryption using 'AES128'
default storage (encrypt)
/
The clause "encryption using ... default storage (encrypt)" marks the tablespace as encrypted. (Note: we have used the AES 128-bit encryption for this tablespace. Other choices are Triple DES 168-bit key, AES 192-bit key, and AES 256-bit key.)
Now that the tablespace is created, you can create tables as you would in a regular tablespace.
create table secure_trans
tablespace secure1
as
select * from trans
where rownum < 201
/
create table secure_res
tablespace secure1
as
select * from res
where rownum < 201
/
For comparison, assume you have also created a normal (unencrypted) tablespace, INSECURE1, holding copies of the same tables named INSECURE_TRANS and INSECURE_RES. Now update a text column in both sets of tables so that you can later search the datafiles for that value:

update secure_trans set comments = 'Transaction Comments';
update insecure_trans set comments = 'Transaction Comments';
commit;
Take the tablespaces offline and then online again so that the buffered data is flushed to the datafiles:

alter tablespace secure1 offline;
alter tablespace secure1 online;
alter tablespace insecure1 offline;
alter tablespace insecure1 online;
Now search the datafiles for the string. In the unencrypted datafile, the cleartext is plainly visible:

$ strings insecure1_01.dbf | grep Transaction
Transaction Comments
...

In the encrypted datafile, the search comes up empty:

$ strings secure1_01.dbf | grep Transaction
$
How is performance affected? To check, run the following query against the encrypted tables with SQL tracing enabled and examine the tkprof output:

select hotel_id, sum(amt)
from secure_trans t, secure_res r
where t.res_id = r.res_id
group by hotel_id;
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       14      0.01       0.01          4          6          0         186
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       16      0.01       0.01          4          6          0         186

Rows     Row Source Operation
-------  ---------------------------------------------------
    186  HASH GROUP BY (cr=6 pr=4 pw=4 time=5 us cost=8 size=10400 card=200)
    200   HASH JOIN (cr=6 pr=4 pw=4 time=45 us cost=7 size=10400 card=200)
    200    TABLE ACCESS FULL SECURE_TRANS (cr=3 pr=2 pw=2 time=8 us cost=3 size=5200 card=200)
    200    TABLE ACCESS FULL SECURE_RES (cr=3 pr=2 pw=2 time=9 us cost=3 size=5200 card=200)

Now, run the same test against INSECURE_TRANS and INSECURE_RES, which are on a normal (unencrypted) tablespace.
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 14 0.00 0.01 4 6 0 186
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 16 0.01 0.01 4 6 0 186
Rows Row Source Operation
186 HASH GROUP BY (cr=6 pr=4 pw=4 time=4 us cost=8 size=10400 card=200)
200 HASH JOIN (cr=6 pr=4 pw=4 time=46 us cost=7 size=10400 card=200)
200 TABLE ACCESS FULL INSECURE_TRANS (cr=3 pr=2 pw=2 time=8 us cost=3 size=5200 card=200)
200 TABLE ACCESS FULL INSECURE_RES (cr=3 pr=2 pw=2 time=9 us cost=3 size=5200 card=200)
Note the execution times in each case; they are similar. The CPU consumption due to decryption is not significant either. Thus there is no impact on performance due to the tablespace being encrypted.
The view DBA_TABLESPACES has a new column, ENCRYPTED, to show whether a tablespace is encrypted. In addition, a new view, V$ENCRYPTED_TABLESPACES, shows what type of encryption is enabled for each tablespace.
SQL> desc v$encrypted_tablespaces
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 TS#                                                NUMBER
 ENCRYPTIONALG                                      VARCHAR2(7)
 ENCRYPTEDTS                                        VARCHAR2(3)

SQL> select * from v$encrypted_tablespaces;

       TS# ENCRYPT ENC
---------- ------- ---
         5 AES128  YES
This view can be joined with the TS# column of the view V$TABLESPACE to get the complete picture. Here is how the view looks:
SQL> desc v$tablespace
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 TS#                                                NUMBER
 NAME                                               VARCHAR2(30)
 INCLUDED_IN_DATABASE_BACKUP                        VARCHAR2(3)
 BIGFILE                                            VARCHAR2(3)
 FLASHBACK_ON                                       VARCHAR2(3)
 ENCRYPT_IN_BACKUP                                  VARCHAR2(3)
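As noted above, the two views join on the TS# column; a quick sketch:

```sql
-- Which tablespaces are encrypted, and with what algorithm?
select t.name, e.encryptionalg, e.encryptedts
from   v$tablespace t
       join v$encrypted_tablespaces e on e.ts# = t.ts#;
```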
Note that the column ENCRYPT_IN_BACKUP has nothing to do with Transparent Tablespace Encryption. Rather, it relates to RMAN encryption of the tablespace during backup, introduced in Oracle Database 10g Release 2.
As you can see, Transparent Tablespace Encryption solves two problems rather elegantly: it encrypts data at rest on disk, and it does so without affecting performance, because decryption occurs inside the SGA.
Oracle Database 10g introduced one of the most powerful features for data movement: Data Pump, the successor to the original export/import tool. Apart from being just plain faster, Data Pump provided many benefits such as parallelizing the process and remapping the tablespace. In Oracle Database 11g, it also helps secure dumpfiles via a new parameter called ENCRYPTION.
A dumpfile sits outside the database, and therefore outside the realm of database security, yet it may contain sensitive data. In today's security-conscious environments, dumpfiles present a unique set of problems. In some really security-conscious environments, DBAs resort to encrypting dumpfiles with third-party utilities after exporting the data—not a very convenient approach if you do a lot of exporting.
First, let's see how a typical dumpfile might be vulnerable. Suppose you have a table called TRANS, which contains a column called COMMENTS. The value in the column is "Transaction Comments". If you export this table normally:
$ expdp scott/tiger tables=trans dumpfile=insec.dmp directory=tmp_dir

... and then search the dumpfile for the string:

$ strings /tmp/insec.dmp | grep Transaction
Transaction Comments
...

The cleartext value is plainly visible in the dumpfile.
Now perform the export with the new parameter, ENCRYPTION. You also have to specify what type of algorithm to use. We'll use the AES 128-bit algorithm.
$ expdp scott/tiger tables=trans dumpfile=sec.dmp directory=tmp_dir encryption=data_only encryption_algorithm=aes128
Export: Release 11.1.0.6.0 - Beta on Sunday, 22 July, 2007 18:17:30
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Beta
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=scott.insecure_trans dumpfile=sec.dmp directory=tmp_dir encryption=data_only encryption_algorithm=aes128
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."TRANS" 16.82 KB 200 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 18:17:44
Search this dumpfile for the text string:

$ cat /tmp/sec.dmp | grep Transaction
$
No matches come up, because the values in the dumpfile are encrypted, not stored in cleartext.
"But wait," you may ask, "Doesn't encryption require a key? Where did we pass the key to be used? Better yet, when decrypting, how will decryption occur without a key?"
The answer is very simple: the key comes from the same wallet used for Transparent Tablespace Encryption, described earlier. You do not need to use the Transparent Tablespace Encryption feature to use Data Pump encryption, but you do need to follow the same steps to create the wallet. Needless to say, the wallet must be open during both the encryption and decryption processes.
If you are familiar with the Data Pump tool, you might recall that a similar feature was implemented through the parameter ENCRYPTION_PASSWORD. So what's different now?
Good question. The 10g version encrypts only those columns that were placed under Transparent Data Encryption, not the whole dumpfile; if you do not use Transparent Data Encryption, you get no dumpfile encryption at all. The 11g version of the tool can encrypt a dumpfile regardless of whether Transparent Data Encryption is used. This gives you greater flexibility and makes practical sense: you may not want to encrypt data inside the database, for performance reasons among others, but you do want to encrypt the data when it leaves the realm of the database. In that case, the power of Data Pump encryption can't be beat.
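Importing an encrypted dumpfile needs no extra decryption parameters; it only requires the wallet to be open on the importing database. A sketch, reusing the names from the example above:

```sql
-- On the target database, open the wallet before running impdp
alter system set wallet open identified by "abcd1234!";
```

followed by the usual import at the command line:

$ impdp scott/tiger dumpfile=sec.dmp directory=tmp_dir tables=trans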
You are probably familiar with the packages UTL_TCP, UTL_HTTP and UTL_SMTP; they enable communication between servers outside the realm of the database. For instance, utl_tcp is used to establish TCP/IP communication between two hosts, not through database links. Similarly, utl_http is used to make an http request from a Web server and utl_smtp is used to make SMTP mail calls between hosts.
These powerful tools are used occasionally by developers—for instance, utl_smtp to send emails from within the database, utl_http to pull Web pages that can be massaged inside a PL/SQL program, and so on. However, these tools present huge security risks. Using utl_tcp, a database user can reach any other machine reachable from that host, without even going to the system prompt. This was the modus operandi of the Voyager worm that rattled the Oracle user community just a year ago.
To address this risk, many experts advise revoking the execute privilege from PUBLIC on these packages. In my Project Lockdown series, I recommended that approach as well. But what if developers need to execute these packages for legitimate reasons?
Oracle Database 11g has a novel solution: you can grant the execute privilege on these packages to anyone but control which resources they can reach. For instance, utl_tcp can be limited to calling only a few IP addresses, via a mechanism known as an Access Control List (ACL). If the host is in the ACL, the user can reach it with utl_tcp; a mere execute privilege on utl_tcp is not enough. Therefore it is impossible for a rogue process to hijack the utl_tcp package and make connections illegitimately.
Let's see how it works. First, create an ACL:
begin
   dbms_network_acl_admin.create_acl (
      acl         => 'utlpkg.xml',
      description => 'Normal Access',
      principal   => 'CONNECT',
      is_grant    => TRUE,
      privilege   => 'connect',
      start_date  => null,
      end_date    => null
   );
end;
/
Here the parameter principal => 'CONNECT' indicates that the ACL applies to the CONNECT role. You can define a user or a role here. The ACL is created as a file called utlpkg.xml.
After the creation, you can check to make sure the ACL was added:
SELECT any_path
FROM resource_view
WHERE any_path like '/sys/acls/%.xml';

The output should include the newly created ACL, /sys/acls/utlpkg.xml.
Next, add a privilege to the ACL for the user SCOTT:

begin
   dbms_network_acl_admin.add_privilege (
      acl        => 'utlpkg.xml',
      principal  => 'SCOTT',
      is_grant   => TRUE,
      privilege  => 'connect',
      start_date => null,
      end_date   => null);
end;
/
Then assign the ACL to a host and, optionally, a port range:

begin
   dbms_network_acl_admin.assign_acl (
      acl        => 'utlpkg.xml',
      host       => 'www.proligence.com',
      lower_port => 22,
      upper_port => 55);
end;
/
Now test it. Grant execute on utl_http to SCOTT and try to request a page, which goes over port 80—outside the ACL's port range of 22 through 55:

SQL> grant execute on utl_http to scott
  2  /

Grant succeeded.

SQL> conn scott/tiger
Connected.
SQL> select utl_http.request('http://www.proligence.com') from dual;
select utl_http.request('http://www.proligence.com') from dual
       *
ERROR at line 1:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1577
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at line 1
Now widen the port range to allow the communication:
  1  begin
  2     dbms_network_acl_admin.assign_acl (
  3        acl        => 'utlpkg.xml',
  4        host       => 'www.proligence.com',
  5        lower_port => 1,
  6        upper_port => 10000);
  7* end;
  8  /

PL/SQL procedure successfully completed.

SQL> conn scott/tiger
Connected.
SQL> select utl_http.request('http://www.proligence.com') from dual;

UTL_HTTP.REQUEST('HTTP://WWW.PROLIGENCE.COM')
--------------------------------------------------------------------------------
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD><TITLE>Proligence Home</TITLE>
<META http-equiv=Content-Language content=en-us>
...
But this works only for www.proligence.com, nothing else. If you call a different Web site, it fails with ORA-24247 again. This is security at the most granular level: if your business needs to connect to the host www.proligence.com, you can allow that yet prevent access to any other host, stopping a malicious user from using the facility to reach all other hosts.
To find out the details of the ACL, query the view DBA_NETWORK_ACLS:
select host, lower_port, upper_port, acl, aclid
from dba_network_acls
where acl = '/sys/acls/utlpkg.xml';

HOST
---------------------------------------
LOWER_PORT UPPER_PORT
---------- ----------
ACL
---------------------------------------
ACLID
--------------------------------
prolin3.proligence.com
        80        100
/sys/acls/utlpkg.xml
35D4278980DE6407E040A8C0680037D6

... and so on ...
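To see which principals have been granted which privileges, you can also query the companion view DBA_NETWORK_ACL_PRIVILEGES; a quick sketch:

```sql
-- Who can do what, per ACL
select acl, principal, privilege, is_grant
from   dba_network_acl_privileges
where  acl = '/sys/acls/utlpkg.xml';
```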
For my money, this is one of the best new security features of Oracle Database 11g.
In many organizations, staging or QA databases are refreshed from the production database from time to time, allowing developers to run pre-production code against the database before it goes to production. However, this procedure is a potential security problem: when data is refreshed from production to QA to keep them in sync, sensitive data elements may be exposed. To address that risk, many organizations alter sensitive data to some meaningless value in the non-production database, a practice known as data masking. For instance, you may want to replace the real Social Security Numbers in staging with random nine-digit numbers.
But that's the easy part; making the change could be tough—you have to write the SQL script, make sure the process is restartable, take care not to overwhelm the system, and so on. Well, why not let Oracle take care of that while importing the data? In Oracle Database 11g, Data Pump has a new parameter, remap_data, for changing data during the import process.
First create a remap function:
create or replace package pkg_mask
as
   function fn_mask_ssn (p_in varchar2) return varchar2;
end;
/
create or replace package body pkg_mask
as
   function fn_mask_ssn (p_in varchar2) return varchar2
   is
   begin
      return lpad (
         round (dbms_random.value (001000000, 999999999)),
         9, 0);
   end;
end;
/
Suppose the table ACCOUNTS contains this data:

SQL> select * from accounts;

    ACC_NO ACC_NAME                       ACC_SSN
---------- ------------------------------ ---------
         1 John Smith                     123456789
         2 Jane Doe                       234567890
$ expdp scott/tiger tables=scott.accounts dumpfile=accounts.dmp directory=tmp_dir remap_data=accounts.acc_ssn:pkg_mask.fn_mask_ssn
This parameter applies the value generated by the remap function fn_mask_ssn in the package pkg_mask. Note the format of the parameter. It follows the pattern:

remap_data=[<SchemaName>.]<TableName>.<ColumnName>:<PackageName>.<FunctionName>

The <ColumnName> is the name of the column whose value you want to mask. The actual remap logic is inside <PackageName>.<FunctionName>.
Now, you can import the table into the non-prod database. After the import is complete, if you check the values of the table:
SQL> select * from accounts;

    ACC_NO ACC_NAME                       ACC_SSN
---------- ------------------------------ ---------
         1 John Smith                     645270861
         2 Jane Doe                       538996590
Note how the ACC_SSN values differ. They were generated by the remap function pkg_mask.fn_mask_ssn during the export, so the dumpfile contains only the masked values, and the import simply loaded them.
If you have already exported the table without the remap_data parameter, the dumpfile contains the actual values. You can still mask them while importing, again using the same magic parameter:
$ impdp scott/tiger dumpfile=accounts.dmp remap_data=accounts.acc_ssn:pkg_mask.fn_mask_ssn directory=tmp_dir tables=accounts
Here we used a randomizer function, but we could have used any other logic. For instance, suppose the requirement is to replace all but the last four digits of the SSN with Xs. That's simple too; all you have to do is write a function like this:
create or replace package pkg_mask
as
   function fn_mask_ssn (p_in varchar2) return varchar2;
end;
/
create or replace package body pkg_mask
as
   function fn_mask_ssn (p_in varchar2) return varchar2
   is
   begin
      return 'XXXXX' || substr (p_in, 6, 4);
   end;
end;
/
The packaged function is also reusable; it can be used in any import and applied to any column. The possible variations in masking data are limitless—as long as you can write a function that returns a value, you can use it for masking. This cuts your development and maintenance effort and makes secure data refreshes from production to non-production possible.
In previous versions, most security operations were performed through a tool called Oracle Security Manager. In Oracle Database 11g, Oracle Enterprise Manager contains all the tools to perform these tasks. Here is a screenshot of the Server tab of the Database homepage; note the section titled Security at the lower right-hand side.
This section contains hyperlinks to all the security-related tools for tasks such as user, profile, and role management. There are also wizards to enable Virtual Private Database and Oracle Label Security, and to create and manage application contexts, all from the Enterprise Manager screens.
In 11g Release 1, when you installed the database software you were prompted with a choice: keep the 10g-style security or adopt the new one. If you chose the latter, the database was created with 11g-specific features such as case-sensitive passwords, a password policy, user profiles, and so on.
In Release 2, you no longer have that choice; the database is created with the 11g-style features automatically. That means the database is created with case-sensitive passwords, a built-in password verification function, and user profiles that enforce that function. This should be a welcome change, since it increases security in the database.
What if you need more time to migrate users and applications to the new case-sensitive behavior? You can temporarily disable the case-sensitivity check by issuing the following SQL:
SQL> alter system set sec_case_sensitive_logon = false; System altered.
Auditing provides important insight into database security by recording who did what. The trails are recorded based on various database parameter settings and the type of auditing. For instance, database standard auditing is written to a table called AUD$ and fine grained auditing (FGA) to the table FGA_LOG$, both in the SYS schema. If audit_trail is set to OS or XML, regular files are instead written to the specified directories.
Audit trails have one distinguishing feature not seen in any other type of data: they keep growing with activity. User data grows only when new records are inserted, and in many databases regular purging keeps its overall size pretty much the same. Audit trails, on the other hand, just keep growing, even when users simply select data.
Like anything that grows, the trails should be trimmed from time to time. Until now, the only supported way to trim them was to stop auditing and truncate the table. In 11g Release 2, there is a new package, dbms_audit_mgmt, with APIs to manage the records in the audit trails. It lets you purge the audit trail manually or set up a regularly scheduled purge job.
Before you start, you have to initialize the purge infrastructure. The following SQL initializes it with a default cleanup interval of 7*24 hours, i.e. one week. Note that this does not mean you must clean up once a week; you can also clean up on an ad hoc basis.
begin
   dbms_audit_mgmt.init_cleanup (
      audit_trail_type         => dbms_audit_mgmt.audit_trail_all,
      default_cleanup_interval => 7*24
   );
end;
/
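Before purging, you can verify that initialization succeeded; a small sketch using the package's is_cleanup_initialized function:

```sql
set serveroutput on
begin
   -- Returns TRUE once init_cleanup has been run for this trail type
   if dbms_audit_mgmt.is_cleanup_initialized (
         dbms_audit_mgmt.audit_trail_aud_std) then
      dbms_output.put_line ('Standard audit trail cleanup is initialized');
   else
      dbms_output.put_line ('Not initialized yet');
   end if;
end;
/
```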
Next, to perform the actual purge, execute the packaged procedure dbms_audit_mgmt.clean_audit_trail:
begin
   dbms_audit_mgmt.clean_audit_trail (
      audit_trail_type => dbms_audit_mgmt.audit_trail_all
   );
end;
/
Beware: this performs a DELETE operation, so it generates a lot of redo and undo. Once the initial purge is complete, you may want to set up an automated process to run it periodically by creating a purge job, as shown below. This job runs every week.
begin
   dbms_audit_mgmt.create_purge_job (
      audit_trail_type           => dbms_audit_mgmt.audit_trail_all,
      audit_trail_purge_interval => 7*24,
      audit_trail_purge_name     => 'all_audit_trails_job'
   );
end;
/
Note the parameter audit_trail_type, set here to dbms_audit_mgmt.audit_trail_all, which means all the audit trails: AUD$, FGA_LOG$, OS files, and XML files. You can also target each trail individually.
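For example, to purge only the fine grained auditing trail and leave everything else intact:

```sql
begin
   -- Clean only FGA_LOG$; the other trails are untouched
   dbms_audit_mgmt.clean_audit_trail (
      audit_trail_type => dbms_audit_mgmt.audit_trail_fga_std
   );
end;
/
```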
What if the trails are very large? Since the purge performs a DELETE, one huge delete operation is undesirable; you may want to break it into chunks. To commit after every 100,000 records, set a property for that type of trail. For the standard database audit trail:
begin
   dbms_audit_mgmt.set_audit_trail_property (
      audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
      audit_trail_property       => dbms_audit_mgmt.db_delete_batch_size,
      audit_trail_property_value => 100000);
end;
/
For the FGA audit trail:

begin
   dbms_audit_mgmt.set_audit_trail_property (
      audit_trail_type           => dbms_audit_mgmt.audit_trail_fga_std,
      audit_trail_property       => dbms_audit_mgmt.db_delete_batch_size,
      audit_trail_property_value => 100000);
end;
/
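If you archive the trail before purging, you can also restrict the purge to records older than a given point: record the archive point with set_last_archive_timestamp, then pass use_last_arch_timestamp to the purge. A sketch, keeping only the last 90 days of the standard trail (the 90-day window is an example, not a recommendation):

```sql
begin
   -- Mark everything older than 90 days as already archived ...
   dbms_audit_mgmt.set_last_archive_timestamp (
      audit_trail_type  => dbms_audit_mgmt.audit_trail_aud_std,
      last_archive_time => systimestamp - interval '90' day);

   -- ... and purge only records up to that timestamp
   dbms_audit_mgmt.clean_audit_trail (
      audit_trail_type        => dbms_audit_mgmt.audit_trail_aud_std,
      use_last_arch_timestamp => TRUE);
end;
/
```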
Note the change in the parameter audit_trail_type. It accepts the following values:

Value                 Purpose: to remove
--------------------  -----------------------------------------------
audit_trail_aud_std   The standard AUD$ audit trail in the database
audit_trail_fga_std   The FGA_LOG$ table, for fine grained auditing
audit_trail_db_std    Both the standard and FGA audit trails
audit_trail_os        The OS audit trail files
audit_trail_xml       The XML audit trail files
audit_trail_files     Both the OS and XML audit trails
audit_trail_all       All of the above
After setting these up, you will obviously need to manage them. To examine the jobs for audit trail purge, issue:
SQL> select * from dba_audit_mgmt_cleanup_jobs
  2  /

JOB_NAME
-------------------------------------
JOB_STAT AUDIT_TRAIL
-------- ----------------------------
JOB_FREQUENCY
-------------------------------------
ALL_AUDIT_TRAILS_JOB
ENABLED  ALL AUDIT TRAILS
FREQ=HOURLY;INTERVAL=168
This tells you which jobs are active. To see the various parameters of the audit trail purge processes—for example, the maximum file size or the audit tablespace—query the view DBA_AUDIT_MGMT_CONFIG_PARAMS:
SQL> select * from dba_audit_mgmt_config_params;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ --------------- ------------------------------
DB AUDIT TABLESPACE AUDIT_TS STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TS FGA AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 168 STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 168 FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 168 OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 168 XML AUDIT TRAIL
Purging the audit trail is an important and necessary task, and now there is a specific tool to accomplish that objective.
One key fact that applies to any version of Oracle Database is that the audit trail simply keeps on growing. Its growth depends on activity, not on the amount of data changed. The default location of the database audit trail, the AUD$ table, is the SYSTEM tablespace. If this tablespace fills up, there could be serious consequences. The same concern applies to FGA_LOG$, which is located in the SYSAUX tablespace. If you could move these two tables out of the SYSTEM and SYSAUX tablespaces into their own, the problem of an ever-expanding SYSTEM tablespace would be solved. But in earlier versions of the database, moving the audit trails to a different tablespace was not supported.
In 11g Release 2, you can move some or all audit trails to their own tablespace. First you will have to create the tablespace, of course:
create tablespace audit_ts
datafile '+AUDITDG' size 1000M
segment space management auto
/
You have to consider one possibility very carefully: this tablespace needs to be online when the database is open. If the tablespace is offline after the database comes up and you perform an auditable action, you will get an ORA-02002 error. The workaround, of course, is to turn off auditing for the database and then restart it. Obviously you should place the audit tablespace on a reliable, and perhaps mirrored, disk location.
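If you do run into ORA-02002 because the tablespace went offline, one simple way out, assuming the underlying storage is healthy again, is to bring the audit tablespace back online so that audit records can be written once more:

-- make the audit tablespace writable again so auditable
-- actions stop failing with ORA-02002
alter tablespace audit_ts online;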
Then, using a special package, you move the AUD$ table to the new tablespace:
begin
   dbms_audit_mgmt.set_audit_trail_location(
      audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
      audit_trail_location_value => 'AUDIT_TS'
   );
end;
/
To move the FGA_LOG$ table to this tablespace:
begin
   dbms_audit_mgmt.set_audit_trail_location(
      audit_trail_type           => dbms_audit_mgmt.audit_trail_fga_std,
      audit_trail_location_value => 'AUDIT_TS'
   );
end;
/
The above commands move the tables to the new tablespace, freeing up the SYSTEM and SYSAUX tablespaces. Doing this when you build the database for the first time is recommended.
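You can verify the result from the data dictionary; both AUD$ and FGA_LOG$ are owned by SYS:

-- confirm the audit trail tables now live in the new tablespace
select table_name, tablespace_name
from dba_tables
where owner = 'SYS'
and table_name in ('AUD$', 'FGA_LOG$');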
You generally use global application contexts when you want to share values across session boundaries, i.e., you can set a value in one session and then reference it in another. The same objective could be accomplished with global package variables, but those are not secure; they can be set by any user from anywhere. Application contexts allow values to be set only through their "trusted" procedure, making them highly secure.
However, in previous releases of Oracle Database, this "globality" was limited to a single instance. In a RAC database, you could see an application context value from another session, but only if that session was in the same instance.
In 11g Release 2, you can set a global application context in one instance and reference it later in another. This makes the global application context truly global: visible across the entire database regardless of the instance in which it was set.
This new feature makes secure application development very easy. Here is how you could use the feature across instances:
Create a context named, say, gctx.
SQL> create context gctx 2 using set_gctx 3 accessed globally;
Now create the trusted procedure of the context.
SQL> create or replace procedure set_gctx
  2  (
  3     p_attr1_val varchar2
  4  ) as
  5  begin
  6     dbms_session.set_context ('GCTX','ATTR1', p_attr1_val);
  7  end;
  8  /

Procedure created.
The role of the trusted procedure is important: calling it is the only way a context attribute can be set. To demonstrate, let's check the value of an attribute called ATTR1 on instance 1:
SQL> select sys_context('GCTX','ATTR1')
  2  from dual;

SYS_CONTEXT('GCTX','ATTR1')
---------------------------
It’s not set. Now set the value to “Attrib1_Value”.
SQL> exec set_gctx('Attrib1_Value') PL/SQL procedure successfully completed.
Now if you check the value, again on instance 1:
SQL> select sys_context('GCTX','ATTR1')
  2  from dual;

SYS_CONTEXT('GCTX','ATTR1')
---------------------------
Attrib1_Value
See how the value is set. Now, from the other instance – instance 2 – check the value:
SQL> select sys_context('GCTX','ATTR1')
  2  from dual;

SYS_CONTEXT('GCTX','ATTR1')
---------------------------
Attrib1_Value
The same value is returned. Now, let’s change the value of the attribute in this instance – instance 2 – and check if this is visible across all instances. First check the instance:
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
DB112H2
It’s instance 2, named DB112H2. Set the context value and check it on this instance:
SQL> exec set_gctx('Attrib1_NewValue')

PL/SQL procedure successfully completed.

SQL> select sys_context('GCTX','ATTR1')
  2  from dual;

SYS_CONTEXT('GCTX','ATTR1')
---------------------------
Attrib1_NewValue
Now connect to the other instance and check the value.
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
DB112H1

SQL> select sys_context('GCTX','ATTR1')
  2  from dual;

SYS_CONTEXT('GCTX','ATTR1')
---------------------------
Attrib1_NewValue
As you can see, the value of the attribute is visible not only in the instance where it was set but in the other instances as well. Now you have the ability to use the value in a secure manner across the entire database.
Remember setting the password for the listener? Prior to Oracle Database 10g, setting this password was a must for any security establishment worth its salt. In 10g, the password was no longer mandatory: it was not needed (and was ignored, if set) for the Oracle software owner, though it was still possible to set a password for any other user.
However, there is little or no need for any user other than the Oracle software owner to start the listener, so the need for the password has diminished. In this version of Oracle, the listener password is deprecated.
There are actually two major enhancements here. Let’s examine them separately.
First, there is a new privilege: execute on a directory object. I'm sure you know about external tables, which allow you to present flat files as tables to Oracle database users. What happens when you get a file that is compressed? Well, you can always uncompress it before selecting from the external table, but that is one extra step.
In 11g Release 2, that extra step is removed: you can specify a pre-processor when loading files or using external tables. The pre-processor could be as simple as an unzip program or some esoteric user-written program that converts the data on the fly. The question now is where to put this pre-processor utility, which is an operating system executable.
Actually, where you put it is less important than how you prevent its unauthorized use. If it is an unzip program, that may not be much of a concern, but what if your proprietary business logic is performed by this executable? You can't let the utility be accessed by anyone other than the users loading or selecting the data. You can protect it with a new privilege, execute, on the directory object where the pre-processor utility is located. You grant it to the applicable users by issuing this SQL:
grant execute, read on directory utility_dir to scott;
Now SCOTT can execute programs in the utility_dir directory. No one else without the privilege can.
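To tie the two features together, here is a sketch of an external table that runs a pre-processor from that directory before loading. The directory names, the table layout, and the unzip.sh script are hypothetical; the pre-processor is named in the PREPROCESSOR clause of the access parameters:

create table sales_ext
(
   sale_id  number,
   amount   number
)
organization external
(
   type oracle_loader
   default directory data_dir
   access parameters
   (
      records delimited by newline
      -- uncompress the file on the fly before parsing it
      preprocessor utility_dir:'unzip.sh'
      fields terminated by ','
   )
   location ('sales.csv.gz')
)
reject limit unlimited;

A user selecting from this table needs read on data_dir and execute on utility_dir; without the execute privilege, the pre-processor cannot be invoked.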
Now let's see the second feature, which is closely related. Since this directory holds sensitive programs, you may need to determine who is executing them. A new auditing option allows you to capture a record of each execution of a program in the directory. Here is how you audit those executions:
audit execute on directory utility_dir by session;
After this command, executions of the utilities in this directory are audited, allowing you to track who is using them. Not only does this establish accountability, it also provides a basis for determining whether the grantees should have that privilege at all.
Auditing allows you to see who is doing what. You have the option of auditing specific statements such as INSERT, DELETE, and so on. But what if you want to audit all statements issued by a user, regardless of type?
You do have an option, the AUDIT ALL command, which records all actions taken by a user. The problem is that it records all actions, whether or not the user issued the statements directly, so you may end up with a large number of audit trail entries that are quite difficult to manage. In 11g Release 2, the AUDIT ALL STATEMENTS command allows you to do exactly what you want: audit the statements directly issued by the user.
SQL> audit all statements; Audit succeeded.
Very important: keep in mind that this records only top-level SQL statements, the ones issued directly by the user, not those contained within procedural code. For instance, if you issue:
delete table1; delete table2;
both statements will be audited. But if you have a PL/SQL stored procedure:
create or replace procedure cleanup as
begin
   delete table1;
   delete table2;
end;
/
and execute it with "begin cleanup; end;", the only statement that gets audited is the execution of the stored procedure, not the DELETE statements within it. If this is what you want, good; otherwise you may end up without the audit trails you expected. This new option allows you to selectively audit the activities of users and applications.
Have you ever faced a scenario like the following?
You have a stored procedure that calls a host of other procedures, which in turn call other procedures and functions and access tables and views. Somewhere in that complex labyrinth, a statement is failing, perhaps due to a missing privilege or a misspelled table name. For instance, somewhere a statement similar to "select * from incorrect_table" is being issued, and it fails with ORA-00942 because there is no table named "incorrect_table".
SQL> select * from incorrect_table; select * from incorrect_table * ERROR at line 1: ORA-00942: table or view does not exist
The problem is that in complex application code you may not be able to find exactly which SQL statement is causing the problem. All you may have is the error code 942, which makes finding the cause of the error extremely difficult.
How would you find the actual statement? One option is to trace the session with event 10046 and examine the resulting trace file: possible, but definitely not simple, especially if the trace file is very large. You could turn on auditing of all activities with an AUDIT ALL statement, but that generates massive amounts of audit data, which becomes unmanageable and impacts system performance. In 11g Release 2, a new audit option, IN SESSION CURRENT, allows you to limit the scope of auditing to the current session only. You enable auditing of all activities in the current session by issuing:
SQL> audit all in session current;
Now it will audit all activities in the current session alone, regardless of the overall auditing settings. To make the process easier, before you run the application code, get the audit session ID of the session (note: the audit session ID, which is different from the SID):
SQL> select audsid from v$session 2 where sid = (select sid from v$mystat where rownum < 2);
AUDSID ---------- 183324
After that you execute your application in that session. This generates audit trails on all activities occurring in that session. Later you can examine the audit trails to identify which specific statement failed with an Oracle error:
SQL> select timestamp, owner, obj_name, action_name, returncode, sql_text
  2  from dba_audit_trail
  3  where sessionid = 183324
  4  and returncode > 0
  5  /

TIMESTAMP OWNER  OBJ_NAME         ACTION_NAME RETURNCODE SQL_TEXT
--------- ------ ---------------- ----------- ---------- -----------------------------
03-OCT-09 ARUP   INCORRECT_TABLE  SELECT             942 select * from incorrect_table
The entry clearly shows the statement that caused the issue and the exact error, 942 (the RETURNCODE column). The object name INCORRECT_TABLE is invalid, hence the ORA-00942 error. You can then search for this statement in the application code to find where it comes from.
The AUDIT ALL IN SESSION CURRENT statement can be issued from a post-logon trigger, so that auditing begins as soon as a session connects.
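As a sketch of such a trigger (the schema name APPUSER is hypothetical, and the trigger owner is assumed to hold the AUDIT SYSTEM privilege):

create or replace trigger trg_audit_session
after logon on database
begin
   -- audit everything, but only for sessions of the application user
   if sys_context('USERENV', 'SESSION_USER') = 'APPUSER' then
      execute immediate 'audit all in session current';
   end if;
end;
/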
You don't actually need to get the audit session ID before running the code; while the application code is running, you can always get it from the AUDSID column of the V$SESSION view:
select audsid from v$session where sid = <SID>
The above example is very simple. You may find the exact statement in different ways, as long as it is available within the audit trail.
In previous versions of the Transparent Data Encryption (TDE) feature, the master keys used for column-level TDE (introduced in 10g Release 2) and tablespace-level TDE (introduced in 11g Release 1) were different. In 11g Release 2, a single unified master encryption key serves both.
Two specific advantages jump out:
If the encryption key is changed, it is changed for both types of TDE. Many security regulations and mandates, such as PCI, require that encryption keys be changed from time to time; the change now needs to be made only once for both types.
The master encryption key can potentially be stored in a hardware security module, which applies to both types of TDE.
In tablespace-level TDE, the encryption key is stored (encrypted, of course) in the header of the tablespace's datafiles. Some security regulations require that the TDE keys be rotated as well. Even though the tablespace-level keys themselves can't be changed, the master key used to encrypt those keys can be changed in this version.
To change the tablespace-level TDE master key, issue the following statement:
alter system set encryption key identified by "newpass";
Speaking of encryption, tablespace-level TDE is available on Oracle Exadata as well. Since Exadata works by pre-filtering results at the storage cells (so that less data has to be processed by the database nodes), getting the key to the storage cells posed a technical challenge. Now that is not a problem: the key is automatically copied to the storage cells, so when a cell performs data filtering it has access to the key and can decrypt the data prior to filtering.
Continuing on the same theme, there is strong support for Oracle RAC as well. The master encryption key is stored in a location called the encryption wallet (or "wallet" for short). The wallet must be opened before encryption and decryption can take place. To open the wallet, you have to issue:
alter system set encryption wallet open identified by "pooh";
The password is "pooh" in this case. Since this is an ALTER SYSTEM command, in prior versions of Oracle it applied to that specific instance only, so in a RAC database you had to issue it on every instance. Apart from consuming time, this was prone to omissions by a forgetful DBA or security administrator.
In this release, the operation applies to all RAC instances automatically; there is no need to repeat it on each instance individually. This behavior applies to wallet closing and rekeying operations as well.
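For instance, closing the wallet, which in this release likewise takes effect across all the RAC instances, would look like this (using the same wallet password as above):

-- close the wallet on all instances; encryption and decryption
-- of TDE data stop until it is opened again
alter system set encryption wallet close identified by "pooh";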
When you audit access to an object, there are two options: by session and by access. If a user accesses the object twice, "by session" records one entry while "by access" records two. For instance, to audit SELECT on the table CREDIT_CARDS by session, you would issue this SQL:
audit select on credit_cards by session;
In pre-11g Release 2 versions, there would have been only one record for the session, as shown below:
USERNAME   ACTION_NAME   TIMESTAMP SES_ACTIONS
---------- ------------- --------- -------------------
ARUP       SESSION REC   11-JUN-10 ---------S------
The column SES_ACTIONS recorded what was performed. The letter "S" stands for "success," and the position of the letter indicates the operation; the position shown above is for the SELECT statement. It shows that the user selected from the object successfully some unknown number of times. The number of accesses and the table name are not known from this audit trail.
If the user performed some other operation, such as DELETE or UPDATE, the corresponding flags in the SES_ACTIONS column value would have been set. The ACTION_NAME shows "SESSION REC," indicating that the record is a consolidated summary of the actions.
You can clearly see the limitation of this auditing: you never know how many times the access occurred, just that there was at least one. To record each access, you have to audit "by access":
SQL> audit select on ccmaster.credit_cards by access;
After this, the audit records show once per access:
USERNAME   ACTION_NAME   TIMESTAMP SES_ACTIONS
---------- ------------- --------- -------------------
ARUP       SELECT        11-JUN-10
Note that the ACTION_NAME column now shows the command that was given. If the command is issued 10 times in a session, there will be 10 rows in the trail, not one. While this produces more useful information, it also fills the audit trail faster; "by session" conserved space by producing only one record per session.
In 11g Release 2, this behavior has changed. Now there will be one record per access even with "by session" auditing. In each record the SES_ACTIONS column is populated with the "---------S------" value and the ACTION_NAME column still shows SESSION REC, but there will be many entries, one per access.
What does this mean for you? Well, the whole point of per-session auditing was to limit the audit trail to one record per session. As you can see, in 11g Release 2 that is no longer possible; audit records are generated once per access. Therefore there is no real advantage to using session-level auditing for objects anymore. You should move to per-access auditing, which yields better results anyway.
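Making the switch for the CREDIT_CARDS example above is a matter of removing the old setting and re-auditing by access, as sketched here:

-- drop the session-level audit setting on the table
noaudit select on ccmaster.credit_cards;

-- re-enable auditing, one record per access
audit select on ccmaster.credit_cards by access;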
This is the age of IPv6-style IP addressing, and Oracle has caught up: in 11g Release 2 you can use IPv6-style addresses in addition to the legacy IPv4 style. This applies to the packages where IP addresses are referenced, such as UTL_TCP, UTL_HTTP, and UTL_SMTP.
Let's see an example. You learned above how to protect specific packages by restricting access to a specified group of users. To restrict access to a specific host, you had to use:
begin
   dbms_network_acl_admin.assign_acl (
      acl        => 'utlpkg.xml',
      host       => '10.14.104.253',
      lower_port => 22,
      upper_port => 55
   );
end;
/
Now, in addition to the IP address in that format, you can use the IPv6 address as well. Here is the new format:
begin
   dbms_network_acl_admin.assign_acl (
      acl        => 'utlpkg.xml',
      host       => 'fe80::219:21ff:febb:9aa5',
      lower_port => 22,
      upper_port => 55
   );
end;
/
Note the IPv6-style address in the host parameter. To find the IPv6 address of a host, simply run the ifconfig command:
oracle@oradba1 /# /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:19:21:BB:9A:A5
          inet addr:10.14.104.253  Bcast:10.14.107.255  Mask:255.255.252.0
          inet6 addr: fe80::219:21ff:febb:9aa5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:82397559 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25308057 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4122552682 (3.8 GiB)  TX bytes:1310294427 (1.2 GiB)
          Interrupt:209 Base address:0xb800
The inet6 addr value shows the IPv6 address.
Also: you may recall that to capture extended activity data, such as the SQL statement executed, you used db_extended as the value of the audit_trail initialization parameter. In this release that value is deprecated. But wait; the functionality is not removed. To use extended auditing, you just have to use a new format:
audit_trail = db,extended
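Since audit_trail is a static parameter, the new value has to be recorded in the spfile and takes effect at the next restart. A sketch of the change:

-- record the new value; it becomes active after the
-- instance is shut down and started again
alter system set audit_trail = db,extended scope = spfile;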