As Published In
Oracle Magazine
March/April 2013

TECHNOLOGY: Ask Tom

  

On Becoming Others, Limits, and Restoration

By Tom Kyte

 

Our technologist assumes identities, eliminates external limitations, and recovers constraints.

I got handed this requirement, and I don’t think there’s a good way to solve it. I have a user, USERA, who has some tables, data, and packages. Another user, USERB, has SELECT privileges on that data (easy enough) and the ability to make her own packages (easy enough), but the requirement is to give USERA the ability to edit USERB’s packages. I think this is impossible without giving an ANY privilege. I think it would be best if USERA just had USERB’s password. Is there any other solution?

Whatever you do, do not give out passwords like that. You’ll never know who did what in your database. That is just asking for trouble.

You can, however, allow A to assume B’s identity while still maintaining a record that A was “being B” at the time, so you can audit operations and know that A, acting as B, performed them. (And B can change her password without affecting A’s ability to assume her identity.)

I’ll demonstrate this capability by creating the two accounts:

SQL> create user a
  2  identified by a;
User created.

SQL> grant
  2  create session,
  3  create procedure
  4  to a;
Grant succeeded.

SQL> create user b
  2  identified by b
  3  default tablespace users
  4  quota unlimited on users;
User created.

SQL> grant
  2  create session,
  3  create table,
  4  create procedure
  5  to b;
Grant succeeded.


So now I have the two users, A and B, and they have just the privileges they require and nothing more. All that is left is to give A the ability to “become B,” and I do this by using the following ALTER USER statement:

SQL> alter user b
  2  grant connect through a;
User altered.

Now A is allowed to “become B” and edit B’s code. I’ll demonstrate that by creating a procedure as B and then allowing A to edit it:

SQL> connect b/b
Connected.

SQL> create
  2  procedure p
  3  as
  4  begin
  5      null;
  6  end;
  7  /
Procedure created.


Now I’ll log in as A and then, as A, assume B’s identity:

SQL> connect a/a
Connected.

SQL> show user
USER is "A"

SQL> connect a[b]/a
Connected.
SQL> show user
USER is "B"


Note how A was able to connect in SQL*Plus using A’s username and password and become user B. Now that A is logged in as B, A can use B’s privileges to perform operations:

SQL> create or replace
  2  procedure p
  3  as
  4  begin
  5      dbms_output.put_line( 'new stuff' );
  6  end;
  7  /
Procedure created.

SQL> create table t2 ( x int );
Table created.

SQL> grant execute on p to a;
Grant succeeded.


Note that A not only edited the stored procedure P and replaced it but also used B’s CREATE TABLE privilege! That might not be what was intended, and I’ll show in a moment how to prevent that. But first let’s just verify that the code changes worked as expected:

SQL> connect a/a
Connected.

SQL> exec b.p
new stuff
PL/SQL procedure successfully completed.


Sure enough, user A was able to update the procedure owned by B.
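This is also where the auditing benefit shows up: the session context reports both identities. A quick check while connected as a[b] (a sketch; the column aliases are mine):

```sql
select sys_context( 'userenv', 'session_user' ) sess_user,
       sys_context( 'userenv', 'proxy_user' )   prox_user
  from dual;
-- SESS_USER reports B (the schema being acted in), while
-- PROX_USER reports A (who really authenticated), so an
-- audit trail can record that A was acting as B
```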

Now I’ll deal with the CREATE TABLE issue. I like to follow the concept of “least privilege” at all times, so I want to make it so A can use B’s CREATE PROCEDURE and CREATE SESSION privileges but not the CREATE TABLE privilege. I can do this via roles.

I re-create user B as follows:

SQL> create role b_role1;
Role created.

SQL> create role b_role2;
Role created.

SQL> grant
  2  create procedure,
  3  create session
  4  to b_role1;
Grant succeeded.

SQL> grant
  2  create table
  3  to b_role2;
Grant succeeded.

SQL> create user b
  2  identified by b
  3  default tablespace users
  4  quota unlimited on users;
User created.

SQL> grant b_role1 to b;
Grant succeeded.

SQL> grant b_role2 to b;
Grant succeeded.


Note how I used roles to give B its privileges, separating the ability to create procedures (and sessions) from the ability to create tables. Now I can allow user A to become B with only certain roles enabled:

SQL> alter user b
  2  grant connect through a
  3  with role b_role1;
User altered.


Now when user A “becomes B,” only B_ROLE1 will be enabled, giving CREATE SESSION and CREATE PROCEDURE but not CREATE TABLE privileges to user A:

SQL> connect a[b]/a
Connected.

SQL> show user
USER is "B"

SQL> create or replace
  2  procedure p
  3  as
  4  begin
  5      dbms_output.put_line( 'new stuff' );
  6  end;
  7  /
Procedure created.

SQL> create table t2 ( x int );
create table t2 ( x int )
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> grant execute on p to a;
Grant succeeded.


As you can see, user A can connect as user B to create the procedure—but not the table—even though user B can create a table:

SQL> connect b/b
Connected.

SQL> create table t ( x int );
Table created.


This ability to connect through a user is called proxy authentication and can be very useful in other environments, such as middle-tier authentication in a three-tier implementation. For more details, see bit.ly/UaE6FR.

Unlimited External Tables

Can you use the SQLLDR BOUNDFILLER keyword in an external table definition? I have data like this:

"Jane Doe",2008,03,18,23,59,4
"Tom Thomas",2011,10,31,18,00,59
"Bill Williams",2012,02,31,01,10,5

The fields represent a name and a datetime. I tried creating an external table definition with a field list like this

( patient_name
,v_yyyy boundfiller char
,v_mm boundfiller char
,v_dd boundfiller char
,v_hh boundfiller char
,v_mi boundfiller char
,v_ss boundfiller char
,service_date "to_date(:v_yyyy || :v_mm || :v_dd || :v_hh || :v_mi || :v_ss , 'YYYYMMDDHH24MISS')"
)

but I receive the following error message, which indicates that the BOUNDFILLER keyword is not recognized:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "binary_double, binary_float, comma, char, date, defaultif, decimal, double, float, integer, (, nullif, oracle_date, oracle_number, position, raw, recnum, ), unsigned, varrawc, varchar, varraw, varcharc, zoned"
KUP-01008: the bad identifier was: boundfiller
KUP-01007: at line 13 column 21

If I cannot use BOUNDFILLER, please explain another way to concatenate multiple fields in the external table definition.

This is a trick question (of sorts). You are forgetting that with an external table, you don’t have the limits of the legacy data loading tool from the twentieth century called SQL*Loader (SQLLDR). With external tables, you have all the power of the SELECT statement to do whatever you want.

To demonstrate this, I’ll show you the easiest way to upgrade from SQLLDR to external tables—using SQLLDR itself. I’ll take your control file 

LOAD DATA
INFILE '/home/tkyte/t.dat'
INTO TABLE t
REPLACE
FIELDS TERMINATED BY ','
trailing nullcols
(
patient_name 
,v_yyyy boundfiller char 
,v_mm boundfiller char
,v_dd boundfiller char 
,v_hh boundfiller char
,v_mi boundfiller char 
,v_ss boundfiller char
,service_date "to_date(:v_yyyy 
|| :v_mm || :v_dd || :v_hh || :v_mi 
|| :v_ss ,'YYYYMMDDHH24MISS')"
)
 

and run it through SQLLDR, asking SQLLDR to convert it into an external table: 

$ sqlldr / t.ctl external_table=generate_only
 

Now, if I inspect the log file generated by this operation, I’ll find a CREATE TABLE statement as well as an INSERT AS SELECT statement that finishes the job, as shown in Listing 1.

Code Listing 1: Log information for external table creation

CREATE TABLE statement
CREATE TABLE "SYS_SQLLDR_X_EXT_T"
(
  "PATIENT_NAME" VARCHAR2(30),
  "V_YYYY" VARCHAR2(255), "V_MM" VARCHAR2(255),
  "V_DD" VARCHAR2(255),   "V_HH" VARCHAR2(255),
  "V_MI" VARCHAR2(255),   "V_SS" VARCHAR2(255),
  "SERVICE_DATE" VARCHAR2(4000)
)
ORGANIZATION external
( 
  TYPE oracle_loader
  DEFAULT DIRECTORY MY_DIR
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    BADFILE 'MY_DIR':'t.bad'
    LOGFILE 't.log_xt' READSIZE 1048576
    FIELDS TERMINATED BY "," LDRTRIM
    MISSING FIELD VALUES ARE NULL 
    REJECT ROWS WITH ALL NULL FIELDS
    ( "PATIENT_NAME" CHAR(255) TERMINATED BY ",",
      "V_YYYY" CHAR(255)       TERMINATED BY ",",
      "V_MM" CHAR(255)         TERMINATED BY ",",
      "V_DD" CHAR(255)         TERMINATED BY ",",
      "V_HH" CHAR(255)         TERMINATED BY ",",
      "V_MI" CHAR(255)         TERMINATED BY ",",
      "V_SS" CHAR(255)         TERMINATED BY ",",
      "SERVICE_DATE" CHAR(255) TERMINATED BY ","
    )
  ) location('t.dat')
)REJECT LIMIT UNLIMITED

INSERT AS SELECT statement
INSERT /*+ append */ INTO T
(
  PATIENT_NAME,
  SERVICE_DATE
)
SELECT
  "PATIENT_NAME",
  to_date("V_YYYY" || "V_MM" || "V_DD" || "V_HH" || "V_MI" || "V_SS" , 
'YYYYMMDDHH24MISS')
FROM "SYS_SQLLDR_X_EXT_T"
 

Because I can use any SQL I want for reading the data from the external table, I can apply any logic I want at that level—not in the external table definition.
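For example, once the generated external table exists, nothing stops you from querying it directly and doing the conversion in plain SQL (a sketch against the SYS_SQLLDR_X_EXT_T table from Listing 1):

```sql
select "PATIENT_NAME",
       to_date( "V_YYYY" || "V_MM" || "V_DD" ||
                "V_HH"   || "V_MI" || "V_SS",
                'YYYYMMDDHH24MISS' ) service_date
  from "SYS_SQLLDR_X_EXT_T";
-- any WHERE clause, join, or function you can write in SQL
-- can be applied here, with no loader syntax involved at all
```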

Restoring Index/Constraints of a Dropped Table

I have a table with a primary key constraint and an associated index. When using FLASHBACK, I can restore a table, but how do I restore the constraints and indexes associated with this table? I can re-create the indexes and constraints, but is it possible to restore indexes and/or constraints by using Oracle Flashback technology?

The indexes and constraints actually are restored by flashback, but you might not have recognized them: they retain their recycle bin names, which start with BIN$ followed by seemingly random characters. The objects come back with the table, but their original names are lost. All you need to do is rename them. For example

alter index "BIN$/yO2LoFDTmObx1GIQtwxOA==$0"
  rename to PK_SR_TEST1;
 

But the problem is that you might not know what their names were, or there might be a lot of things to rename. Many times you can use flashback query on the data dictionary to address such issues. To demonstrate this, I’ll create a table that I will accidentally drop in a moment (so I can then “undrop” it): 

SQL> create table t
  2  ( x int,
  3    constraint t_pk primary key(x),
  4    constraint check_x check(x>0)
  5  );
Table created.
 

Now I’ll need to know the time for some period when the constraints and indexes existed. I don’t need to know exactly when I dropped the table, just some time when the table existed with its indexes and constraints. I’m going to use a system change number (SCN) as the time, but you would probably use a time stamp (date and time). I’ll save the current SCN into a variable for use later in this demonstration: 

SQL> column SCN new_val S
SQL> select dbms_flashback.get_system_change_number SCN
  2    from dual;

       SCN
——————————————
 106788340
 

Now I’ll accidentally (on purpose) drop the table: 

SQL> drop table t;
Table dropped.
 

Restoring this table is trivial with FLASHBACK TABLE: 

SQL> flashback table t
  2  to before drop;
Flashback complete.
 

But as you can see, the index is now named BIN$xxxxx: 

SQL> column index_name new_val I
SQL> select index_name
  2    from user_indexes
  3   where table_name = 'T';

INDEX_NAME
—————————————————————————————————————————
BIN$zwMim2IndhbgQwEAAH/duw==$0
 

All I need to do now is perform a flashback query on that view, but I’ll need a couple of grants first: 

SQL> connect / as sysdba
Connected.

SQL> grant flashback on user_indexes to ops$tkyte;
Grant succeeded.

SQL> grant flashback on user_constraints to ops$tkyte;
Grant succeeded.
 

Once those grants are in place, I can perform a flashback query on the USER_INDEXES view: 

SQL> connect /
Connected.

SQL> column index_name new_val OI
SQL> select index_name
  2    from user_indexes as of scn &S
  3   where table_name = 'T';
old   2:   from user_indexes as of scn &S
new   2:   from user_indexes as of scn  106789036

INDEX_NAME
—————————————————————————————————————————
T_PK
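As an aside, if you would rather work with wall-clock time than an SCN, the same flashback query accepts AS OF TIMESTAMP (a sketch; adjust the interval to any moment when the table still existed with its original names):

```sql
select index_name
  from user_indexes
       as of timestamp systimestamp - interval '10' minute
 where table_name = 'T';
-- returns the pre-drop name, just as the AS OF SCN query does
```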
 

Now I have the current and old index names and can simply rename the old index: 

SQL> alter index "&I" rename to "&OI";
old   1: alter index "&I" rename to "&OI"
new   1: alter index "BIN$zwNBF4ITdtLgQwEAAH/Atw==$0" rename to "T_PK"

Index altered.
 

Using the same logic against the USER_CONSTRAINTS view, I can retrieve the old and the new constraint names as well and rename the old constraints.
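Note that constraints are renamed with ALTER TABLE, not ALTER INDEX. A sketch of the same pattern for the constraints (with more than one constraint on the table, you would match old rows to new rows, for example by CONSTRAINT_TYPE, rather than rely on a single substitution variable):

```sql
-- current (recycle bin) name
column constraint_name new_val C
select constraint_name
  from user_constraints
 where table_name = 'T'
   and constraint_name like 'BIN$%';

-- original name, via flashback query at SCN &S
column constraint_name new_val OC
select constraint_name
  from user_constraints as of scn &S
 where table_name = 'T';

alter table t rename constraint "&C" to "&OC";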

Orphaned Processes and Blocked Sessions

Next Steps


ASK Tom
Tom Kyte answers your most difficult technology questions. Highlights from that forum appear in this column.

FOLLOW Tom on Twitter

READ more Tom
 Oracle Database Concepts 11g Release 2 (11.2)
 Expert Oracle Database Architecture: Oracle Database 9i, 10g, and 11g Programming Techniques and Solutions, Second Edition

DOWNLOAD Oracle Database 11g Release 2

LEARN more about
 proxy authentication
 connection pool settings

FOLLOW Oracle Database
 on Twitter
 on Facebook
 

I have inherited an Oracle Database 10g instance, and I am getting a lot of reports that the database is running slowly. It supports a Web-based app, and I know that the application server connection pools are having problems, but looking at the database, I can see the number of inactive sessions get to about 600 and active sessions to 90, and of those sessions, approximately 45 are blocked.

I don’t have 600 users, so I expect that many of the inactive sessions are orphans. I am looking at setting up a profile to expire inactive sessions after a while. Do you have any suggestions on how I can diagnose the blocked sessions further?

Do not set up a profile to kill inactive sessions! The connection pool will have sessions in the database all the time, and many of them might be idle for a long time during periods of inactivity. If you kill them, you’ll kill the connection pool and the application will start returning error messages about not being connected anymore (and you’ll make the problem worse).

Now to the problem at hand. Your developers are leaking connections—probably connections and cursors. They have some code like this: 

try
{
  grab a connection
  prepare statement
  execute statement  
   <<<==== this locks some resource
  close statement
  prepare statement2
  execute statement2 
   <<<==== this fails 
  close statement2
  commit
  release connection back to pool
} 
catch (e exception)
{
 print out a silly message somewhere
}
 

I am 100 percent certain that they have code similar to this—I’ve seen this error pattern so many times. They have an error that gets raised every now and then—just once is enough—and they fly over the release of the connection back to the pool with their flawed exception handling. They have an outstanding transaction with locked resources associated with a connection handle they can never get access to again.

To easily prove this, have the developers set their connection pool to a minimum size of 1 and a maximum size of 1 in the test/QA environment and run a full regression. If that test cannot run forever without getting stuck (without running out of connections), this bug exists in their code. I am 100 percent certain that their code will get stuck quickly.

So, what I suggest is to get a handle on your connection pool settings. Always set MIN and MAX equal to each other, at some reasonable number (never let these pools grow dynamically). Watch the narrated video at bit.ly/11YQuhB for some very compelling evidence that this is extremely important (maybe the most important thing you can do for your system’s stability).

Run the previously mentioned connection test in the dev/QA environment with a connection pool sized to one connection until it works forever.

Until you get the number of connections in the connection pool under control, use Oracle Enterprise Manager to see the blocking/blocker graphs and kill sessions that are blocking other sessions and that have been idle for some period of time. That solution really stinks, but until the developers fix their very serious bug, it is “state of the art.”
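If you prefer SQL to the Oracle Enterprise Manager graphs, V$SESSION exposes the blocker directly in Oracle Database 10g and later (a sketch):

```sql
select sid, serial#, username,
       blocking_session, seconds_in_wait, event
  from v$session
 where blocking_session is not null;
-- each row is a blocked session; BLOCKING_SESSION is the SID
-- of the session holding the resource it is waiting for
```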

Don’t just kill idle sessions. They could be associated with a connection in a connection pool that just hasn’t been used in a while. Do use Oracle Database’s Resource Manager feature to limit the number of concurrently active sessions in your system as a protection mechanism. You have 600 connections—if they all become active (or try to become active), your machine will melt down, because you don’t have 600 CPU cores (and that’s a safe assumption). Limit the number of concurrently executing sessions to some reasonable number—watch that video for the reasoning behind this statement. You probably want no more than 10*cpu_count sessions active at any given time (and maybe fewer than 10*cpu_count, depending on what the sessions are doing).
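A minimal Resource Manager sketch that caps concurrently active sessions follows; the plan name and the limit of 32 are illustrative, and ACTIVE_SESS_POOL_P1 is the directive attribute that enforces the cap (sessions beyond the cap queue rather than execute):

```sql
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan(
    plan    => 'LIMIT_ACTIVE_PLAN',
    comment => 'cap concurrently active sessions' );
  dbms_resource_manager.create_plan_directive(
    plan                => 'LIMIT_ACTIVE_PLAN',
    group_or_subplan    => 'OTHER_GROUPS',
    comment             => 'at most 32 active sessions',
    active_sess_pool_p1 => 32 );
  dbms_resource_manager.submit_pending_area;
end;
/
-- activate the plan
alter system set resource_manager_plan = 'LIMIT_ACTIVE_PLAN';
```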




Tom Kyte is a database evangelist in Oracle’s Server Technologies division and has worked for Oracle since 1993. He is the author of Expert Oracle Database Architecture (Apress, 2005, 2010) and Effective Oracle by Design (Oracle Press, 2003), among other books.

 

 


