By Tom Kyte
Our technologist talks integrity, time, and belonging.
After going through the Oracle9i Database Concepts Manual and your site, I still do not understand the following concept:
"A constraint that is defined as deferrable can be specified as one of the following:
1. initially immediate or
2. initially deferred."
I think I know what deferred constraints are, but not "deferrable initially immediate" and "deferrable initially deferred" constraints. Please explain the difference. Also, what is the use of these constraints?
This is a common area of confusion. I hope the following examples clear it up. The initially immediate/deferred part tells how the constraint should be enforced by default:
Initially immediate—check the constraint at the end of statement execution
Initially deferred—wait to check the constraint until the transaction ends (or you invoke set constraint immediate)
Consider the following:
SQL> create table t
  2  ( x int constraint check_x
           check ( x > 0 )
           deferrable initially immediate,
  3    y int constraint check_y
           check ( y > 0 )
           deferrable initially deferred
  4  )
  5  /
Table created.

SQL> insert into t values ( 1,1 );
1 row created.

SQL> commit;
Commit complete.
So, when both constraints are satisfied, the rows are inserted without error. However, if I attempt to insert a row that violates the CHECK_X constraint, an initially immediate constraint, the constraint gets verified immediately and I observe:
SQL> insert into t values ( -1,1);
insert into t values ( -1,1)
*
ERROR at line 1:
ORA-02290: check constraint
(OPS$TKYTE.CHECK_X) violated
Because CHECK_X is deferrable but initially immediate, that row is rejected straight away. CHECK_Y, however, behaves differently. Not only is it deferrable, it is also initially deferred, meaning the constraint will not be verified until I COMMIT or set the constraint state to immediate.
SQL> insert into t values ( 1,-1);
1 row created.
See how that succeeds (so far, anyway). I have deferred the constraint checking until COMMIT time:
SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint
(OPS$TKYTE.CHECK_Y) violated
My transaction has been rolled back by the database at this point, because the COMMIT failed due to the constraint violation. That series of statements demonstrates the difference between initially immediate and initially deferred constraints: the INITIALLY component dictates when Oracle validates the constraint by default, either at the end of the statement (immediate) or at the end of the transaction (deferred). Next, I'll explore what the DEFERRABLE clause allows me to do. I'll issue the command that defers all constraints that can be deferred. Note that you can also issue this command for a single constraint; you don't have to defer every deferrable constraint at once:
SQL> set constraints all deferred;
Constraint set.

SQL> insert into t values ( -1,1);
1 row created.
With the initially immediate constraint set to deferred mode, that statement appears to succeed; however, see what happens when I COMMIT my transaction:
SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint
(OPS$TKYTE.CHECK_X) violated
The transaction fails and is rolled back, because the check constraint is verified upon COMMIT. Conversely, I can make the initially deferred constraint act as an "immediate" constraint would:
SQL> set constraints all immediate;
Constraint set.

SQL> insert into t values ( 1,-1);
insert into t values ( 1,-1)
*
ERROR at line 1:
ORA-02290: check constraint
(OPS$TKYTE.CHECK_Y) violated
Now the statement that previously succeeded until COMMIT fails immediately; I have manually changed the default constraint mode.
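The SET CONSTRAINT command also accepts a list of constraint names instead of ALL, so you can change the mode of one constraint at a time. A minimal sketch, using the constraint names from the example table:

```sql
-- switch only CHECK_Y to immediate mode for this transaction
set constraint check_y immediate;

-- and back to deferred mode
set constraint check_y deferred;
```

Either form lasts only until the end of the current transaction; at COMMIT or ROLLBACK, every constraint reverts to its INITIALLY setting.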
What are some real-world uses of deferrable constraints? There are many. The primary need for them is with materialized views (a.k.a. snapshots). These views can use deferrable constraints in order to perform view refreshes. During the refresh of a materialized view, integrity may be violated and the constraints will not be verifiable statement by statement. But by the time the COMMIT comes, the data integrity will be OK and the constraints will be satisfied. Without deferrable constraints, constraints on materialized views could prevent the refresh process from succeeding.
Another popular reason to use deferrable constraints is to facilitate an update cascade when you anticipate the need to update a primary key in a parent/child relationship. If you make the foreign keys deferrable but initially immediate, then you can
Set all the constraints as deferred.
Update the parent key to a new value—the child integrity constraints will not be verified as yet.
Update the child's foreign key to this new value.
COMMIT —this will succeed as long as all child records affected by the update point to an existing parent record.
Without deferrable constraints, this update process is exceedingly hard. See asktom.oracle.com for an example of the code necessary to perform an update cascade without this feature!
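The four steps above can be sketched as follows. The PARENT and CHILD table and column names are hypothetical, and the foreign key is assumed to have been created DEFERRABLE INITIALLY IMMEDIATE:

```sql
-- assumed schema: child.parent_id references parent(id),
-- with the foreign key declared deferrable initially immediate
set constraints all deferred;

update parent
   set id = 100
 where id = 1;         -- child rows momentarily point at a nonexistent parent

update child
   set parent_id = 100
 where parent_id = 1;  -- integrity restored

commit;                -- foreign key verified here; succeeds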
Additionally, you can use deferrable constraints in a variety of multistatement transactions that need to temporarily violate integrity during the course of the transaction but end up with everything as it should be.
How do you calculate the time that is displayed in the AGE column of the first page of asktom.oracle.com? I ask this because I see it in many formats, such as 3hrs, 9 months old; 2.3 years old; 19 hrs old; etc. I'm a newcomer to Oracle and was wondering what kind of date calculations you might be using.
I'm just using the good old CASE statement introduced in Oracle8i Release 2 (version 8.1.6):
select case
         when sysdate-timestamp < 1/24
         then round(24*60*(sysdate-timestamp)) || ' minutes old '
         when sysdate-timestamp < 1
         then round(24*(sysdate-timestamp)) || ' hours old '
         when sysdate-timestamp < 14
         then trunc(sysdate-timestamp) || ' days old '
         when sysdate-timestamp < 60
         then trunc((sysdate-timestamp)/7) || ' weeks old '
         when sysdate-timestamp < 365
         then round(months_between(sysdate,timestamp)) || ' months old '
         else round(months_between(sysdate,timestamp)/12,1) || ' years old '
       end age,
       ...
Now, if you attempt to use CASE in PL/SQL in Oracle8i, you'll get an error message, because the PL/SQL parser will not recognize the CASE statement. (Note that in Oracle9i there are no such issues.) In order to work around the Oracle8i limitation, you can
Hide the CASE statement in a view and query the view in PL/SQL.
Use a nested DECODE instead of CASE .
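The view approach can be sketched as follows; the view name T_WITH_AGE is an assumption, and the table T with its TIMESTAMP column are carried over from the query above:

```sql
-- the view hides the CASE expression from the Oracle8i PL/SQL parser;
-- views are parsed by the SQL engine, which accepts CASE
create or replace view t_with_age
as
select t.*,
       case
         when sysdate-timestamp < 1/24
         then round(24*60*(sysdate-timestamp)) || ' minutes old '
         when sysdate-timestamp < 1
         then round(24*(sysdate-timestamp)) || ' hours old '
         -- remaining branches exactly as in the CASE query above
         else round(months_between(sysdate,timestamp)/12,1) || ' years old '
       end age
  from t;
```

PL/SQL code can then simply select the AGE column from T_WITH_AGE without ever parsing the CASE expression itself.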
I personally would use the view, but a reader, Martin Burbridge, posted the following DECODE on the asktom.oracle.com Web site:
decode(sign(sysdate-timestamp-1/24), -1,
       round(24*60*(sysdate-timestamp)) || ' minutes old ',
 decode(sign(sysdate-timestamp-1), -1,
        round(24*(sysdate-timestamp)) || ' hours old ',
  decode(sign(sysdate-timestamp-14), -1,
         trunc(sysdate-timestamp) || ' days old ',
   decode(sign(sysdate-timestamp-60), -1,
          trunc((sysdate-timestamp)/7) || ' weeks old ',
    decode(sign(sysdate-timestamp-365), -1,
           round(months_between(sysdate,timestamp)) || ' months old ',
           round(months_between(sysdate,timestamp)/12,1) || ' years old '
    ))))) age
It does exactly what the CASE statement does—just not as obviously.
I am considering design options for an application in which a user will upload and store documents that will also be downloadable by others. The documents would be Microsoft Word documents with an average size of 150K. There would be an initial migration of 18,000 to 20,000 documents (from CDs), and the store size would increase to about 25,000 documents when the application is used. The browser front end will be PL/SQL cartridge pages for upload and download. Initially, the application will be accessed by 400 to 500 people (nearly) simultaneously, and within two weeks it will be accessed by 300 people a day (with scattered access). The documents themselves are to be stored in the database as BLOBs.
Do you think that is a good option, considering the usage? Will it hog the system global area (SGA)? Considering the application requirements and the front end (browser-based), are there options other than storing in BLOBs, such as the file system?
I store everything in the database. Period. If the data means anything to you, has any value whatsoever, you will, in fact, put it into a database where it is professionally managed, backed up, recoverable, and secure. In addition to those very tangible benefits, you'll also gain the capability to index and search your documents. (True, this can be done against the file system as well, but then there is no integrity between the index and the document itself.) In the database, you gain the ability to convert the document format (for example, upload a DOC file and present it as HTML). Your data is totally integrated, secure, backed up, and always there for you.
Internally at Oracle, we have a single multiterabyte database used as a single file server for the entire company. All of the documents for the company are in there—backed up, searchable, indexed, and totally accessible—in a single place. Managing the thousands upon thousands of documents we have in a conventional file system would be impossible, if the file system could even hold them all.
As for the SGA issues, that is up to you entirely. BLOBs can be NOCACHE if you don't want them in the buffer cache, so you don't need to worry about "hogging" the SGA in that fashion.
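A minimal sketch of such a table; the DOCS table and its column names are hypothetical:

```sql
create table docs
( id       number primary key,
  name     varchar2(255),
  document blob
)
lob (document) store as
( nocache );   -- LOB reads and writes bypass the buffer cache
```

With NOCACHE, document I/O goes directly to and from disk, so even a large document store will not push your regular working set out of the SGA.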
Can you tell me where the best place to put cursors is when coding in PL/SQL? Should we put them in the package specification or body? The reason I'm asking is that I work with a developer who insists on putting all cursors in the package spec. He tells me that this is the right way to do it. Surely we should put cursors in the package spec only if they are used more than once in the package. All the cursors in this package are used only once, so I believe that these should be put in the declaration section of the procedure/function they are called from. Am I right? What are the advantages and disadvantages of putting all cursors in the package spec? Does cursor placement affect performance?
My personal preference is, has been, and always will be to use implicit cursors in most cases; that is, do not explicitly define a cursor at all! Like this, for example:
is
   ...
begin
   ...
   for x in ( select * from emp )
   loop
      ...
   end loop;
This technique works exceptionally well for result sets of under, say, 50 to 100 rows. I prefer this technique over explicit cursors for these reasons:
It is marginally more efficient CPU-wise than using an explicit cursor.
I can glance at the code and easily see what data I'm dealing with. The query is staring me in the face.
It makes me use views when queries get big. I still see what data I'm querying in the view, but the complexity of the view is hidden. Rather than hide it in a cursor, I hide it in a view.
There are times you have to use an explicitly defined cursor, the most common one being when you have larger result sets to process and you need to use BULK COLLECT during the FETCH phase for performance. When I have to use an explicitly defined cursor, I opt for a local one, defined inside the procedure itself (not even in the package body, but right inside the procedure in the body). Why?
It is marginally more efficient CPU-wise than a global cursor (which is defined in the spec).
I can still glance at the code and easily see what I'm dealing with.
It makes me use views, because the query is in my procedure and I don't want it to obscure the rest of the code.
It is clear whom the cursor belongs to.
Do you see a pattern here? Also, a cursor with local scope (on a procedure's stack) cleans up after itself. There is no cursor%isopen junk littering my code. (None of my code has ever used that "feature" of isopen.) I don't have to worry that "procedure P1 uses cursor_x, and I use cursor_x, and because they are the same cursor_x, we might stomp on each other." So I don't have the problem people always have with global variables. I view cursors in the package specification or body (not defined in a procedure, but having global scope) with the same level of distrust with which I would view a global variable; the possibility of side effects from multiple procedures accessing these global variables is too high. I use global variables only when no other choice exists.
To wrap up, in order of preference:
1. No explicit cursors: select into, for x in ( select ... ) loop
2. Locally declared cursors, when forced into it for whatever reason, such as:
a. bulk binds that need to use the LIMIT clause
b. ref cursors
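A minimal sketch of case 2a, a locally declared cursor fetched with BULK COLLECT and LIMIT; the EMP table is the one from the earlier example, and the batch size of 100 is an arbitrary assumption:

```sql
create or replace procedure process_emps
as
    -- the cursor is local to this procedure: it cleans up after itself,
    -- and no other unit can touch it
    cursor c is select ename from emp;

    type ename_array is table of emp.ename%type;
    l_names ename_array;
begin
    open c;
    loop
        -- fetch at most 100 rows per round trip to the SQL engine
        fetch c bulk collect into l_names limit 100;

        for i in 1 .. l_names.count
        loop
            null;  -- process l_names(i) here
        end loop;

        -- %notfound is true on the batch that came up short
        exit when c%notfound;
    end loop;
    close c;
end;
```

The LIMIT clause is the reason an explicit cursor is needed here at all; an implicit cursor FOR loop gives you no control over the fetch batch size.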
I recommend that you avoid globally declared cursors, in a package spec, for these reasons:
You lose the nice features of encapsulation. The cursors are globally accessible and exposed to anyone who can access the package.
You experience marginally slower performance (I stress marginally, and this is not the main point).
You reduce the readability of the package body.
They are globals, and globals are, in general, a bad coding practice to be avoided whenever possible.
It is my understanding that if I change UNDO from one tablespace to another, using the ALTER SYSTEM command, Oracle would actually switch to the other one only after all of the active transactions that are using the first one are either committed or rolled back. Is that correct? Additionally, if I wanted to find what transactions are active in the first UNDO tablespace, how would I do it? Any view or query that you have will be helpful.
Well, your understanding is not correct. Oracle starts using the other UNDO tablespace straight away. What is nice about this next example is that it not only demonstrates this but also gives you the query you wanted, showing who is using which rollback segment in which UNDO tablespace.
What I'll do is start a transaction in some session and not COMMIT it. I'll query to see what sessions are using which rollback segments in which tablespaces. Then, I'll issue the ALTER SYSTEM command to switch UNDO tablespaces, perform another transaction in my session, and requery to see who is using which UNDO tablespace. At that point, I'll see my old transaction in the old UNDO tablespace as well as my new transaction in the new UNDO tablespace. First, I'll look at who is using what. This query joins V$SESSION (to get the session information) with V$TRANSACTION (to report only on sessions with active transactions) with DBA_ROLLBACK_SEGS (to report out the rollback segment information, such as name and tablespace):
select a.username, b.segment_name,
       b.tablespace_name tspace
  from v$session a,
       dba_rollback_segs b,
       v$transaction c
 where a.taddr = c.addr
   and b.segment_id = c.xidusn
/

USERNAME   SEGMENT_NAME  TSPACE
---------  ------------  ------
OPS$TKYTE  _SYSSMU11$    UNDO
This shows that one transaction is active, using the UNDO tablespace named UNDO. Now I'll switch UNDO tablespaces:
alter system set undo_tablespace = undo2;
Now I'll begin another transaction in this session:
update dept set deptno = deptno;
2 rows updated.

select a.username, b.segment_name,
       b.tablespace_name tspace
  from v$session a,
       dba_rollback_segs b,
       v$transaction c
 where a.taddr = c.addr
   and b.segment_id = c.xidusn
/

USERNAME   SEGMENT_NAME  TSPACE
---------  ------------  ------
OPS$TKYTE  _SYSSMU11$    UNDO
OPS$TKYTE  _SYSSMU16$    UNDO2
At this point, I can see that both UNDO tablespaces are in use. I cannot drop the old UNDO tablespace yet, because it has active transactions, but it won't be used for any new transactions.
And now you have the query you wanted as well.
How can you create, say, six unique random numbers between 1 and 49 with one SQL statement?
Sometimes if you think "procedurally set-based," you can get very far with SQL. SQL is considered to be a nonprocedural language, but I find sometimes that if I lay out procedural requirements, they can help me formulate a query.
In order to solve this, I need to:
Generate the set of numbers from 1 to 49. This will be the set of numbers from which I pick six random ones.
Order these 49 numbers randomly. This is sort of like assigning a random number to each of the 49 and then ordering by them.
Take the first 6 numbers from that result set.
In order to generate the set of 49 numbers, I just need any table with at least 49 rows in it. I find ALL_OBJECTS to be a very safe table to use for something like this. It will always have at least 1,000 rows in it, and everyone has access to it on all systems.
To start with, I need that set of 49 numbers. That SQL query is simply:
select rownum r
  from all_objects
 where rownum < 50
That will generate the numbers 1, 2, 3, ... 49. Next, I need to take this set and order it randomly. I'll use an inline view to accomplish that. In the following, replace the word QUERY with the query above:
select r
  from ( QUERY )
 order by dbms_random.value
Now, if you run the order by dbms_random.value query over and over in SQL*Plus, you'll find that you always get 49 rows and that each execution of the query returns them in a different order.
Now all I need to do is take the first 6. I'll use another inline view that will restrict the output of the earlier query to just the first 6 rows. The query in its entirety is then:
select r
  from ( select r
           from ( select rownum r
                    from all_objects
                   where rownum < 50 )
          order by dbms_random.value )
 where rownum <= 6
/

6 rows selected.
And if I execute it again, I get six different numbers.
Tom Kyte (email@example.com) has worked for Oracle since 1993. Kyte is a vice president in the Oracle Government, Education, and Healthcare group and the author of Effective Oracle by Design (Oracle Press) and Expert One-on-One: Oracle (Apress).