Oracle Database 11g: The Top Features for DBAs and Developers
by Arup Nanda
And Don't Forget...
Finally, I want to provide an introduction to many miscellaneous features that you will come to know and love in Oracle Database 11g.
Each new version of Oracle Database brings forth a new set of abbreviations for new processes. Here is the list for Oracle Database 11g:
| Process | Name | Description |
| --- | --- | --- |
| ACMS | Atomic Controlfile to Memory Server | Applicable only in a RAC instance. When a distributed SGA update is issued, ACMS ensures that the update happens on all instances. If the update fails on one, this process also rolls it back on all instances. Think of ACMS as a two-phase commit coordinator for SGA updates in a RAC cluster. |
| DBRM | Database Resource Manager | Implements resource plans and other Resource Manager-related tasks. |
| DIA0 | Diagnosability process 0 | Detects hang situations and deadlocks. In the future there may be more than one such process, so the name is set as dia0; additional processes, if they come, will be named dia1, dia2, and so on. |
| DIAG | Diagnosability process | Performs diagnostics, dumps trace files if necessary, and performs global oradebug commands. |
| FBDA | Flashback Data Archiver | Oracle Database 11g has a new "Flashback Data Archive" feature to record the changes made to a table (refer to the "Transaction Management" installment of this series). This process writes the flashback archives. |
| GTX0 | Global Transaction Process 0 | In a RAC cluster, Oracle Database now provides improved handling of XA transactions; this process coordinates them. If the XA load on the database grows, more processes are created automatically with names GTX1, GTX2, up to GTXJ. |
| KATE | Konductor (Conductor) of ASM Temporary Errands | Undoubtedly the grandest name of all the new processes. It is seen in ASM instances, not database instances. (Refer to the ASM installment of this series for the new Fast Disk Resync feature.) When a disk goes offline, this process performs proxy I/O on its behalf to an ASM metafile. |
| MARK | Mark AU for Resync Koordinator (coordinator) | Refer to the ASM installment for more details on the resiliency of ASM diskgroups. When a disk fails and goes offline, writes to it are missed. In that case, this process marks the affected ASM allocation units (AUs) as stale; when the disk comes back online, the stale AUs are refreshed. |
| SMCO | Space Manager | SMCO is the master space management process that dynamically allocates and deallocates space. It spawns slave processes (Wnnn) to implement the tasks. |
| VKTM | Virtual Keeper of TiMe process | Provides a wall-clock equivalent (with 1-second updates). When running at elevated priority, it provides 20-ms updates. |
| W000 | Space Management Worker Processes | Implement the instructions received from SMCO. More processes are spawned as needed, with names like W000, W001, and so on. |
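To see which of these background processes are actually running in your own instance, you can query the V$BGPROCESS view (a sketch; the PADDR filter excludes processes that are defined but not started):

```sql
-- Background processes currently running in this instance.
-- PADDR = '00' indicates a process that is defined but not started.
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```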
Colored SQL

This new feature doesn't paint the SQL statement in color; rather, it sort of marks it as "important."
Suppose you are troubleshooting a performance issue and you suspect a specific SQL statement contributing to the problem. You want the SQL statement to be captured in every Automatic Workload Repository (AWR) snapshot. But AWR snapshots do not capture all SQL statements; just the top ones. How can you force a specific SQL to be captured, regardless of its inclusion in the top SQLs?
The procedure add_colored_sql() in the package dbms_workload_repository marks the SQL as "colored", or important enough to be captured in every AWR snapshot regardless of whether it is among the top SQLs. First, identify the SQL statement and get its SQL_ID. To color it, use:
begin
   dbms_workload_repository.add_colored_sql(
      sql_id => 'ff15115dvgukr'
   );
end;
/
To find out which SQLs have been colored, you can query the AWR table WRM$_COLORED_SQL:
SQL> SELECT * FROM wrm$_colored_sql;

      DBID SQL_ID             OWNER CREATE_TI
---------- ------------- ---------- ---------
2965581158 ff15115dvgukr          1 05-APR-08
Thereafter the SQL statement with ID ff15115dvgukr will be captured in every snapshot, even if it's not in the top SQL statements. (Of course, the SQL must be present in the library cache to be captured in the AWR snapshots.)
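To confirm the statement is indeed being captured, you can check the AWR history for that SQL_ID (a sketch; DBA_HIST_SQLSTAT records one row per snapshot in which the statement was captured):

```sql
-- Count the AWR snapshots in which the colored SQL was captured
SELECT COUNT(*) AS snapshots_captured
FROM   dba_hist_sqlstat
WHERE  sql_id = 'ff15115dvgukr';
```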
But what if the SQL ceases to be that colorful—that is, no longer important enough to be captured in every snapshot? You can turn the behavior off with the corresponding removal procedure.
begin
   dbms_workload_repository.remove_colored_sql(
      sql_id => 'ff15115dvgukr'
   );
end;
/
This feature is extremely useful when you want to focus on a specific SQL in your tuning exercises.
COPY is Undead!
Contrary to the rumors, the COPY command is still around, and it is still the only way to copy the LONG datatype. But since it is deprecated, it has not kept pace with newer developments in Oracle Database.
For instance, it does not know about the presence of the newer BFILE datatype. If you try to copy the table DOCS, which has a BFILE column (see the example later in this article):
SQL> copy to sh@oradba11 -
> replace docs_may08 -
> using select * from docs
> where creation_dt between '1-may-08' and '31-may-08';

Enter TO password:

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)

CPY-0012: Datatype cannot be copied
The error message says it all: the COPY command can't copy the BFILE datatype. On the other hand, for a table with a LONG column, you can't use direct-path operations such as INSERT /*+ APPEND */ or CREATE TABLE ... NOLOGGING AS SELECT. Here is an example of trying to copy some rows from one table to another where a column is defined as LONG:
SQL> create table mytab1 as select * from mytab where col1 = 'A';
create table mytab1 as select * from mytab
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype

SQL> insert /*+ APPEND */ into mytab1
  2  select * from mytab where col1 = 'A';
select * from mytab
*
ERROR at line 2:
ORA-00997: illegal use of LONG datatype
So, your only easy option for copying a large amount of LONG data without filling up the undo segments is to use the COPY command.
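A minimal sketch of that approach, reusing the MYTAB example above (the connect string sh@oradba11 is illustrative):

```sql
SQL> copy to sh@oradba11 -
> replace mytab1 -
> using select * from mytab where col1 = 'A'
```

COPY commits in batches (controlled by the SQL*Plus copycommit and arraysize settings), which is what keeps the undo usage small.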
Exports and Import: Hanging by a Thread
When Data Pump was introduced in Oracle Database 10g, the eventual obsolescence of the traditional Export and Import tools was expected. Well, keep waiting: in Oracle Database 11g, these old tools are still very much there. Being deprecated, they no longer receive enhancements, but they are far from useless.
In terms of features, Data Pump outdoes the regular Export/Import many times over, but in one simple case the latter is still immensely useful: when you have to create a dump file on the client rather than the server. There is no need to create a directory object before performing the operation, so in many ways a simple data export is probably easier with the regular Export/Import.
But, again, these tools are deprecated, so there are three risks to using them:
- Oracle will not add any functionality to them, so future datatypes, etc. may not be supported.
- They may disappear from a future release without advance warning, forcing you to rewrite the code later.
- If you encounter a bug, Oracle Support may refuse to fix it.
So, it may be worthwhile to convert existing code to Data Pump, and definitely so for new developments.
Incrementally Updated Global Statistics
In partitioned tables, each partition can have its own optimizer statistics. In addition, there are global statistics on the entire table that are not partition dependent (the number of distinct values across the table, for example). These global statistics are collected only when instructed to in the dbms_stats.gather_*_stats procedures, not by default. If the parameter for global stats is not specified, they are not collected and the optimizer derives them from the partition stats. Of course, calculated global stats are always more accurate than derived ones, so it's always advisable to collect them.
But there used to be a little problem: when one partition's data changed while the other partitions remained unchanged, the global stats could become stale. Thus the stats collection program had to go through the entire table to collect the global stats, even though only one partition had changed.
Not so anymore. In Oracle Database 11g, you can instruct the optimizer to collect the global stats incrementally from the changed partition alone, not by doing another full table scan. This is done by setting a table parameter for stats collection—INCREMENTAL—to TRUE, as shown below:
begin
   dbms_stats.set_table_prefs(
      'SH', 'SALES', 'INCREMENTAL', 'TRUE');
end;
/
Now collect the stats on this table, and specifically on the partition SALES_1995, with a granularity of AUTO:
begin
   dbms_stats.gather_table_stats (
      ownname     => 'SH',
      tabname     => 'SALES',
      partname    => 'SALES_1995',
      granularity => 'AUTO'
   );
end;
/
This method of stats collection instructs the database to incrementally update the global stats from those collected from the new partition.
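To confirm the table preference took effect, you can read it back with dbms_stats.get_prefs (a sketch):

```sql
-- Returns 'TRUE' if incremental global stats are enabled for SH.SALES
SELECT dbms_stats.get_prefs('INCREMENTAL', 'SH', 'SALES') AS incremental
FROM   dual;
```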
Data Pump Gets Better
As I mentioned previously, since the previous release Data Pump has been the tool of choice for moving large amounts of data, or for taking efficient "logical" backups of the data. Like Export/Import, it is platform independent (for instance, you can export from Linux and import into Solaris). It gets a few enhancements in Oracle Database 11g.
One of the big issues with Data Pump was that the dumpfile couldn't be compressed while being created—something easily done with the older Export/Import utility. In Oracle Database 11g, Data Pump can compress the dumpfiles while creating them. This is done via the COMPRESSION parameter on the expdp command line. The parameter has four options:
- METADATA_ONLY - only the metadata is compressed; the data is left as it is (available in Oracle Database 10.2 as well).
- DATA_ONLY - only the data is compressed; the metadata is left alone.
- ALL - both the metadata and data are compressed.
- NONE - this is the default; no compression is performed.
Here is how you compress the export of the table UNITS_FACT:
$ expdp global/global directory=dump_dir tables=units_fact dumpfile=units_fact_comp.dmp compression=all
For comparison purposes, export without compression:
$ expdp global/global directory=dump_dir tables=units_fact dumpfile=units_fact_uncomp.dmp
Now if you check the files created:
$ ls -l
-rw-r----- 1 oracle dba  2576384 Jul  6 22:39 units_fact_comp.dmp
-rw-r----- 1 oracle dba 15728640 Jul  6 22:36 units_fact_uncomp.dmp
The savings come to 100*(15728640-2576384)/15728640, or about 83.6%! That's fairly impressive; the uncompressed dumpfile is 15MB while the compressed one is about 2.5MB.
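The arithmetic above can be checked with a quick shell one-liner (the byte counts are the ls output from this example):

```shell
# Percentage saved by Data Pump compression:
# 100 * (uncompressed - compressed) / uncompressed
uncomp=15728640
comp=2576384
awk -v u="$uncomp" -v c="$comp" 'BEGIN { printf "%.2f%%\n", 100 * (u - c) / u }'
# prints 83.62%
```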
If you compress the dumpfile using gzip:
$ gzip units_fact_uncomp.dmp
-rw-r----- 1 oracle dba 3337043 Jul  6 22:36 units_fact_uncomp.dmp.gz
The gzipped file is about 3.2MB, roughly 30 percent larger than the file compressed by Data Pump. So, in addition to the compression being potentially more efficient, the decompression really adds value: when importing the dumpfile, the import does not have to decompress the whole file first; it decompresses as it reads, making the process really fast.
The two other enhancements in Data Pump are:
- Encryption: the dumpfile can be encrypted while being created. The encryption uses the same technology as TDE (Transparent Data Encryption) and uses the wallet to store the master key. This encryption occurs on the entire dumpfile, not just on the encrypted columns as was the case in Oracle Database 10g.
- Masking: when you import data from production to QA, you may want to make sure sensitive data such as social security number, etc. are obfuscated (altered in such a way that they are not identifiable). Data Pump in Oracle Database 11g enables you do that by creating a masking function and then using that during import.
You can read about both these features in the security installment.
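As a sketch of the masking flow: the REMAP_DATA parameter of impdp points a column at a user-defined function that transforms each value on the way in. (REMAP_DATA is the actual Data Pump parameter; the package, function, schema, and column names below are hypothetical.)

```sql
-- Hypothetical masking function: keeps only the last 4 digits of an SSN
create or replace package mask_pkg as
   function mask_ssn (p_ssn varchar2) return varchar2;
end;
/
create or replace package body mask_pkg as
   function mask_ssn (p_ssn varchar2) return varchar2 is
   begin
      return 'XXX-XX-' || substr(p_ssn, -4);
   end;
end;
/
```

The import would then reference it, e.g. `impdp ... remap_data=hr.employees.ssn:mask_pkg.mask_ssn`.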
Data Pump Legacy Mode (Release 2 only)
If you are a seasoned export/import (the original Export, or "exp," tool) user, you have meticulously prepared your exp/imp scripts with the correct (and often carefully optimized) parameters. But alas, the original Export is desupported now! So you want to move to Data Pump, but what about your beloved scripts, which have seen sweat and blood? In your scripts you would want to use expdp instead of exp, but it's not as simple as replacing the executable. What about all the parameters?
For instance, consider the legacy exp parameter "owner", which exports all the objects of a specific user. The corresponding expdp parameter is not owner; it's "schemas". So you not only have to replace "exp" with "expdp" but replace all the parameters as well, a task that is not just time consuming but seriously error-prone too.
Have heart. In Release 2 there is a special mode of Data Pump that can handle the legacy “exp” utility parameters. Let’s consider the above issue. The legacy script looks like this:
$ exp \"/ as sysdba\" rows=n owner=sh
Instead of replacing the parameter "owner" with "schemas", just run the same parameters under expdp:
$ expdp \"/ as sysdba\" rows=n owner=sh
Here is the output:
Export: Release 11.2.0.1.0 - Production on Sat Sep 26 19:02:25 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "owner=sh" Location: Command Line, Replaced with: "schemas=sh"
Legacy Mode Parameter: "rows=FALSE" Location: Command Line, Replaced with: "content=metadata_only"
Legacy Mode has set reuse_dumpfiles=true parameter.
Legacy Mode has set nologfile=true parameter.
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" content=metadata_only schemas=sh reuse_dumpfiles=true nologfile=true
Processing object type SCHEMA_EXPORT/USER
... the rest of the output comes here ...
Note how the utility automatically detected the legacy exp parameters. Instead of complaining and aborting the execution with an error, it gracefully replaced them with the parameters relevant to expdp, as shown in the output. Your script didn't have to change.
Now that you know how the tool automatically replaces "owner" with "schemas", you might want to know how the other parameters are affected. Some parameters are simply ignored, e.g. "buffers". The original Export used it because rows were fetched through the conventional path; Data Pump uses direct path, so this parameter is now irrelevant and is ignored.
Most parameters are automatically converted to their Data Pump counterparts. You saw an example earlier; another example is "constraints=n" in the original exp, which is converted to "exclude=constraints" in Data Pump. The conversion is also smart: Data Pump's own capabilities are taken into account.
Consider the "consistent=y" parameter in the original Export, which exports the rows as of the time the export started. The corresponding functionality in Data Pump is provided by the flashback_time parameter. However, it does not take "y" or "n"; it expects a timestamp. During conversion, Data Pump notes the current time and enters that value for the flashback_time parameter.
Parameters for which Data Pump has no equivalent functionality stop the job with an error. For instance, the "volsize" parameter in the original Export specifies the volume size of tape devices. Since Data Pump does not support tape devices, the presence of this parameter stops the job with an error. (To see how each legacy-mode parameter affects expdp and impdp, refer to Chapter 4 of the Utilities Guide.)
While on the subject of the original Export utility, you should be aware of a caveat. Do you recall that it is possible to create segment-less tables? Here is a refresher: for such tables, the segments are created on the fly when the first row is inserted; until then the table has no segment. If the original Export does not find a segment, it ignores the table during export. So if you export just the metadata, be aware that the dumpfile will have no reference to those tables.
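To check which tables would be silently skipped, you can look for tables whose segments have not yet been created (a sketch; the SEGMENT_CREATED column appears in the *_TABLES views in Release 2):

```sql
-- Tables with deferred segment creation and no segment yet
SELECT table_name
FROM   user_tables
WHERE  segment_created = 'NO';
```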
Of course, Data Pump, being up with the times, does not have this limitation; it does not ignore segment-less tables while exporting.

Exploit Clustering in Data Pump (Release 2 Only)
You know that Data Pump uses several "slave" processes to perform its work in parallel. If you have a RAC database, it may sometimes be beneficial to spread the slaves across all the instances of the database rather than just one. A new parameter, quite appropriately named "cluster", accomplishes that. It is specified as cluster=y on the command line. That is also the default value; specify "n" if you want to disable it. The parameter can be used in both Data Pump Export and Import.
Now let's consider a twist. What if you have more than two instances and, while you want to run the Data Pump slaves in cluster mode, you want to limit them to only a handful of the instances, not all of them?
It’s really simple. All you have to do is to use another parameter, service_name. Create a service name that is defined on only those instances and use it in Data Pump.
Here is an example where you have four instances and you want to use only instances 2 and 3 for the Data Pump job. Create a service name – nodes_2_3 – as shown below:
$ srvctl add service -d <DatabaseName> -s nodes_2_3 -r RAC2,RAC3 -a RAC1,RAC4 -P BASIC
Then use that service in the Data Pump command with the parameter service_name=nodes_2_3.
Now the DP job will use RAC2 and RAC3 instances only, i.e. nodes 2 and 3.
There is one caveat: If you start the expdp process on nodes 1 or 4, the Data Pump processes can start there as well, in addition to nodes 2 and 3. As always, when you start this Data Pump job in a RAC database with parallelism, make sure the dumpfiles are created on shared storage since the slave processes on multiple nodes must be able to access them.
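A sketch of the full command line under these assumptions (credentials, directory object, and dumpfile names are illustrative):

```shell
# Run the export through the nodes_2_3 service so slaves start on RAC2/RAC3
expdp system/manager schemas=sh directory=shared_dump_dir \
      dumpfile=sh%U.dmp parallel=4 cluster=y service_name=nodes_2_3
```

Note the %U substitution in the dumpfile name, which lets each parallel slave write its own file on the shared storage.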
Disable Direct Path in Data Pump
By default, Data Pump jobs use Direct Path Insert, i.e. the /*+ APPEND */ hint, to load the data. While this makes the load faster, it loads above the high-water mark of the table and locks the table until the load is complete. In addition, since it always loads into fresh blocks, it wastes space when the dumpfile contains only a few rows.
What if you want to load a table while it is also being subjected to DML activities? You can do that by choosing a new parameter – data_options – which can alter the default behavior. Here is an example:
$ impdp dumpfile=sales.dmp tables=sales directory=input_dir data_options=disable_append_hint
The value disable_append_hint forces the Data Pump job to load with regular inserts; not Direct Path Insert. Now DMLs can run on the table(s) being loaded.
Another value of the parameter is skip_constraint_errors, which allows the load to continue even when rows violate non-deferred constraints. This helps when loading tables en masse and then enabling the constraints afterward with the NOVALIDATE clause, which makes it quite useful in a data warehouse.
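A sketch of that pattern (the dumpfile, directory, and constraint names are illustrative):

```sql
-- Load despite violations of non-deferred constraints:
--   $ impdp dumpfile=sales.dmp tables=sales directory=input_dir
--           data_options=skip_constraint_errors
-- ...then enable the constraint without validating the existing rows:
alter table sales enable novalidate constraint fk_sales_prod;
```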
Truly Online Index Rebuild
Remember the ONLINE clause while rebuilding an index?
alter index in_tab_01 rebuild online;
The clause rebuilds the index without affecting the DMLs accessing it. It does so by tracking which blocks are being changed and, at the end, merging those changes into the newly built index. To accomplish this, the operation had to take an exclusive lock at the end of the process. Although short in duration, it was a lock nevertheless, and DMLs had to wait.
In Oracle Database 11g, the online rebuild is truly online: it does not hold an exclusive lock. The DMLs are not affected.
Different Tablespace for Temporary Tables
When you create a global temporary table, where does its space come from? It comes from the user's temporary tablespace. Usually this is not an issue, but in some special cases you may want to free up the temporary tablespace for its intended purpose (mostly sorting). Sometimes you may want to create the temporary tables in another temporary tablespace on faster, more efficient disks to make data access faster. In those cases you previously had no choice but to make that tablespace the user's temporary tablespace.
In Oracle Database 11g you can use another temporary tablespace for your global temporary tables. Let's see how. First you create another temporary tablespace:
SQL> create temporary tablespace etl_temp
  2> tempfile '+DG1/etl_temp_01.dbf'
  3> size 1G;

Tablespace created.
Then, you create the GTT with a new tablespace clause:
SQL> create global temporary table data_load (
  2>    input_line varchar2 (2000)
  3> )
  4> on commit preserve rows
  5> tablespace etl_temp;

Table created.
This temporary table is now created on tablespace etl_temp instead of the user's default temporary tablespace—TEMP.
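To verify where the table was placed, you can check the data dictionary (a sketch):

```sql
-- Should show ETL_TEMP for the table created above
SELECT table_name, tablespace_name
FROM   user_tables
WHERE  table_name = 'DATA_LOAD';
```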
SQL*Plus Error Logging
Suppose you have a SQL script called myscript.sql:
set puase on
set lines 132 pages 0 trimsppol on
select * from nonexistent_table
/
Note there are several errors in the script: the first line has "pause" misspelled, the second line has "trimspool" misspelled, and the third line selects from a table that does not even exist. When you run the script at the SQL*Plus prompt, unless you spooled the output you will not be able to check the errors afterward. Even if you spooled, you would need access to the physical server to examine the spool file, which may not be possible.
Oracle Database 11g has a perfect solution: you can now log the errors from SQL*Plus to a special table. Issue this as a first command:
SQL> set errorlogging on
Now you run the script:

SQL> @myscript

The run produces the following error messages:
SP2-0158: unknown SET option "puase"
SP2-0158: unknown SET option "trimsppol"
select * from nonexistent_table
*
ERROR at line 1:
ORA-00942: table or view does not exist
which you may or may not have seen, depending on how you ran the script—in the foreground from an SQL*Plus prompt or in the background as a script invocation. After the script completes, you can log into the database and check the errors in a table named SPERRORLOG.
sql> col timestamp format a15
sql> col username format a15
sql> col script format a10
sql> col identifier format a15
sql> col statement format a20
sql> col message format a20
sql> select timestamp, username, script, statement, message
  2> from sperrorlog;
Note that you checked the errors from a different session, not the one where the script ran; in fact, the script had finished and that session had been terminated anyway. This gives you a powerful ability to examine errors after they occur in SQL*Plus sessions, something that was previously impossible, or at least difficult, to do.
The table SPERRORLOG is a special table created for this purpose only. You can also create your own table and have SQL*Plus populate it with the errors instead. That table should be created as:
SQL> create table my_sperror_log
  2  (
  3     username   varchar2(256),
  4     timestamp  timestamp,
  5     script     varchar2(1024),
  6     identifier varchar(256),
  7     message    clob,
  8     statement  clob
  9  )
 10  /

Table created.
Now you can use this table in error logging instead of the default table.
SQL> set errorlogging on table sh.my_sperror_log; SQL> @myscript
Now MY_SPERROR_LOG (and not SPERRORLOG) will hold the error log. You can truncate all the rows in the error log table by issuing
SQL> set errorlogging on truncate
There is an optional IDENTIFIER clause that allows you to tag the errors from specific sessions. Suppose you issue the command:
SQL> set errorlogging on identifier MYSESSION1
Now if you run the script, the records will be created with the column called IDENTIFIER populated with the value MYSESSION1. You can extract those records only by issuing the query:
select timestamp, username, script, statement, message from sperrorlog where identifier = 'MYSESSION1';
You will see the records from that session only. This is very useful if you are trying to isolate errors in multiple scripts and sessions.
Shrinking the Temporary Tablespace
You probably already know that temporary tablespaces are special; the normal rules of space management may not apply to them. When a temporary segment is allocated, it is not deallocated. This is not really a problem, since temporary segments (which are what temporary tablespaces are for) are not part of any schema and do not persist across database restarts. The space is simply reused for another user or another query. Still, since space is never deallocated, temporary tablespaces just keep growing. But what if you want to trim them to make room for other tablespaces?
Until now the only option was to drop and recreate the tablespace, a task that can almost always be done online. However, there is a little "but": what if you can't afford anything less than 100-percent uptime? In Oracle Database 11g, you can reclaim the space easily by shrinking the temporary tablespace.
Here is how the tablespace TEMP1 is shrunk:
alter tablespace temp1 shrink space;
This deallocates all the unused segments from the tablespace and shrinks it. After the operation, you can check the view DBA_TEMP_FREE_SPACE to see how much space is currently allocated and free.
SQL> select * from dba_temp_free_space;

TABLESPACE_NAME                TABLESPACE_SIZE ALLOCATED_SPACE FREE_SPACE
------------------------------ --------------- --------------- ----------
TEMP                                 179306496       179306496  178257920
In a relatively quiet database, the shrink operation might reduce the temporary tablespace to almost nothing. That is artificial, of course; subsequent activity will expand the tablespace again, so you might want to leave some space in it, say 100MB. You can do that as follows:
SQL> alter tablespace temp shrink space keep 100m;

Tablespace altered.

SQL> select * from dba_temp_free_space;

TABLESPACE_NAME                TABLESPACE_SIZE ALLOCATED_SPACE FREE_SPACE
------------------------------ --------------- --------------- ----------
TEMP                                 105906176         1048576  104857600
All but 100MB of the space was released. This approach helps you manage space across tablespaces: you can borrow space from a temporary tablespace to give to another tablespace temporarily (no pun intended), and later, when that space is no longer needed, give it back. When you combine this feature with the ability to assign a tablespace to global temporary tables, you can resolve many difficult space management issues in temporary tablespaces.
SQL*Plus Shows BFILE
The datatype BFILE in Oracle Database stores the pointer to an external file, not the contents of the file itself. It serves a specific purpose: the database space is conserved while the users have access to the data in the files, which is great for some applications such as ones using content and multi-media rich databases.
If you have a BFILE column in a table and you selected its rows in SQL*Plus, you used to get an error. In Oracle Database 11g, SQL*Plus shows you the location of the file as the column value.
Let's see an example. First you have to create a directory to store the BFILE.
SQL> create directory doc_dir as '/u01/oracle/docs';

Directory created.
Now, create the table.
SQL> create table docs
  2  (
  3     doc_id    number(20),
  4     doc_file  bfile,
  5     doc_type  varchar2(10)
  6  );

Table created.
Create the row.
SQL> insert into docs values
  2  (
  3     1,
  4     bfilename('DOC_DIR','metric_daily_report.pdf'),
  5     'PDF'
  6  );

1 row created.
Now if you select this row in SQL*Plus:
SQL> col doc_file format a50
SQL> select * from docs;

    DOC_ID DOC_FILE                                           DOC_TYPE
---------- -------------------------------------------------- ----------
         1 bfilename('DOC_DIR', 'metric_daily_report.pdf')    PDF
The output under the column DOC_FILE, which is a BFILE, shows the location of the file instead of erroring out.
Parameter Files from Memory
Consider this scenario: in the course of diagnosing a database issue, you modified many parameters, all in memory. Later you forget which parameters you changed. Unless you remembered to put the changes in the initialization parameter file (pfile or spfile), they will be lost; most likely you didn't, because you were experimenting with the values and weren't sure which ones to keep. Of course, you could comb the alert log and pick out all the changes, a procedure that is not only painful but error-prone as well.
In Oracle Database 11g, you can use a very simple command to create a pfile or an spfile from the parameter values from memory.
SQL> create spfile from memory;

File created.

SQL> create pfile from memory;

File created.
This will create an spfile or pfile with the values from the memory. The pfile will have an entry at the top of the file like this:
# Oracle init.ora parameter file generated by instance ODEL11 on 08/28/2007 14:52:14
This saves you a lot of effort and risk in capturing the parameters changed in memory. The feature also lets you create the pfile under another name and then compare the current pfile to the generated one to see which parameters were changed.
SQL> create pfile='/tmp/a' from memory;
This statement creates the pfile as /tmp/a. Now, on Unix, you can use the simple diff command to show the differences between the two files.
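As a sketch of the comparison (with two fabricated parameter files for illustration; in practice you would diff /tmp/a against your real init.ora):

```shell
# Fabricated "on disk" parameter file
cat > /tmp/pfile_disk.ora <<'EOF'
*.cursor_sharing='EXACT'
*.sga_target=500M
EOF
# Fabricated "from memory" parameter file, with one changed value
cat > /tmp/pfile_mem.ora <<'EOF'
*.cursor_sharing='FORCE'
*.sga_target=500M
EOF
# diff flags the parameter that was changed in memory (cursor_sharing)
diff /tmp/pfile_disk.ora /tmp/pfile_mem.ora || true
```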
Reset with Care
In the course of a performance tuning exercise you change several dynamic initialization parameters directly in the spfile, and suddenly you are concerned that you have lost track of what you changed. Since the changes went into the spfile, you can't count on the spfile itself to tell you which values you altered.
Well, here comes the cavalry: the ALTER SYSTEM RESET command resets a parameter to its default value, overriding whatever you have in the spfile:
alter system reset license_max_sessions;
This command was available in earlier releases as well, but with one fundamental difference: in RAC databases it used to require a SID clause (to reset a parameter on all instances you would specify SID='*'). In Oracle Database 11g, the SID clause is optional and defaults to all instances. So, where omitting the clause would previously have raised an error, now the command succeeds—but the effect may not be what you wanted. Watch out.
Instance CPU Caging (Release 2 Only)
Many organizations choose to run multiple database instances on a single server, for a variety of reasons. Regardless of the reason, it's generally not a good idea. The best alternative is to consolidate all those applications into a single database running on the server, resulting in only one set of Oracle background processes and one set of common tablespaces (system, sysaux, undo, temp, and so on).
In contrast, multiple instances each require their own set of processes and storage, adding stress on the hardware. Worse, one instance may completely monopolize hardware resources such as CPU, I/O channels, and memory. Memory can be reserved for an instance by setting the SGA size, but PGA memory can't be; and CPU and I/O can't be reserved for a specific instance at all, save for some concessions via specialized O/S throttling tools.
In Oracle Database 11g Release 2, you can accomplish a limited CPU allocation among instances via a new feature known as instance caging. (Please note that this restriction applies to the user processes, not background ones such as pmon, smon, arch, etc.) This is accomplished via the Database Resource Manager and the system parameter cpu_count. By default the CPU_COUNT parameter is set by the instance by querying the number of CPUs from the O/S. Using the resource manager, you can alter this to a lower number to limit that instance to that many CPUs.
Let’s see how it works. First, prepare a Resource Manager Plan. Check first; you may already have one enabled. The plan may not have anything else but it must have a CPU directive.
begin
   dbms_resource_manager.clear_pending_area();
   dbms_resource_manager.create_pending_area();
   dbms_resource_manager.create_plan (
      plan     => 'lim_1_cpu',
      comment  => 'Plan to limit CPUs to just 1',
      max_iops => null,
      max_mbps => null
   );
   dbms_resource_manager.create_plan_directive(
      plan             => 'lim_1_cpu',
      group_or_subplan => 'OTHER_GROUPS',
      comment          => 'Other Groups',
      mgmt_p1          => null,
      mgmt_p2          => 100
   );
   dbms_resource_manager.create_plan_directive(
      plan             => 'lim_1_cpu',
      group_or_subplan => 'SYS_GROUP',
      comment          => 'System Groups',
      mgmt_p1          => 50,
      mgmt_p2          => null
   );
   dbms_resource_manager.submit_pending_area();
end;
/
Then activate that plan:
SQL> alter system set resource_manager_plan = 'LIM_1_CPU';

System altered.
Then set the CPU count to 1.
SQL> alter system set cpu_count = 1;
With the plan active, the user processes of this instance are now limited to one CPU. Remember, the restriction does not apply to the background processes, which can still use all the CPUs.
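To confirm the setting took effect, you can check the parameter and the currently active plan. This is just a quick sanity check; the V$RSRC_PLAN columns shown here are the ones documented for Release 2, and the output will vary by system:

SQL> show parameter cpu_count

NAME       TYPE     VALUE
---------- -------- -----
cpu_count  integer  1

SQL> select name, cpu_managed, instance_caging
  2  from v$rsrc_plan
  3  where is_top_plan = 'TRUE';

If instance caging is in effect, the CPU_MANAGED and INSTANCE_CAGING columns should both show ON for the top plan.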
Scheduler Email Notification (Release 2 Only)
Scheduler has long offered various advantages over the older DBMS_JOB functionality. Now you have one more reason to like it.
When a job completes, how do you know? Querying the DBA_SCHEDULER_JOBS view repeatedly will tell you, but that’s hardly practical. A more practical solution is via an email. But that brings its own set of problems – from change control approval to not being able to modify the source code.
In Oracle Database 11g Release 2, you don’t have to resort to these options; there is a much more elegant alternative whereby the Scheduler can send an email after completion. It can even specify in the email whether the completion ended in success or failure.
To demonstrate, let’s create a job to run a stored procedure named process_etl2:
begin
   dbms_scheduler.create_job (
      job_name        => 'process_etl2',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'process_etl2',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'freq=minutely; bysecond=0',
      enabled         => TRUE);
end;
/
Now, to enable the email send function, we have to set some email-related parameters such as the email server name and how the sender’s details should be specified.
begin
   dbms_scheduler.set_scheduler_attribute(
      'email_server', 'mail.proligence.com:25');
   dbms_scheduler.set_scheduler_attribute(
      'email_sender', 'firstname.lastname@example.org');
end;
/
Note the SMTP server is given in the format server[:port]. If the port is not given, default 25 is assumed. Now we can add the email notification property to the job:
begin
   dbms_scheduler.add_job_email_notification (
      job_name   => 'process_etl2',
      recipients => 'email@example.com',
      subject    => 'Job: process_etl',
      body       => 'ETL Job Status',
      events     => 'job_started, job_succeeded');
end;
/
The parameters of the procedure are self-explanatory. One very important one is EVENTS, which specifies the events during which the emails should be sent. In this example we have specified that the email is sent when the job starts and when it succeeds (but not when it fails).
The EVENTS parameter can also take the values job_failed, job_broken, job_sch_lim_reached, job_chain_stalled, and job_over_max_dur, which fire when the job fails, when the job is broken, when the job's scheduler limit is reached, when a chain this job belongs to becomes stalled, and when the job runs past its maximum duration, respectively.
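For example, a notification that fires only when something goes wrong could be set up along these lines (a sketch; the recipient address and subject are illustrative):

begin
   dbms_scheduler.add_job_email_notification (
      job_name   => 'process_etl2',
      recipients => 'email@example.com',
      subject    => 'Job process_etl2 needs attention',
      body       => 'ETL Job Problem',
      events     => 'job_failed, job_over_max_dur');
end;
/

With this in place, no mail is sent for routine starts and successes; you hear from the Scheduler only on failures or overruns.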
If you want to find out the status of the email notification sent by the various jobs placed under this notification system, you can check the new data dictionary view USER_SCHEDULER_NOTIFICATIONS.
SQL> desc user_scheduler_notifications
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 JOB_NAME                                  NOT NULL VARCHAR2(30)
 JOB_SUBNAME                                        VARCHAR2(30)
 RECIPIENT                                 NOT NULL VARCHAR2(4000)
 SENDER                                             VARCHAR2(4000)
 SUBJECT                                            VARCHAR2(4000)
 BODY                                               VARCHAR2(4000)
 FILTER_CONDITION                                   VARCHAR2(4000)
 EVENT                                              VARCHAR2(19)
 EVENT_FLAG                                NOT NULL NUMBER
Let’s check the contents of this view.
SQL> select job_name, recipient, event
  2  from user_scheduler_notifications;

JOB_NAME        RECIPIENT                      EVENT
--------------- ------------------------------ -------------------
PROCESS_ETL2    firstname.lastname@example.org JOB_STARTED
PROCESS_ETL2    email@example.com              JOB_SUCCEEDED
The body column shows the actual mail that was sent:
SQL> select body, event_flag
  2  from user_scheduler_notifications
  3  /

BODY
--------------------------------------------------------------------------------
EVENT_FLAG
----------
ETL Job Status
         1

ETL Job Status
         2
The emails will keep coming for the events you defined. You can also define a filter so that the notification system sends email only when specific errors occur, for instance only for ORA-600 or ORA-7445 errors. The filter expression must be a valid SQL predicate (a WHERE clause without the "where" keyword).
begin
   dbms_scheduler.add_job_email_notification (
      job_name         => 'process_etl2',
      recipients       => 'firstname.lastname@example.org',
      subject          => 'Job: process_etl',
      body             => 'ETL Job Status',
      filter_condition => ':event.error_code = 600 or :event.error_code = 7445',
      events           => 'job_started, job_succeeded');
end;
/
Did you notice that the body was a simple “ETL Job Status”? That is not very useful. It may be better to let the default value take over. By default, email notifications are sent with the following body:
Job: %job_owner%.%job_name%.%job_subname%
Event: %event_type%
Date: %event_timestamp%
Log id: %log_id%
Job class: %job_class_name%
Run count: %run_count%
Failure count: %failure_count%
Retry count: %retry_count%
Error code: %error_code%
Error message: %error_message%
As you can see, the body contains several variables, such as %job_owner%; each is replaced with the corresponding run-time value when the notification email is composed.
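You can use the same variables in your own subject and body text. As an illustration, here is a hypothetical notification that builds a custom message from them (the job name and recipient address are just examples):

begin
   dbms_scheduler.add_job_email_notification (
      job_name   => 'process_etl2',
      recipients => 'email@example.com',
      subject    => 'Scheduler: %job_name% %event_type%',
      body       => 'Job %job_owner%.%job_name% raised %event_type% at ' ||
                    '%event_timestamp%. Error (if any): %error_code% %error_message%',
      events     => 'job_succeeded');
end;
/

This way the email carries the job identity and timing without any code in the job's own procedure.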
To remove the notification, you can use another procedure in the same package:
begin
   dbms_scheduler.remove_job_email_notification (
      job_name => 'process_etl2');
end;
/
Email notification makes the job system complete. Remember, emails are triggered by the Scheduler Job system, not by the code inside the programs or procedures called by the Scheduler. This allows you to set the notification schedule independent of the actual code under the job.
SQL*Plus Implicit Commit (Release 2 Only)
SQL*Plus, the tool from time immemorial, has not changed much lately, with one exception in Oracle Database 11g Release 2: a very subtle but potentially impactful change in behavior. Until now, when you executed a transaction and did not issue an explicit commit or rollback, SQL*Plus issued one when the session ended. If the end was normal (i.e., you issued EXIT or DISCONNECT), it executed a commit; otherwise the transaction was rolled back.
There may be cases where you do not want that default behavior: you may want a rollback on exit if the user has not issued an explicit commit. In Release 2, you can alter that behavior with a new SET parameter called EXITCOMMIT. Let’s see how it works on two different Unix prompts.
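The demonstration runs along these lines; a minimal sketch, assuming a table named TEST with a single numeric column COL1 that currently holds the value 1 (table name and values are illustrative):

-- Session 1: turn off commit-on-exit, update, and exit without committing
SQL> set exitcommit off
SQL> update test set col1 = 3;

1 row updated.

SQL> exit

-- Session 2: with exitcommit off, the EXIT triggered a rollback
SQL> select col1 from test;

      COL1
----------
         1

With the default SET EXITCOMMIT ON, the same sequence would have committed the update and session 2 would have seen the value 3.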
However, this parameter affects only the EXIT statement. If you stay in SQL*Plus and simply end the session with DISCONNECT, the parameter has no effect; the default behavior of a commit still applies. On session 1, after updating the value to 4, issue a DISCONNECT instead of EXIT.
SQL> disconnect

On session 2, check the value:

SQL> select col1 from test;

      COL1
----------
         4
The operation was committed because the DISCONNECT command is not affected by the EXITCOMMIT setting. When you are in SQL*Plus and want to connect as a different user, you issue:
SQL> connect differentuser/
This action issues an implicit DISCONNECT, so all the transactions are committed. Keep this in mind: EXITCOMMIT affects EXIT statements only, not DISCONNECTs, whether explicit or implicit.