Manageability Features of Oracle Database 12c


You must be familiar with the crsctl command to control the cluster environment. These commands are quite powerful, often with irreversible results. What can you do to make sure you have entered a command that will produce the intended results, and not something you are going to regret later? In earlier versions there really was no simple way to find out. You just had to rely on your deep knowledge of the command, mitigate the risk a bit by trying it out on test systems, and bite your nails hoping for the best. Not anymore. In Oracle Database 12c, the crsctl command has a new verb—eval—to evaluate a command without actually executing it. It gives you a preview of the command's effects before you take the leap.

To stop the resource for the SCAN listener 'ora.LISTENER_SCAN3.lsnr', you would issue:

$ crsctl stop resource ora.LISTENER_SCAN3.lsnr

If you want to see the impact of the stop instead of actually stopping it, you would issue:


$ crsctl eval stop resource ora.LISTENER_SCAN3.lsnr 

Here is the output:

Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------

           1  Y         Resource 'ora.LISTENER_SCAN3.lsnr' (1/1) will be
                        in state [OFFLINE]

--------------------------------------------------------------------------------

The output shows the intended results. In this case the action will succeed, and hopefully that is what you intended in the first place. Now, let's see the impact of another action: stopping the resource for a SCAN VIP, not the SCAN listener:

$ crsctl eval stop resource ora.scan1.vip

Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------

           1  N         Error code [222] for entity [ora.scan1.vip].
                        Message is [CRS-2529: Unable to act on
                        'ora.scan1.vip' because that would require
                        stopping or relocating
                        'ora.LISTENER_SCAN1.lsnr', but the force option
                        was not specified].

--------------------------------------------------------------------------------

The output in this case is different; it clearly shows that the command would have failed with CRS-2529 had we actually executed it (without the eval verb), because a dependent resource, the SCAN listener, would also have stopped as an unintended consequence. If that was your intention anyway, the output helpfully suggests, you should use the -f option. So, let's do exactly that and verify that the errors go away:

$ crsctl eval stop resource ora.scan1.vip -f

Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------

           1  Y         Resource 'ora.LISTENER_SCAN1.lsnr' (1/1) will be
                        in state [OFFLINE]

           2  Y         Resource 'ora.scan1.vip' (1/1) will be in state
                        [OFFLINE]

--------------------------------------------------------------------------------

Now it's clear: the modified command (with the -f flag) will work. In this case, the output also shows the different stages of the command execution and the sequence in which they will be executed: the SCAN listener 1 will be stopped and placed in the OFFLINE state, and then the SCAN VIP 1 will be put in the OFFLINE state. This gives you the would-be results without actually executing the command. But that is not all.



The feature is also helpful for what-if analysis of commands you have not issued. For instance, suppose you want to find out what would happen if the network failed. Well, don't speculate; just issue the command:

$ crsctl eval fail resource ora.net1.network

Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------

           1  Y         Resource 'ora.net1.network' (prolabtas61) will
                        be in state [ONLINE|INTERMEDIATE] on server
                        [prolabtas61]
              Y         Resource 'ora.net1.network' (prolabtas62) will
                        be in state [ONLINE|INTERMEDIATE] on server
                        [prolabtas62]
              Y         Resource 'ora.net1.network' (prolabtas63) will
                        be in state [ONLINE|INTERMEDIATE] on server
                        [prolabtas63]
              Y         Resource 'ora.net1.network' (prolabtas64) will
                        be in state [ONLINE|INTERMEDIATE] on server
                        [prolabtas64]

--------------------------------------------------------------------------------
The details of the various failures are right there on the screen for you to see. The eval option is available for these actions only (use crsctl eval followed by the action):
  • activate policy: to see the effect of activating a policy, if you are using server pools
  • add resource: to see the effect of adding a resource such as a network or a diskgroup
  • fail resource: to see the effect of the failure of a resource in the cluster, as shown in the example above
  • modify resource: to see the effect of modifying a resource attribute such as the check interval
  • start resource: to see the effect of starting a resource
  • stop resource: to see the effect of stopping a resource, as shown in the earlier examples
  • relocate resource: to see the effect of relocating a resource
  • add server: to see the effect of adding a server to a server pool
  • delete server: to see the effect of deleting a server from a server pool
  • relocate server: to see the effect of relocating a server within a server pool
  • add serverpool: to see the effect of adding a server pool
  • delete serverpool: to see the effect of deleting a server pool
  • modify serverpool: to see the effect of modifying an attribute of a server pool, such as dependencies
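For example, to preview what would happen if you removed one of the servers shown in the earlier output from the cluster, you could issue the following. This is a sketch based on the documented verbs; verify the exact syntax on your version:

$ crsctl eval delete server prolabtas64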

This feature is also available for the other RAC management command—srvctl. The eval option is given as a command-line parameter. Here is an example where you evaluate the effect of the stop database command:
$ srvctl stop database -d CONA -eval
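The -eval option works with several other srvctl commands as well. For instance, here is a sketch of evaluating a service relocation; the service and instance names are illustrative, and the exact switches vary by version, so check srvctl -help on your system:

$ srvctl relocate service -d CONA -s app_svc -i CONA1 -t CONA2 -eval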


PGA Limit


Program Global Area (PGA) is an area of memory used by sessions for session-specific tasks such as sorting and direct path reads. The idea behind this area of memory is simple: it does not need to be shared between sessions, and hence the buffer cache does not need to be polluted by that data. However, the PGA has its own set of problems, the biggest of which is the lack of an algorithm for aging out unneeded chunks of memory. So, unlike the buffer cache, which has a fixed size, the PGA can grow to accommodate demands from sessions. In some cases, the PGA eats up so much memory that the system starts swapping, starving the other demands for memory; in some cases the instance aborts. In earlier versions, a parameter—PGA_AGGREGATE_TARGET—provided some control; but keep in mind that it is not a limit. The amount specified by pga_aggregate_target is merely a target that Oracle tries to meet by adjusting the tunable portion of PGA memory; the untunable portion is beyond its control, so there is no hard ceiling. Well, not anymore. In Oracle Database 12c, a new parameter called PGA_AGGREGATE_LIMIT sets a hard limit on the amount of PGA taken up by an instance. When the overall PGA occupied by all the sessions in that instance exceeds the limit, Oracle kills the session holding the most untunable PGA memory, releasing all the PGA memory held by that session. The killed session gets this message:

ORA-00028: your session has been killed
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT of 4000 MB

Note the new Oracle error ORA-4036. This is an extremely useful feature to control runaway PGA consumption by sessions issuing untuned queries. The parameter pga_aggregate_limit, if not explicitly set, defaults to the greatest of these three numbers:
  • 2 GB
  • 3 MB times the value of the processes parameter
  • 2 times the value of the parameter pga_aggregate_target

However, this limit does not exceed 1.2 times the total physical memory after subtracting the total SGA size. Don't like this new feature and want to go back to the previous (pre-12c) behavior? Sure; just set the value of pga_aggregate_limit to 0 and the parameter will have no effect.
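Here is a minimal sketch of putting the feature to work: set an explicit ceiling, then watch the instance-wide PGA consumption from V$PGASTAT (the 4G value is purely illustrative):

SQL> alter system set pga_aggregate_limit = 4G scope=both;

SQL> select name, round(value/1024/1024) mb
  2  from v$pgastat
  3  where name in ('total PGA allocated','maximum PGA allocated');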



Queryable Inventory



If you have installed and managed several Oracle installations, you know how important it is to maintain the installation inventory. The inventory contains information on the different patches installed in an Oracle Home, and it is checked by subsequent patches to determine compatibility (or, more important, conflicts with existing patches). It's stored on the host, typically in the oraInventory directory, and you generally query it using the OPatch utility with the lsinventory option, as shown below:

$ $ORACLE_HOME/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.0
Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/12.1.0/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.1.0/grid/oraInst.loc
OPatch version : 12.1.0.1.0
OUI version : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2013-10-06_19-14-44PM_1.log

Lsinventory Output file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/lsinv
   /lsinventory2013-10-06_19-14-44PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure 12c 12.1.0.1.0
There are 1 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
0 prolabtas64,prolabtas63,prolabtas62,prolabtas61

--------------------------------------------------------------------------------

OPatch succeeded.

While this system of checking the inventory works, it leaves a lot to be desired. In many cases you want to get inventory information in real time, or you would like to have the information in a format you can easily manipulate. In Oracle Database 12c, you have the answer: you can get the information by issuing a SQL query, which you are all too familiar with. But that's not all; you also get the output in XML format, which allows you to feed it into any formatting tool and produce output in any desired manner using stylesheets. A new package—DBMS_QOPATCH—allows us to do that. Let's see an example:

SQL> set lines 300
SQL> set trimspool on
SQL> set pages 45
SQL> set pau on
SQL> set long 9999
SQL> select dbms_qopatch.get_opatch_lsinventory
2 from dual;

GET_OPATCH_LSINVENTORY
----------------------------------------------------------
  OracleHome-0d68bc5c-6448-4131-821a-9a4c931c6053 oracle_home oneoff
  /u02/app/oragrid oracle_home /u01/app/oraInventory false
  OUIComponent-d0a3c7c7-b9da-46ff-a5fb-9d4fbf7ee022 Oracle Database 12c
  Installs an optional preconfigured starter database, product options,
  management tools, networking services, utilities, and basic client
  software for an Oracle Database server. This option also supports
  Automatic Storage Management database configuration. 12.1.0.1.0 en
  … output truncated …

The output is truncated for brevity; but you get the picture. A huge advantage of this feature is that, since it's SQL based, you can easily call it from a client, not just from the server. For instance, here we connect to the database from a Windows laptop and query the inventory:

c:\>sqlplus sys/oracle@prolabdbs61:1521/APP as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sun Oct 6 19:42:11 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select dbms_qopatch.get_opatch_lsinventory
2 from dual;

GET_OPATCH_LSINVENTORY
--------------------------------------------------------------

 ...output truncated ... 

The function returns the results as the XMLTYPE datatype, which allows us to query it with XPath. Here is an example of how we can extract the location of the inventory:

SQL> select extract ( dbms_qopatch.get_opatch_lsinventory,'/InventoryInstance/oracleHome/path')
2 from dual;

EXTRACT(DBMS_QOPATCH.GET_OPATCH_LSINVENTORY,'/INVENTORYINSTANCE/ORACLEHOME/PATH'
--------------------------------------------------------------------------------
/u02/app/oragrid

But checking the inventory is not the only thing you can do with this feature. One of the most practical uses of the feature is to check for possible conflicts in patches. Suppose you got a patch file from Oracle Support and you want to know whether it will conflict with any of the patches already installed. Just call another built-in function in this package. Here is an example where I am checking if the file p12345667 will conflict with any of the installed patches:

SQL> select dbms_qopatch.patch_conflict_detection('p12345667')
2 from dual;
Here are some more examples to demonstrate the power of this SQL-based inventory utility:
  • dbms_qopatch.get_opatch_preqs('12345678') to get the prerequisites for patch number 12345678
  • dbms_qopatch.get_opatch_olays('12345678') to get the overlay patches for patch 12345678
  • dbms_qopatch.is_patch_installed('12345678') to check whether patch 12345678 has been installed
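Because these functions return XMLTYPE, you can also flatten the XML into relational rows with the standard XMLTABLE operator. Here is a minimal sketch; it relies only on the /InventoryInstance/oracleHome/path structure we already confirmed above, so verify the paths against your own output:

select x.home_path
from xmltable (
'/InventoryInstance/oracleHome'
passing dbms_qopatch.get_opatch_lsinventory
columns home_path varchar2(200) path 'path'
) x;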

EM Express

As a DBA, sometimes you just need a quick and dirty way to manage the database. SQL*Plus is perfect for that; but sometimes, just sometimes, you yearn for a graphical interface. Besides, performance metrics are best visualized graphically; they just don't have the same impact on a command line. But at the same time, you don't want to invest a lot of time and resources in some graphical tool either. Oracle Enterprise Manager Cloud Control (formerly known as Grid Control) is the perfect tool for all things graphical, if you have it configured. If you don't have it in your organization, or have yet to configure it, what are your options?

Simple. In this release, Oracle includes a tool called Enterprise Manager Express. It's a simple, lightweight, web-based tool that comes with the Oracle Database 12c installation. Being lightweight, it has a small footprint on the database server. And since it runs on the database server directly, it does not need any firewall ports to be opened between servers. Let's see how to configure EM Express if that has not been done already:

Steps

1. Make sure the dispatchers parameter is using the TCP protocol and listening for the service CONA1XDB:
SQL> show parameter dispatcher

NAME TYPE VALUE
---------------- ------ ------------------------------
dispatchers string (PROTOCOL=TCP) (SERVICE=CONA1XDB)
2. Make sure the HTTPS listener is running and get the port number:
SQL> select dbms_xdb_config.getHttpsPort() from dual;

DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
5500
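If the query returns 0, no HTTPS port has been assigned yet. You can set one yourself with the same package; a quick sketch, with 5500 chosen purely as the conventional default:

SQL> exec dbms_xdb_config.sethttpsport(5500)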
Once configured, you can bring up the EM Express interface using the following URL on your browser: https://hostname:5500/em

The EM Express tool does not have the full functionality of the Cloud Control tool; it offers a very small subset of it. Its interface has many screens; let's focus on only one, probably the one you will use the most: the Performance Hub tab, shown in Figure 1.

[Figure 1: EM Express Performance Hub]

On the Performance Hub, just below the overall Active Sessions graph, there are several tabs. By default the "Summary" tab is selected. Choose the tab next to it, "Activity". It shows the various wait classes the database instance is being subjected to now and over the last hour (shown below):

[Figure 2: EM Express Activity tab showing wait classes]

Another great feature of Oracle Database 12c is Spot ADDM, which allows you to order an ADDM analysis immediately instead of waiting for the next AWR collection to complete. Click on the ADDM tab to bring up the ADDM reports and the kinds of findings they contain. This is a great place to take stock of the various ADDM recommendations in one place.

[Figure 3: EM Express ADDM reports]

A really useful feature of EM Express is the ability to capture the data as Active Reports. Suppose you want to capture the data you see on the Performance Hub and send it to a colleague. A screenshot would be less than useful: it is static, and it doesn't allow the colleague to scroll and select the way a real EM Express screen does. But don't worry. Notice the little diskette icon at the top of the screen with the tag "Save Report"? Clicking on that allows you to save the report as HTML; but not just any HTML. It's saved as an active report, i.e. the report can be scrolled back and forth and mouse clicks provide contextual information. An active report is easier to show than to describe. Scroll up and down, move your mouse over the various areas, and see how the report is much more than a snapshot of a slice of time.

Going through all the features of EM Express would need a book. I hope you have seen enough of EM Express to give it a try. Remember, EM Express is not a replacement for Cloud Control; it is designed to offer a very small subset of the latter's functionality, which keeps EM Express simple.

Smart Flash

Database Smart Flash Cache (not to be confused with the Smart Flash Cache in Exadata) is not new; it has been around since Oracle Database 11g Release 2. When the database buffer cache is full, buffers are aged out to make room for new ones, and subsequent requests for those buffers cause disk I/O. To avoid that, you must increase the size of the buffer cache, a task that demands a significantly higher investment in memory. Memory is expensive, so this strategy is usually not financially feasible. The Database Smart Flash Cache (DBFC) is an area on flash cards, not in system memory. When buffers are aged out of the buffer cache, they are written to this area. When the buffers are requested again, the DBFC fulfills the request instead of the disk, making the I/O faster. This feature is available on Oracle Enterprise Linux and Solaris only.

In the earlier release, you could define only one device, file, or ASM diskgroup per instance. In Oracle Database 12c, you can not only define multiple devices, but also define a different size for each device. Suppose you have two devices and these are the sizes you want for each:
  • /dev/sdh – 4G
  • /dev/sdi – 2G
All you have to do is put the following parameters in the initialization parameter file:
db_flash_cache_file = '/dev/sdh','/dev/sdi'
db_flash_cache_size = 4G,2G
Remember, you have to restart the instance for these parameters to take effect. Once configured, you can check the use of the DBFC using the following query:
select name, bytes, singleblkrds, singleblkrdtim_micro
from v$flashfilestat;

NAME                 BYTES SINGLEBLKRDS SINGLEBLKRDTIM_MICRO
--------------- ---------- ------------ --------------------
/dev/sdh        4294967296            0                    0
/dev/sdi        2147483648            0                    0
The output shows that there are two devices, sdh and sdi, of 4GB and 2GB respectively. The columns SINGLEBLKRDS and SINGLEBLKRDTIM_MICRO show the number of single block reads and the amount of time spent on them in microseconds. As you keep using the DBFC, these numbers will change and offer insights into how the DBFC is helping the system.
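Since the view exposes both the number of single block reads and the cumulative time, you can derive the average single-block read latency per device yourself. A simple sketch:

select name,
round(singleblkrdtim_micro / nullif(singleblkrds, 0), 2) avg_read_micro
from v$flashfilestat;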

You could have created an ASM diskgroup from the devices and used the diskgroup as DBFC in prior releases too.
So, what is new in using two flash devices, you might ask. Fair question. The big difference is the independent control over the flash devices. For instance, you can enable or disable either device without affecting the other. To disable a device, simply put 0 as its size. Let's see how we can disable the second device (/dev/sdi), leaving the first one at its original size:
alter system set db_flash_cache_size = 4G, 0;
If you check the flash device status:
select name, enabled from v$flashfilestat;

NAME               ENABLED
--------------- ----------
/dev/sdh                 1
/dev/sdi                 0
The ENABLED column shows 0 against that device, i.e. it is disabled. To re-enable it, simply set the size back to the original value:
SQL> alter system set db_flash_cache_size = 4G,2G;

System altered.

SQL> select name, enabled from v$flashfilestat;

NAME               ENABLED
--------------- ----------
/dev/sdh                 1
/dev/sdi                 1
The flash device is now active (ENABLED = 1). This is how you can have a number of flash devices and, depending on demand, enable and disable them selectively instead of managing just one monolithic flash area. Note: you can't dynamically resize the flash devices once they are configured. For instance, you can't resize /dev/sdh to 2GB dynamically; you can only make it 0 or 4GB. To resize the devices, you need to bounce the instance.

Grid Scheduler

You are all familiar with the Oracle Scheduler (aka DBMS Scheduler), introduced in Oracle Database 10g. Among many things, it has the ability to call operating system commands, not just database objects. In this release, the Oracle Scheduler is tightly integrated with Cloud Control to make it quite versatile, and it is now called the Grid Scheduler. An improved scheduler would not be complete without closing some other gaps as well: the new scheduler can process several additional types of jobs.

To create a job, you call the CREATE_JOB procedure of the DBMS_SCHEDULER package. You specify the type of the job in the job_type parameter of the CREATE_JOB procedure. In the previous releases the parameter supported only these types:
  • PLSQL_BLOCK - for unnamed PL/SQL blocks
  • STORED_PROCEDURE - for named stored procedures
  • EXECUTABLE - for OS executables
  • CHAIN - for chains of jobs, where the success of one kicks off dependent jobs
Now the same parameter supports the following additional job types:


SQL Script

Quite often you had to write a SQL script to be executed by either cron or some other scheduler. Well, no more; now the Grid Scheduler can do it for you. With the parameter job_type set to SQL_SCRIPT, the scheduler will execute the script specified in the job_action parameter. The important point is that the job is a SQL*Plus script, so you can use all the SQL*Plus commands such as set linesize, etc. Let's look at an example of a simple SQL*Plus script, called test.sql, located in /tmp:
set lines 32000
set pages 0
spool /tmp/test.lst
prompt About to execute script
prompt .
select rpad('x',32000,'x') from dual;
prompt .
prompt After SQL
spool off
I have deliberately kept the SQL script small and simple; the idea is to show the execution of a SQL*Plus script that creates a spool file. This job needs "credentials" to connect to the server and run the SQL*Plus script. We need two different types of credentials:
  1. The credential to connect to the server.
  2. The credential to connect to the database as a user.
We define the credentials using the new package called DBMS_CREDENTIAL. Here is how we create the OS credential called ORACLE_OSUSER:
BEGIN
dbms_credential.create_credential (
CREDENTIAL_NAME => 'ORACLE_OSUSER',
USERNAME => 'oracle',
PASSWORD => 'oracle',
DATABASE_ROLE => NULL,
WINDOWS_DOMAIN => NULL,
COMMENTS => 'Oracle OS User',
ENABLED => true
);
END;
/

By the way, credentials are not new; in previous versions you also had credentials for jobs, created using the DBMS_SCHEDULER.CREATE_CREDENTIAL procedure. That procedure is now deprecated and you should use the DBMS_CREDENTIAL package for all credentialing.

Next, we need to define another credential to connect to the database as the user SYSTEM.

BEGIN
dbms_credential.create_credential (
CREDENTIAL_NAME => 'SYSTEM',
USERNAME => 'SYSTEM',
PASSWORD => 'oracle',
DATABASE_ROLE => NULL,
WINDOWS_DOMAIN => NULL,
COMMENTS => 'Oracle SYSTEM User',
ENABLED => true
);
END;
/

Once these credentials are created, you can create the job.
BEGIN
dbms_scheduler.create_job (
job_name => 'sqlplus1',
job_type => 'SQL_SCRIPT',
job_action => '/tmp/test.sql',
enabled => true,
START_DATE => sysdate,
REPEAT_INTERVAL => 'FREQ=MINUTELY; byminute=1',
AUTO_DROP => false,
COMMENTS => 'SQLPLus Script Example 1',
CREDENTIAL_NAME => 'ORACLE_OSUSER'
);
END;
/

The job uses the credential ORACLE_OSUSER to connect to the server; but it does not yet know the credentials to connect to the database. For that, you have to set a special attribute of the job—connect_credential_name—to the database user credential you created earlier.

BEGIN
dbms_scheduler.set_attribute (
name => 'SQLPLUS1',
attribute => 'connect_credential_name',
value => 'SYSTEM'
);
END;
/

Now this job executes the script /tmp/test.sql as the user SYSTEM. If you ever want to change the database user, you just change the attribute. But the extended functionality does not apply only to SQL scripts; you can define two additional types of jobs.


Backup Scripts

RMAN backups are vital to any database of importance. Earlier you probably used cron jobs in Unix and batch jobs in Windows to schedule them. You don't need to anymore: you can define RMAN jobs as Scheduler jobs with the job_type set to BACKUP_SCRIPT, as sketched below. The credentials you pass should be for the OS account that owns the Oracle software. The RMAN session runs under that OS user and is kicked off from $ORACLE_HOME/bin/rman.
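Here is a minimal sketch of such a job; the job name, script path, and schedule are illustrative, and the credential is the OS credential created earlier:

begin
dbms_scheduler.create_job (
job_name => 'nightly_rman_backup',
job_type => 'BACKUP_SCRIPT',
job_action => '/home/oracle/scripts/full_backup.rman',
credential_name => 'ORACLE_OSUSER',
start_date => systimestamp,
repeat_interval => 'FREQ=DAILY; BYHOUR=2',
enabled => true
);
end;
/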


External Scripts

You can also define any shell or batch script as the job action. In this case, the job_type needs to be set to EXTERNAL_SCRIPT. The exact action specified in the job_action parameter is invoked by the command interpreter, e.g. cmd.exe on Windows or sh on Unix.
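A job of this type follows the same pattern as the earlier examples; here is a sketch with an illustrative script path and job name:

begin
dbms_scheduler.create_job (
job_name => 'cleanup_trace_files',
job_type => 'EXTERNAL_SCRIPT',
job_action => '/home/oracle/scripts/cleanup.sh',
credential_name => 'ORACLE_OSUSER',
enabled => true
);
end;
/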


Composite Activity Monitoring

A SQL statement is running slow, or consuming too much of a resource like CPU or memory. What would you do? Oracle Database 11g introduced real-time monitoring of SQL statements, which gave plenty of insight into the workings of a single SQL statement. But what would you do if you have more than one SQL statement? In real life, you will likely see an application executing several SQL statements, all of which together determine the overall response time. Therefore it is important to monitor the collection of SQLs in the session instead of one SQL at a time. In this release, you can perform real-time monitoring on an activity, not just one SQL statement. Let's see how.

By default, Oracle picks up a SQL statement for real-time monitoring if it executes a parallel scan or takes more than 5 seconds of CPU or I/O. In activity monitoring, however, you may have several SQL statements, some of which may not fit those criteria and would therefore be excluded from the monitoring. You can define the boundaries of the activity to be monitored as well as designate a name by which you can get reports on the activity. This is what you execute to enable the monitoring, just before the activity starts. I have assumed SQL*Plus, but it can be any application:
variable ret number
begin
:ret := dbms_sql_monitor.begin_operation (
dbop_name => 'Op1',
dbop_eid => 1,
forced_tracking => 'Y',
attribute_list => 'fromprog=prog1, beforemod=prog2, routingnum=1'
);
end;
/
Here are the meanings of the parameters:
  • dbop_name – the name I gave the activity, so I can easily identify it later
  • dbop_eid – the execution ID inside the activity. This allows me to distinguish more than one execution of the same activity.
  • forced_tracking – by default, Oracle picks up a SQL statement for real-time monitoring only if it executes a parallel scan or takes more than 5 seconds of CPU or I/O. This parameter forces the monitoring regardless of those criteria.
  • attribute_list – here you define arbitrary attributes that allow you to filter the report in an even more fine-grained manner. You give the attributes in the name=value format, separated by commas.
Now run the group of SQLs in this composite activity. For the sake of simplicity, let's assume only one SQL. I have deliberately written this SQL in such a way that it consumes a significant amount of CPU:
declare
i number;
begin
select max(t1.object_id)
into i
from all_objects t1, all_objects t2
;
end;
/
Finally, after the operation is complete, use the following to end the monitoring operation. Note how I have used the same name and execution ID, Op1 and 1 respectively (END_OPERATION is a procedure, so there is no return value to capture):
begin
dbms_sql_monitor.end_operation (
dbop_name => 'Op1',
dbop_eid => 1
);
end;
/
EM Express also displays the monitored operations in the tab labeled Monitored SQL:

[Figure 4: EM Express Monitored SQL tab]
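Outside EM Express, the monitored operation also shows up in the V$SQL_MONITOR view, which carries DBOP_NAME and DBOP_EXEC_ID columns in this release. A quick sketch to check the status of our activity:

select dbop_name, dbop_exec_id, status
from v$sql_monitor
where dbop_name = 'Op1';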

Regardless of how the activity was enabled for monitoring, you can get a report on its performance by executing the following SQL. Note how I used the same activity name and execution ID as earlier, Op1 and 1:
set lines 32767 pages 0 trimspool on
set long 99999
var rep clob
begin
:rep := dbms_sql_monitor.REPORT_SQL_MONITOR (
dbop_name => 'Op1',
dbop_exec_id => 1,
report_level => 'ALL',
type => 'TEXT'
);
end;
/
print rep
The function report_sql_monitor returns the report contents, which I captured in a variable named "rep", and I used print rep to display the contents on the screen. I passed 'TEXT' for the parameter type, so the report shows up in text format, as shown below:
SQL Monitoring Report

Global Information
------------------------------
Status : DONE
Instance ID : 1
Session : ARUP (246:1181)
DBOP Name : Op1
DBOP Execution ID : 1
First Refresh Time : 10/10/2013 14:57:51
Last Refresh Time : 10/10/2013 15:38:13
Duration : 2422s
Module/Action : SQL*Plus/-
Service : plug4
Program : sqlplus@........... (TNS V1-V3)

Global Stats
=======================================================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency | Cluster  |  Other   | Buffer | Read | Read  | Write | ...
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes | Reqs  | ...
=======================================================================================================================
|    1337 |     349 |      369 |        0.00 |        0.33 |     0.00 |      618 |  98902 | 414K | 5GB   | 266K  | ...
=======================================================================================================================
The text format is a bit hard to read. You can easily produce pretty HTML output by changing the parameter type to 'HTML':
 type => 'HTML'

After executing the report-generation PL/SQL block, print the rep variable, but spool it to a file instead of just displaying it on the screen:
spool report.html
print rep
spool off
This produces a file named report.html, which you can open in any web browser. Here is an example of that file:

[Figure: the SQL monitoring active report rendered in a browser]

But the power of composite monitoring does not stop here. The report function accepts several parameters to fine-tune the report by filtering out non-essential parts. For instance, you can set the parameter parallel_filter to filter out parallel query slaves. It is not possible to cover the full power of the tool in this short article; but I hope you got the general idea.

Database Replay Consolidation

In Oracle Database 11g Release 1 we saw the introduction of a very powerful feature—Database Capture and Replay—which allows you to capture the workload from a source system and replay it against a target system to see if the target can handle it as well as the source did. Or, in some cases, you made enhancements in the target system and the replay proved (or disproved) that the enhancements were adequate. If you have forgotten about it, you can brush up your memory here. In the earlier releases, you could capture the load from one system and replay only that on a new system; you couldn't replay workstreams from many sources on the same target system simultaneously.

That may lead to a problem when you are trying to consolidate databases into a single one. Take for instance the workload characteristics of two different databases – DB1 and DB2, as shown in the figure below:
[Figure: workload profiles of databases DB1 and DB2 over time]

What would happen if you put both workloads in the same database, e.g. DBX? Since the peaks of the two databases do not coincide, DBX will not see a huge spike in workload and will stay well within its capacity. But consider a different workload characteristic of DB2:

[Figure: a DB2 workload profile whose peak coincides with DB1's]

This will create a spike on DBX at the point where both peaks coincide, forcing the database into a severe performance bottleneck at the area shown in the circle. It's something you can identify only by replaying both workloads simultaneously, not one after the other. This is where Oracle Database 12c's Consolidated Workload feature comes in. Let's see how we replay workloads from two different sources. At this point I assume that the workloads have been captured and pre-processed; again, read up on the link shown earlier to go through those steps (identical to those in Oracle Database 11g).

First, we set the replay directory where the capture files are located:
begin
DBMS_WORKLOAD_REPLAY.SET_REPLAY_DIRECTORY (
replay_dir => 'DBCAPTURE'
);
end;
/

Then we need to establish a "replay schedule", a term for associating a consolidated replay with a directory containing capture files from multiple sources.


begin
DBMS_WORKLOAD_REPLAY.BEGIN_REPLAY_SCHEDULE (
schedule_name => 'CONS_REP_SCHED'
);
end;
/
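Each captured workload is then attached to the schedule, after which the schedule is closed. Here is a minimal sketch; the capture directory names are illustrative, ADD_CAPTURE returns an internal capture ID, and you should verify the exact signatures against the documentation for your release:

variable cap1 number
variable cap2 number
begin
:cap1 := dbms_workload_replay.add_capture (capture_dir_name => 'DB1_CAPTURE');
:cap2 := dbms_workload_replay.add_capture (capture_dir_name => 'DB2_CAPTURE');
end;
/
begin
dbms_workload_replay.end_replay_schedule;
end;
/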

You then initialize the replay as before, but using the consolidated replay variant:


begin
DBMS_WORKLOAD_REPLAY.INITIALIZE_CONSOLIDATED_REPLAY (
REPLAY_NAME => 'REP1',
SCHEDULE_NAME => 'CONS_REP_SCHED'
);
end;
/

Finally, prepare the database for the consolidated replay:


begin
DBMS_WORKLOAD_REPLAY.prepare_consolidated_replay(SYNCHRONIZATION=>true);
end;
/

Up to this point, you were simply preparing the database. Now is the time to kick off the actual replay clients. From a different Unix terminal (don't close the previous one with the SQL commands), start the replay clients:

$ wrc system/oracle mode=replay

Workload Replay Client: Release 12.1.0.1.0 - Production on Sun Oct 13 21:46:16 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.


Wait for the replay to start (21:46:16)

The command will wait at this point. From the other screen (where you issued the SQL commands), execute:


begin
dbms_workload_replay.start_consolidated_replay;
end;
/
When the above procedure starts, you will see the following message in the Unix window where you issued the wrc command:
Replay client 1 started (21:49:36)
Once the replay is complete, you can perform the normal analysis, as with non-consolidated replays in previous versions. The biggest advantage of the consolidated run is the ability to check the impact of the different workloads when they co-exist. You can react by either adding capacity or staggering the workloads so that the peaks never coincide. Had you replayed them independently, you would not have found the answer; by replaying a consolidated workload, you can locate the peak and isolate the problem area.

In Oracle Database 12c, you can also get a compare-period report for Workload Replay that shows the workload and performance across various replays. It's impossible to describe all these features in this short article; so I would strongly advise you to check out the manual for the details of the DBMS_WORKLOAD_REPLAY package.


Data Masking

Your database may contain a variety of sensitive information, some of which could be trade secrets or simply something you are required to protect by law, e.g. credit card numbers or national ID numbers (such as Social Security Numbers in the US). You have made sure that only authorized users are created as database users, and you have encrypted the sensitive columns using Transparent Data Encryption (TDE) so that the data is protected in backups as well. However, TDE, as the name implies, is transparent, i.e. it allows a legitimate user to access the data in cleartext. That might not be acceptable in some cases. For instance, you may want to make sure the developers who access the database do not see the real values of the sensitive data, but some altered version of it. You don't want to completely hide it, since some application logic may depend on it. An example of such obfuscation is a US Social Security Number, in the format 123-45-6789, masked in such a way that only the last 4 digits are visible. The rest of the digits are replaced by X's; so 123456789 should be displayed as XXXXX6789.

Traditionally, you probably approached the problem in one of two ways:
  1. By creating a view on the table and altering the columns in the view. Anyone selecting from the view sees the altered data. This solves the problem but creates another: the views need INSTEAD OF triggers on them to simulate the effect of updates on the table. With a lot of tables, this becomes difficult and unmanageable, not to mention a drag on performance.
  2. By putting the masking code in the application logic. This takes the implementation away from the database, making it prone to different implementations of the same requirement.

In Oracle Database 12c, this is quite simple to accomplish using the DBMS_REDACT package. Let’s see it with an example table—CUSTOMERS:

SQL> desc customers
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
CUST_ID                                            NUMBER
CUST_NAME                                          VARCHAR2(30)
SSN                                                VARCHAR2(9)
ZIPCODE                                            VARCHAR2(5)
EMAIL                                              VARCHAR2(30)
If you check the data, you will see something like this:
SQL> select * from customers;

   CUST_ID CUST_NAME                   SSN       ZIPCO EMAIL
---------- --------------------------- --------- ----- ------------------------------
         1 John Smith                  123456789 12345 john.smith@poligence.com
         2 Jane Doe                    234567890 23456 jdoe@proligence.com

2 rows selected.
Let's make sure a user called John can select from the table:
SQL> grant create session to john identified by john;

Grant succeeded.

SQL> grant select on arup.customers to john;

Grant succeeded.

You want to mask the SSN column so that only the last 4 digits are visible and the rest are masked. To do so, you create a redaction policy on the table as shown below:
begin
dbms_redact.add_policy (
object_schema => 'ARUP',
object_name => 'CUSTOMERS',
policy_name => 'Customer_Redaction',
expression => 'USER!=''ARUP''',
column_name => 'SSN',
function_type => dbms_redact.partial,
function_parameters => 'VVVVVVVVV,VVVVVVVVV,*,1,5'
);
end;
/
Let's examine the SQL in closer detail. The parameters object_schema, object_name, column_name and policy_name are self explanatory. In this policy you don't want the owner of the table ("ARUP") to see the masked data, do you? Therefore, you put the exclusion logic in the parameter expression. The policy is applied if USER (the currently connected user) is not "ARUP". You can put in any expression as long as it returns true or false; for example, you could use SYS_CONTEXT('USERENV','IP_ADDRESS') to mask the data from all clients except a few, and so on. The parameter function_type specifies the type of redaction to apply; in this case we chose partial redaction (only part of the value is redacted, not all of it). The exact redaction logic to apply to the column is specified in the function_parameters parameter, whose format is:

FromFormat, ToFormat, MaskingCharacter, StartPosition, Offset

  • FromFormat is the format of the input string. Each V represents a positional value.
  • ToFormat is the format of the output string. Each V represents the corresponding V from the FromFormat.
  • MaskingCharacter is the character used for masking.
  • StartPosition is the position at which the masking starts. In this case we want to start masking at the 1st character, so this value is 1.
  • Offset is the position at which the masking ends. This is 5 in this case, so the 1st through 5th characters are masked.
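As a variation on the same format rules, suppose the SSN were stored with hyphens and you wanted the hyphens preserved in the display. A sketch of the function_parameters value follows; it mirrors the masking-format convention described above, so verify it against the DBMS_REDACT documentation:

function_parameters => 'VVVFVVFVVVV,VVV-VV-VVVV,X,1,5'

The F positions mark the hyphens in the input, the output format re-displays them, and an input of 123-45-6789 would show as XXX-XX-6789.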

After the PL/SQL is executed, let's log in as John and check the values in the table:
-- Connect as JOHN
SQL> conn john/john@pdba
SQL> select * from arup.customers;

   CUST_ID CUST_NAME                  SSN       ZIPCO EMAIL
---------- -------------------------- --------- ----- ------------------------------
         1 John Smith                 *****6789 12345 john.smith@poligence.com
         2 Jane Doe                   *****7890 23456 jdoe@proligence.com
2 rows selected.
Notice how the first 5 characters of the SSN column are masked with *'s. The masking is automatically applied at the database level to any user except "ARUP", as specified in the expression parameter. It is applied at the table level; no view is necessary.

But SSN is not the only sensitive column. We need to mask ZIPCODE as well, this time with random characters. We can do that by altering the policy to add another column, with the following SQL. Notice the function_type parameter:
begin
 dbms_redact.alter_policy (
object_schema => 'ARUP',
object_name => 'CUSTOMERS',
policy_name => 'Customer_Redaction',
action => dbms_redact.add_COLUMN,
column_name => 'ZIPCODE',
function_type => dbms_redact.random
);
end;
/
For the EMAIL column, we have a slightly more complex requirement: mask the value before the "@" character and leave the other part (after the "@") intact. To do that, we use regular-expression redaction. The DBMS_REDACT package supplies a number of pattern templates; we will use one such template for email addresses, as shown below:
begin
dbms_redact.alter_policy (
object_schema => 'ARUP',
object_name => 'CUSTOMERS',
policy_name => 'Customer_Redaction',
action => dbms_redact.add_COLUMN,
column_name => 'EMAIL',
function_type => dbms_redact.regexp,
regexp_pattern => dbms_redact.re_pattern_email_address,
regexp_replace_string => dbms_redact.re_redact_email_name,
regexp_position => dbms_redact.re_beginning,
regexp_occurrence => dbms_redact.re_all
);
end;
/
With this in place, when we select from the table as JOHN, we see the data in the following manner:
SQL> select * from arup.customers;

   CUST_ID CUST_NAME                  SSN       ZIPCO EMAIL
---------- -------------------------- --------- ----- ------------------------------
         1 John Smith                 *****6789 xe5Y4 xxxx@poligence.com
         2 Jane Doe                   *****7890 1 Y:K xxxx@proligence.com

2 rows selected.
The redaction is applied not only during selects but during DML activity as well. Since it is completely transparent to the applications and the users, it can be applied selectively without any change in application code. You can disable and re-enable any policy based on need.
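To see which redaction policies are in force and which columns they cover, you can query the dictionary views REDACTION_POLICIES and REDACTION_COLUMNS. A quick sketch, with the column lists trimmed to the essentials:

select object_owner, object_name, policy_name, expression
from redaction_policies;

select object_name, column_name, function_type
from redaction_columns;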

Multiple JDK Versions

If you deal with Java code, you know that Java programs run on a virtual machine. A Java compiler and a runtime engine are needed for the code. As newer versions of Java are released, you can download them from the oracle.com website and install them on the server. All you have to do is change the path so that the newer Java VM takes effect, leaving the old one intact should you need to go back to it. This is great; but what do you do about the Java VM inside the database engine? You can't just update it by pointing to a new location or path; you have to upgrade the database to get the new version of the Java VM. And if you didn't want to do that, you were out of luck.

Not in Oracle Database 12c. You can have multiple JDK versions for the embedded Java VM in the database and load a different version. Oracle Database 12c comes with Java 6, and you can upgrade the VM to Java 7. The perl script update_javavm_binaries.pl in $ORACLE_HOME/javavm/install lets you do that. Let's see how. First, check what you have:
$ cd $ORACLE_HOME/javavm/install
$ perl update_javavm_binaries.pl
Nothing needs to be done... already in version 6
If you want to upgrade to JDK 7, shut down the database and execute the same perl script, but with the number 7 as a parameter:
$ perl update_javavm_binaries.pl 7
Although the perl script performs all the necessary recompilations, it's always a good idea to relink the oracle executable to ensure proper linking of the libraries. This is how you relink in a Unix environment:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk ioracle
Then bring the database up and execute the SQL script update_javavm_db.sql, located in the $ORACLE_HOME/javavm/install directory. This script executes, among other things, the following SQL:
create or replace java system;
This is necessary for the proper update of the JDK. Note that if you want the default JDK in a new database to be version 7, just execute the perl script and the SQL update before the database is created. In case of a RAC database, you will need to perform these steps on all nodes of the cluster.

Upgrade Automation


Do you love upgrades? I'm sure you don't mind the actual task of the upgrade as much as the need to find all the prerequisites and pre-conditions that precede it. That task is very important; without a meticulous check you may end up with a botched upgrade. Well, fear no more: relief is available in Oracle Database 12c in the form of the Pre-Upgrade tool. The tool is a script named preupgrd.sql, located in the directory $ORACLE_HOME/rdbms/admin. Ideally you should copy it to another working directory and execute it there. Let's see how to use the tool. Log in to the database as SYS and execute the script:
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Oct 14 10:29:56 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> @preupgrd.sql
Loading Pre-Upgrade Package...
Executing Pre-Upgrade Checks...
Pre-Upgrade Checks Complete.
************************************************************

Results of the checks are located at:
/u02/app/oracle/cfgtoollogs/CONA/preupgrade/preupgrade.log

Pre-Upgrade Fixup Script (run in source database environment):
/u02/app/oracle/cfgtoollogs/CONA/preupgrade/preupgrade_fixups.sql

Post-Upgrade Fixup Script (run shortly after upgrade):
/u02/app/oracle/cfgtoollogs/CONA/preupgrade/postupgrade_fixups.sql

************************************************************

Fixup scripts must be reviewed prior to being executed.

************************************************************

************************************************************
====>> USER ACTION REQUIRED <<====
************************************************************

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.


1) Check Tag: OLS_SYS_MOVE
Check Summary: Check if SYSTEM.AUD$ needs to move to SYS.AUD$ before upgrade
Fixup Summary:
"Execute olspreupgrade.sql script prior to upgrade."
+++ Source Database Manual Action Required +++


2) Check Tag: PURGE_RECYCLEBIN
Check Summary: Check that recycle bin is empty prior to upgrade
Fixup Summary:
"The recycle bin will be purged."

You MUST resolve the above errors prior to upgrade

************************************************************
The tool produces specific, actionable items, as shown in the output. It gives three very important outputs:
  • A logfile with detailed output
  • A script to be run before the upgrade
  • A script to be run after the upgrade

The errors are self explanatory and you should take steps to correct them. The beauty of the tool lies in the generation of the log file and the automatic fixup scripts. First, let's look at the logfile, which is clearly named in the output: /u02/app/oracle/cfgtoollogs/CONA/preupgrade/preupgrade.log.
Oracle Database Pre-Upgrade Information Tool 10-14-2013 10:30:18
Script Version: 12.1.0.1.0 Build: 006
**********************************************************************
Database Name: CONA
Version: 12.1.0.1.0
Compatible: 12.1.0.0.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone file: V18
**********************************************************************
[Renamed Parameters]
[No Renamed Parameters in use]
**********************************************************************
**********************************************************************
[Obsolete/Deprecated Parameters]
[No Obsolete or Desupported Parameters in use]
**********************************************************************
[Component List]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
--> Oracle XDK for Java [upgrade] VALID
--> Real Application Clusters [upgrade] OPTION OFF
--> Oracle Workspace Manager [upgrade] VALID
--> OLAP Analytic Workspace [upgrade] VALID
--> Oracle Label Security [upgrade] VALID
--> Oracle Database Vault [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle Multimedia [upgrade] VALID
--> Oracle Spatial [upgrade] VALID
--> Oracle Application Express [upgrade] VALID
--> Oracle OLAP API [upgrade] VALID
**********************************************************************
[ Unsupported Upgrade: Tablespace Data Supressed ]
**********************************************************************
**********************************************************************
[Pre-Upgrade Checks]
**********************************************************************
ERROR: --> SYSTEM.AUD$ (audit records) Move

An error occured retrieving a count from SYSTEM.AUD$
This can happen when the table has already been cleaned up.
The olspreupgrade.sql script should be re-executed.



WARNING: --> Existing DBMS_LDAP dependent objects

Database contains schemas with objects dependent on DBMS_LDAP package.
Refer to the Upgrade Guide for instructions to configure Network ACLs.
USER APEX_040200 has dependent objects.

ERROR: --> RECYCLE_BIN not empty.
Your recycle bin contains 17 object(s).
It is REQUIRED that the recycle bin is empty prior to upgrading.
Immediately before performing the upgrade, execute the following
command:
EXECUTE dbms_preup.purge_recyclebin_fixup;


**********************************************************************
[Pre-Upgrade Recommendations]
**********************************************************************

*****************************************
********* Dictionary Statistics *********
*****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
[Post-Upgrade Recommendations]
**********************************************************************

*****************************************
******** Fixed Object Statistics ********
*****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
************ Summary ************

2 ERRORS exist that must be addressed prior to performing your upgrade.
2 WARNINGS that Oracle suggests are addressed to improve database performance.
0 INFORMATIONAL messages messages have been reported.

After your database is upgraded and open in normal mode you must run
rdbms/admin/catuppst.sql which executes several required tasks and completes
the upgrade process.

You should follow that with the execution of rdbms/admin/utlrp.sql, and a
comparison of invalid objects before and after the upgrade using
rdbms/admin/utluiobj.sql

If needed you may want to upgrade your timezone data using the process
described in My Oracle Support note 977512.1
***********************************
The log and the output above clearly state which actions to take before starting the upgrade. But to make it simpler, the tool provides "fixup" scripts (actually two scripts, one each for before and after the upgrade) to make the process really simple and automatic for you. This is what the pre-upgrade fixup script looks like (its location appears in the output of the tool as well as in the log):
REM Pre-Upgrade Script Generated on: 2013-10-14 10:30:09
REM Generated by Version: 12.1.0.1 Build: 006
SET ECHO OFF SERVEROUTPUT ON FORMAT WRAPPED TAB OFF LINESIZE 750;
BEGIN
dbms_output.put_line ('Pre-Upgrade Fixup Script Generated on 2013-10-14 10:30:09 Version: 12.1.0.1 Build: 006');
dbms_output.put_line ('Beginning Pre-Upgrade Fixups...');
END;
/
BEGIN
dbms_preup.clear_run_flag(TRUE);
END;
/
BEGIN
-- ***************** Fixup Details ***********************************
-- Name: OLS_SYS_MOVE
-- Description: Check if SYSTEM.AUD$ needs to move to SYS.AUD$ before upgrade
-- Severity: Error
-- Action: ^^^ MANUAL ACTION REQUIRED ^^^
-- Fix Summary:
-- Execute olspreupgrade.sql script prior to upgrade.

dbms_preup.run_fixup_and_report('OLS_SYS_MOVE');
END;
/
BEGIN
-- ***************** Fixup Details ***********************************
-- Name: DBMS_LDAP_DEPENDENCIES_EXIST
-- Description: Check for dependency on DBMS_LDAP package
-- Severity: Warning
-- Action: ^^^ MANUAL ACTION REQUIRED ^^^
-- Fix Summary:
-- Network Objects must be reviewed manually.

dbms_preup.run_fixup_and_report('DBMS_LDAP_DEPENDENCIES_EXIST');
END;
/
BEGIN
-- ***************** Fixup Details ***********************************
-- Name: PURGE_RECYCLEBIN
-- Description: Check that recycle bin is empty prior to upgrade
-- Severity: Error
-- Action: Fixup routine
-- Fix Summary:
-- The recycle bin will be purged.

dbms_preup.run_fixup_and_report('PURGE_RECYCLEBIN');
END;
/
BEGIN
dbms_output.put_line ('');
dbms_output.put_line ('**********************************************************************');
dbms_output.put_line (' [Pre-Upgrade Recommendations]');
dbms_output.put_line ('**********************************************************************');
dbms_output.put_line ('');
END;
/
BEGIN
dbms_output.put_line (' *****************************************');
dbms_output.put_line (' ********* Dictionary Statistics *********');
dbms_output.put_line (' *****************************************');
dbms_output.put_line ('');
dbms_output.put_line ('Please gather dictionary statistics 24 hours prior to');
dbms_output.put_line ('upgrading the database.');
dbms_output.put_line ('To gather dictionary statistics execute the following command');
dbms_output.put_line ('while connected as SYSDBA:');
dbms_output.put_line (' EXECUTE dbms_stats.gather_dictionary_stats;');
dbms_output.put_line ('');
dbms_output.put_line ('^^^ MANUAL ACTION SUGGESTED ^^^');
dbms_output.put_line ('');
END;
/
BEGIN dbms_preup.fixup_summary(TRUE); END;
/
BEGIN
dbms_output.put_line ('**************** Pre-Upgrade Fixup Script Complete *********************');
END;
/

REM Pre-Upgrade Script Closed At: 2013-10-14 10:30:25
Running this script before the upgrade takes care of the prerequisites before the upgrade process starts. The tool also produces a fixup script to run after the upgrade is complete. This is what that script looks like:
REM Post Upgrade Script Generated on: 2013-10-14 10:30:09
REM Generated by Version: 12.1.0.1 Build: 006
SET ECHO OFF SERVEROUTPUT ON FORMAT WRAPPED TAB OFF LINESIZE 750;
BEGIN
dbms_output.put_line ('Post Upgrade Fixup Script Generated on 2013-10-14 10:30:09 Version: 12.1.0.1 Build: 006');
dbms_output.put_line ('Beginning Post-Upgrade Fixups...');
END;
/
BEGIN
dbms_preup.clear_run_flag(FALSE);
END;
/
BEGIN
dbms_output.put_line ('**********************************************************************');
dbms_output.put_line (' [Post-Upgrade Recommendations]');
dbms_output.put_line ('**********************************************************************');
dbms_output.put_line ('');
END;
/
BEGIN
dbms_output.put_line (' *****************************************');
dbms_output.put_line (' ******** Fixed Object Statistics ********');
dbms_output.put_line (' *****************************************');
dbms_output.put_line ('');
dbms_output.put_line ('Please create stats on fixed objects two weeks');
dbms_output.put_line ('after the upgrade using the command:');
dbms_output.put_line (' EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;');
dbms_output.put_line ('');
dbms_output.put_line ('^^^ MANUAL ACTION SUGGESTED ^^^');
dbms_output.put_line ('');
END;
/
BEGIN dbms_preup.fixup_summary(FALSE); END;
/
BEGIN
dbms_output.put_line ('*************** Post Upgrade Fixup Script Complete ********************');
END;
/
REM Post Upgrade Script Closed At: 2013-10-14 10:30:25
In this case, the script didn't have to do much except provide some instructions to perform a few manual actions; but in some cases it may be a more extensive script. The point is that the activities are all scripted and laid out for you to execute, instead of you spending long hours poring over the README file and Release Notes.


Parallel Database Upgrade


A very important task during database upgrades, perhaps the most important, is running the scripts catupgrd.sql and catpatch.sql. It's also the most time consuming. Don't you want catupgrd.sql or catpatch.sql to run faster? The scripts perform critical tasks such as updating the catalog, recompiling dependent objects, and so on. Some of those tasks can be performed in parallel. In Oracle Database 12c, a new tool can perform this upgrade in parallel, making the process much faster.

Important: a prerequisite of this parallel upgrade is that you must have run the Pre-Upgrade tool (explained in the preceding section) before invoking it. First, just follow the normal upgrade steps:
$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup upgrade
At this stage, you would usually execute the catupgrd.sql and catpatch.sql SQL scripts. Instead, call a perl script, catctl.pl, as shown below. It's important that you use the perl interpreter in the Oracle Home, as the default Unix perl interpreter may not have all the required functionality:
$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql
You can pass some parameters to the perl script. Two are very important:


Listing

To show the steps and different phases of the execution, use the -y option:
$ $ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql -y

Analyzing file catupgrd.sql
14 scripts found in file catupgrd.sql
Next path: catalog.sql
32 scripts found in file catalog.sql
Next path: catproc.sql
37 scripts found in file catproc.sql
Next path: catptabs.sql
61 scripts found in file catptabs.sql
Next path: catpdbms.sql
205 scripts found in file catpdbms.sql
Next path: catpdeps.sql
77 scripts found in file catpdeps.sql
Next path: catpprvt.sql
260 scripts found in file catpprvt.sql
Next path: catpexec.sql
26 scripts found in file catpexec.sql
Next path: cmpupgrd.sql
16 scripts found in file cmpupgrd.sql
… output truncated for brevity …

Parallel Degree


The perl script runs with 4 parallel processes by default. If you want, you can take it up to 8 processes; the -n option does that:
$ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql -n 8
Setting it to 0 makes the perl script run in serial mode.

Using this technique, you can expedite the database upgrade process significantly, provided you have enough CPUs to spare.