Oracle Database 10g: The Top 20 Features for DBAs

Join Oracle Magazine's 2003 "DBA of the Year" Arup Nanda as he presents his list of the top Oracle Database 10g features for database administrators

Follow along! Download Oracle Database 10g


Week 19
Scheduler

Tired of the cumbersome manual management of intervals in dbms_job? Need a more capable scheduling system? Look no further than the database itself.

Some of you may use the dbms_job package extensively to submit database jobs to be run in the background, control the time or interval of a run, report failures, and much more. However, I have a feeling that most of you don't.

The problem with the package is that it can handle only PL/SQL code segments—just anonymous blocks and stored program units. It cannot handle anything outside the database, such as an operating system command file or executable. To run those, you would have to resort to an operating system scheduling utility such as cron in Unix or the AT command in Windows, or to a third-party tool, one that may even extend this functionality by providing a graphical user interface.

Even so, dbms_job has a distinct advantage over these alternatives: it is active only when the database is up and running. If the database is down, the jobs don't run. A tool outside the database must manually check if the database is up—and that can be difficult. Another advantage is that dbms_job is internal to the database; hence you can access it via a database access utility such as SQL*Plus.
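
For those who haven't used it, here is a minimal sketch of the old approach; purge_old_data is a hypothetical stored procedure, and the interval is a PL/SQL date expression that dbms_job re-evaluates after every run to compute the next execution time.

variable jobno number

begin
   -- dbms_job can schedule only PL/SQL: here, a call to a
   -- hypothetical stored procedure named purge_old_data
   dbms_job.submit (
      job       => :jobno,
      what      => 'purge_old_data;',
      next_date => sysdate,
      interval  => 'sysdate + 30/1440'  -- a PL/SQL expression: every 30 minutes
   );
   commit;  -- the job is not scheduled until the transaction commits
end;
/
print jobno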

The Oracle Database 10g Scheduler feature offers the best of all worlds: a job scheduler utility right inside the database that is sufficiently powerful to handle all types of jobs, not just PL/SQL code segments, and that can help you create jobs either with or without associated programs and/or schedules. Best of all, it comes with the database at no additional cost. In this installment, we'll take a look at how it works.

Creating Jobs Without Programs

Perhaps the concept can be best introduced through examples. Suppose you have created a shell script to move archived log files to a different filesystem as follows:

/home/arup/dbtools/move_arcs.sh
You can specify the OS executable directly, without first creating it as a Scheduler program (a concept covered later in this installment). Note that this job refers to a named schedule, EVERY_30_MINS, which is also created later on.
begin
   dbms_scheduler.create_job
   (
      job_name      => 'ARC_MOVE_2',
      schedule_name => 'EVERY_30_MINS',
      job_type      => 'EXECUTABLE',
      job_action    => '/home/arup/dbtools/move_arcs.sh',
      enabled       => true,
      comments      => 'Move Archived Logs to a Different Directory'
   );
end;
/
Similarly, you can create a job without a named schedule by specifying the repeat interval inline.
begin
   dbms_scheduler.create_job
   (
      job_name        => 'ARC_MOVE_3',
      job_type        => 'EXECUTABLE',
      job_action      => '/home/arup/dbtools/move_arcs.sh',
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
      enabled         => true,
      comments        => 'Move Archived Logs to a Different Directory'
   );
end;
/
One advantage of Scheduler over dbms_job is clear from our initial example: the ability to call OS utilities and programs, not just PL/SQL program units. This makes it a comprehensive tool for managing Oracle Database and related jobs. However, you may have noted another, equally important advantage: the ability to define intervals in natural language. In the above examples we wanted our schedule to run every 30 minutes, so the parameter REPEAT_INTERVAL is defined with a simple, English-like expression (not a PL/SQL one):

'FREQ=MINUTELY; INTERVAL=30'
A more complex example may convey this advantage even better. Suppose your production applications become most active at 7:00AM and 3:00PM. To collect system statistics, you want to run Statspack from Monday to Friday at 7:00AM and 3:00PM only. If you use DBMS_JOB.SUBMIT to create the job, the INTERVAL parameter (a PL/SQL date expression, passed as a string, that computes the next run date) will look something like this:
DECODE
(
   SIGN
   (
      15 - TO_CHAR(SYSDATE,'HH24')
   ), 
   1,
      TRUNC(SYSDATE)+15/24,
   TRUNC
   (
      SYSDATE +
      DECODE
      ( 
          TO_CHAR(SYSDATE,'D'), 6, 3, 1
      )
    )
    +7/24
)
Is that code easy to understand? Not really.

Now let's see the equivalent job in DBMS_SCHEDULER. The parameter REPEAT_INTERVAL will be as simple as:

'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15'
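
For context, here is a sketch of what the complete job definition might look like with that calendar string; the job name is arbitrary, and perfstat.statspack.snap is assumed to be the Statspack snapshot procedure installed in your database.

begin
   dbms_scheduler.create_job
   (
      job_name        => 'STATSPACK_WEEKDAYS',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'perfstat.statspack.snap',
      repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15',
      enabled         => true,
      comments        => 'Statspack snapshots at 7AM and 3PM, Monday through Friday'
   );
end;
/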
Furthermore, this parameter value can accept a variety of intervals, some of them very powerful. Here are some more examples:
  • Last Sunday of every month:
    FREQ=MONTHLY; BYDAY=-1SUN
    
  • Every third Friday of the month:
    FREQ=MONTHLY; BYDAY=3FRI
    
  • Every second Friday from the end of the month, not from the beginning:
    FREQ=MONTHLY; BYDAY=-2FRI
    
The minus signs before the numbers indicate counting from the end, instead of the beginning.

What if you want to verify that the interval settings are correct? Wouldn't it be nice to see the various dates constructed from the calendar string? You can preview the calculated next dates using the EVALUATE_CALENDAR_STRING procedure. Using the first example—running Statspack every day from Monday through Friday at 7:00AM and 3:00PM—you can check the accuracy of your interval string as follows:

set serveroutput on size 999999

declare
   l_start_date    TIMESTAMP;
   l_next_date     TIMESTAMP;
   l_return_date   TIMESTAMP;
begin
   l_start_date := trunc(SYSTIMESTAMP);
   l_return_date := l_start_date;
   for ctr in 1..10 loop
      dbms_scheduler.evaluate_calendar_string(
        'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15',
         l_start_date, l_return_date, l_next_date
      );
      dbms_output.put_line('Next Run on: ' ||
          to_char(l_next_date,'mm/dd/yyyy hh24:mi:ss')
      );
      l_return_date := l_next_date;
   end loop;
end;
/
The output is:
Next Run on: 03/22/2004 07:00:00
Next Run on: 03/22/2004 15:00:00
Next Run on: 03/23/2004 07:00:00
Next Run on: 03/23/2004 15:00:00
Next Run on: 03/24/2004 07:00:00
Next Run on: 03/24/2004 15:00:00
Next Run on: 03/25/2004 07:00:00
Next Run on: 03/25/2004 15:00:00
Next Run on: 03/26/2004 07:00:00
Next Run on: 03/26/2004 15:00:00
This confirms that your settings are correct.

Associating Jobs with Programs

In the above case, you created jobs independently of any program. Now let's create a program that refers to an operating system utility or script, a named schedule that specifies how often it should run, and then combine the two to create a job.

First you need to make the database aware that your script is a program to be used in a job. To create this program, you must have the CREATE JOB privilege.
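
If your account lacks that privilege, a DBA can grant it; the grantee shown here is just an example.

GRANT CREATE JOB TO arup;

With the privilege in place, create the program: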

begin
    dbms_scheduler.create_program
    (
       program_name   => 'MOVE_ARCS',
       program_type   => 'EXECUTABLE',
       program_action => '/home/arup/dbtools/move_arcs.sh',
       enabled        => TRUE,
       comments       => 'Moving Archived Logs to Staging Directory'
    );
end;
/
Here you have created a named program, specified that it is an operating system executable, and pointed it at the script to be run.

Next, you will create a named schedule, called EVERY_30_MINS, that repeats every 30 minutes. You would do that with:

begin
    dbms_scheduler.create_schedule
    (
       schedule_name   => 'EVERY_30_MINS',
       repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
       comments        => 'Every 30-mins'
    );
end;
/

Now that the program and schedule are created, you will associate the program with the schedule to create a job.

begin
   dbms_scheduler.create_job
   (
      job_name      => 'ARC_MOVE',
      program_name  => 'MOVE_ARCS',
      schedule_name => 'EVERY_30_MINS',
      comments      => 'Move Archived Logs to a Different Directory',
      enabled       => TRUE
   );
end;
/
This will create a job to be run every 30 minutes that executes the shell script move_arcs.sh. It will be handled by the Scheduler feature inside the database—no need for cron or the AT utility.
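
If you don't want to wait for the next scheduled run to confirm that everything is set up correctly, you can trigger an immediate run yourself (assuming you own the job or have the appropriate privileges):

begin
   -- run the job once, right now, in the current session
   dbms_scheduler.run_job (
      job_name            => 'ARC_MOVE',
      use_current_session => true
   );
end;
/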

Classes, Plans, and Windows

Any job scheduling system worth its salt must support the ability to prioritize jobs. For instance, a statistics collection job that spills into the OLTP workload window can hurt performance there. To ensure that the stats collection job doesn't consume resources at the expense of OLTP work, you would use job classes, resource plans, and Scheduler windows.

For example, while defining a job, you can make it part of a job class, which maps to a resource consumer group for allocation of resources. To do that, first you need to define a resource consumer group called, say, OLTP_GROUP.

begin
   dbms_resource_manager.clear_pending_area();
   dbms_resource_manager.create_pending_area();
   dbms_resource_manager.create_consumer_group (
       consumer_group => 'oltp_group',   
       comment => 'OLTP Activity Group'
   );
   dbms_resource_manager.submit_pending_area();
end;
/
Next, you need to create a resource plan.
begin
   dbms_resource_manager.clear_pending_area();
   dbms_resource_manager.create_pending_area();
   dbms_resource_manager.create_plan
      ('OLTP_PLAN', 'OLTP Database Activity Plan');
   dbms_resource_manager.create_plan_directive(
      plan => 'OLTP_PLAN',
      group_or_subplan => 'OLTP_GROUP',
      comment => 'This is the OLTP Plan',
      cpu_p1 => 80, cpu_p2 => NULL, cpu_p3 => NULL, cpu_p4 => NULL,
      cpu_p5 => NULL, cpu_p6 => NULL, cpu_p7 => NULL, cpu_p8 => NULL,
      parallel_degree_limit_p1 => 4,
      active_sess_pool_p1 => NULL,
      queueing_p1 => NULL,
      switch_group => 'OTHER_GROUPS',
      switch_time => 10,
      switch_estimate => true,
      max_est_exec_time => 10,
      undo_pool => 500,
      max_idle_time => NULL,
      max_idle_blocker_time => NULL,
      switch_time_in_call => NULL
   );
   dbms_resource_manager.create_plan_directive(
      plan => 'OLTP_PLAN',
      group_or_subplan => 'OTHER_GROUPS',
      comment => NULL,
      cpu_p1 => 20, cpu_p2 => NULL, cpu_p3 => NULL, cpu_p4 => NULL,
      cpu_p5 => NULL, cpu_p6 => NULL, cpu_p7 => NULL, cpu_p8 => NULL,
      parallel_degree_limit_p1 => 0,
      active_sess_pool_p1 => 0,
      queueing_p1 => 0,
      switch_group => NULL,
      switch_time => NULL,
      switch_estimate => false,
      max_est_exec_time => 0,
      undo_pool => 10,
      max_idle_time => NULL,
      max_idle_blocker_time => NULL,
      switch_time_in_call => NULL
   );
   dbms_resource_manager.submit_pending_area();
end;
/
Finally, you create a job class with the resource consumer group created earlier.
begin
   dbms_scheduler.create_job_class(
      job_class_name => 'OLTP_JOBS',
      logging_level => DBMS_SCHEDULER.LOGGING_FULL,
      log_history => 45,
      resource_consumer_group => 'OLTP_GROUP',
      comments => 'OLTP Related Jobs'
   );
end;
/
Let's examine the various parameters in this call. The parameter LOGGING_LEVEL sets how much log data is tracked for the job class. The setting LOGGING_FULL indicates that all activities on jobs in this class—creation, deletion, runs, alteration, and so on—will be recorded in the logs. The logs can be seen in the view DBA_SCHEDULER_JOB_LOG and are retained for 45 days, as specified by the parameter LOG_HISTORY (the default is 30 days). The resource consumer group associated with the class is also specified. The job classes themselves can be seen in the view DBA_SCHEDULER_JOB_CLASSES.
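
For example, you can confirm the new class and its settings with a quick query (column list trimmed for readability):

select job_class_name, resource_consumer_group, logging_level, log_history
from   dba_scheduler_job_classes
where  job_class_name = 'OLTP_JOBS';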

When you create a job, you can optionally associate a class with it. For instance, while creating COLLECT_STATS, a job that collects optimizer statistics by executing a stored procedure collect_opt_stats(), you could have specified:

begin
   dbms_scheduler.create_job
   (
      job_name        => 'COLLECT_STATS',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'collect_opt_stats',
      job_class       => 'OLTP_JOBS',
      repeat_interval => 'FREQ=WEEKLY; INTERVAL=1',
      enabled         => true,
      comments        => 'Collect Optimizer Stats'
   );
end;
/
This command will place the newly created job in the class OLTP_JOBS, which maps to the resource consumer group OLTP_GROUP. When the plan OLTP_PLAN is active, its directives govern how much CPU the job can use, how long it can run before being switched to another consumer group, which group it is switched to, and so on. Any job defined in this class is governed by the same plan directive. This capability is particularly useful for preventing one type of job from taking over the resources of the system.
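
You can confirm the class assignment from the DBA_SCHEDULER_JOBS view; a simple check might look like this:

select job_name, job_class, enabled
from   dba_scheduler_jobs
where  job_name = 'COLLECT_STATS';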

A Scheduler window is a time frame with an associated resource plan; opening the window activates that plan, thereby supporting different priorities for jobs over different time frames. For example, some jobs, such as batch programs that update databases for real-time decision support, need high priority during the day but become low priority at night (or vice versa). You can implement this schedule by defining different resource plans and then activating them using Scheduler windows.
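
As a sketch, here is how you might define a window that activates OLTP_PLAN during weekday business hours; the window name, start hour, and duration are illustrative only.

begin
   dbms_scheduler.create_window (
      window_name     => 'WEEKDAY_OLTP_WINDOW',
      resource_plan   => 'OLTP_PLAN',
      repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7',
      duration        => numtodsinterval(10, 'hour'),
      window_priority => 'HIGH',
      comments        => 'Activate OLTP_PLAN during weekday business hours'
   );
end;
/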

Monitoring

After a job has run, you can monitor its status in the view DBA_SCHEDULER_JOB_LOG, where the column STATUS shows the outcome of each run. If it shows FAILED, you can drill down further to find the cause in the view DBA_SCHEDULER_JOB_RUN_DETAILS.
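
For example, with queries along these lines (column lists trimmed for readability):

-- recent outcomes for a specific job
select job_name, status, log_date
from   dba_scheduler_job_log
where  job_name = 'ARC_MOVE'
order  by log_date;

-- error number, duration, and other details for failed runs
select job_name, status, error#, run_duration
from   dba_scheduler_job_run_details
where  status = 'FAILED';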

Administration

So far, we've discussed how to create several types of objects: programs, schedules, job classes, and jobs. What if you want to modify some of them to adjust to changing needs? Well, you can do that via APIs provided in the DBMS_SCHEDULER package.
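
For instance, here is a minimal sketch of changing an existing job's repeat interval through the API; the new calendar string is just an example.

begin
   -- switch COLLECT_STATS to run every Sunday instead
   dbms_scheduler.set_attribute (
      name      => 'COLLECT_STATS',
      attribute => 'repeat_interval',
      value     => 'FREQ=WEEKLY; BYDAY=SUN'
   );
end;
/

You can also make these changes through Enterprise Manager, as described next.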

From the Database tab of the Enterprise Manager 10g home page, click on the Administration link. This brings up the Administration screen, shown in Figure 1. All the Scheduler-related tasks are found under the heading "Scheduler" toward the bottom right-hand corner, shown inside a red ellipse in the figure.

Figure 1: Administration page

All the tasks related to the Scheduler, such as creating, deleting, and maintaining jobs, can be accomplished through the hyperlinked tasks on this page. Let's look at a few of them. Because we created several jobs earlier, clicking on the Jobs tab shows a screen similar to Figure 2.

Figure 2: Scheduled jobs

Clicking on the name of the job COLLECT_STATS brings up the screen shown in Figure 3, where you can modify its attributes.

Figure 3: Job parameters

As you can see, you can change the parameters of the job, as well as its schedule and options, by clicking on the appropriate tabs. After all changes are made, press the Apply button to make them permanent. Before doing so, you may want to click the button marked "Show SQL," which shows the exact SQL statement that will be issued—if for no other reason than to see which APIs are called, which helps you understand the workings behind the scenes. You can also store the SQL in a script to execute later, or keep it as a template for future use.

Conclusion

The Scheduler in Oracle Database 10g is a giant leap beyond the original dbms_job interface. For more information on these features and other, more advanced ones, see Chapter 25 of the Oracle Database Administrator's Guide.

Next Week: Best of the Rest

Back to Series Index

