By Tom Kyte
Our technologist modifies formats, makes zeros zeros, and avoids slides.
When I format a date, there are leading zeros on the hours (for example, 09:30 a.m.). How can I remove these?
There is a nice format model modifier, fill mode (FM), that you can use with dates and numbers. It suppresses white space and other filler characters (such as that leading zero). For example:
select to_char(dt,'hh:mi am'),
       to_char(dt,'fmhh:mi am'),
       to_char(dt,'fmhhfm:mi am')
  from ( select to_date('09:01 am','hh:mi am') dt
           from dual )
/

TO_CHAR( TO_CHAR( TO_CHAR(
-------- -------- --------
09:01 am 9:1 am   9:01 am
Notice the three different outputs. The first column shows what you are probably retrieving right now: The time components are padded out to two characters with leading zeros when necessary. The second column shows the effect of FM: The leading zeros are suppressed. That, however, is probably not what you want; the value 9:1 is not very clear. The third column is probably what you really want: The leading zeros on the hour are turned off, but not for the rest of the time components. FM works like a toggle switch—it turns off blank padding (white space) when it is first encountered and turns it back on when it is seen again. It is very handy for changing the formatting of dates and times. Consider the difference between the following two columns:
select to_char(sysdate,'Day Month dd')   no_fm,
       to_char(sysdate,'fmDay Month dd') fm
  from dual;

NO_FM
-------------------------------------
Sunday    November  02

FM
-------------------------------------
Sunday November 2
The second version, using FM, is generally the expected output from a nicely formatted date like that. The first version, without FM, blank-pads the names of days and months in order to achieve uniform output (a fixed-length string).
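FM applies to number formats as well. There it trims the leading space Oracle reserves for the sign and drops trailing zeros after the decimal point, as in this small example:

```sql
select to_char(1234.5, '9,999.99')   no_fm,
       to_char(1234.5, 'fm9,999.99') fm
  from dual;

-- NO_FM: ' 1,234.50' (leading space for the sign, trailing zero kept)
-- FM:    '1,234.5'
```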
I am puzzled by why I am getting negative zeros in this query:
select c1, c2,
       c1 - (trunc(to_date(c2,'YYYYMMDDHH24MISS') + 1)
              - to_date(c2,'YYYYMMDDHH24MISS')) * (24*3600) result
  from t
 where c3 != 'N05416776X0001'

  C1 C2             RESULT
---- -------------- ------
 ...
1229 20030731235942   1211
  82 20030731235838     -0
Well, when I execute your query, I see different results. I am, however, able to reproduce your issue using the set numformat SQL*Plus command, which I assume you must have used to format the output. The problem is that the result column is, in fact, not zero; it is a very small number, and your formatting commands are hiding that fact from you. Consider the following:
select c1, c2,
       c1 - (trunc(to_date(c2,'YYYYMMDDHH24MISS') + 1)
              - to_date(c2,'YYYYMMDDHH24MISS')) * (24*3600) result
  from ( select 82 c1, 20030731235838 c2
           from dual )
/

         C1         C2     RESULT
----------- ---------- ----------
         82 2.0031E+13 -2.240E-36
Notice how, by default, SQL*Plus formats the numbers using exponential notation (it does a "best fit" on the formats). Your SET NUMFORMAT command gave you a fixed-format display and hence suppressed the decimal places:
SQL> set numformat 99999999999999
SQL> /

             C1              C2          RESULT
--------------- --------------- ---------------
             82  20030731235838              -0
So, the result is really not negative zero, but rather a very small negative number. Be careful with formats; they can be used to format information nicely, but they can also hide information from you.
I have noticed that the DBA_JOBS.NEXT_DATE value shifts a little every day. How can I use this mechanism for business-critical operations? For one of my servers, the time shift is more than 10 minutes now. Is something wrong in this case?
This is the expected, defined behavior. It is easy to make it so that the DBA_JOBS.NEXT_DATE value does not "slide."
In the database, the jobs run "on or about the scheduled time." When the job runs, the database will simply take the string you specified as the interval and evaluate it (select it from DUAL, for example). Assume, for instance, that you scheduled a job with a start time of midnight tonight and an interval of SYSDATE+1. Further, assume that the job scheduled to run at midnight actually runs at 12:00:20 a.m. (20 seconds after midnight, which is very close to midnight but off by seconds). The job queue facility will take your string SYSDATE+1 and evaluate it. The next scheduled start time will therefore be 12:00:20 a.m. tomorrow! Suppose the process repeats itself (always about 20 seconds off). In two weeks' time, the job will be running at about 12:05 a.m. In a month, it will be 10 minutes past the hour, and so on.
The solution is to use date functions that evaluate to a fixed time in the future. For example, suppose you want a job to run at 1 a.m. every night. Your interval should be:
trunc(sysdate)+1+1/24
This date function will always evaluate to 1 a.m. tomorrow, regardless of when you evaluate it.
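To make this concrete, here is a sketch of submitting such a job with DBMS_JOB (the procedure name MY_NIGHTLY_PROC is hypothetical):

```sql
variable jobno number

begin
   dbms_job.submit
   ( job       => :jobno,
     what      => 'my_nightly_proc;',
     next_date => trunc(sysdate)+1+1/24,    -- first run: 1 a.m. tomorrow
     interval  => 'trunc(sysdate)+1+1/24'   -- re-evaluates to 1 a.m.
   );                                       -- tomorrow every time, no drift
   commit;
end;
/
```

Because the interval expression truncates SYSDATE to midnight before adding a day and an hour, it is immune to the few seconds of lateness that accumulate with SYSDATE+1.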
I have a schema on an Oracle8i Release 3 (8.1.7) database with which I am using Oracle Forms 6i as the user interface. At the moment, it has only one user schema under which the tables and so on have been created.
I am unsure of the best, most robust way to allow users with different logins to use my application via Oracle Database's built-in utilities. I have been told that I will most likely have to use synonyms if I have multiple logins for the users to be able to run the application as themselves, because they will need <schema>.<procedure> privileges to execute a procedure. Is this true?
Well, if you ask me, there is absolutely nothing wrong with your application using <schema>.<procedure>. In fact, I think it is a good practice. But, if you haven't done that and you won't or are not allowed to do that, you can add the following to your code:
alter session set current_schema=schema_name;
Consider, for example, after logging in as some user other than SCOTT:
SQL> desc dept
ERROR:
ORA-04043: object dept does not exist

SQL> alter session set
  2  current_schema=scott;
Session altered.

SQL> desc dept
 Name     Null?     Type
 ------   --------  ------------
 DEPTNO   NOT NULL  NUMBER(2)
 DNAME              VARCHAR2(14)
 LOC                VARCHAR2(13)
The session's CURRENT_SCHEMA setting is used to supply the default schema name for your session. So, whenever the database would have put your current username into the query, it will put in SCOTT (or whatever schema you set it to). Note that this does not affect security in any way, shape, or form; it only affects the default schema name used. If I did not have access to SCOTT.DEPT, the above describe would have failed.
I wonder whether Oracle Database can limit the number of objects created (to 10, for example) by a particular user. Is it possible to implement such a requirement?
System event triggers are an excellent facility for implementing these unique, custom rules. You can use a BEFORE CREATE trigger to count the number of existing objects in the schema and force the object creation to fail if the object count exceeds your maximum. Listing 1 shows an example of such a trigger.
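Listing 1 is not reproduced here, but a minimal sketch of such a database-level trigger might look like the following (the limit of 10 and the error text are taken from the example below; the trigger name and the exact object count query are assumptions):

```sql
create or replace trigger object_limit
before create on database
declare
   l_cnt number;
begin
   -- count what the owner of the new object already has
   select count(*)
     into l_cnt
     from dba_objects
    where owner = ora_dict_obj_owner;

   if ( l_cnt >= 10 )
   then
      raise_application_error
      ( -20001, 'You can only do 10 things!' );
   end if;
end;
/
```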
Alternatively, you can place the triggers on each of the schemas for which you want this behavior, instead of at the database level. To see this in action, I'll simply set up a test account:
SQL> drop user a cascade;
User dropped.

SQL> create user a identified by a;
User created.

SQL> grant create session,
  2  create table to a;
Grant succeeded.
Now I'll connect as that new user and create 10 objects:
SQL> connect a/a
SQL> begin
  2    for i in 1 .. 10
  3    loop
  4      execute immediate
           'create table t' || i || ' (x int)';
  5    end loop;
  6  end;
  7  /
PL/SQL procedure successfully completed.
So far everything is normal, but when I try to create that 11th object, I observe this:
SQL> create table t11 ( x int );
create table t11 ( x int )
*
ERROR at line 1:
ORA-00604: error occurred at recursive
SQL level 1
ORA-20001: You can only do 10 things!
ORA-06512: at line 13
You might ask yourself, why isn't there a command to just do this? The fact is that the database could not possibly anticipate every unique customization or requirement out there, but it can give us the necessary tools with which to implement the customizations ourselves. For example, instead of this simple "limit to 10" rule, suppose you have a set of customer requests like these:

1. "I'd like Oracle Database to make it so I could grant 'alter these 20 users only' to some user."
2. "Normally, the ALTER USER grant will permit someone to change anyone's password, including those of DBAs. I would like to have a table that described which users could alter which other users' passwords so, for example, a help desk analyst could be permitted to alter only these five users, not any user. So, I'd like the grant to actually look at this table to see if George is able to alter Mary's password."
3. "I'd like Oracle Database to make it so this user can alter any account that begins with APP_."
These are all valid requests based on help desk requirements (letting a relatively low-privileged help desk technician reset passwords, but not the DBA passwords!), but they are so unique and so customized that we cannot anticipate them all. But given that the database has stored procedures that run with definer's rights, I can easily implement the most customized, unique security protocols in the world. For example, to address the first customer request, I might create the procedure shown in Listing 2.
Because this procedure executes with the base privileges of the definer (so the owner of this procedure, not GEORGE, must have the ALTER USER privilege), I was able to implement the first customer's requirement in a couple of seconds! It would be just as easy to meet the requirements for customers two and three. In fact, any of an infinite number of variations is possible this way. And they are as secure and auditable as you care to make them.
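Listing 2 is not reproduced here, but a sketch of such a definer's rights procedure might look like this (the procedure name, the user whitelist, and the error message are assumptions; real code should also validate its inputs before concatenating them into SQL):

```sql
create or replace procedure change_password
( p_username     in varchar2,
  p_new_password in varchar2 )
authid definer
as
begin
   -- the "alter these 20 users only" list;
   -- a lookup table would work just as well
   if upper(p_username) not in ( 'USER1', 'USER2', 'USER3' )
   then
      raise_application_error
      ( -20002, 'You may not alter ' || p_username );
   end if;

   execute immediate
      'alter user ' || p_username ||
      ' identified by ' || p_new_password;
end;
/

grant execute on change_password to george;
```

GEORGE never receives the ALTER USER privilege directly; he can alter only the accounts the procedure permits, and every change can be audited through the procedure.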
I am trying to use the RR format as supported in Oracle9i. According to the format of RR, if the current year and the specified year are both in the range of 0-49, the date will be in the current century. Suppose that the current year is 2003 and the specified date is 27-oct-17. Does that mean that with the RR format it will be the year 2017 and with the YYYY format it will be 1917? Will there be two different values displayed for the same date?
The answer in this case is that it would be 2017 with both RR and YY. It would be year 17 with YYYY. Consider the following:
alter session set nls_date_format='yyyy';

select to_date('27-oct-17','dd-mon-yy')   c1,
       to_date('27-oct-17','dd-mon-rr')   c2,
       to_date('27-oct-17','dd-mon-yyyy') c3
  from dual
/

C1   C2   C3
---- ---- ----
2017 2017 0017
That aside for a moment, I'll make a very important point. The only correct, proper, professional, smart, intelligent thing to do is what we all learned in 1999 the hard way. It takes four characters to properly input a year. Period. The user interface should accept exactly four digits—no more, no less. Using two digits for data that requires four is something you should not permit. The ambiguities are countless. When you see 27-oct-17, you have no idea what date that really is.
But in answer to your question, if you take data away, data is lost. If you demand using two characters to display something that takes a minimum of four, you will lose data; you will corrupt it at some point.
Use YYYY, and you'll never have to worry about it!
I have records like this:
Time                 Amount
-------------------  ------
11/22/2003 12:22:01     100
11/22/2003 12:22:03     200
11/22/2003 12:22:04     300
11/22/2003 12:22:06     200
11/22/2003 12:22:45     100
11/22/2003 12:22:46     200
11/22/2003 12:23:12     100
11/22/2003 12:23:12     200
I need to get the sum for the amounts where the time value for the records is within three seconds of the previous or following record. From the above inputs, I would expect to get three rows: the first four records belong together, then the next two, and lastly the last two. I am looking for "instances" where I define an instance such that the records within the instance are no more than three seconds apart. I think I might need to use LAG and LEAD functions.
This is an interesting problem, and you are right—the analytic functions are the right approach. I wrote down the objectives for my approach as a list:
Take a row, and if the prior row is within three seconds, it stays in this group. Otherwise, I'll start a new group. I can use LAG to look back a row and get the difference in time between the current and the preceding row.
Once I mark the records that represent a time difference greater than three seconds, I need to carry that group ID forward.
Then, I can sum up the data for this group ID.
I'll tackle each of these steps one by one to see what happens. I loaded up a small table T with your data, and to save space, I'll be using an NLS_DATE_FORMAT of hh24:mi:ss so I'll see only the time component of my dates. Starting with the first requirement, I mark the beginning of each new group (or as you called them, each "instance"):
select time, amount,
       case when time-ltime > 3/24/60/60
             or ltime is null
            then row_num
       end rn
  from ( select time, amount,
                lag(time) over (order by time) ltime,
                row_number() over (order by time) row_num
           from t )
 order by 1
/

TIME     AMOUNT RN
-------- ------ --
12:22:01    100  1
12:22:03    200
12:22:04    300
12:22:06    200
12:22:45    100  5
12:22:46    200
12:23:12    100  7
12:23:12    200

8 rows selected.
At the beginning of each group, I output the return value from the ROW_NUMBER() analytic function. I marked the first row of each group by subtracting the prior row's time (retrieved using LAG()) from the current row's time. If that resulting value was NULL—indicating the first row in the result set—or was greater than three seconds, I marked that row. Now I need to carry these unique values forward, so that all of the rows in each group share the same group ID. I can use the analytic function MAX() for this. I'll take the above query (QUERY-FROM-ABOVE)—minus the ORDER BY 1—and execute this query:
select time, amount,
       max(rn) over (order by time) max_rn
  from (QUERY-FROM-ABOVE)
 order by 1
/

TIME     AMOUNT MAX_RN
-------- ------ ------
12:22:01    100      1
12:22:03    200      1
12:22:04    300      1
12:22:06    200      1
12:22:45    100      5
12:22:46    200      5
12:23:12    100      7
12:23:12    200      7

8 rows selected.
That carries the group identifiers forward for me. This works because MAX(RN) OVER (ORDER BY TIME) uses the implicit window of "between unbounded preceding and current row," meaning that when the MAX is computed for a given row, it considers that row and every row before it; the most recently marked group ID is therefore carried down to every row in its group.
The last thing to do now is complete the aggregation. I've marked all my groups; applying the aggregation is easy now. Using the above query (without the ORDER BY), I can do the following:
select min(time), max(time), sum(amount)
  from (QUERY-FROM-ABOVE)
 group by max_rn
 order by 1
/

MIN(TIME MAX(TIME SUM(AMOUNT)
-------- -------- -----------
12:22:01 12:22:06         800
12:22:45 12:22:46         300
12:23:12 12:23:12         300

3 rows selected.
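Putting the three steps together by substituting the inline views back in, the whole thing can be written as a single statement:

```sql
select min(time), max(time), sum(amount)
  from ( select time, amount,
                max(rn) over (order by time) max_rn
           from ( select time, amount,
                         case when time-ltime > 3/24/60/60
                               or ltime is null
                              then row_num
                         end rn
                    from ( select time, amount,
                                  lag(time) over (order by time) ltime,
                                  row_number() over (order by time) row_num
                             from t ) ) )
 group by max_rn
 order by 1
/
```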
And you have exactly what you need. Analytics are extremely powerful once you get to know them.