As Published In
Oracle Magazine
July/August 2000

TECHNOLOGY: Ask Tom

  

Back to Basics

By Tom Kyte

 

With e-initiatives everywhere, DBAs have to ensure robust performance and scalability. Here are some Q&A highlights about sizing tables and pinning a sequence in the SGA.  

What's the difference between record size and row size? I'm trying to size tables; I've read about numerous ways to calculate the size of the table, such as multiplying the record length by the number of records to find the total size of the table. Is this calculation correct?

If by "record" you mean the raw input record length, then no, this calculation is not correct. It can give you a ballpark estimate, but it's not completely accurate by any means. For starters, for each column in the record, you must also consider the overhead (each column carries at least a length byte, for example).

For example, say I load the number 1,000,000,000 into a numeric field. That first number will take 2 bytes of storage. Say I then load the number 99,999 into that same field; that will take 4 bytes of storage.

Dates always take 7 bytes. For varchars and chars, it depends on the actual number of characters and the field length, so storage overhead for these types of fields can vary widely.

You also have to consider block overhead. The system uses a couple of bytes for each block. In addition, you have to take into account the block size itself: The bigger the block size, the more rows you can store per block. The result is less total block overhead.

For example, if I configure my system to use 2K blocks, the system might get two rows for each block. If I configure 8K blocks, however, my system would get about nine rows per block. Although the block overhead would be about the same on a block-by-block basis, my 2K block size would result in four times as many blocks and hence four times as much block overhead.
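Using the hypothetical figures above (two rows per 2K block versus nine rows per 8K block), the arithmetic for an illustrative 100,000-row table works out like this:

```sql
select ceil(100000 / 2) as blocks_2k,   -- 50,000 blocks
       ceil(100000 / 9) as blocks_8k    -- 11,112 blocks
  from dual;
```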

With that said, my tried-and-true method for sizing a table involves three simple steps (you'll see these steps in action in the answer to the "More Sizing Matters" question): 

  1. Load some representative rows into a table.
  2. Analyze the table.
  3. Multiply the BLOCKS column in USER_TABLES by the ratio of the anticipated total row count to the sample row count; the result is the estimated table size in blocks.

If you get a decent sample, meaning a fairly representative set of the data your table will be holding, this process works well. If you cannot generate a decent sample, you won't be able to size this table no matter what. No simple formula is going to give you an answer if you don't know what the input data will be and the impact of overhead in your particular database tables and their contents.
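Sketched in SQL, the three steps might look like this for a hypothetical table T, a 1,000-row sample, and an anticipated 1,000,000 rows:

```sql
-- Step 1: load about 1,000 representative rows into T
-- (by whatever means suits your data).

-- Step 2: analyze the table.
analyze table t compute statistics;

-- Step 3: extrapolate. Expecting 1,000,000 rows from a
-- 1,000-row sample, the scaling factor is 1,000.
select blocks * 1000 as estimated_blocks
  from user_tables
 where table_name = 'T';
```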

More Sizing Matters

How can I find out the size of a table in bytes?

There are two ways of looking at table size. One is the amount of space physically allocated to the table, whether it uses the space or not. This aspect of table size is visible in the USER_SEGMENTS view; you can use this query to get the number of bytes:

select segment_name, bytes
  from user_segments
 where segment_type = 'TABLE';

The other aspect of table size is the amount of space a table actually is using within the allocated space. To find the amount, you can use this query:

scott@dev8i> analyze table emp compute statistics;
Table analyzed.
scott@dev8i> select num_rows * avg_row_len
  from user_tables
  where table_name = 'EMP';

NUM_ROWS*AVG_ROW_LEN
--------------------
                 560

scott@dev8i> select bytes
  from user_segments
  where segment_name = 'EMP';

     BYTES
----------
    524288

The result of the query shows that the emp table is using 560 bytes of the 524,288 bytes allocated to it.

One final note: Make sure you are aware of the ramifications of using the analyze command and how it may affect the optimizer. Either analyze all of your tables regularly or delete the statistics after running the analyze command. 
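For example, if you gathered statistics only for sizing and don't want them to influence the optimizer, you can remove them afterward:

```sql
analyze table emp delete statistics;
```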

Maximum Row Size

Okay, now that I've got all these sizes figured out, I'm still wondering: What's the maximum row size in an Oracle table?

The answer to that depends on the release of Oracle that you're running. For Oracle8, Release 8.0 and later, a table can have up to 1,000 columns; take the maximum varchar2 size (4,000 bytes) and multiply: 4,000x1,000=4,000,000 bytes of structured data. On top of that, a table can hold up to 1,000 LOBs of up to 4GB each, for 4,000GB.

For Oracle7 Release 7.x, the limit is 254 columns per table, and the varchar2 maximum is 2,000 bytes, so that's a total of about 508,000 bytes of structured data, or one long column per table at 2GB. 
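The arithmetic behind those two limits can be checked directly (the column counts and maximum sizes come from the releases discussed above):

```sql
-- Oracle8: 1,000 columns x varchar2(4000) = 4,000,000 bytes
select 4000 * 1000 as oracle8_max_structured from dual;

-- Oracle7: 254 columns x varchar2(2000) = 508,000 bytes
select 2000 * 254 as oracle7_max_structured from dual;
```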

Pinning a Sequence in the SGA

I'm running Oracle8 Release 8.0.5 (Enterprise Edition) for Windows NT Server. Our team lead has asked me to "pin a sequence" into the system global area (SGA). I have a vague understanding of what is required and have researched various books, all of which talk about using this technique to tune performance—but none of these books seem to provide the actual syntax or explain how to do it! Can you explain how to pin a sequence in the SGA?

Before diving into the details, here's a recap of what a sequence is for beginning DBAs: A sequence generates a serial list of unique numbers for numeric columns of a database's tables. Sequences are independent of tables, so more than one table can use the same sequence. Because a busy sequence is read constantly from the shared pool, it's a perfect candidate for pinning in the SGA; pinning keeps it from being aged out. The first step to pinning a sequence is to make sure the dbms_shared_pool package exists; that's the package that lets you pin elements in the shared pool.

The quick way to check for the package is simply to try to grant privileges to someone:

sys@ORA806.WORLD> connect sys/manager
Connected.
sys@ORA806.WORLD> grant execute on dbms_shared_pool to
tkyte;
grant execute on dbms_shared_pool to tkyte
     *
ERROR at line 1:
ORA-04042: procedure, function, package, or package body
does not exist

In this example, the results show that the package (which is always owned by sys, by the way) doesn't exist. At this point, you would connect as sys or internal (and only as one of these users) using svrmgrl and run the dbmspool.sql and prvtpool.plb scripts (see Listing 1). These two scripts create the package. 

Code Listing 1: Install the package 

$ cd $ORACLE_HOME/rdbms/admin
$ svrmgrl

Oracle Server Manager Release 3.0.6.0.0 - Production
(c) Copyright 1999, Oracle Corporation. All Rights Reserved.

Oracle8 Enterprise Edition Release 8.0.6.0.0 - Production
With the Partitioning and Objects options
PL/SQL Release 8.0.6.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> @dbmspool.sql
Statement processed.
Statement processed.
Statement processed.
Statement processed.
SVRMGR> @prvtpool.plb
Statement processed.
Statement processed.
 

After you've installed the package, you can grant execute privileges for a user account (as we tried in the first step); this time, it will succeed:

SVRMGR> grant execute on dbms_shared_pool
to tkyte;
Statement processed.
SVRMGR>

Next, we'll try to actually pin the sequence into the SGA using the keep procedure:

SVRMGR> execute
sys.dbms_shared_pool.keep('OWNER.SEQUENCE_NAME','Q');
ORA-06564: object OWNER.SEQUENCE_NAME does not exist
ORA-06512: at "SYS.DBMS_UTILITY", line 67
ORA-06512: at "SYS.DBMS_SHARED_POOL", line 35
ORA-06512: at "SYS.DBMS_SHARED_POOL", line 43
ORA-06512: at line 2

These error messages tell us that the sequence doesn't exist. Let's try it again, but this time let's create the sequence first. Here's how things should look when you use the syntax with valid inputs:

SVRMGR> create sequence foo start with 1 increment by 1;
Statement processed.
SVRMGR> execute sys.dbms_shared_pool.keep('TKYTE.FOO','Q');
Statement processed.

This is what you should see in a successful pin operation on that sequence. That's all there is to it. 
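If you want to confirm that the pin took effect, one way (shown here for the TKYTE.FOO sequence from the example) is to check the KEPT column of v$db_object_cache:

```sql
select owner, name, type, kept
  from v$db_object_cache
 where owner = 'TKYTE'
   and name = 'FOO';
```

A KEPT value of YES means the object will not be aged out of the shared pool.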

A Closer Look at Hot Backup Mode

Does Oracle Database Server write to data files during hot backup mode at the same time it writes to the redo log file? If a tablespace is in hot backup mode and data for a table in that tablespace is changed, do changes go to data files, or do they just go to the redo log?


During a hot backup, data is written to the datafiles all the time. It is true that extra information is written to the redo log, but the datafiles are written to as well. Here's what Oracle's Concepts manual says about this: "If a database is operating in ARCHIVELOG mode, you can back up any datafile in it while the database is open, while the associated tablespace is online, and while the specific datafiles are online and currently in normal use. This type of datafile backup is considered an online datafile backup.

An online datafile backup is a copy of fuzzy or inconsistent data. A datafile that is online or being recovered is said to be "fuzzy" because the blocks are not necessarily written in the order they are changed. Therefore, all of the data within the online datafile backup is not guaranteed to be consistent with respect to a specific point in time. However, a fuzzy datafile backup is easily made consistent during database recovery procedures.

When the backup of an online tablespace (or individual datafile) starts, Oracle stops recording the occurrence of checkpoints in the headers of the online datafiles being backed up. This means that when a datafile is restored, it has "knowledge" of the most recent datafile checkpoint that occurred before the online tablespace backup, not any that occurred during it. As a result, Oracle asks for the appropriate set of redo log files to apply should recovery be needed. Once an online backup is completed, Oracle advances the file header to the current database checkpoint."

This excerpt tells us that the datafiles being backed up are 'fuzzy or inconsistent' and that the extra redo information is used to make them consistent if you use these files to restore and recover a portion of the database. The only difference as far as the datafile is concerned is that while in hot backup mode the checkpoints won't be recorded in the datafile header (although the blocks will be flushed to the datafiles). The last completed checkpoint remains in the datafile header until the backup is complete, so a recovery process can tell how many archived redo log files are needed to fully recover the datafiles in question.
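For reference, the mechanics described above are driven by a simple command pair; a sketch for a hypothetical USERS tablespace (the file paths are illustrative) looks like this:

```sql
alter tablespace users begin backup;
-- Copy the tablespace's datafiles at the operating-system level,
-- e.g. cp /u01/oradata/users01.dbf /backup/users01.dbf
alter tablespace users end backup;
```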

 

Resizing Redo Log Files

I'm running Oracle7 Release 7.3.4 in archivelog mode. My redo log file size is 1MB, and it generates numerous archive log files. Can I increase the size of my redo logs without shutting down my server?

You must add larger redo log files and then drop the existing small ones. There's no way to resize a log file, whether the database is up or down. Just add new ones, issue an alter system switch logfile statement to use the new ones, and then drop the old small ones.
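As a sketch, assuming two existing 1MB groups (1 and 2) and hypothetical file names, the sequence might look like this:

```sql
-- Add new, larger log groups.
alter database add logfile group 3 ('/u01/oradata/redo03.log') size 10m;
alter database add logfile group 4 ('/u01/oradata/redo04.log') size 10m;

-- Switch out of the old groups.
alter system switch logfile;
alter system switch logfile;

-- Drop the old 1MB groups once they are inactive and archived.
alter database drop logfile group 1;
alter database drop logfile group 2;
```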

